Large Language Models Are Biased Because They Are Large Language Models
Computer Science
Language Models
Natural Language Processing
Linguistics
Philosophy
Author
Philip Resnik
Source
Journal: Computational Linguistics [Association for Computational Linguistics] · Date: 2025-03-28 · Volume/Issue: — · Pages: 1-21 · Cited by: 5
Identifier
DOI:10.1162/coli_a_00558
Abstract
This position paper’s primary goal is to provoke thoughtful discussion about the relationship between bias and fundamental properties of large language models. I do this by seeking to convince the reader that harmful biases are an inevitable consequence arising from the design of any large language model as LLMs are currently formulated. To the extent that this is true, it suggests that the problem of harmful bias cannot be properly addressed without a serious reconsideration of AI driven by LLMs, going back to the foundational assumptions underlying their design.