Generative grammar
Linguistics
Natural language processing
Computer science
Artificial intelligence
Philosophy
Authors
Julien Boelaert, Étienne Ollion, Samuel Coavoux, Ivaylo D. Petev, Patrick Präg
Identifier
DOI: 10.31235/osf.io/r2pnb_v2
Abstract
Generative AI is increasingly presented as a potential substitute for humans, including as human research subjects in various disciplines. Yet there is no scientific consensus on how closely these in-silico clones could represent their human counterparts. While some defend the use of these “synthetic users,” others point to the biases in the responses provided by LLMs. Through an experiment using survey questionnaires, we demonstrate that these latter critics are right to be wary of using generative AI to emulate respondents, but probably not for the right reason. Our results i) confirm that, to date, models cannot replace research subjects for opinion or attitudinal research; ii) show that they display a strong bias on each question (reaching only a small region of social space); and iii) show that this bias varies randomly from one question to the next (reaching a different region every time). Besides the two existing competing theses (“representativity” and “social bias”), we propose a third one, which we call “machine bias”. We detail this term and explore its consequences for LLM research, but also for studies on social biases.