Computer science
Gender bias
Racial bias
Psychology
Sociology
Social psychology
Gender studies
Race (biology)
Authors
Jieli Liu, Haining Wang
Abstract
To examine whether integrating large language models (LLMs) into library reference services can provide equitable service to users regardless of gender and race, we simulated interactions using names indicative of gender and race to evaluate biases across three different sizes of the Llama 2 model. Tentative results indicated that gender bias test accuracy (54.9%) and racial bias test accuracy (28.5%) are approximately at chance level, suggesting LLM-powered reference services can provide equitable service. However, word frequency analysis showed slight differences in language use across gender and race groups. Model size analysis showed that biases did not decrease as model size increased. These tentative results support a positive outlook on integrating LLMs into reference services, while underscoring the need for cautious AI integration and ongoing bias monitoring.
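The abstract's key criterion is that a probe's accuracy at recovering a user's group from model responses should sit near the chance baseline (1/number of classes) if the responses carry no group-identifying signal. Below is a minimal sketch of that comparison, not the authors' actual pipeline: the labels, the random guesser standing in for a probe classifier, and the sample size are all hypothetical.

```python
import random

random.seed(0)

def chance_level(labels):
    """Chance accuracy for uniform random guessing over the label set."""
    return 1 / len(set(labels))

# Hypothetical setup: each response is tagged with the gender signaled by
# the simulated user's name; a probe tries to recover that tag. Here the
# probe is a pure random guesser, so its accuracy should hover near chance.
labels = ["female", "male"] * 50
guesses = [random.choice(["female", "male"]) for _ in labels]
accuracy = sum(g == y for g, y in zip(guesses, labels)) / len(labels)

print(chance_level(labels))  # 0.5 — the baseline the paper compares against
```

An observed accuracy well above this baseline (e.g. the 54.9% gender figure versus a 50% two-class baseline, or 28.5% versus a lower multi-class racial baseline) would indicate that responses leak group information; accuracy near the baseline supports the paper's equitable-service reading.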