Keywords
Perception
Value (mathematics)
Computer science
Psychology
Data science
Internet privacy
Human-computer interaction
Information retrieval
Machine learning
Neuroscience
Authors
Xianzhe Fan, Qing Xiao, Xuhui Zhou, Jiaxin Pei, Maarten Sap, Zhicong Lu, Hong Shen
Source
Journal: Cornell University - arXiv
Date: 2024-09-01
Cited by: 2
Identifier
DOI:10.48550/arxiv.2409.00862
Abstract
Large language model-based AI companions are increasingly viewed by users as friends or romantic partners, leading to deep emotional bonds. However, they can generate biased, discriminatory, and harmful outputs. Recently, users have begun taking the initiative to address these harms and re-align their AI companions. We introduce the concept of user-driven value alignment, where users actively identify, challenge, and attempt to correct AI outputs they perceive as harmful, aiming to guide the AI to better align with their values. We analyzed 77 social media posts about discriminatory AI statements and conducted semi-structured interviews with 20 experienced users. Our analysis revealed six common types of discriminatory statements perceived by users, how users make sense of those AI behaviors, and seven user-driven alignment strategies, such as gentle persuasion and anger expression. We discuss implications for supporting user-driven value alignment in future AI systems, where users and their communities have greater agency.