Advice (programming)
Public relations
Psychology
Business
Political science
Computer science
Programming language
Authors
Manhui Jin, Zhiyong Yang, Traci L. Freling, Narayanan Janakiraman
Identifier
DOI:10.1177/07439156251320314
Abstract
Many policy makers and governmental organizations have started using generative artificial intelligence (AI) to provide advice to individuals. However, prior research paints an unclear picture of individuals' receptiveness to outputs generated by AI relative to those from human advisors. While some studies (e.g., Longoni, Bonezzi, & Morewedge, 2019) show that individuals prefer outputs generated by humans over AI, others (e.g., Logg, Minson, & Moore, 2019) present the opposite pattern. To reconcile these mixed findings, this research differentiates two perspectives in which relative preferences have been widely examined: (1) a bystander perspective, where consumers evaluate content generated by human versus AI agents, and (2) a decision-maker perspective, where consumers accept recommendations made by those agents. We find that although there is a general trend of preferring human advice over AI advice in individual decision-making—exhibiting a "human superiority effect"—there is no significant difference between preferences for human and AI content in bystander evaluations. Additionally, psychological distance constitutes an important contextual moderator explaining the relative preference for human versus AI recommendations. Specifically, when decision-making circumstances are perceived to be psychologically distant (e.g., low personal relevance), the human superiority effect is attenuated. Theoretical contributions are discussed, along with practical implications for businesses and governmental organizations.