Journal: Management Science [Institute for Operations Research and the Management Sciences] · Date: 2025-08-12 · Citations: 2
Identifier
DOI: 10.1287/mnsc.2022.02860
Abstract
When organizations adopt artificial intelligence (AI) to recognize individuals’ negative emotions and accordingly allocate limited resources, strategic users are incentivized to game the system by misrepresenting their emotions. The value of AI in automating such emotion-driven allocation may be undermined by gaming behavior, algorithmic noise in emotion detection, and the spillover effect of negative emotions. We develop a game-theoretical model to understand emotion AI adoption, particularly in customer care, and analyze the design of the associated allocation policies. We find that adopting emotion AI is valuable if the spillover effect of negative emotions is negligible compared with resource misallocation loss, regardless of algorithmic noise and gaming behavior. We also quantify the welfare impacts of emotion AI on the users, organization, and society. Notably, a stronger AI is not always socially desirable, and regulation of emotion-driven allocation is needed. Finally, we characterize conditions under which leveraging the AI system is preferred to hiring human employees in emotion-driven allocation. We also explore the alternative application of using emotion AI to monitor strategic employees and compare it with hiring a human manager for monitoring. Intriguingly, algorithmic noise may increase the profit of AI monitoring. Our work provides implications for designing, adopting, and regulating emotion AI.

This paper was accepted by D. J. Wu for the Special Issue on the Human-Algorithm Connection.

Funding: The work of L. Jia was supported by the National Natural Science Foundation of China [Grant 72172013].

Supplemental Material: The online appendix and data files are available at https://doi.org/10.1287/mnsc.2022.02860.
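The interplay the abstract describes among gaming, algorithmic noise, and misallocation can be made concrete with a toy simulation. This is a minimal illustrative sketch, not the paper's actual game-theoretical model: all parameters (`p_negative`, `gaming_rate`, `noise`) and the misallocation metric are hypothetical choices for illustration only.

```python
import random

def misallocation_rate(n=10_000, p_negative=0.3, gaming_rate=0.2,
                       noise=0.1, seed=0):
    """Toy sketch (not the paper's model): some truly non-negative users
    strategically feign negative emotion; the AI's detector flips its
    reading with probability `noise`. Resources go to users detected as
    negative, and we count the fraction of misallocated decisions."""
    rng = random.Random(seed)
    misallocated = 0
    for _ in range(n):
        truly_negative = rng.random() < p_negative
        # Gaming: a non-negative user may misrepresent their emotion.
        expressed_negative = truly_negative or (rng.random() < gaming_rate)
        # Algorithmic noise: the detected label flips with prob `noise`.
        detected_negative = expressed_negative != (rng.random() < noise)
        # Misallocation: resource decision diverges from the true state.
        if detected_negative != truly_negative:
            misallocated += 1
    return misallocated / n

# With no gaming and no noise, allocation is perfect in this toy setup.
print(misallocation_rate(gaming_rate=0.0, noise=0.0))  # → 0.0
print(misallocation_rate())  # nonzero loss once gaming/noise enter
```

In this toy setup, either gaming or noise alone already produces misallocation loss, which mirrors (in a highly stylized way) why the abstract treats both as threats to the value of automated emotion-driven allocation.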