Business ethics
Quality of life research
Algorithm
Sociology
Economics
Business
Marketing
Computer science
Management
Medicine
Public health
Nursing
Authors
Piotr Gaczek,Grzegorz Leszczyński,Yuling Wei,H. Sun
Identifier
DOI:10.1007/s10551-025-06083-w
Abstract
Integrating Artificial Intelligence (AI) into managerial decision-making raises significant ethical concerns, particularly regarding the attribution of responsibility and decision-making in moral dilemmas. This study examines how different forms of human–AI collaboration influence responsibility and managers’ behavior in ethically questionable scenarios. Across three hypothetical vignette experiments involving 587 marketing managers, we investigate the effects of AI recommendation systems, as opposed to background automation or natural language processing, on ethical decision-making. Results suggest that working solely with AI recommendations may increase perceived personal responsibility, discouraging unethical actions (Study 1). In contrast, collaboration with human and AI teams tends to diffuse responsibility and increase the likelihood of unethical behavior. The severity of ethical violations further shapes these effects. For moderate violations, responses vary by collaboration type, whereas for severe violations, heightened moral clarity supersedes these differences (Study 2). Study 3 shows that benefit-focused AI communication poses particular risks. When AI highlights potential benefits, managers report lower personal responsibility and greater willingness to engage in unethical actions, such as using sensitive customer data. This effect is most potent among managers who heavily rely on AI in their daily work. These findings highlight the dual impact of AI recommendation systems in managerial contexts. While such systems have the potential to enhance accountability, benefit-oriented framing and overreliance on AI may undermine ethical standards. The study highlights the importance of establishing clear accountability frameworks and ethical guidelines when implementing AI in high-stakes decision-making.