Keywords
Witness
Action (physics)
Psychology
Internet privacy
Social psychology
Personally identifiable information
Everyday life
Morality
Computer security
Political science
Law
Computer science
Physics
Quantum mechanics
Authors
Daniel B. Shank, Alexander Gott
Identifier
DOI: 10.1080/10447318.2020.1768674
Abstract
Do people personally witness artificial intelligence (AI) committing moral wrongs? If so, what kinds of moral wrong, and what situations produce them? To address these questions, respondents selected one of six prompt questions, each based on a moral foundation violation, asking about a personally witnessed interaction with an AI that resulted in a moral victim (victim prompts) or in which the AI seemed to engage in immoral actions (action prompts). Respondents then answered their selected question in an open-ended response. In conjunction with the liberty/privacy and purity moral foundations, and across both victim and action prompts, respondents most frequently reported moral violations involving two types of exposure by AIs: their personal information being exposed (31%) and people being exposed to undesirable content (20%). AIs expose people's personal information to their colleagues, close relations, and online due to information sharing across devices, people in proximity to audio devices, and simple accidents. AIs expose people, often children, to undesirable content such as nudity, pornography, violence, and profanity due to their proximity to audio devices and to seemingly purposeful action. We argue that the prominence of these types of exposure in the reports may be due to their frequent occurrence on personal and home devices. This suggests that research on AI ethics should focus not only on prototypically harmful moral dilemmas (e.g., an autonomous vehicle deciding whom to sacrifice) but also on everyday interactions with personal technology.