Transparency (behavior)
Viewpoint
Accountability
Ethical issues
Computer science
Engineering ethics
Quality (concept)
Knowledge management
Management science
Computer security
Political science
Engineering
Law
Art
Philosophy
Epistemology
Visual arts
Authors
Nagadivya Balasubramaniam, Marjo Kauppinen, Sari Kujala, Kari Hiekkanen
Identifier
DOI: 10.1007/978-3-030-64148-1_21
Abstract
Artificial intelligence (AI) has become a fast-growing trend. Increasingly, organizations are interested in developing AI systems, but many of them have realized that the use of AI technologies can raise ethical questions. The goal of this study was to analyze what kind of ethical guidelines companies have for solving potential ethical issues of AI and developing AI systems. This paper presents the results of a case study conducted in three companies. The ethical guidelines defined by the case companies focused on solving potential ethical issues, such as accountability, explainability, fairness, privacy, and transparency. To analyze different viewpoints on critical ethical issues, two of the companies recommended using multi-disciplinary development teams. The companies also considered defining the purposes of their AI systems and analyzing their impacts to be important practices. Based on the results of the study, we suggest that organizations develop and use ethical guidelines to prioritize critical quality requirements of AI. The results also indicate that transparency, explainability, fairness, and privacy can be critical quality requirements of AI systems.