Computer science
Leverage (statistics)
Black box
Artificial intelligence
Process (computing)
Adversarial system
Applications of artificial intelligence
Data science
Operating systems
Authors
Aleksandre Asatiani, Pekka Malo, Per Rådberg Nagbøl, Esko Penttinen, Tapani Rinta-Kahila, Antti Salovaara
Source
Journal: MIS Quarterly Executive [Indiana University Press]
Date: 2020-12-01
Pages: 259-278
Citations: 66
Abstract
Organizations Need to Be Able to Explain the Behavior of Black-Box AI Systems

Huge increases in computing capacity and data volumes have spurred the development of applications that use artificial intelligence (AI), a technology that is being implemented for increasingly complex tasks, from playing Go to screening for cancer. Private and public businesses and organizations are deploying AI applications to process vast quantities of data and support decision making. These applications can help to reduce the costs of providing various services, deliver new services, and improve the safety and reliability of operations. However, unlike conventional information systems, the algorithms embedded in AI applications can be "black boxes." Previously, those who developed applications could completely explain how an algorithm worked. Given an input, they could tell you what the output would be and why, because the systems applied human-made rules. That is no longer true for AI-based applications. The application creates internal structures that determine outputs, but these are inscrutable to outside observers, and even the programmers cannot tell you why a specific output was generated. Many AI systems leverage machine learning, […]

Footnotes:
1. Hind Benbya is the accepting senior editor for this article.
2. The authors thank Hind Benbya and the members of the review team for their insightful feedback that has greatly improved the […]
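To make the contrast in the abstract concrete, here is a minimal sketch (not from the article) comparing a conventional rule-based system, whose every output can be traced to an explicit human-made rule, with a small trained neural network whose decision logic lives only in learned weights. It assumes scikit-learn is available; the loan-screening framing, the function names, and the synthetic data are all hypothetical illustrations.

    # Rule-based system vs. black-box model: a hypothetical illustration.
    # Assumes scikit-learn is installed; all names and data are illustrative.
    from sklearn.datasets import make_classification
    from sklearn.neural_network import MLPClassifier

    def rule_based_decision(income, debt):
        """Conventional system: every output comes with the rule that produced it."""
        if debt / income > 0.5:
            return "reject", "rule: debt-to-income ratio above 0.5"
        return "approve", "rule: debt-to-income ratio at or below 0.5"

    # Black-box counterpart: a small neural network fitted to synthetic data.
    X, y = make_classification(n_samples=500, n_features=10, random_state=0)
    model = MLPClassifier(hidden_layer_sizes=(32, 32), max_iter=1000,
                          random_state=0).fit(X, y)

    print(rule_based_decision(income=50_000, debt=30_000))
    print("model output:", model.predict(X[:1])[0])

    # The only "explanation" the model itself offers is its weight matrices,
    # which do not map to human-readable rules:
    print("first-layer weights shape:", model.coefs_[0].shape)  # (10, 32)

The rule-based function can always justify its answer; the fitted model returns a label with no corresponding rule, which is the inscrutability the abstract describes.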