Authors
Rudresh Dwivedi, Devam Dave, Het Naik, Smiti Singhal, Omer Rana, Pankesh Patel, Bin Qian, Zhenyu Wen, Tejal Shah, Graham Morgan, Rajiv Ranjan
Abstract
As our dependence on intelligent machines continues to grow, so does the demand for more transparent and interpretable models. Moreover, the ability to explain a model's decisions is now the gold standard for building trust in, and deploying, artificial intelligence systems in critical domains. Explainable artificial intelligence (XAI) aims to provide a suite of machine learning techniques that produce more explainable models while enabling human users to understand and appropriately trust them. Selecting an appropriate approach for building an XAI-enabled application requires a clear understanding of the core ideas within XAI and the associated programming frameworks. We survey state-of-the-art programming techniques for XAI and present the different phases of XAI in a typical machine learning development process. We classify the various XAI approaches and, using this taxonomy, discuss the key differences among existing XAI techniques. Furthermore, concrete examples are used to describe these techniques and to map them to programming frameworks and software toolkits. It is our intention that this survey will help stakeholders select appropriate approaches, programming frameworks, and software toolkits by comparing them through the lens of the presented taxonomy.
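To make the idea of an "XAI technique mapped to a programming framework" concrete, the sketch below shows one post-hoc, model-agnostic explanation method (permutation feature importance) applied to an otherwise opaque model via scikit-learn. This is an illustrative example only, not taken from the survey; the dataset, model, and parameter choices are assumptions, and the survey itself covers a much broader range of techniques and toolkits.

```python
# Minimal sketch of a post-hoc, model-agnostic XAI technique:
# permutation feature importance with scikit-learn (illustrative choices throughout).
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Fit an opaque ("black-box") model on a tabular dataset.
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

# Explain the trained model: shuffle each feature and measure how much
# the held-out score drops, which indicates that feature's importance.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)

# Report the five most influential features with the variability of the estimate.
ranked = sorted(
    zip(X.columns, result.importances_mean, result.importances_std),
    key=lambda t: t[1],
    reverse=True,
)
for name, mean, std in ranked[:5]:
    print(f"{name}: {mean:.3f} +/- {std:.3f}")
```

Because permutation importance only queries the trained model's predictions, the same pattern applies to any estimator; other toolkits discussed in this space (e.g., SHAP or LIME) follow a similar post-hoc workflow but produce per-instance rather than global explanations.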