Authors
Ying Peng, Haidong Shao, Yan Shen, Jie Wang, Yiming Xiao, Bin Liu
Identifier
DOI: 10.1088/1361-6501/ad99f4
Abstract
Recent years have witnessed a surge in the development of intelligent fault diagnosis (IFD), mostly based on deep learning methods, offering increasingly accurate and autonomous solutions. However, these methods tend to overlook model interpretability: most are black-box models with opaque internal mechanisms, which reduces users' confidence in the decision-making process. This is particularly problematic for critical decisions, where a lack of clarity about the diagnostic rationale poses substantial risks. To address these challenges, more reliable, transparent, and interpretable systems are urgently needed. Research on the interpretability of IFD has gained momentum and is today a vibrant area of study. To promote in-depth research and advance the field, a thorough examination of existing journal articles on interpretable fault diagnosis models is essential; such a review demystifies current techniques for readers and provides a foundation for future investigation. This article gives a systematic review of state-of-the-art interpretability research in IFD, categorizing recent scholarly work on interpretable models according to its methodologies and structural attributes, and discusses the remaining challenges and future research directions for the interpretability of IFD.
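The review itself is narrative, but to make "interpretability of IFD" concrete, below is a minimal, hypothetical sketch of one widely used post-hoc technique the literature covers: input-gradient saliency applied to a black-box fault classifier. The toy 1-D CNN, the class count, and the synthetic vibration signal are illustrative assumptions, not the paper's models or data.

```python
# A minimal sketch (not from the paper) of input-gradient saliency,
# a common post-hoc interpretability technique for IFD models.
import torch
import torch.nn as nn

class TinyFaultCNN(nn.Module):
    """Toy 1-D CNN standing in for a black-box vibration-signal classifier."""
    def __init__(self, num_classes: int = 4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 8, kernel_size=16, stride=4), nn.ReLU(),
            nn.AdaptiveAvgPool1d(32),
        )
        self.head = nn.Linear(8 * 32, num_classes)

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

model = TinyFaultCNN().eval()

# Synthetic vibration signal: one batch, one channel, 2048 time steps.
signal = torch.randn(1, 1, 2048, requires_grad=True)

# Saliency = gradient of the predicted class score w.r.t. the input signal:
# large-magnitude gradients mark the time steps that most influenced the decision.
logits = model(signal)
pred = logits.argmax(dim=1)
logits[0, pred].backward()
saliency = signal.grad.abs().squeeze()

print(f"Predicted class: {pred.item()}, most salient sample: {saliency.argmax().item()}")
```

In practice, such saliency traces are overlaid on the raw signal or its spectrogram so an engineer can check whether the model attends to physically meaningful fault signatures (e.g., bearing impact intervals) rather than noise; this is one of the methodological families a review like this one categorizes.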