Interpretability
Computer science
Artificial neural network
Fault (geology)
Artificial intelligence
Machine learning
Network architecture
Visualization
Data mining
Computer security
Geology
Seismology
Authors
Botao An, Shibin Wang, Zhibin Zhao, Fuhua Qin, Ruqiang Yan, Xuefeng Chen
Identifier
DOI:10.1109/tim.2022.3188058
Abstract
Artificial neural networks (ANNs) have achieved great success in mechanical fault diagnosis and are widely used. However, traditional ANNs remain opaque in terms of interpretability, making it difficult for users to understand and trust the diagnosis results. This paper proposes an interpretable neural network that provides high-performance and credible mechanical fault diagnosis. The proposed network is generated mainly by unrolling the nested iterative soft thresholding algorithm (NISTA) for a sparse coding model, and is therefore named NISTA-Net. As a result, the architecture of NISTA-Net has a clear theoretical basis and users know how it is designed. In addition, we propose a visualization method for NISTA-Net to examine whether the network has learned meaningful features, which helps users better understand how NISTA-Net performs classification. These two aspects of transparency and interpretability make NISTA-Net more credible when applied to mechanical fault diagnosis. We carried out simulations and two fault diagnosis experiments to verify the performance of NISTA-Net. The results show that NISTA-Net effectively extracts the fault features of the bearings and gears under study and, as a consequence, achieves the best performance compared with other advanced networks. Building on the success of NISTA-Net, a systematic approach to designing interpretable fault diagnosis networks is finally summarized, aiming to inspire further related research.
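The core idea behind the abstract's "unrolling" is that each network layer corresponds to one iteration of a soft-thresholding-based sparse coding solver. The sketch below illustrates this general idea with a plain (non-nested) iterative soft thresholding forward pass in NumPy. It is a minimal illustration of the unrolling technique, not the authors' NISTA-Net: the dictionary `D`, the layer count, the step size, and the threshold `theta` are illustrative assumptions, and in a learned network they would be trainable parameters.

```python
import numpy as np

def soft_threshold(x, theta):
    """Element-wise soft-thresholding operator (proximal map of the L1 norm)."""
    return np.sign(x) * np.maximum(np.abs(x) - theta, 0.0)

def unrolled_ista(y, D, n_layers=50, theta=0.05):
    """
    Forward pass of a generic unrolled ISTA for the sparse coding model y ≈ D @ z.
    Each loop iteration plays the role of one network 'layer'.
    """
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the data-fit gradient
    z = np.zeros(D.shape[1])
    for _ in range(n_layers):
        grad = D.T @ (D @ z - y)           # gradient of 0.5 * ||y - D z||^2
        z = soft_threshold(z - grad / L, theta / L)
    return z

# Toy usage: recover a sparse code from a random dictionary
rng = np.random.default_rng(0)
D = rng.standard_normal((64, 128))
z_true = np.zeros(128)
z_true[[3, 40, 99]] = [1.5, -2.0, 0.8]
y = D @ z_true
z_hat = unrolled_ista(y, D)
print("recovered support:", np.flatnonzero(np.abs(z_hat) > 0.1))
```

In a learned unrolled network, each iteration above would become a layer with its own trainable matrices and thresholds; because every layer mirrors a step of a known optimization algorithm, the resulting architecture has the kind of clear theoretical grounding the abstract attributes to NISTA-Net.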