Interpretability
Deep learning
Artificial intelligence
Computer science
Machine learning
Convolutional neural network
Relevance
Benchmark
Fault detection and isolation
Domain
Artificial neural network
Domain knowledge
Anomaly detection
Fault
Deep neural network
Data mining
Root cause analysis
Visualization
Feature learning
Root cause
Feature extraction
Data modeling
Authors
Ahmet Yılmaz Syarif, Elif Demir, Mehmet Kaya
Identifier
DOI:10.63876/ijss.v1i2.74
Abstract
The integration of deep learning into industrial fault detection systems has significantly enhanced predictive accuracy and operational efficiency. However, the lack of model interpretability poses a critical barrier to its widespread adoption in safety-critical environments. This study proposes an interpretable deep learning framework that combines Convolutional Neural Networks (CNNs) with attention mechanisms and Layer-wise Relevance Propagation (LRP) to enable transparent fault diagnosis in complex machinery. Using a benchmark dataset from a rotating machinery system, the model achieves high classification performance while providing intuitive visual and quantitative explanations for its predictions. The attention module highlights critical temporal and spatial features, while LRP decomposes prediction scores to reveal feature-level contributions. Experimental results demonstrate that the proposed model not only maintains high accuracy (above 95%) but also delivers interpretable outputs that align with domain expert reasoning. Additionally, the model supports root cause analysis and facilitates trust in automated systems, which is essential for industrial stakeholders. This research bridges the gap between black-box deep learning models and real-world industrial applications by promoting transparency, accountability, and actionable insights. The proposed framework serves as a practical step toward deploying explainable AI in industrial settings, supporting both real-time monitoring and decision-making processes.
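The abstract does not include the authors' implementation, but the architecture it describes (a 1D CNN over vibration windows, an attention module that weights time steps, and Layer-wise Relevance Propagation applied to the prediction) can be sketched concretely. The following is a minimal, illustrative sketch only: the network sizes, layer names, and the use of the LRP-ε rule on the final linear layer are assumptions, not the paper's released code.

```python
# Hypothetical sketch of a CNN + attention fault classifier with an
# LRP-epsilon pass over the classification head. Shapes and hyperparameters
# are illustrative assumptions, not the authors' configuration.
import torch
import torch.nn as nn

class AttentiveCNN(nn.Module):
    """1D CNN + temporal attention for vibration-signal fault classification."""
    def __init__(self, n_classes: int = 4, in_channels: int = 1):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(in_channels, 16, kernel_size=9, padding=4), nn.ReLU(),
            nn.MaxPool1d(4),
            nn.Conv1d(16, 32, kernel_size=9, padding=4), nn.ReLU(),
            nn.MaxPool1d(4),
        )
        self.att = nn.Linear(32, 1)        # scores each remaining time step
        self.head = nn.Linear(32, n_classes)

    def forward(self, x):                  # x: (batch, channels, time)
        h = self.features(x)               # (batch, 32, T')
        h = h.transpose(1, 2)              # (batch, T', 32)
        alpha = torch.softmax(self.att(h), dim=1)  # attention over time steps
        z = (alpha * h).sum(dim=1)         # attention-pooled embedding
        return self.head(z), alpha         # logits + weights for inspection

def lrp_epsilon_linear(layer: nn.Linear, a: torch.Tensor,
                       r_out: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    """LRP-epsilon rule for one linear layer:
        R_i = sum_j a_i * w_ij / (eps + sum_i' a_i' * w_i'j) * R_j
    Redistributes output relevance r_out onto the layer's inputs a."""
    z = a @ layer.weight.T + layer.bias              # pre-activations (batch, out)
    s = r_out / (z + eps * torch.where(z >= 0, 1.0, -1.0))  # stabilized ratio
    return a * (s @ layer.weight)                    # input relevance (batch, in)

if __name__ == "__main__":
    model = AttentiveCNN()
    x = torch.randn(2, 1, 1024)                      # two fake vibration windows
    logits, alpha = model(x)
    # Seed relevance with the predicted class score, as LRP conventionally does.
    r_out = torch.zeros_like(logits)
    r_out[range(2), logits.argmax(dim=1)] = logits.max(dim=1).values
    # Recompute the pooled embedding to feed the LRP rule for the head layer.
    h = model.features(x).transpose(1, 2)
    z = (torch.softmax(model.att(h), dim=1) * h).sum(dim=1)
    r_in = lrp_epsilon_linear(model.head, z, r_out)
    print(logits.shape, alpha.shape, r_in.shape)     # sanity check
```

In this sketch, the attention weights `alpha` play the role of the abstract's temporal saliency map, while `r_in` shows how the predicted fault score decomposes over the pooled features; a full implementation would propagate relevance through the convolutional layers as well.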