Interpretability
Artificial intelligence
Computer science
Deep learning
Autoencoder
Machine learning
Artificial neural network
Fault (geology)
Process (computing)
Fault detection and isolation
Reliability (semiconductor)
Unsupervised learning
Data mining
Geology
Physics
Power (physics)
Seismology
Actuator
Operating system
Quantum mechanics
Authors
Kyojin Jang, Karl Ezra Pilario, Nayoung Lee, Il Moon, Jonggeol Na
Identifier
DOI: 10.1109/tii.2023.3240601
Abstract
Process monitoring is important for ensuring operational reliability and preventing occupational accidents. In recent years, data-driven methods such as machine learning and deep learning have been preferred for fault detection and diagnosis. In particular, unsupervised learning algorithms, such as auto-encoders, exhibit good detection performance even for unlabeled data from complex processes. However, decisions generated by deep-neural-network-based models are difficult to interpret and cannot provide explanatory insight to users. We address this issue by proposing a new fault diagnosis method that uses explainable artificial intelligence to break the traditional trade-off between the accuracy and interpretability of deep learning models. First, an adversarial auto-encoder model for fault detection is built and then interpreted through the integration of Shapley additive explanations (SHAP) with a combined monitoring index. Using SHAP values, a diagnosis is conducted by allocating credit for detected faults (deviations from a normal state) among the input variables. Unlike conventional methods, which evaluate only the reconstruction error, the proposed diagnosis method considers not only the reconstruction space but also the latent space. The proposed method was applied to two chemical process systems and compared with conventional diagnosis methods. The results highlight that the proposed method achieves exact fault diagnosis for single and multiple faults and also distinguishes the global patterns of various fault types.
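The abstract gives no implementation details, so the following is only a minimal sketch of the general idea rather than the paper's method: a PCA model stands in for the adversarial auto-encoder, the combined monitoring index is taken as an unweighted sum of a latent-space statistic (Hotelling's T²) and a reconstruction-space statistic (SPE), and shap.KernelExplainer attributes a detected deviation to the input variables. The synthetic data, model choice, and combination weights are all illustrative assumptions.

# Minimal sketch (not the paper's implementation): PCA stands in for the
# adversarial auto-encoder; the combined index mixes a latent-space statistic
# (Hotelling's T^2) with a reconstruction-space statistic (SPE). SHAP values
# of this index allocate a detected fault among the process variables.
import numpy as np
import shap
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)

# Synthetic "normal operation" data: 8 correlated process variables.
n_train, n_vars = 500, 8
latent = rng.normal(size=(n_train, 3))
mixing = rng.normal(size=(3, n_vars))
X_train = latent @ mixing + 0.1 * rng.normal(size=(n_train, n_vars))

# Fit the detection model on normal data only (unsupervised).
pca = PCA(n_components=3).fit(X_train)
lam = pca.explained_variance_  # variances of the latent scores

def combined_index(X):
    """Combined monitoring index: T^2 (latent space) + SPE (reconstruction)."""
    X = np.atleast_2d(X)
    scores = pca.transform(X)              # latent representation
    recon = pca.inverse_transform(scores)  # reconstruction
    t2 = np.sum(scores**2 / lam, axis=1)   # latent-space deviation
    spe = np.sum((X - recon)**2, axis=1)   # reconstruction error
    return t2 + spe                        # simple unweighted combination

# A faulty sample: a bias injected into variable 2.
x_fault = X_train[0].copy()
x_fault[2] += 5.0

# Attribute the detected deviation to the input variables with SHAP.
explainer = shap.KernelExplainer(combined_index, shap.sample(X_train, 100))
shap_values = explainer.shap_values(x_fault.reshape(1, -1), nsamples=200)
print("Per-variable contributions:", np.round(shap_values[0], 2))

Running this, the largest SHAP contribution should fall on variable 2, the variable carrying the injected bias. The paper's actual pipeline replaces PCA with an adversarial auto-encoder trained on normal operating data and uses its own combined monitoring index over the reconstruction and latent spaces.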