Keywords
Interpretability, Computer science, Nuclear power plant, Artificial neural network, Fault (geology), Nuclear energy, Power (physics), Reliability engineering, Artificial intelligence, Machine learning, Data mining, Nuclear physics, Geology, Physics, Quantum mechanics, Seismology, Engineering
Authors
Jie Liu,Qian Zhang,Rafael Macián‐Juan
Identifier
DOI: 10.1016/j.pnucene.2024.105287
Abstract
Deep neural networks applied to nuclear power fault diagnosis have garnered significant attention alongside advances in artificial intelligence technology. However, the "black box" nature of deep learning models has raised concerns about their deployment in scenarios demanding high safety standards, such as nuclear power plants. In this paper, we propose using an explainable artificial intelligence method grounded in game theory to analyze in detail the diagnostic behavior of neural network models applied to nuclear power plants. By leveraging SHAP (SHapley Additive exPlanations), the decision-making processes of these opaque models are demystified, offering insight into how and why they arrive at their predictions. Two of the most widely used neural network frameworks are analyzed comprehensively across six representative fault diagnosis cases. Based on these analysis results, a novel SHAP-enhanced feature selection strategy for efficient neural network fault diagnosis is proposed, which significantly reduces model complexity without sacrificing diagnostic performance. This research moves beyond the limitations of data-driven models in nuclear power fault diagnosis by conducting an interpretable analysis of their behavior and proposing an improvement strategy based on those results, contributing to the practical application of data-driven models in nuclear power.
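The SHAP-enhanced feature selection idea described in the abstract can be illustrated with a short sketch: compute the mean absolute SHAP value of each input feature, keep the most influential inputs, and retrain a smaller classifier on them. The example below is a hypothetical reconstruction using synthetic data, scikit-learn's MLPClassifier, and shap's KernelExplainer; the dataset, network sizes, explained fault class, and "top 8 features" threshold are illustrative assumptions, not the paper's actual models, fault cases, or selection criterion.

```python
# Minimal sketch of SHAP-based feature ranking followed by retraining a
# smaller network on the selected inputs (assumed setup, not the paper's).
import numpy as np
import shap
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Synthetic stand-in for plant sensor measurements (20 candidate features).
X, y = make_classification(n_samples=2000, n_features=20, n_informative=8,
                           n_classes=3, n_clusters_per_class=1, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Baseline fully connected network trained on all features.
full_model = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500,
                           random_state=0).fit(X_train, y_train)

def predict_fault(data):
    """Probability the trained network assigns to one (arbitrary) fault class."""
    return full_model.predict_proba(data)[:, 1]

# Model-agnostic SHAP values on a small background/explanation sample.
background = shap.sample(X_train, 50, random_state=0)
explainer = shap.KernelExplainer(predict_fault, background)
shap_values = explainer.shap_values(X_test[:50], nsamples=200)  # shape (50, 20)

# Global importance: mean absolute SHAP value per input feature.
importance = np.abs(shap_values).mean(axis=0)
top_k = np.argsort(importance)[::-1][:8]  # keep the 8 most influential inputs

# Smaller network retrained on the SHAP-selected features only.
slim_model = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500,
                           random_state=0).fit(X_train[:, top_k], y_train)

print("all-feature accuracy:   ", full_model.score(X_test, y_test))
print("SHAP-selected accuracy: ", slim_model.score(X_test[:, top_k], y_test))
```

KernelExplainer is used here because it is model-agnostic; for deep learning frameworks, a gradient- or deep-specific explainer (e.g., shap.DeepExplainer) would typically be faster for the same ranking step.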