Interpretability
Computer science
Anomaly detection
Residual
Data mining
Fault (geology)
Artificial intelligence
Machine learning
Algorithm
Seismology
Geology
Authors
Yukun Fang,Haigen Min,Xia Wu,Xiaoping Lei,Shixiang Chen,Rui Teixeira,Xiangmo Zhao
Identifier
DOI: 10.1109/jsen.2023.3236838
Abstract
To guarantee the safety and reliability of autonomous driving applications, it is indispensable to construct a proper fault diagnosis framework tailored to autonomous vehicles. Fault diagnosis aims to provide essential information about the system's operational status, and its interpretation facilitates decision-making and mitigates potential operational risks. In the present work, the interpretability issue in fault diagnosis for autonomous vehicles is discussed from the sensor data analytics perspective. The environmental impact on sensor data is first evaluated using noise energy as a measure. A signal quality indicator is proposed, and Savitzky–Golay filters are applied for online denoising, acting as a countermeasure to mitigate this impact and enhance data quality. Then, the adversarial learned denoising shrinkage autoencoder (ALDSAE), an adversarial learning neural network, is constructed for sensor data anomaly detection; it employs an adversarial training technique to improve the performance of the anomaly detector. A residual explainer specific to the ALDSAE model is used to calculate the contribution of each input feature to the anomaly score in order to interpret the anomaly detection results. Several experiments with data collected from an autonomous vehicle in a real test field are conducted to validate the effectiveness of the proposed approaches. Results show that the area under the ROC curve (AUC_ROC) of the proposed ALDSAE is on average over 20% higher than that of several traditional anomaly detectors, and that the residual explainer achieves mean explanation accuracy similar to the widely used kernel Shapley additive explanations (SHAP) method while reducing mean response time by more than 99%.
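The sketch below is a minimal illustration, not the authors' implementation, of two ideas named in the abstract: Savitzky–Golay denoising of multichannel sensor data, and attributing a residual-based anomaly score to individual input features, which is the intuition behind a residual explainer. The ALDSAE itself is not reproduced; a heavier smoothing pass stands in as a hypothetical placeholder for its reconstruction of normal behaviour, and all window lengths and polynomial orders are illustrative choices, not values from the paper.

```python
# Minimal sketch (assumptions noted above): SG denoising + residual-based
# per-feature attribution. The "reconstruction" is a placeholder, not ALDSAE.
import numpy as np
from scipy.signal import savgol_filter

rng = np.random.default_rng(0)

# Simulated multichannel sensor data: 500 time steps, 4 features.
t = np.linspace(0.0, 10.0, 500)
clean = np.stack([np.sin(t), np.cos(t), 0.5 * t, np.ones_like(t)], axis=1)
noisy = clean + 0.1 * rng.normal(size=clean.shape)
noisy[300, 2] += 3.0  # inject a fault-like spike into feature 2

# Savitzky-Golay denoising along the time axis (illustrative parameters).
denoised = savgol_filter(noisy, window_length=21, polyorder=3, axis=0)

# Hypothetical stand-in for the model's reconstruction of normal behaviour:
# a much heavier smooth that suppresses short-lived deviations.
reconstruction = savgol_filter(denoised, window_length=101, polyorder=2, axis=0)

# Residual-based anomaly score per time step and per-feature contributions,
# i.e. each feature's share of the squared reconstruction error.
residual = denoised - reconstruction
score = (residual ** 2).sum(axis=1)
contrib = residual ** 2 / (score[:, None] + 1e-12)

k = int(score.argmax())
print(f"most anomalous step: {k}, feature contributions: {contrib[k].round(3)}")
```

Running this flags the neighbourhood of the injected spike and attributes most of the anomaly score to feature 2, mirroring how a residual explainer ties an anomaly back to the responsible sensor channels.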