Interpretability
Robustness (evolution)
Aerospace
Feature engineering
Computer science
Blueprint
Machine learning
Artificial intelligence
Systems engineering
Engineering
Aerospace engineering
Chemistry
Biochemistry
Gene
Mechanical engineering
Deep learning
Authors
Yazan Alomari, Mátyás Andó
Identifier
DOI:10.1016/j.rineng.2024.101834
Abstract
This research addresses a critical challenge in aerospace engineering: enhancing the interpretability of machine learning models for predictive maintenance. By integrating SHapley Additive exPlanations (SHAP), our approach decodes the relative importance of sensor-derived features, providing an analytical foundation for understanding engine degradation signals. We delve into the temporal shifts in feature relevance, unveiling the variable impact of sensor data over operational cycles. A rigorous assessment of SHAP's robustness further strengthens the reliability of our interpretive models in the face of data perturbations. Additionally, our nuanced analysis of feature interplay offers a comprehensive view of the factors influencing engine performance predictions. These methodological advancements equip engineers with precise, actionable insights for preemptive maintenance scheduling, directly contributing to the enhancement of aircraft safety and efficiency. The study's implications extend beyond theoretical analysis, offering a pragmatic blueprint for the application of SHAP in the ongoing pursuit of model transparency and maintenance optimization in the aerospace sector.
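To make the SHAP-based feature-importance workflow described in the abstract concrete, the sketch below computes per-feature SHAP attributions for a tree-based surrogate model trained on synthetic sensor-style data. The shap and scikit-learn calls are standard library usage, but the feature names, the synthetic target, and the choice of RandomForestRegressor are illustrative assumptions; this is a minimal sketch, not a reproduction of the authors' pipeline or dataset.

```python
# Minimal sketch: SHAP attributions for a sensor-based degradation model.
# Feature names, synthetic data, and model choice are assumptions for
# illustration only; they do not reflect the paper's actual setup.
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Hypothetical sensor-derived features recorded over operational cycles.
feature_names = ["T24", "T30", "P30", "Nf", "Nc", "phi"]
X = pd.DataFrame(rng.normal(size=(500, len(feature_names))),
                 columns=feature_names)

# Hypothetical remaining-useful-life-style target driven by two sensors.
y = 125 - 0.8 * X["T30"] + 0.5 * X["Nc"] + rng.normal(scale=2.0, size=500)

# Tree ensemble as the predictive-maintenance surrogate model.
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

# TreeExplainer computes SHAP values efficiently for tree ensembles.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)  # shape: (n_samples, n_features)

# Global importance: mean absolute SHAP value per feature.
importance = np.abs(shap_values).mean(axis=0)
for name, imp in sorted(zip(feature_names, importance), key=lambda t: -t[1]):
    print(f"{name}: {imp:.3f}")
```

Ranking features by mean absolute SHAP value yields the kind of global importance summary the abstract refers to; slicing the same SHAP matrix by operational cycle would support the temporal-shift analysis, and recomputing it on perturbed inputs would support the robustness assessment.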