Prognostics
Computer science
Reinforcement learning
Artificial intelligence
Mean squared error
Autoencoder
Reliability (semiconductor)
Deep belief network
Feature extraction
Deep learning
Pattern recognition (psychology)
Machine learning
Data mining
Statistics
Power (physics)
Mathematics
Physics
Quantum mechanics
Authors
Zheng Guokang, Yasong Li, Zheng Zhou, Ruqiang Yan
Identifiers
DOI:10.1109/jiot.2024.3363610
Abstract
Remaining useful life (RUL) prediction is a crucial task in prognostics and health management (PHM) systems, as it contributes to the reliability of equipment operation. With the development of Industrial Internet of Things (IIoT) technologies, it becomes possible to efficiently coordinate data collection for mechanical equipment, enabling real-time monitoring of device status and performance and thereby supporting more accurate RUL estimation. While current RUL prediction techniques predominantly rely on deep learning (DL), these approaches often neglect the temporal correlation within training samples, resulting in unstable prediction outcomes. To address this issue, a novel RUL prediction method is introduced, leveraging deep reinforcement learning (DRL). This method combines the effective feature extraction ability of DL with the preservation of temporal correlation between samples through reinforcement learning. First, an autoencoder (AE) is employed to extract the key features most relevant to the degradation process from the original signals collected from mechanical equipment. Second, the state variables in reinforcement learning are constructed from the extracted features and the predicted RUL value of the sample at the previous time step. Finally, a deep reinforcement learning model based on the twin delayed deep deterministic policy gradient (TD3) algorithm is trained after setting an appropriate action space and reward function. Validation on the XJTU-SY bearing dataset demonstrates that the DRL method yields a lower root mean square error (RMSE) and more stable prediction results than alternative methods.
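The state construction described above (AE features concatenated with the previous step's RUL estimate) and a reward that penalizes prediction error can be sketched as follows. This is a minimal illustration, not the paper's implementation: the feature dimension, the negative-absolute-error reward, and the placeholder policy are all assumptions.

```python
import numpy as np

def build_state(features, prev_rul_pred):
    """State = AE-extracted features plus the previous-step RUL prediction.

    `features` stands in for the autoencoder output; its dimension is
    a hypothetical choice, not taken from the paper.
    """
    return np.concatenate([np.asarray(features, dtype=float), [float(prev_rul_pred)]])

def reward(action_rul, true_rul):
    """Assumed reward form: negative absolute prediction error, so the
    agent is rewarded for RUL estimates close to the ground truth."""
    return -abs(action_rul - true_rul)

# Toy rollout over a degrading asset: the agent's action at each step
# is its RUL estimate, which feeds into the next step's state.
rng = np.random.default_rng(0)
T, d = 5, 4                               # time steps, feature dimension (assumed)
feats = rng.normal(size=(T, d))           # stand-in for AE-extracted features
true_rul = np.linspace(100.0, 60.0, T)    # linearly decreasing ground-truth RUL

prev_pred = 0.0
states, rewards = [], []
for t in range(T):
    s = build_state(feats[t], prev_pred)
    action = true_rul[t] + rng.normal(scale=1.0)  # placeholder for the TD3 policy output
    states.append(s)
    rewards.append(reward(action, true_rul[t]))
    prev_pred = action                     # temporal link between consecutive samples

print(states[0].shape)  # state dimension is d + 1
```

Carrying the previous prediction inside the state is what lets the agent exploit the temporal correlation between consecutive samples, which a purely feed-forward DL regressor discards.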