Transfer learning
Computer science
Transformer
Deep learning
Convolutional neural network
Encoder
Robustness
Pattern recognition
Transfer function
Classifier
Artificial intelligence
Artificial neural network
Electrical engineering
Engineering
Machine learning
Voltage
Chemistry
Operating system
Gene
Biochemistry
Authors
Xinglong Pei, Xiaoyang Zheng, Jian-Ying Wu
Source
Journal: IEEE Transactions on Instrumentation and Measurement
[Institute of Electrical and Electronics Engineers]
Date: 2021-01-01
Volume/Pages: 70: 1-11
Citations: 15
Identifier
DOI:10.1109/tim.2021.3119137
Abstract
Owing to complex operational and measurement conditions, the data available to realize the effective training of deep models are often inadequate. Compared with traditional deep networks, Transformer exhibits a unique and excellent pattern recognition ability and has thus emerged as the de facto standard for processing tasks in many research fields. However, the application of Transformer architectures to fault diagnosis remains limited. To overcome these limitations and achieve highly accurate fault diagnosis, a novel Transformer convolution network (TCN) based on transfer learning is proposed. First, signal data are split into fixed-size patches, and the sequence of the linear embeddings of these patches is used as an input to a Transformer encoder. Subsequently, a convolutional neural network with a classifier layer is constructed to decode and classify patterns. The TCN is pretrained in the source domain and fine-tuned in the target domain by using a transfer learning strategy. Experiments to diagnose rotating machinery faults are conducted using bearing and gearbox datasets. The average diagnostic results for four transfer experiments are 99.71%, 99.97%, 99.83%, and 100.00%, and the proposed approach significantly outperforms state-of-the-art methods. The results demonstrate the exceptional robustness and effectiveness of the proposed method.
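The first stage described in the abstract, splitting a signal into fixed-size patches and linearly embedding each patch to form the Transformer encoder's input sequence, can be sketched as follows. This is a minimal illustration in NumPy under stated assumptions: the patch size (64), embedding dimension (32), non-overlapping split, and random projection weights are all placeholders, not the paper's actual settings.

```python
import numpy as np

def patchify(signal, patch_size):
    # Split a 1-D signal into non-overlapping fixed-size patches,
    # dropping any trailing remainder (one simple convention; the
    # paper's exact splitting scheme is not specified in the abstract).
    n_patches = len(signal) // patch_size
    return signal[: n_patches * patch_size].reshape(n_patches, patch_size)

def linear_embed(patches, weight, bias):
    # Project each patch to the model dimension:
    # (n_patches, patch_size) @ (patch_size, d_model) + (d_model,)
    return patches @ weight + bias

rng = np.random.default_rng(0)
signal = rng.standard_normal(1024)          # stand-in for one vibration record
patches = patchify(signal, patch_size=64)   # shape (16, 64)

# Hypothetical learned projection; in the TCN this would be a trained layer.
weight = rng.standard_normal((64, 32))
bias = np.zeros(32)
tokens = linear_embed(patches, weight, bias)  # shape (16, 32)

print(patches.shape, tokens.shape)
```

The resulting `tokens` array plays the role of the token sequence fed to the Transformer encoder; in the full model a CNN with a classifier layer would then decode the encoder output, and the whole network would be pretrained on the source domain and fine-tuned on the target domain.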