Computer science
Pooling
Transformer (deep learning)
Multivariate statistics
Artificial intelligence
Architecture
Machine learning
Time series
Data mining
Benchmark (computing)
Pattern recognition
Engineering
Art
Geodesy
Voltage
Geography
Electrical engineering
Visual arts
Authors
Chao Yang,Xianzhi Wang,Lina Yao,Guodong Long,Guandong Xu
Identifier
DOI:10.1016/j.ins.2023.119881
Abstract
Multivariate time series classification is a crucial task with applications in broad areas such as finance, medicine, and engineering. Transformers are promising for time series classification, but, as generic architectures, they have limited capability to capture the distinctive characteristics inherent in time series data and to adapt to diverse architectural requirements. This paper proposes a novel dynamic transformer-based architecture called Dyformer to address these limitations of traditional transformers in multivariate time series classification. Dyformer incorporates hierarchical pooling to decompose time series into subsequences with different frequency components. It then employs Dyformer modules to achieve adaptive learning strategies for different frequency components based on a dynamic architecture. Furthermore, we introduce feature-map-wise attention mechanisms to capture multi-scale temporal dependencies and a joint loss function to facilitate model training. To evaluate the performance of Dyformer, we conducted extensive experiments on 30 benchmark datasets. The results demonstrate that our model consistently outperforms a wide range of state-of-the-art methods and baseline approaches. Our model also copes well with limited training samples when pre-trained.
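The hierarchical pooling step described in the abstract can be illustrated with a minimal sketch. Note this is an assumption-laden illustration, not the paper's implementation: the function name, the choice of average pooling, and the pooling factor are all hypothetical; the paper's exact decomposition operator may differ.

```python
import numpy as np

def hierarchical_pooling(x, num_levels=3, pool_size=2):
    """Decompose a multivariate series of shape (T, C) into a pyramid of
    progressively smoother subsequences via repeated average pooling.
    Each level halves the temporal resolution, retaining lower-frequency
    components (a hypothetical stand-in for Dyformer's decomposition)."""
    levels = [x]
    current = x
    for _ in range(num_levels - 1):
        # Truncate so the length divides evenly, then average adjacent windows.
        T = (current.shape[0] // pool_size) * pool_size
        current = current[:T].reshape(-1, pool_size, current.shape[1]).mean(axis=1)
        levels.append(current)
    return levels

series = np.random.randn(64, 3)          # 64 time steps, 3 variables
subsequences = hierarchical_pooling(series)
print([s.shape for s in subsequences])   # [(64, 3), (32, 3), (16, 3)]
```

Each level of the pyramid would then be fed to its own module, letting the model apply a different learning strategy per frequency band, as the abstract describes.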