Keywords: Computer science, Time series, Kernel (algebra), Scalability, Algorithm, Benchmark (surveying), Series (stratigraphy), Multivariate statistics, Data mining, Machine learning, Mathematics, Paleontology, Geodesy, Combinatorics, Database, Biology, Geography
Authors
Zhao Yanjun, Zeyuan Ma, Tian Zhou, Mengni Ye, Liang Sun, Yi Qian
Identifier
DOI:10.1145/3583780.3615136
Abstract
Transformer-based models have emerged as promising tools for time series forecasting. However, these models struggle to make accurate predictions for long input time series: they fail to capture long-range dependencies within the data, and long input sequences typically lead to large model sizes and high time complexity. To address these limitations, we present GCformer, which combines a structured global convolutional branch for processing long input sequences with a local Transformer-based branch for capturing short-term, recent signals. We introduce a cohesive framework for the global convolution kernel, utilizing three distinct parameterization methods. The structured convolutional kernel in the global branch is specifically designed with sublinear complexity, enabling efficient and effective processing of long and noisy input signals. Empirical studies on six benchmark datasets demonstrate that GCformer outperforms state-of-the-art methods, reducing MSE on multivariate time series benchmarks by 4.38% and model parameters by 61.92%. In particular, the global convolutional branch can serve as a plug-in block to enhance the performance of other models, including various recently published Transformer-based models, with an average improvement of 31.93%. Our code is publicly available at https://github.com/Yanjun-Zhao/GCformer.
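The dual-branch idea in the abstract can be illustrated with a minimal numpy sketch. This is not the authors' implementation (see their repository for that): the FFT-based circular convolution stands in for the structured global convolution branch, `decayed_kernel` is one hypothetical parameterization with a parameter count independent of the input length (echoing the sublinear-complexity claim), and a simple mean over a recent window stands in for the local Transformer-based branch.

```python
import numpy as np

def global_conv_branch(x, kernel):
    # Global branch stand-in: circular convolution of the full input with a
    # long kernel, computed via FFT in O(L log L) time for a length-L series.
    L = len(x)
    return np.fft.irfft(np.fft.rfft(x) * np.fft.rfft(kernel, n=L), n=L)

def decayed_kernel(L, n_params=8, alpha=0.01, seed=0):
    # Hypothetical kernel parameterization: n_params weights tiled to length L
    # and damped by an exponential decay, so the number of learnable
    # parameters does not grow with the input length.
    rng = np.random.default_rng(seed)
    w = rng.standard_normal(n_params)
    k = np.tile(w, int(np.ceil(L / n_params)))[:L]
    return k * np.exp(-alpha * np.arange(L))

def gcformer_sketch(x, window=8):
    # Combine both branches: a global summary of the whole history plus a
    # local statistic of the most recent window (stand-in for the local
    # Transformer branch). The real model learns how to fuse the two.
    g = global_conv_branch(x, decayed_kernel(len(x)))[-1]
    local = x[-window:].mean()
    return g + local
```

Usage: `gcformer_sketch(np.sin(np.arange(512) / 8.0))` returns a one-step-ahead scalar. The FFT route matters because a naive length-L convolution over a long input would cost O(L^2), which is exactly the overhead the global branch is designed to avoid.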