Authors
Jie Yin, Chao Meng, Chongfeng Zhang, Ming Zhang, Ting Xue, Tao Zhang
Identifier
DOI: 10.1109/icac57885.2023.10275215
Abstract
Time series are ubiquitous in the real world, and many of them are long time series, such as weather records and industrial production records. The inherent long-term dependence of long time series places extremely high demands on a model's feature-extraction ability, and the sequence length of long time series directly incurs a high computational cost, requiring the model to be efficient. This paper proposes Concatenation-Informer, which contains a Pre-distilling operation and a Concatenation-Attention operation, to predict long time series. The Pre-distilling operation reduces the length of the series and effectively extracts context-related features. The Concatenation-Attention operation concatenates the attention mechanism's input and output to improve parameter efficiency. The total space complexity of Concatenation-Informer is lower than that of the Informer.
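The abstract's core idea of a Concatenation-Attention operation can be illustrated with a minimal sketch: concatenate the attention layer's input with its output along the feature axis, then project back to the model dimension. This is an assumption-laden toy (single head, no learned Q/K/V weights, NumPy instead of a deep-learning framework; the function names `self_attention` and `concatenation_attention` are hypothetical), not the paper's actual implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(x):
    # toy single-head scaled dot-product self-attention;
    # Q = K = V = x (learned projections omitted for brevity)
    d = x.shape[-1]
    scores = softmax(x @ x.T / np.sqrt(d))
    return scores @ x

def concatenation_attention(x, w_out):
    # concatenate the attention input and output along the feature
    # axis, then project back to the model dimension with w_out
    att = self_attention(x)
    return np.concatenate([x, att], axis=-1) @ w_out

# example: sequence of length 8 with model dimension 4
L, d = 8, 4
rng = np.random.default_rng(0)
x = rng.standard_normal((L, d))
w_out = rng.standard_normal((2 * d, d))  # projects 2d -> d
y = concatenation_attention(x, w_out)
print(y.shape)  # (8, 4)
```

The concatenation reuses the input features directly in the layer's output, which is one plausible reading of how the operation "improves the efficiency of parameters" relative to a plain attention block.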