Keywords
Computer science; Time domain; Frequency domain; Transformer; Multivariate statistics; Algorithms; Time series; Computational complexity theory; Artificial intelligence; Machine learning; Physics; Computer vision
Authors
Yushu Chen, Shengzhuo Liu, Jinzhe Yang, Jing Hao, Wenlai Zhao, Guangwen Yang
Source
Journal: Neural Networks (Elsevier BV)
Date: 2024-04-25
Volume/Issue: 176, Article 106334
Citations: 15
Identifiers
DOI: 10.1016/j.neunet.2024.106334
Abstract
To enhance the performance of Transformer models for long-term multivariate forecasting while minimizing computational demands, this paper introduces the Joint Time-Frequency Domain Transformer (JTFT). JTFT combines time- and frequency-domain representations to make predictions. The frequency-domain representation efficiently extracts multi-scale dependencies while maintaining sparsity by using a small number of learnable frequencies. Simultaneously, the time-domain (TD) representation is derived from a fixed number of the most recent data points, strengthening the modeling of local relationships and mitigating the effects of non-stationarity. Importantly, the length of the representation remains independent of the input sequence length, enabling JTFT to achieve linear computational complexity. Furthermore, a low-rank attention layer is proposed to capture cross-dimensional dependencies efficiently, preventing the performance degradation that results from entangling temporal and channel-wise modeling. Experimental results on eight real-world datasets demonstrate that JTFT outperforms state-of-the-art baselines in predictive performance.
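The abstract describes two mechanisms: a joint time-frequency representation whose length depends only on a small number of learnable frequencies plus the most recent time steps (not on the input length), and a low-rank attention layer for cross-channel dependencies. The PyTorch sketch below is a rough illustration of these two ideas, not the authors' implementation; every class, parameter, and default value (JointTimeFreqEmbedding, LowRankChannelAttention, n_freqs, n_recent, rank, and so on) is a hypothetical stand-in.

```python
# Hypothetical sketch of the two ideas in the abstract; not the paper's code.
import math
import torch
import torch.nn as nn


class JointTimeFreqEmbedding(nn.Module):
    """Embed a series as F frequency tokens plus R recent-time tokens,
    so the token count is independent of the input length (assumed design)."""

    def __init__(self, n_freqs=16, n_recent=16, d_model=64):
        super().__init__()
        # Learnable frequencies in cycles per step, initialised in [0, 0.5)
        self.freqs = nn.Parameter(0.5 * torch.rand(n_freqs))
        self.n_recent = n_recent
        self.freq_proj = nn.Linear(2, d_model)  # projects (cos, sin) coefficients
        self.time_proj = nn.Linear(1, d_model)  # projects raw recent values

    def forward(self, x):
        # x: (batch, length, channels)
        b, length, c = x.shape
        t = torch.arange(length, device=x.device, dtype=x.dtype)
        ang = 2 * math.pi * self.freqs[:, None] * t[None, :]          # (F, L)
        # Correlate the series with each learnable frequency: a Fourier
        # transform evaluated at only F points, keeping the representation sparse.
        cos_c = torch.einsum('fl,blc->bfc', torch.cos(ang), x) / length
        sin_c = torch.einsum('fl,blc->bfc', torch.sin(ang), x) / length
        fd = self.freq_proj(torch.stack([cos_c, sin_c], dim=-1))      # (b, F, c, d)
        # The most recent R points capture local, possibly non-stationary structure.
        td = self.time_proj(x[:, -self.n_recent:, :].unsqueeze(-1))   # (b, R, c, d)
        return torch.cat([fd, td], dim=1)  # (b, F+R, c, d): length-independent


class LowRankChannelAttention(nn.Module):
    """Cross-channel attention routed through r inducing vectors,
    costing O(channels * r) instead of O(channels^2)."""

    def __init__(self, d_model=64, rank=4, n_heads=4):
        super().__init__()
        self.inducing = nn.Parameter(torch.randn(rank, d_model))
        self.read = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.write = nn.MultiheadAttention(d_model, n_heads, batch_first=True)

    def forward(self, z):
        # z: (batch, channels, d_model)
        h = self.inducing.unsqueeze(0).expand(z.size(0), -1, -1)
        h, _ = self.read(h, z, z)      # summarise all channels into r slots
        out, _ = self.write(z, h, h)   # redistribute summaries to each channel
        return out


if __name__ == "__main__":
    x = torch.randn(2, 512, 7)                  # 2 series, 512 steps, 7 channels
    tokens = JointTimeFreqEmbedding()(x)        # (2, 32, 7, 64) for any length
    mixed = LowRankChannelAttention()(tokens.mean(dim=1))  # (2, 7, 64)
    print(tokens.shape, mixed.shape)
```

The direct cosine/sine correlation is one simple way to realize a small set of learnable frequencies, and the inducing-vector routing is one standard way to obtain low-rank attention; the paper's actual architecture may differ in both respects.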