Authors
Yitian Zhang,Liheng Ma,Soumyasundar Pal,Yingxue Zhang,Mark Coates
Source
Journal: Cornell University - arXiv
Date: 2023-01-01
Identifier
DOI:10.48550/arxiv.2311.04147
Abstract
The performance of transformers for time-series forecasting has improved significantly. Recent architectures learn complex temporal patterns by segmenting a time-series into patches and using the patches as tokens. The patch size controls the ability of transformers to learn the temporal patterns at different frequencies: shorter patches are effective for learning localized, high-frequency patterns, whereas mining long-term seasonalities and trends requires longer patches. Inspired by this observation, we propose a novel framework, Multi-resolution Time-Series Transformer (MTST), which consists of a multi-branch architecture for simultaneous modeling of diverse temporal patterns at different resolutions. In contrast to many existing time-series transformers, we employ relative positional encoding, which is better suited for extracting periodic components at different scales. Extensive experiments on several real-world datasets demonstrate the effectiveness of MTST in comparison to state-of-the-art forecasting techniques.
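The core idea in the abstract, segmenting a series into patches and letting each branch use its own patch size, can be illustrated with a minimal sketch. This is not the authors' implementation; the patch sizes and stride choice below are hypothetical, and `patchify` is an illustrative helper, not part of MTST.

```python
import numpy as np

def patchify(series: np.ndarray, patch_len: int, stride: int) -> np.ndarray:
    """Segment a 1-D series into (possibly overlapping) patches used as tokens."""
    n = series.shape[0]
    starts = range(0, n - patch_len + 1, stride)
    return np.stack([series[s:s + patch_len] for s in starts])

# Multi-resolution tokenization: each branch tokenizes the same input with a
# different patch size. Short patches capture localized, high-frequency
# patterns; long patches expose long-term seasonalities and trends.
series = np.sin(np.linspace(0, 20 * np.pi, 336))   # toy input, length 336
branch_patch_sizes = [8, 16, 32]                   # hypothetical per-branch sizes
tokens = {p: patchify(series, patch_len=p, stride=p) for p in branch_patch_sizes}
for p, t in tokens.items():
    # with stride == patch_len the patches tile the series: (336 // p) tokens of length p
    print(p, t.shape)
```

Each branch's token matrix would then be fed to its own transformer encoder, with the multi-branch outputs combined for forecasting.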