Computer science
Artificial intelligence
Series (stratigraphy)
Horizon
Machine learning
Econometrics
Time series
Fusion
Economics
Mathematics
Geology
Philosophy
Paleontology
Geometry
Linguistics
Authors
Bryan Lim, Sercan Ö. Arık, Nicolas Loeff, Tomas Pfister
Identifier
DOI:10.1016/j.ijforecast.2021.03.012
Abstract
Multi-horizon forecasting often contains a complex mix of inputs – including static (i.e. time-invariant) covariates, known future inputs, and other exogenous time series that are only observed in the past – without any prior information on how they interact with the target. Several deep learning methods have been proposed, but they are typically 'black-box' models that do not shed light on how they use the full range of inputs present in practical scenarios. In this paper, we introduce the Temporal Fusion Transformer (TFT) – a novel attention-based architecture that combines high-performance multi-horizon forecasting with interpretable insights into temporal dynamics. To learn temporal relationships at different scales, TFT uses recurrent layers for local processing and interpretable self-attention layers for long-term dependencies. TFT utilizes specialized components to select relevant features and a series of gating layers to suppress unnecessary components, enabling high performance in a wide range of scenarios. On a variety of real-world datasets, we demonstrate significant performance improvements over existing benchmarks, and highlight three practical interpretability use cases of TFT.
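The abstract names two architectural ideas that can be illustrated concretely: a recurrent layer for local temporal processing followed by self-attention for long-range dependencies, and gating layers that can suppress a component's contribution. The following is a minimal PyTorch sketch of those two ideas only; the class names, layer sizes, and the GLU-style gate are illustrative assumptions, not the authors' reference implementation of TFT.

```python
# Minimal sketch (an assumption-laden simplification, not the paper's code) of
# two ideas from the abstract: a GLU-style gated residual connection that can
# suppress a sub-layer's output, and an LSTM (local patterns) followed by
# self-attention (long-range dependencies).
import torch
import torch.nn as nn

class GatedResidual(nn.Module):
    """Gated skip connection: out = LayerNorm(x + GLU(sublayer_out)).

    The sigmoid gate can drive a component's contribution toward zero,
    which is one way an architecture can "suppress unnecessary components"
    (hypothetical simplification of TFT's gating layers).
    """
    def __init__(self, d_model: int):
        super().__init__()
        self.proj = nn.Linear(d_model, 2 * d_model)  # produces values and gates
        self.norm = nn.LayerNorm(d_model)

    def forward(self, x: torch.Tensor, sublayer_out: torch.Tensor) -> torch.Tensor:
        value, gate = self.proj(sublayer_out).chunk(2, dim=-1)
        return self.norm(x + value * torch.sigmoid(gate))

class TinyTemporalBlock(nn.Module):
    """LSTM for local processing, then self-attention for long-range
    dependencies, each wrapped in a gated residual connection."""
    def __init__(self, d_model: int = 32, n_heads: int = 4):
        super().__init__()
        self.lstm = nn.LSTM(d_model, d_model, batch_first=True)
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.gate_local = GatedResidual(d_model)
        self.gate_attn = GatedResidual(d_model)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        local, _ = self.lstm(x)                # local temporal patterns
        h = self.gate_local(x, local)
        attn_out, attn_w = self.attn(h, h, h)  # long-range dependencies
        # attn_w (batch, tgt_len, src_len), averaged over heads: inspecting
        # such weights is one route to the temporal interpretability the
        # abstract highlights.
        return self.gate_attn(h, attn_out)

if __name__ == "__main__":
    x = torch.randn(8, 24, 32)               # (batch, time steps, features)
    print(TinyTemporalBlock()(x).shape)       # torch.Size([8, 24, 32])
```

The gated residual wrapper is the design choice worth noting: because the gate is learned per position and per feature, the network can route around either the recurrent or the attention sub-layer when a dataset does not need it, which is consistent with the abstract's claim of robustness across a wide range of scenarios.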