Keywords
Series (stratigraphy), Computer science, Security token, Time series, Salience, Language model, Missing data, String (physics), Algorithm, Artificial intelligence, Econometrics, Machine learning, Mathematics, Computer security, Mathematical physics, Biology, Paleontology
Authors
Nate Gruver,Marc Finzi,Shikai Qiu,Andrew Gordon Wilson
Source
Journal: Cornell University - arXiv
Date: 2023-10-11
Citations: 92
Identifier
DOI: 10.48550/arxiv.2310.07820
Abstract
By encoding time series as a string of numerical digits, we can frame time series forecasting as next-token prediction in text. Developing this approach, we find that large language models (LLMs) such as GPT-3 and LLaMA-2 can surprisingly zero-shot extrapolate time series at a level comparable to or exceeding the performance of purpose-built time series models trained on the downstream tasks. To facilitate this performance, we propose procedures for effectively tokenizing time series data and converting discrete distributions over tokens into highly flexible densities over continuous values. We argue the success of LLMs for time series stems from their ability to naturally represent multimodal distributions, in conjunction with biases for simplicity and repetition, which align with the salient features in many time series, such as repeated seasonal trends. We also show how LLMs can naturally handle missing data without imputation through non-numerical text, accommodate textual side information, and answer questions to help explain predictions. While we find that increasing model size generally improves performance on time series, we show GPT-4 can perform worse than GPT-3 because of how it tokenizes numbers and because of poor uncertainty calibration, which is likely the result of alignment interventions such as RLHF.
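To make the encoding described in the abstract concrete, here is a minimal Python sketch of serializing a series into digit strings and parsing a completion back into floats. The function names, the fixed two-decimal precision, the "NaN" pass-through for missing steps, and the `llm.complete` call are illustrative assumptions, not the authors' released implementation (which additionally rescales values and calibrates the output distribution).

```python
import math

def encode_series(values, precision=2, sep=", ", digit_space=True):
    """Render a series as text, e.g. [0.42, 1.05] -> "0 4 2, 1 0 5".

    The decimal point is dropped after rounding to a fixed precision, so each
    timestep is a plain run of digits. With digit_space=True, every digit gets
    its own token under BPE tokenizers like GPT-3's; tokenizers that already
    split digits individually (e.g. LLaMA-2's) do not need the spaces.
    """
    pieces = []
    for v in values:
        if v is None or (isinstance(v, float) and math.isnan(v)):
            pieces.append("NaN")  # missing step stays as plain text, no imputation
            continue
        digits = f"{v:.{precision}f}".replace(".", "")
        pieces.append(" ".join(digits) if digit_space else digits)
    return sep.join(pieces)

def decode_series(text, precision=2, sep=","):
    """Invert encode_series: parse generated text back into floats."""
    values = []
    for chunk in text.split(sep):
        digits = chunk.replace(" ", "").strip()
        if digits.isdigit():
            values.append(int(digits) / 10 ** precision)
        else:
            values.append(float("nan"))  # non-numerical completion -> missing
    return values

history = [0.42, 0.57, float("nan"), 0.61]
prompt = encode_series(history) + ", "  # "0 4 2, 0 5 7, NaN, 0 6 1, "
# completion = llm.complete(prompt)     # hypothetical text-completion API call
# forecast = decode_series(completion)
```

The per-digit spacing is the design choice that matters most here: the abstract's GPT-4 vs. GPT-3 comparison hinges on exactly how each tokenizer chunks numbers.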