Reservoir computing
Dynamical systems theory
Computer science
Artificial neural network
Artificial intelligence
Function (biology)
Dynamical system (definition)
Machine learning
Simplicity
Autoregressive model
Recurrent neural network
Mathematics
Econometrics
Physics
Philosophy
Epistemology
Biology
Evolutionary biology
Quantum mechanics
Source
Journal: Chaos
Publisher: American Institute of Physics
Date: 2021-01-01
Volume/Issue: 31 (1)
Cited by: 47
Abstract
Machine learning has become a widely popular and successful paradigm, including in data-driven science and engineering. A major application problem is data-driven forecasting of future states of a complex dynamical system. Artificial neural networks (ANNs) have emerged as a clear leader among many machine learning approaches, and recurrent neural networks (RNNs) are considered especially well suited for forecasting dynamical systems. In this setting, echo state networks (ESNs), or reservoir computers (RCs), stand out for their simplicity and computational advantages. Instead of fully training the network, an RC trains only the read-out weights, by a simple and efficient least squares method. It is perhaps surprising that an RC nonetheless makes high-quality forecasts, competitive with more intensively trained methods, even if not the leader. The question of why and how an RC works at all, despite its randomly selected weights, has remained open. We explicitly connect the RC with linear activation and linear read-out to the well-developed time-series literature on vector autoregression (VAR), which includes representability theorems via the Wold theorem, and which already performs reasonably well for short-term forecasts. In the case of an RC with linear activation and the now-popular quadratic read-out, we explicitly connect to a nonlinear VAR (NVAR), which performs quite well. Further, we associate this paradigm with the now widely popular dynamic mode decomposition (DMD); these three methods are thus, in a sense, different faces of the same underlying approach. We illustrate our observations with popular benchmark examples, including the Mackey-Glass delay differential equation and the Lorenz63 system.
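The mechanism the abstract describes (a fixed random reservoir where only the read-out is trained by least squares) is easy to sketch. The following is a minimal toy illustration, not code from the paper: the function names, the sinusoidal stand-in signal, and all parameter values (reservoir size, spectral radius, sparsity, ridge penalty) are assumptions chosen for demonstration; the paper itself benchmarks on Mackey-Glass and Lorenz63.

```python
import numpy as np

# Minimal echo state network sketch (hypothetical toy, not the paper's code).
# Only the read-out matrix W_out is trained, by ridge-regularized least squares;
# the reservoir matrix A and input weights W_in stay random and fixed.

rng = np.random.default_rng(0)

def make_reservoir(n_res, n_in, spectral_radius=0.9, density=0.1):
    """Sparse random reservoir rescaled to a target spectral radius."""
    A = rng.random((n_res, n_res)) - 0.5
    A[rng.random((n_res, n_res)) > density] = 0.0          # sparsify
    A *= spectral_radius / max(abs(np.linalg.eigvals(A)))  # rescale
    W_in = rng.random((n_res, n_in)) - 0.5
    return A, W_in

def run_reservoir(A, W_in, u, activation=np.tanh):
    """Drive the reservoir with input sequence u; collect states r_t."""
    r = np.zeros(A.shape[0])
    states = []
    for u_t in u:
        r = activation(A @ r + W_in @ u_t)
        states.append(r)
    return np.array(states)

def train_readout(states, targets, ridge=1e-6):
    """Least-squares (ridge) fit of read-out weights: targets ~= states @ W_out."""
    R = states
    return np.linalg.solve(R.T @ R + ridge * np.eye(R.shape[1]), R.T @ targets)

# Toy usage: one-step-ahead forecasting of a scalar signal.
t = np.linspace(0.0, 60.0, 3000)
u = np.sin(t)[:, None]                  # stand-in signal for demonstration
A, W_in = make_reservoir(n_res=200, n_in=1)
states = run_reservoir(A, W_in, u[:-1])
W_out = train_readout(states, u[1:])    # predict u_{t+1} from r_t
pred = states @ W_out
```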
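The linear-RC-to-VAR correspondence the abstract asserts can also be stated as a short derivation sketch; the notation below is assumed for illustration rather than copied from the paper. With identity (linear) activation the reservoir state unrolls into a linear filter of past inputs, so a linear read-out is a VAR, while a quadratic read-out introduces products of past inputs, giving an NVAR:

```latex
% With identity activation, unroll the reservoir recursion (taking r_0 = 0):
\[
  r_{t+1} = A r_t + W_{\mathrm{in}} u_t
  \;\Longrightarrow\;
  r_{t+1} = \sum_{j=0}^{t} A^{j} W_{\mathrm{in}}\, u_{t-j}.
\]
% A linear read-out is then a vector autoregression (VAR) in the inputs:
\[
  \hat{u}_{t+1} = W_{\mathrm{out}}\, r_{t+1}
                = \sum_{j=0}^{t} a_j\, u_{t-j},
  \qquad a_j := W_{\mathrm{out}} A^{j} W_{\mathrm{in}},
\]
% while a quadratic read-out in r_{t+1} introduces products u_{t-j} u_{t-k},
% i.e., a nonlinear VAR (NVAR).
```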