Keywords: ODEs, artificial neural networks, computer science, ordinary differential equations, back-propagation, nonlinear systems, solvers, recurrent neural networks, applied mathematics, gradient descent, differential equations, mathematical optimization, Bayesian optimization, algorithms, artificial intelligence, mathematics, mathematical analysis, programming languages, physics, quantum mechanics
Authors
Marios Mattheakis, H. Joy, Pavlos Protopapas
Identifier
DOI: 10.1142/s0218213023500306
Abstract
There is a wave of interest in using physics-informed neural networks for solving differential equations. Most existing methods are based on feed-forward networks, while recurrent neural network solvers have not been extensively explored. We introduce a reservoir computing (RC) architecture, an echo-state recurrent neural network capable of discovering approximate solutions that satisfy ordinary differential equations (ODEs). We suggest an approach to calculate time derivatives of recurrent neural network outputs without using back-propagation. The internal weights of an RC are fixed, while only a linear output layer is trained, yielding efficient training. However, RC performance strongly depends on finding the optimal hyper-parameters, which is a computationally expensive process. We use Bayesian optimization to discover optimal sets in a high-dimensional hyper-parameter space efficiently and numerically show that one set is robust and can be transferred to solve an ODE for different initial conditions and time ranges. A closed-form formula for the optimal output weights is derived to solve first-order linear equations in a one-shot backpropagation-free learning process. We extend the RC approach by solving nonlinear systems of ODEs using a hybrid optimization method consisting of gradient descent and Bayesian optimization. Evaluation of linear and nonlinear systems of equations demonstrates the efficiency of the RC ODE solver.
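The core idea in the abstract, a fixed random reservoir whose linear readout is fit in closed form to satisfy a first-order linear ODE, can be illustrated with a minimal NumPy sketch. This is an assumption-laden simplification, not the paper's exact method: the reservoir update, the trial-solution form, and the use of finite differences for the time derivative (the paper computes derivatives of the RNN output without back-propagation by other means) are all choices made here for brevity. All names and hyper-parameter values below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical reservoir hyper-parameters; in the paper these are
# tuned with Bayesian optimization rather than set by hand.
N = 200      # reservoir neurons
rho = 0.9    # spectral radius (kept below 1 for the echo-state property)

# Fixed random internal weights: never trained.
W = rng.normal(size=(N, N))
W *= rho / np.max(np.abs(np.linalg.eigvals(W)))
W_in = rng.uniform(-1.0, 1.0, size=N)

def run_reservoir(ts):
    """Drive the reservoir with time t as input; collect hidden states."""
    h = np.zeros(N)
    states = []
    for t in ts:
        h = np.tanh(W @ h + W_in * t)
        states.append(h.copy())
    return np.array(states)  # shape (T, N)

# Solve dx/dt = -x, x(0) = 1 on [0, 2] in a least-squares sense.
ts = np.linspace(0.0, 2.0, 200)
dt = ts[1] - ts[0]
H = run_reservoir(ts)

# Trial solution u(t) = x0 + t * (H @ w) satisfies the initial condition
# by construction. Its ODE residual u' + u is linear in w, so the output
# weights w have a closed-form least-squares solution -- no back-propagation.
x0 = 1.0
G = ts[:, None] * H                  # u(t) = x0 + G @ w
dG = np.gradient(G, dt, axis=0)      # time derivative (finite differences here)
A = dG + G                           # residual operator applied to G @ w
b = -x0 * np.ones_like(ts)           # residual must cancel the constant term
w, *_ = np.linalg.lstsq(A, b, rcond=None)

u = x0 + G @ w
err = np.max(np.abs(u - np.exp(-ts)))
```

The trial-solution trick (adding `x0` and multiplying the readout by `t`) hard-wires the initial condition, so only the differential equation itself enters the loss, which is what makes the one-shot linear solve possible for linear ODEs.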