Initialization
Artificial neural network
Gradient descent
Partial differential equation
Computer science
Range (aeronautics)
Mathematical optimization
Mathematics
Applied mathematics
Artificial intelligence
Mathematical analysis
Composite material
Materials science
Programming language
Authors
Jian Cheng Wong, Chin Chun Ooi, Abhishek Gupta, Yew-Soon Ong
Source
Journal: IEEE Transactions on Artificial Intelligence
[Institute of Electrical and Electronics Engineers]
Date: 2022-07-19
Volume/Issue: 5 (3): 985-1000
Citations: 51
Identifiers
DOI: 10.1109/tai.2022.3192362
Abstract
A physics-informed neural network (PINN) uses physics-augmented loss functions, e.g., incorporating the residual term from governing partial differential equations (PDEs), to ensure its output is consistent with fundamental physics laws. However, it turns out to be difficult to train an accurate PINN model for many problems in practice. In this paper, we present a novel perspective on the merits of learning in sinusoidal spaces with PINNs. By analyzing behavior at model initialization, we first show that a PINN of increasing expressiveness induces an initial bias around flat output functions. Notably, this initial solution can be very close to satisfying many physics PDEs, i.e., falling into a local minimum of the PINN loss that minimizes only the PDE residuals, while still being far from the true solution that jointly minimizes the PDE residuals and the initial and/or boundary conditions. It is difficult for gradient descent optimization to escape from such a local minimum trap, often causing training to stall. We then prove that a sinusoidal mapping of the inputs, in an architecture we label sf-PINN, is effective in increasing input gradient variability, thereby avoiding entrapment in such deceptive local minima. The level of variability can be effectively modulated to match high-frequency patterns in the problem at hand. A key facet of this paper is a comprehensive empirical study demonstrating the efficacy of learning in sinusoidal spaces with PINNs for a wide range of forward and inverse modelling problems spanning multiple physics domains.
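The abstract's key ingredient is easy to illustrate in code: a PINN whose inputs first pass through a sinusoidal feature map, trained on a loss that combines the PDE residual with boundary terms. Below is a minimal sketch, assuming PyTorch; the toy 1-D Poisson problem u''(x) = -sin(x), the layer sizes, and the scale parameter `sigma` are illustrative assumptions, not the authors' exact sf-PINN configuration or training setup.

```python
import torch
import torch.nn as nn

class SinusoidalPINN(nn.Module):
    """Toy sf-PINN-style model: inputs pass through a sinusoidal
    feature map sin(sigma * (W x + b)) before an ordinary MLP.
    The scale `sigma` modulates input-gradient variability at
    initialization (larger sigma admits higher frequencies)."""
    def __init__(self, in_dim=1, hidden=64, sigma=1.0):
        super().__init__()
        self.sigma = sigma
        self.fourier = nn.Linear(in_dim, hidden)  # mapped first layer
        self.body = nn.Sequential(
            nn.Linear(hidden, hidden), nn.Tanh(),
            nn.Linear(hidden, 1),
        )

    def forward(self, x):
        return self.body(torch.sin(self.sigma * self.fourier(x)))

def pde_residual(model, x):
    """Residual of the toy 1-D Poisson equation u''(x) = -sin(x),
    whose exact solution is u(x) = sin(x)."""
    x = x.requires_grad_(True)
    u = model(x)
    du = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
    d2u = torch.autograd.grad(du.sum(), x, create_graph=True)[0]
    return d2u + torch.sin(x)

model = SinusoidalPINN(sigma=2.0)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x_col = torch.linspace(0.0, 3.14159, 128).unsqueeze(1)  # collocation points
x_bc = torch.tensor([[0.0], [3.14159]])                 # boundary points
u_bc = torch.sin(x_bc)                                  # boundary values

for step in range(1000):
    opt.zero_grad()
    # PDE-residual term plus boundary-condition term: minimizing the
    # residual alone would be satisfied by a flat output function, the
    # deceptive local minimum the abstract describes.
    loss = (pde_residual(model, x_col) ** 2).mean() \
         + ((model(x_bc) - u_bc) ** 2).mean()
    loss.backward()
    opt.step()
```

Raising `sigma` lets the first layer express higher-frequency features, which loosely corresponds to the modulation knob the abstract mentions for matching high-frequency patterns in the problem at hand.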