Initialization
Limit (mathematics)
Artificial neural network
Mathematics
Stochastic neural network
Stochastic gradient descent
Restriction
Convergence (economics)
Gradient descent
Stochastic differential equation
Applied mathematics
Asymptotic analysis
Function (biology)
Mathematical optimization
Computer science
Recurrent neural network
Artificial intelligence
Mathematical analysis
Biology
Engineering
Mechanical engineering
Evolutionary biology
Economics
Economic growth
Programming language
Authors
Justin Sirignano, Konstantinos Spiliopoulos
Identifiers
DOI:10.1287/moor.2020.1118
Abstract
We analyze multilayer neural networks in the asymptotic regime of simultaneously (a) large network sizes and (b) large numbers of stochastic gradient descent training iterations. We rigorously establish the limiting behavior of the multilayer neural network output. The limit procedure is valid for any number of hidden layers, and it naturally also describes the limiting behavior of the training loss. The ideas that we explore are to (a) take the limits of each hidden layer sequentially and (b) characterize the evolution of parameters in terms of their initialization. The limit satisfies a system of deterministic integro-differential equations. The proof uses methods from weak convergence and stochastic analysis. We show that, under suitable assumptions on the activation functions and the behavior for large times, the limit neural network recovers a global minimum (with zero loss for the objective function).
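The regime the abstract describes (a multilayer network with mean-field 1/N output scaling, trained by stochastic gradient descent one sample at a time) can be illustrated with a small sketch. This is not the authors' construction: the network sizes, learning rate, activation, and toy data below are all illustrative assumptions, and the per-layer learning-rate scaling with N follows the common mean-field parameterization rather than anything specified in the abstract.

```python
import numpy as np

# Illustrative sketch (assumptions, not the paper's setup): a network with
# two hidden layers of width N, output averaged over units (1/N scaling),
# trained by single-sample SGD on a toy regression task.
rng = np.random.default_rng(0)
N = 50                                # hidden width; the paper takes N large
W1 = rng.normal(size=(N, 1))          # parameters initialized i.i.d.
W2 = rng.normal(size=(N, N)) / np.sqrt(N)
c = rng.normal(size=N)
lr = 0.01

def forward(x):
    h1 = np.tanh(W1 @ x)              # first hidden layer
    h2 = np.tanh(W2 @ h1)             # second hidden layer
    return c @ h2 / N                 # mean-field scaling: average over units

# toy data: a few points of f(x) = sin(x)
xs = np.linspace(-1.0, 1.0, 20)
ys = np.sin(xs)

def mean_loss():
    return float(np.mean([(forward(np.array([x])) - y) ** 2
                          for x, y in zip(xs, ys)]))

loss0 = mean_loss()                   # loss at initialization

for step in range(5000):
    i = rng.integers(len(xs))         # one sample per SGD iteration
    x, y = np.array([xs[i]]), ys[i]
    h1 = np.tanh(W1 @ x)
    h2 = np.tanh(W2 @ h1)
    err = c @ h2 / N - y              # gradient factor of the squared loss
    # backpropagation through both hidden layers
    g_c = err * h2 / N
    g_h2 = err * c / N * (1 - h2**2)
    g_W2 = np.outer(g_h2, h1)
    g_h1 = (W2.T @ g_h2) * (1 - h1**2)
    g_W1 = np.outer(g_h1, x)
    # learning rates scaled by N, as in mean-field parameterizations,
    # so per-unit updates stay O(1) as N grows
    c -= lr * N * g_c
    W2 -= lr * N * g_W2
    W1 -= lr * N * g_W1

loss = mean_loss()                    # training loss after SGD
```

As N and the number of SGD iterations grow jointly, the paper shows that the trajectory of such a network's output is described by a deterministic system of integro-differential equations; the sketch only exhibits the finite-N object being analyzed.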