Maxima and minima
Artificial neural network
Generalization
Stochastic gradient descent
Limit (mathematics)
Convergence (economics)
Computer science
Gradient descent
Nonlinear system
Function (biology)
Mathematical optimization
Applied mathematics
Field (mathematics)
Mathematics
Artificial intelligence
Physics
Pure mathematics
Mathematical analysis
Biology
Evolutionary biology
Quantum mechanics
Economics
Economic growth
Authors
Song Mei, Andrea Montanari, Phan-Minh Nguyen
Identifier
DOI: 10.1073/pnas.1806579115
Abstract
Multilayer neural networks are among the most powerful models in machine learning, yet the fundamental reasons for this success defy mathematical understanding. Learning a neural network requires optimizing a nonconvex high-dimensional objective (risk function), a problem that is usually attacked using stochastic gradient descent (SGD). Does SGD converge to a global optimum of the risk or only to a local optimum? In the former case, does this happen because local minima are absent or because SGD somehow avoids them? In the latter, why do local minima reached by SGD have good generalization properties? In this paper, we consider a simple case, namely two-layer neural networks, and prove that, in a suitable scaling limit, SGD dynamics is captured by a certain nonlinear partial differential equation (PDE) that we call distributional dynamics (DD). We then consider several specific examples and show how DD can be used to prove convergence of SGD to networks with nearly ideal generalization error. This description allows for "averaging out" some of the complexities of the landscape of neural networks and can be used to prove a general convergence result for noisy SGD.
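As a rough illustration of the setup the abstract describes, the sketch below trains a two-layer ReLU network in the mean-field parameterization f(x) = (1/N) Σ_i a_i σ(⟨w_i, x⟩) with one-sample SGD on squared loss. The teacher signal, dimensions, step size, and the NumPy implementation are illustrative assumptions, not taken from the paper; the point is only that, in this scaling, the empirical distribution of the neuron parameters (a_i, w_i) is the natural state variable whose large-N evolution the distributional dynamics (DD) PDE is claimed to describe.

```python
import numpy as np

# Illustrative sketch (not the paper's code): a two-layer ReLU network in the
# mean-field parameterization f(x) = (1/N) * sum_i a_i * relu(<w_i, x>),
# trained by one-sample SGD on squared loss. All constants are demo choices.

rng = np.random.default_rng(0)

d, N = 10, 1000            # input dimension, number of hidden units
lr, steps = 0.05, 20_000   # per-neuron step size and number of SGD iterations

# A simple teacher signal, just to have labels: y = <w_star, x>.
w_star = np.zeros(d)
w_star[:3] = 1.0

# Student parameters theta_i = (a_i, w_i), i = 1..N.
a = rng.normal(size=N)
W = rng.normal(size=(N, d)) / np.sqrt(d)

def forward(x):
    """Mean-field two-layer net: average (not sum) over hidden units."""
    return a @ np.maximum(W @ x, 0.0) / N

for _ in range(steps):
    x = rng.normal(size=d)
    y = x @ w_star                       # noiseless teacher label
    h = np.maximum(W @ x, 0.0)           # hidden activations
    err = a @ h / N - y                  # d(loss)/d(prediction) for (1/2)(pred - y)^2
    # Per-neuron SGD step written without the extra 1/N factor (equivalently,
    # a step size of order N on the raw gradient), one common convention in
    # the mean-field scaling.
    grad_a = err * h
    grad_W = err * (a * (h > 0))[:, None] * x[None, :]
    a -= lr * grad_a
    W -= lr * grad_W

# The empirical measure (1/N) * sum_i delta_{(a_i, w_i)} is the quantity whose
# large-N evolution the distributional dynamics (DD) PDE is meant to track.
x_test = rng.normal(size=d)
print("squared error on a fresh sample:", (forward(x_test) - x_test @ w_star) ** 2)
```

Schematically, and up to constants and a time reparameterization, the DD equation evolves the parameter distribution ρ_t by a gradient-flow-type PDE of the form ∂_t ρ_t = ∇_θ · (ρ_t ∇_θ Ψ(θ; ρ_t)), where the potential Ψ is determined by the data distribution; the "noisy SGD" variant mentioned in the abstract roughly corresponds to adding a diffusion term.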