Regularization (linguistics)
Generalization
Computer science
Artificial intelligence
Artificial neural network
Deep neural network
Early stopping
Deep learning
Generalization error
Convolutional neural network
Simplicity (philosophy)
Algorithm
Machine learning
Mathematics
Mathematical analysis
Philosophy
Epistemology
Authors
Chiyuan Zhang, Samy Bengio, Moritz Hardt, Benjamin Recht, Oriol Vinyals
Source
Journal: Cornell University - arXiv
Date: 2016-01-01
Citations: 2498
Identifier
DOI: 10.48550/arxiv.1611.03530
Abstract
Despite their massive size, successful deep artificial neural networks can exhibit a remarkably small difference between training and test performance. Conventional wisdom attributes small generalization error either to properties of the model family, or to the regularization techniques used during training. Through extensive systematic experiments, we show how these traditional approaches fail to explain why large neural networks generalize well in practice. Specifically, our experiments establish that state-of-the-art convolutional networks for image classification trained with stochastic gradient methods easily fit a random labeling of the training data. This phenomenon is qualitatively unaffected by explicit regularization, and occurs even if we replace the true images by completely unstructured random noise. We corroborate these experimental findings with a theoretical construction showing that simple depth two neural networks already have perfect finite sample expressivity as soon as the number of parameters exceeds the number of data points as it usually does in practice. We interpret our experimental findings by comparison with traditional models.
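The finite-sample expressivity result mentioned in the abstract can be illustrated concretely. The sketch below (an illustration in the spirit of the paper's construction, not the authors' code; all variable names are chosen here) builds a depth-two ReLU network with 2n + d parameters that interpolates n arbitrary labels exactly: project the data to one dimension, place ReLU thresholds between consecutive sorted projections so the design matrix is lower triangular, and solve for the output weights.

```python
import numpy as np

rng = np.random.default_rng(0)

n, d = 8, 3                      # n samples in d dimensions
X = rng.normal(size=(n, d))      # sample points
y = rng.normal(size=n)           # arbitrary real-valued targets

# Project the data to one dimension; for a Gaussian direction the
# projections are distinct with probability 1.
a = rng.normal(size=d)
z = X @ a
order = np.argsort(z)
z_sorted = z[order]

# Place thresholds b_j so that ReLU(z_i - b_j) > 0 exactly when i >= j.
b = np.empty(n)
b[0] = z_sorted[0] - 1.0
b[1:] = (z_sorted[:-1] + z_sorted[1:]) / 2

# The design matrix A_ij = ReLU(z_i - b_j) is lower triangular with a
# positive diagonal, hence invertible: solve for output weights exactly.
A = np.maximum(0.0, z_sorted[:, None] - b[None, :])
c = np.linalg.solve(A, y[order])

def f(x):
    """Depth-two ReLU network: f(x) = sum_j c_j * ReLU(a.x - b_j)."""
    return np.maximum(0.0, x @ a - b) @ c

# The network (d + n + n = 2n + d parameters) fits all n points exactly.
preds = np.array([f(xi) for xi in X])
print(np.allclose(preds, y))
```

Because the labels y were drawn at random, this also mirrors the experimental point: once parameters outnumber data points, a small network can memorize any labeling, so parameter count alone cannot explain generalization.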