Memorization
Deep neural network
Computer science
Artificial intelligence
Generalization
Deep learning
Robustness
Regularization
Artificial neural network
Machine learning
Adversarial
Noise
Mathematics
Cognitive psychology
Psychology
Mathematical analysis
Image
Chemistry
Gene
Biochemistry
Authors
Devansh Arpit,Stanisław Jastrzębski,Nicolas Ballas,David W. Krueger,Emmanuel Bengio,Maxinder S Kanwal,Tegan Maharaj,Asja Fischer,Aaron Courville,Yoshua Bengio,Simon Lacoste-Julien
Source
Venue: International Conference on Machine Learning
Date: 2017-08-06
Volume/pages: 70: 233-242
Citations: 234
Abstract
We examine the role of memorization in deep learning, drawing connections to capacity, generalization, and adversarial robustness. While deep networks are capable of memorizing noise data, our results suggest that they tend to prioritize learning simple patterns first. In our experiments, we expose qualitative differences in gradient-based optimization of deep neural networks (DNNs) on noise vs. real data. We also demonstrate that for appropriately tuned explicit regularization (e.g., dropout) we can degrade DNN training performance on noise datasets without compromising generalization on real data. Our analysis suggests that the notions of effective capacity which are dataset independent are unlikely to explain the generalization performance of deep networks when trained with gradient-based methods, because training data itself plays an important role in determining the degree of memorization.
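The noise-vs-real-data contrast described in the abstract can be sketched as follows. This is a minimal, hypothetical setup, not the authors' experimental code: the paper used real image datasets (e.g., MNIST/CIFAR-10) with noised inputs or labels, whereas here a synthetic linear-teacher dataset, a small MLP, and the helper names `make_mlp` and `train` are illustrative assumptions.

```python
# Hypothetical sketch of the experiment pattern: train identical networks on
# structured ("real") labels vs. random ("noise") labels, with and without
# dropout, and compare how well each is fit. Not the authors' code.
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)
N, D, C = 2048, 64, 10                      # samples, input dim, classes (assumed sizes)

X = torch.randn(N, D)
teacher = torch.randn(D, C)
y_real = (X @ teacher).argmax(dim=1)        # "real" data: labels carry learnable structure
y_noise = torch.randint(0, C, (N,))         # "noise" data: labels are purely random

def make_mlp(p_drop=0.0):
    # Small MLP; dropout is the explicit regularizer mentioned in the abstract.
    return nn.Sequential(
        nn.Linear(D, 256), nn.ReLU(), nn.Dropout(p_drop),
        nn.Linear(256, C),
    )

def train(y, p_drop=0.0, epochs=200, lr=0.1):
    model = make_mlp(p_drop)
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    for _ in range(epochs):
        model.train()
        opt.zero_grad()
        loss = F.cross_entropy(model(X), y)
        loss.backward()
        opt.step()
    model.eval()
    with torch.no_grad():
        acc = (model(X).argmax(dim=1) == y).float().mean().item()
    return acc

# Under this toy setup, structured labels are fit much sooner than random ones
# at fixed capacity, and dropout degrades the fit on random labels more than on
# the structured labels, mirroring the qualitative claims of the abstract.
for p in (0.0, 0.5):
    print(f"dropout={p}: real acc={train(y_real, p):.2f}, "
          f"noise acc={train(y_noise, p):.2f}")
```

Given enough training steps, a sufficiently large network will eventually memorize the random labels too; the point of the comparison is the difference in how quickly and how easily that happens relative to real data.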