mixup: Beyond Empirical Risk Minimization

Keywords
Adversarial system, Computer science, Memorization, Machine learning, Artificial intelligence, Robustness (evolution), Artificial neural network, Generative grammar, Deep neural network, Generalization, Simplicity (philosophy), Shrinkage, Mathematics, Epistemology, Gene, Mathematical analysis, Philosophy, Mathematics education, Biochemistry, Chemistry, Programming language
Authors
Hongyi Zhang, Moustapha Cissé, Yann Dauphin, David López-Paz
Source
Journal: Cornell University - arXiv
Date: 2017-10-25
Citations: 4372
Identifier
DOI: 10.48550/arxiv.1710.09412
Abstract
Large deep neural networks are powerful, but exhibit undesirable behaviors such as memorization and sensitivity to adversarial examples. In this work, we propose mixup, a simple learning principle to alleviate these issues. In essence, mixup trains a neural network on convex combinations of pairs of examples and their labels. By doing so, mixup regularizes the neural network to favor simple linear behavior in-between training examples. Our experiments on the ImageNet-2012, CIFAR-10, CIFAR-100, Google commands and UCI datasets show that mixup improves the generalization of state-of-the-art neural network architectures. We also find that mixup reduces the memorization of corrupt labels, increases the robustness to adversarial examples, and stabilizes the training of generative adversarial networks.
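To make the training principle concrete, below is a minimal NumPy sketch of mixup as the abstract describes it: each batch is replaced by convex combinations of randomly paired examples and their one-hot labels. Drawing the mixing weight from Beta(alpha, alpha) follows the paper's formulation; the function name, the alpha value, and the toy data are illustrative assumptions, not the authors' reference implementation.

import numpy as np

rng = np.random.default_rng(0)

def mixup_batch(x, y, alpha=0.2):
    # mixup: replace each training example with a convex combination of
    # itself and a randomly chosen partner, mixing the labels the same way.
    # x: (batch, ...) inputs; y: (batch, num_classes) one-hot labels.
    lam = rng.beta(alpha, alpha)        # mixing weight lambda ~ Beta(alpha, alpha)
    perm = rng.permutation(len(x))      # random pairing within the batch
    x_mixed = lam * x + (1.0 - lam) * x[perm]
    y_mixed = lam * y + (1.0 - lam) * y[perm]
    return x_mixed, y_mixed

# Toy usage: four 2-D inputs, three classes.
x = rng.normal(size=(4, 2)).astype(np.float32)
y = np.eye(3, dtype=np.float32)[rng.integers(0, 3, size=4)]
x_mixed, y_mixed = mixup_batch(x, y)
print(x_mixed.shape, y_mixed.shape)  # (4, 2) (4, 3)

Training on (x_mixed, y_mixed) in place of the raw batch is what regularizes the network toward the simple linear behavior between training examples that the abstract mentions.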