Computer science
Robustness (evolution)
Artificial intelligence
Coding (set theory)
Noise (video)
Machine learning
Process (computing)
Training set
Training (meteorology)
Pattern recognition (psychology)
Image (mathematics)
Operating system
Physics
Meteorology
Gene
Set (abstract data type)
Chemistry
Programming language
Biochemistry
Authors
Qizhe Xie,Minh-Thang Luong,Eduard Hovy,Quoc V. Le
Source
Venue: arXiv (Cornell University)
Date: 2019-11
Citations: 103
Identifier
DOI: 10.48550/arXiv.1911.04252
Abstract
We present Noisy Student Training, a semi-supervised learning approach that works well even when labeled data is abundant. Noisy Student Training achieves 88.4% top-1 accuracy on ImageNet, which is 2.0% better than the state-of-the-art model that requires 3.5B weakly labeled Instagram images. On robustness test sets, it improves ImageNet-A top-1 accuracy from 61.0% to 83.7%, reduces ImageNet-C mean corruption error from 45.7 to 28.3, and reduces ImageNet-P mean flip rate from 27.8 to 12.2. Noisy Student Training extends the idea of self-training and distillation with the use of equal-or-larger student models and noise added to the student during learning. On ImageNet, we first train an EfficientNet model on labeled images and use it as a teacher to generate pseudo labels for 300M unlabeled images. We then train a larger EfficientNet as a student model on the combination of labeled and pseudo labeled images. We iterate this process by putting back the student as the teacher. During the learning of the student, we inject noise such as dropout, stochastic depth, and data augmentation via RandAugment to the student so that the student generalizes better than the teacher. Models are available at https://github.com/tensorflow/tpu/tree/master/models/official/efficientnet. Code is available at https://github.com/google-research/noisystudent.
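The abstract's training loop (train a teacher on labeled data, pseudo-label a large unlabeled pool, train a noised equal-or-larger student on the union, then swap the student in as the new teacher) can be sketched compactly. Below is a minimal, runnable Python sketch under stated assumptions: scikit-learn MLP classifiers on synthetic data stand in for EfficientNets on ImageNet, Gaussian input perturbation stands in for dropout, stochastic depth, and RandAugment, and a wider hidden layer plays the role of the larger student. None of this is the paper's implementation; see the linked repositories for that.

```python
# Minimal sketch of the Noisy Student loop. The toy data, noise model,
# and MLP stand-ins are illustrative assumptions, not the paper's setup.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

# Toy stand-ins for the labeled set (ImageNet) and the unlabeled pool (300M images).
X_labeled, y_labeled = make_classification(n_samples=500, n_features=20, random_state=0)
X_unlabeled, _ = make_classification(n_samples=5000, n_features=20, random_state=1)

def add_noise(X, scale=0.3):
    """Gaussian input perturbation: a stand-in for the student-side noise
    (dropout, stochastic depth, RandAugment) used in the paper."""
    return X + rng.normal(scale=scale, size=X.shape)

# Step 1: train the initial teacher on labeled data only.
teacher = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500,
                        random_state=0).fit(X_labeled, y_labeled)

for step in range(3):  # Step 4: iterate, putting the student back as the teacher.
    # Step 2: teacher generates pseudo labels on clean unlabeled inputs.
    pseudo_labels = teacher.predict(X_unlabeled)

    # Step 3: train an equal-or-larger student on labeled + pseudo-labeled
    # data, injecting noise into the student's training inputs.
    X_train = np.vstack([add_noise(X_labeled), add_noise(X_unlabeled)])
    y_train = np.concatenate([y_labeled, pseudo_labels])
    student = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500,
                            random_state=step).fit(X_train, y_train)

    teacher = student

print("final student accuracy on the labeled set:",
      teacher.score(X_labeled, y_labeled))
```

As in the paper's setup, the teacher predicts on clean inputs while the student trains on noised ones; that asymmetry is what pushes the student to generalize beyond its teacher rather than merely imitate it.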