Topics
Computer science, Artificial intelligence, Classifier (UML), Machine learning, Matching (statistics), Feature learning, Supervised learning, Representation (politics), Transformation (genetics), Natural language processing, Pattern recognition (psychology), Artificial neural network, Mathematics, Gene, Statistics, Politics, Biochemistry, Chemistry, Law, Political science
Authors
Ting Chen, Simon Kornblith, Mohammad Norouzi, Geoffrey E. Hinton
Source
Journal: Cornell University - arXiv
Date: 2020-01-01
Citations: 6105
Identifier
DOI: 10.48550/arxiv.2002.05709
Abstract
This paper presents SimCLR: a simple framework for contrastive learning of visual representations. We simplify recently proposed contrastive self-supervised learning algorithms without requiring specialized architectures or a memory bank. In order to understand what enables the contrastive prediction tasks to learn useful representations, we systematically study the major components of our framework. We show that (1) composition of data augmentations plays a critical role in defining effective predictive tasks, (2) introducing a learnable nonlinear transformation between the representation and the contrastive loss substantially improves the quality of the learned representations, and (3) contrastive learning benefits from larger batch sizes and more training steps compared to supervised learning. By combining these findings, we are able to considerably outperform previous methods for self-supervised and semi-supervised learning on ImageNet. A linear classifier trained on self-supervised representations learned by SimCLR achieves 76.5% top-1 accuracy, which is a 7% relative improvement over previous state-of-the-art, matching the performance of a supervised ResNet-50. When fine-tuned on only 1% of the labels, we achieve 85.8% top-5 accuracy, outperforming AlexNet with 100X fewer labels.
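The abstract's second finding concerns a learnable projection head feeding a contrastive loss computed over pairs of augmented views. Below is a minimal PyTorch sketch of the NT-Xent (normalized temperature-scaled cross-entropy) objective that SimCLR trains with; the function name `nt_xent_loss`, the tensor shapes, and the default `temperature=0.5` are illustrative assumptions, not the authors' reference implementation.

```python
import torch
import torch.nn.functional as F

def nt_xent_loss(z1: torch.Tensor, z2: torch.Tensor,
                 temperature: float = 0.5) -> torch.Tensor:
    """Sketch of an NT-Xent contrastive loss (illustrative, not the official code).

    z1, z2: [N, D] projection-head outputs for two augmented views of the
    same N images. For each sample, the other view of the same image is the
    positive; the remaining 2N - 2 samples in the batch act as negatives.
    """
    n = z1.size(0)
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)  # [2N, D], unit-norm rows
    sim = z @ z.t() / temperature                       # [2N, 2N] scaled cosine similarities
    sim.fill_diagonal_(float("-inf"))                   # a sample is never its own negative
    # Row i's positive sits at index i + n (first half) or i - n (second half).
    targets = torch.cat([torch.arange(n) + n, torch.arange(n)]).to(z.device)
    return F.cross_entropy(sim, targets)
```

In use, `z1` and `z2` would come from passing two independent augmentations of the same batch through the encoder and projection head. Because every non-positive sample in the batch serves as a negative, larger batches supply more negatives per gradient step, which is consistent with the abstract's third finding that contrastive learning benefits from larger batch sizes.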