Computer science
Artificial intelligence
Cross entropy
Leverage (statistics)
Hyperparameter
Machine learning
Margin (machine learning)
Embedding
Robustness (evolution)
Feature learning
Pattern recognition (psychology)
Natural language processing
Biochemistry
Chemistry
Gene
Authors
Prannay Khosla, Piotr Teterwak, Chen Wang, Aaron Sarna, Yonglong Tian, Phillip Isola, Aaron Maschinot, Ce Liu, Dilip Krishnan
Source
Journal: Cornell University - arXiv
Date: 2020-01-01
Cited by: 2021
Identifier
DOI: 10.48550/arxiv.2004.11362
Abstract
Contrastive learning applied to self-supervised representation learning has seen a resurgence in recent years, leading to state of the art performance in the unsupervised training of deep image models. Modern batch contrastive approaches subsume or significantly outperform traditional contrastive losses such as triplet, max-margin and the N-pairs loss. In this work, we extend the self-supervised batch contrastive approach to the fully-supervised setting, allowing us to effectively leverage label information. Clusters of points belonging to the same class are pulled together in embedding space, while simultaneously pushing apart clusters of samples from different classes. We analyze two possible versions of the supervised contrastive (SupCon) loss, identifying the best-performing formulation of the loss. On ResNet-200, we achieve top-1 accuracy of 81.4% on the ImageNet dataset, which is 0.8% above the best number reported for this architecture. We show consistent outperformance over cross-entropy on other datasets and two ResNet variants. The loss shows benefits for robustness to natural corruptions and is more stable to hyperparameter settings such as optimizers and data augmentations. Our loss function is simple to implement, and reference TensorFlow code is released at https://t.ly/supcon.
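The abstract describes pulling same-class embeddings together and pushing different-class embeddings apart within a batch. The sketch below is a minimal PyTorch reading of that supervised contrastive objective under stated assumptions; the authors' released reference code is in TensorFlow (at the URL above), and the function name supcon_loss, the default temperature of 0.1, and the assumption that the batch already contains all augmented views are illustrative choices, not taken from that release.

```python
import torch
import torch.nn.functional as F

def supcon_loss(features, labels, temperature=0.1):
    """Minimal sketch of a supervised contrastive (SupCon-style) loss.

    features: (N, D) embeddings; the batch is assumed to already contain
              every augmented view of every image (hypothetical setup).
    labels:   (N,) integer class labels, shared across views of an image.
    """
    device = features.device
    n = features.shape[0]

    # Normalize embeddings so the dot product is a cosine similarity.
    features = F.normalize(features, dim=1)

    # Pairwise similarities scaled by the temperature.
    logits = features @ features.T / temperature

    # Exclude each anchor's comparison with itself.
    self_mask = torch.eye(n, dtype=torch.bool, device=device)
    logits = logits.masked_fill(self_mask, float('-inf'))

    # Positives: other samples in the batch sharing the anchor's label.
    pos_mask = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~self_mask

    # log-probability of each candidate given the anchor (softmax over the row).
    log_prob = F.log_softmax(logits, dim=1)

    # Average the log-probabilities over each anchor's positives,
    # then average over anchors that have at least one positive.
    pos_counts = pos_mask.sum(dim=1)
    valid = pos_counts > 0
    loss_per_anchor = -(log_prob * pos_mask.float()).sum(dim=1)
    return (loss_per_anchor[valid] / pos_counts[valid]).mean()

# Example: 8 embeddings (e.g., two views of 4 images) with 4 classes.
z = torch.randn(8, 128)
y = torch.tensor([0, 1, 2, 3, 0, 1, 2, 3])
print(supcon_loss(z, y))
```

Averaging the log-probabilities over the positives outside the logarithm corresponds to the better-performing of the two loss variants the abstract says the authors compare; a network trained this way is typically followed by a linear classifier on the frozen embeddings.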