Computer science
Classifier (UML)
Artificial intelligence
Machine learning
Weighting
Transfer learning
Feature learning
Pattern recognition (psychology)
Medicine
Radiology
Authors
Bingyi Kang, Saining Xie, Marcus Rohrbach, Zhicheng Yan, Albert Gordo, Jiashi Feng, Yannis Kalantidis
Source
Journal: Cornell University - arXiv
Date: 2019-01-01
Citations: 372
Identifier
DOI: 10.48550/arxiv.1910.09217
Abstract
The long-tail distribution of the visual world poses great challenges for deep learning based classification models on how to handle the class imbalance problem. Existing solutions usually involve class-balancing strategies, e.g., by loss re-weighting, data re-sampling, or transfer learning from head- to tail-classes, but most of them adhere to the scheme of jointly learning representations and classifiers. In this work, we decouple the learning procedure into representation learning and classification, and systematically explore how different balancing strategies affect them for long-tailed recognition. The findings are surprising: (1) data imbalance might not be an issue in learning high-quality representations; (2) with representations learned with the simplest instance-balanced (natural) sampling, it is also possible to achieve strong long-tailed recognition ability by adjusting only the classifier. We conduct extensive experiments and set new state-of-the-art performance on common long-tailed benchmarks like ImageNet-LT, Places-LT and iNaturalist, showing that it is possible to outperform carefully designed losses, sampling strategies, even complex modules with memory, by using a straightforward approach that decouples representation and classification. Our code is available at https://github.com/facebookresearch/classifier-balancing.
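One concrete way the paper adjusts "only the classifier" on top of frozen representations is τ-normalization: rescaling each class's linear-classifier weight vector by the inverse of its L2 norm raised to a power τ, which shrinks the norm gap between head and tail classes. The snippet below is an illustrative NumPy sketch of that idea, not the authors' released code (see their repository for the actual implementation); the toy weight matrix is invented for demonstration.

```python
import numpy as np

def tau_normalize(weights, tau=1.0):
    """Rescale each row (one class's classifier weights) by 1 / ||w||^tau.

    weights: (num_classes, feat_dim) array of linear-classifier weights.
    tau=0 leaves the weights unchanged; tau=1 makes every row unit-norm,
    removing the norm advantage head classes acquire during joint training.
    """
    norms = np.linalg.norm(weights, axis=1, keepdims=True)
    return weights / (norms ** tau)

# Toy example: under instance-balanced training, head classes tend to end
# up with larger weight norms than tail classes.
w = np.array([[3.0, 4.0],   # "head" class, norm 5.0
              [0.6, 0.8]])  # "tail" class, norm 1.0
w_bal = tau_normalize(w, tau=1.0)
# After tau=1 normalization, both rows have unit norm, so neither class
# is favored purely through weight magnitude at prediction time.
```

At inference one would score features against `w_bal` instead of `w`; intermediate values of τ interpolate between the original and fully normalized classifier.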