Transfer learning
Computer science
Source code
Perspective (graphics)
Diversity (cybernetics)
Coding (set theory)
Data source
Artificial intelligence
Machine learning
Information retrieval
Programming language
Set (abstract data type)
Authors
Saachi Jain,Hadi Salman,Alaa Khaddaj,Eric Wong,Sung Min Park,Aleksander Mądry
Identifiers
DOI: 10.1109/cvpr52729.2023.00352
Abstract
It is commonly believed that in transfer learning, including more pre-training data translates into better performance. However, recent evidence suggests that removing data from the source dataset can actually help too. In this work, we take a closer look at the role of the source dataset's composition in transfer learning and present a framework for probing its impact on downstream performance. Our framework gives rise to new capabilities, such as pinpointing transfer learning brittleness as well as detecting pathologies such as data leakage and the presence of misleading examples in the source dataset. In particular, we demonstrate that removing detrimental datapoints identified by our framework indeed improves transfer learning performance from ImageNet on a variety of target tasks. Code is available at https://github.com/MadryLab/data-transfer
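The idea described in the abstract can be illustrated with a small, self-contained sketch: estimate how much each source class contributes to downstream performance by fitting a linear surrogate over many pre-training runs on random class subsets, then flag negatively contributing classes as candidates for removal. The class counts, subset sizes, synthetic accuracies, and the ordinary-least-squares surrogate below are illustrative assumptions only, not the authors' implementation (see the repository linked above for that).

```python
# Minimal sketch: flag "detrimental" source classes for a target task.
# Synthetic numbers make it runnable end to end; not the paper's code.
import numpy as np

rng = np.random.default_rng(0)

n_source_classes = 50   # hypothetical number of source (pre-training) classes
n_subset_runs = 400     # hypothetical number of pre-training runs on random subsets

# Ground-truth per-class influence on target accuracy (unknown in practice;
# synthesized here so the example can be executed).
true_influence = rng.normal(loc=0.002, scale=0.01, size=n_source_classes)

# Each run pre-trains on a random half of the source classes and records
# downstream (target-task) accuracy after fine-tuning.
masks = rng.random((n_subset_runs, n_source_classes)) < 0.5
target_acc = 0.60 + masks.astype(float) @ true_influence \
             + rng.normal(0.0, 0.005, n_subset_runs)

# Fit a linear surrogate: target accuracy as a function of which
# source classes were included in pre-training.
X = np.column_stack([np.ones(n_subset_runs), masks.astype(float)])
coef, *_ = np.linalg.lstsq(X, target_acc, rcond=None)
estimated_influence = coef[1:]

# Classes with negative estimated influence are flagged as detrimental
# for this target task and would be dropped before the final pre-training.
detrimental = np.where(estimated_influence < 0)[0]
print(f"Flagged {len(detrimental)} of {n_source_classes} source classes as detrimental")
print("Example flagged class indices:", detrimental[:10])
```

In this toy setup, dropping the flagged classes and re-running pre-training on the remaining ones would be the analogue of the paper's finding that removing detrimental datapoints can improve transfer performance.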