Keywords: Computer science; Transfer learning; Artificial intelligence; Classifier (UML); Transformer; Deep neural network; Artificial neural network; Machine learning; Labeled data; Task (project management); Architecture; Class (philosophy); Contextual image classification; Domain (mathematical analysis); Pattern recognition (psychology); Image (mathematics); Art; Mathematical analysis; Physics; Mathematics; Management; Quantum mechanics; Voltage; Economics; Visual arts
Authors
Carl Doersch, Ankush Gupta, Andrew Zisserman
Source
Journal: Cornell University - arXiv
Date: 2020-07-22
Citations: 1
Identifier
DOI: 10.48550/arxiv.2007.11498
Abstract
Given new tasks with very little data, such as new classes in a classification problem or a domain shift in the input, performance of modern vision systems degrades remarkably quickly. In this work, we illustrate how the neural network representations which underpin modern vision systems are subject to supervision collapse, whereby they lose any information that is not necessary for performing the training task, including information that may be necessary for transfer to new tasks or domains. We then propose two methods to mitigate this problem. First, we employ self-supervised learning to encourage general-purpose features that transfer better. Second, we propose a novel Transformer-based neural network architecture called CrossTransformers, which can take a small number of labeled images and an unlabeled query, find coarse spatial correspondence between the query and the labeled images, and then infer class membership by computing distances between spatially-corresponding features. The result is a classifier that is more robust to task and domain shift, which we demonstrate via state-of-the-art performance on Meta-Dataset, a recent dataset for evaluating transfer from ImageNet to many other vision datasets.
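The classification step the abstract describes (align support features to the query spatially, then compare distances) can be sketched roughly as below. This is a hedged illustration only, not the authors' implementation: the real CrossTransformers uses separately learned key, query, and value projections and operates on CNN feature maps, whereas this numpy sketch uses the raw features directly for attention and distance. All function names (`crosstransformer_distance`, `classify`) are hypothetical.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def crosstransformer_distance(query, support):
    """Distance from one query image to one class's support set.

    query:   (Hq*Wq, d)   spatial features of the unlabeled query image
    support: (N, Hs*Ws, d) spatial features of N labeled images of one class

    Sketch of the idea in the abstract: each query location attends over
    all support locations of the class (coarse spatial correspondence),
    producing a query-aligned class prototype; the distance is then the
    squared Euclidean distance between spatially-corresponding features.
    """
    s = support.reshape(-1, support.shape[-1])           # (N*Hs*Ws, d)
    # Attention weights: query locations over support locations.
    attn = softmax(query @ s.T / np.sqrt(query.shape[-1]), axis=-1)
    aligned = attn @ s                                   # (Hq*Wq, d)
    return float(np.sum((query - aligned) ** 2))

def classify(query, supports_by_class):
    # Predict the class whose aligned support features are nearest.
    dists = [crosstransformer_distance(query, s) for s in supports_by_class]
    return int(np.argmin(dists))
```

Because the distance is computed between spatially-corresponding features rather than between pooled global embeddings, the comparison is less sensitive to where in the image the object appears, which is part of what the abstract claims makes the classifier more robust to task and domain shift.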