Keywords
Computer science, Artificial intelligence, Generalization, Machine learning, Context, Set (abstract data type), Class, Encoding, Domain, Natural language processing, Programming language, Mathematics, Biology, Mathematical analysis, Paleontology
Authors
Kaiyang Zhou, Jingkang Yang, Chen Change Loy, Ziwei Liu
Identifier
DOI: 10.1109/cvpr52688.2022.01631
Abstract
With the rise of powerful pre-trained vision-language models like CLIP, it becomes essential to investigate ways to adapt these models to downstream datasets. A recently proposed method named Context Optimization (CoOp) introduces the concept of prompt learning—a recent trend in NLP—to the vision domain for adapting pre-trained vision-language models. Specifically, CoOp turns context words in a prompt into a set of learnable vectors and, with only a few labeled images for learning, can achieve huge improvements over intensively-tuned manual prompts. In our study we identify a critical problem of CoOp: the learned context is not generalizable to wider unseen classes within the same dataset, suggesting that CoOp overfits base classes observed during training. To address the problem, we propose Conditional Context Optimization (CoCoOp), which extends CoOp by further learning a lightweight neural network to generate for each image an input-conditional token (vector). Compared to CoOp's static prompts, our dynamic prompts adapt to each instance and are thus less sensitive to class shift. Extensive experiments show that CoCoOp generalizes much better than CoOp to unseen classes, even showing promising transferability beyond a single dataset; and yields stronger domain generalization performance as well. Code is available at https://github.com/KaiyangZhou/CoOp.
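To make the abstract's core idea concrete, below is a minimal PyTorch sketch of the mechanism it describes: a set of learnable context vectors shared across classes (the CoOp part), plus a lightweight network that produces one input-conditional token per image, which shifts those context vectors before they are concatenated with class-name embeddings (the CoCoOp extension). This is an illustrative reconstruction, not the official implementation; names such as ConditionalPromptLearner and meta_net, and the 512-dimensional feature sizes, are assumptions, and in practice the resulting prompts would be fed to CLIP's frozen text encoder.

```python
# Sketch of conditional prompt learning as described in the abstract.
# Assumptions: CLIP-style 512-dim image/text features; all names illustrative.
import torch
import torch.nn as nn

class ConditionalPromptLearner(nn.Module):
    def __init__(self, n_ctx=4, ctx_dim=512, feat_dim=512):
        super().__init__()
        # Static, learnable context vectors shared across classes (CoOp).
        self.ctx = nn.Parameter(torch.randn(n_ctx, ctx_dim) * 0.02)
        # Lightweight network mapping an image feature to one
        # input-conditional token (the CoCoOp extension).
        self.meta_net = nn.Sequential(
            nn.Linear(feat_dim, feat_dim // 16),
            nn.ReLU(inplace=True),
            nn.Linear(feat_dim // 16, ctx_dim),
        )

    def forward(self, image_features, class_embeddings):
        # image_features: (B, feat_dim) from the frozen image encoder
        # class_embeddings: (C, L, ctx_dim) embedded class-name tokens
        bias = self.meta_net(image_features)               # (B, ctx_dim)
        # Shift every context vector by the per-image bias, making the
        # prompt instance-conditional rather than static.
        ctx = self.ctx.unsqueeze(0) + bias.unsqueeze(1)    # (B, n_ctx, ctx_dim)
        B, C = ctx.size(0), class_embeddings.size(0)
        ctx = ctx.unsqueeze(1).expand(B, C, -1, -1)
        cls = class_embeddings.unsqueeze(0).expand(B, -1, -1, -1)
        # Prompt per (image, class): [conditional context][class tokens],
        # to be consumed by the frozen text encoder.
        return torch.cat([ctx, cls], dim=2)                # (B, C, n_ctx+L, ctx_dim)

# Usage with dummy shapes: 2 images, 10 classes, class names of length 8.
learner = ConditionalPromptLearner()
prompts = learner(torch.randn(2, 512), torch.randn(10, 8, 512))
print(prompts.shape)  # torch.Size([2, 10, 12, 512])
```

Because only the context vectors and the small meta-network are trained while both CLIP encoders stay frozen, the method remains few-shot friendly, and the per-instance bias is what the abstract credits for the improved robustness to class shift.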