Learning to Prompt for Vision-Language Models

Keywords: Computer Science, Artificial Intelligence, Machine Learning, Natural Language Processing, Language Models, Representation Learning, Feature Learning, Feature Engineering, Transfer Learning, Deep Learning, Context, Margin (Machine Learning)
Authors
Kaiyang Zhou, Jingkang Yang, Chen Change Loy, Ziwei Liu
Source
Journal: International Journal of Computer Vision [Springer Science+Business Media]
Volume/Issue: 130 (9): 2337-2348 · Cited by: 1138
Identifier
DOI: 10.1007/s11263-022-01653-1
Abstract

Large pre-trained vision-language models like CLIP have shown great potential in learning representations that are transferable across a wide range of downstream tasks. Different from traditional representation learning, which is based mostly on discretized labels, vision-language pre-training aligns images and texts in a common feature space, which allows zero-shot transfer to a downstream task via prompting, i.e., classification weights are synthesized from natural language describing the classes of interest. In this work, we show that a major challenge for deploying such models in practice is prompt engineering, which requires domain expertise and is extremely time-consuming: one needs to spend a significant amount of time on word tuning, since a slight change in wording can have a huge impact on performance. Inspired by recent advances in prompt learning research in natural language processing (NLP), we propose Context Optimization (CoOp), a simple approach specifically for adapting CLIP-like vision-language models to downstream image recognition. Concretely, CoOp models a prompt's context words with learnable vectors while keeping all of the pre-trained parameters fixed. To handle different image recognition tasks, we provide two implementations of CoOp: unified context and class-specific context. Through extensive experiments on 11 datasets, we demonstrate that CoOp requires as few as one or two shots to beat hand-crafted prompts by a decent margin, and that it gains significant improvements over prompt engineering with more shots, e.g., with 16 shots the average gain is around 15% (with the highest reaching over 45%). Despite being a learning-based approach, CoOp achieves superb domain generalization performance compared with the zero-shot model using hand-crafted prompts.
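
To make the mechanism concrete, below is a minimal PyTorch sketch of CoOp's unified-context variant, assuming a frozen CLIP-like model whose text encoder maps a sequence of token embeddings to a single class embedding. Names such as text_encoder, n_ctx, and embed_dim are illustrative assumptions, not the paper's released implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F

class CoOpPromptLearner(nn.Module):
    """Unified context: one set of learnable context vectors shared by all classes."""

    def __init__(self, n_cls, n_ctx=16, embed_dim=512):
        super().__init__()
        # The learnable context vectors are the only trainable parameters;
        # all pre-trained CLIP weights stay frozen.
        self.ctx = nn.Parameter(0.02 * torch.randn(n_ctx, embed_dim))
        self.n_cls = n_cls

    def forward(self, class_token_embeds):
        # class_token_embeds: (n_cls, n_tok, embed_dim), fixed token embeddings
        # of each class name; real code also handles [SOS]/[EOS] and padding.
        ctx = self.ctx.unsqueeze(0).expand(self.n_cls, -1, -1)
        # Prompt = [learnable context][class name tokens].
        return torch.cat([ctx, class_token_embeds], dim=1)

def coop_logits(image_features, prompt_embeds, text_encoder, temperature=0.01):
    # The frozen text encoder turns each prompt into a classification weight,
    # mirroring how zero-shot CLIP synthesizes weights from natural language.
    text_features = text_encoder(prompt_embeds)               # (n_cls, dim)
    image_features = F.normalize(image_features, dim=-1)
    text_features = F.normalize(text_features, dim=-1)
    return image_features @ text_features.t() / temperature   # cosine-similarity logits

Training would minimize cross-entropy over these logits on the few-shot examples, with gradients flowing back only into self.ctx. The class-specific-context variant would instead allocate a separate (n_cls, n_ctx, embed_dim) parameter tensor so that each class learns its own context.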