Keywords
Computer science, Language model, Closed captioning, Task (project management), Inference, Transformer, Artificial intelligence, Machine learning, Prefix, Coding (set theory), Point (geometry), Natural language processing, Set (abstract data type), Image (mathematics), Physics, Philosophy, Geometry, Quantum mechanics, Linguistics, Economics, Voltage, Management, Programming language, Mathematics
Authors
Woojeong Jin, Yu Cheng, Yelong Shen, Weizhu Chen, Xiang Ren
Identifier
DOI: 10.18653/v1/2022.acl-long.197
Abstract
Large pre-trained vision-language (VL) models can learn a new task from a handful of examples and generalize to new tasks without fine-tuning. However, these VL models are hard to deploy for real-world applications due to their impractically large sizes and slow inference speed. To address this limitation, we study prompt-based low-resource learning of VL tasks with our proposed method, FewVLM, which is relatively small compared to recent few-shot learners. For FewVLM, we pre-train a sequence-to-sequence transformer model with prefix language modeling (PrefixLM) and masked language modeling (MaskedLM). Furthermore, we analyze the effect of diverse prompts on few-shot tasks. Experimental results on VQA show that FewVLM with prompt-based learning outperforms Frozen, which is 31x larger than FewVLM, by 18.2 percentage points, and achieves results comparable to a 246x larger model, PICa. In our analysis, we observe that (1) prompts significantly affect zero-shot performance but only marginally affect few-shot performance, (2) models with noisy prompts learn as quickly as with hand-crafted prompts given larger training data, and (3) MaskedLM helps VQA tasks while PrefixLM boosts captioning performance. Our code is publicly available at https://github.com/woojeongjin/FewVLM
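To make the two pre-training objectives named in the abstract concrete, here is a minimal, illustrative sketch of how PrefixLM and MaskedLM input/target pairs can be built for a sequence-to-sequence model. The function names, the prefix_ratio and mask_ratio parameters, and the T5-style sentinel-token format are assumptions for illustration only, not FewVLM's actual implementation (see the linked repository for that).

```python
# Illustrative sketch of the two pre-training objectives (an assumption,
# not the paper's exact code): building source/target pairs for a
# sequence-to-sequence transformer.
import random

SENTINEL = "<extra_id_{}>"  # T5-style sentinel token format; assumed here


def prefix_lm_example(tokens, prefix_ratio=0.5):
    """PrefixLM: the encoder sees a prefix of the text; the decoder
    must generate the remaining suffix."""
    split = max(1, int(len(tokens) * prefix_ratio))
    source = tokens[:split]   # visible prefix fed to the encoder
    target = tokens[split:]   # suffix the decoder learns to produce
    return source, target


def masked_lm_example(tokens, mask_ratio=0.15):
    """MaskedLM: random tokens are replaced by sentinels in the input;
    the target pairs each sentinel with the token it hid."""
    source, target = [], []
    sentinel_id = 0
    for tok in tokens:
        if random.random() < mask_ratio:
            source.append(SENTINEL.format(sentinel_id))
            target.extend([SENTINEL.format(sentinel_id), tok])
            sentinel_id += 1
        else:
            source.append(tok)
    return source, target


if __name__ == "__main__":
    text = "a dog is playing with a ball in the park".split()
    print(prefix_lm_example(text))   # e.g. prefix -> suffix generation
    print(masked_lm_example(text))   # e.g. sentinel-masked reconstruction
```

Per the abstract's finding (3), the suffix-generation form of PrefixLM aligns naturally with open-ended captioning, while the fill-in-the-blank form of MaskedLM aligns with short-answer VQA.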