Computer science
Fine-tuning
Transformer
Language model
Artificial intelligence
Coding (set theory)
Machine learning
Voltage
Programming language
Quantum mechanics
Physics
Set (abstract data type)
Authors
Menglin Jia, Luming Tang, Bor-Chun Chen, Claire Cardie, Serge Belongie, Bharath Hariharan, Ser-Nam Lim
Identifier
DOI:10.1007/978-3-031-19827-4_41
Abstract
The current modus operandi in adapting pre-trained models involves updating all the backbone parameters, i.e., full fine-tuning. This paper introduces Visual Prompt Tuning (VPT) as an efficient and effective alternative to full fine-tuning for large-scale Transformer models in vision. Taking inspiration from recent advances in efficiently tuning large language models, VPT introduces only a small amount (less than 1% of model parameters) of trainable parameters in the input space while keeping the model backbone frozen. Via extensive experiments on a wide variety of downstream recognition tasks, we show that VPT achieves significant performance gains compared to other parameter-efficient tuning protocols. Most importantly, VPT even outperforms full fine-tuning in many cases across model capacities and training data scales, while reducing per-task storage cost. Code is available at github.com/kmnp/vpt.
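The core idea from the abstract can be sketched with a toy example: learnable prompt tokens are prepended to the patch-token sequence fed into a frozen Transformer, so only the prompts (plus a task head) are trained. This is a minimal NumPy sketch, not the authors' implementation; all dimensions (ViT-B/16-like embedding size, patch count, prompt count, parameter count) are illustrative assumptions.

```python
import numpy as np

# Hypothetical ViT-B/16-style dimensions (assumptions, for illustration only)
embed_dim = 768               # token embedding size
num_patches = 196             # 14x14 patches for a 224x224 image
num_prompts = 50              # number of learnable prompt tokens
backbone_params = 86_000_000  # rough ViT-B parameter count (kept frozen)

# Learnable prompts live in the input space; everything else is frozen.
prompts = np.random.randn(num_prompts, embed_dim) * 0.02   # trainable
cls_token = np.zeros((1, embed_dim))                       # frozen
patch_tokens = np.random.randn(num_patches, embed_dim)     # frozen embeddings

# Input sequence handed to the frozen Transformer: [CLS; prompts; patches]
sequence = np.concatenate([cls_token, prompts, patch_tokens], axis=0)
print(sequence.shape)  # (1 + 50 + 196, 768) = (247, 768)

# The trainable prompts are a tiny fraction of the backbone's parameters
trainable = prompts.size
print(f"trainable fraction: {trainable / backbone_params:.4%}")
```

With these numbers the prompts add 50 × 768 = 38,400 parameters, well under the "less than 1% of model parameters" figure quoted in the abstract.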