Title
BLIP-2: Bootstrapping Language-Image Pre-training with Frozen Image Encoders and Large Language Models
Authors
Junnan Li, Dongxu Li, Silvio Savarese, Steven C. H. Hoi
Source
Journal: arXiv (Cornell University)
Date: 2023-01-30
Citations: 898
Identifier
DOI: 10.48550/arxiv.2301.12597
Abstract
The cost of vision-and-language pre-training has become increasingly prohibitive due to end-to-end training of large-scale models. This paper proposes BLIP-2, a generic and efficient pre-training strategy that bootstraps vision-language pre-training from off-the-shelf frozen pre-trained image encoders and frozen large language models. BLIP-2 bridges the modality gap with a lightweight Querying Transformer, which is pre-trained in two stages. The first stage bootstraps vision-language representation learning from a frozen image encoder. The second stage bootstraps vision-to-language generative learning from a frozen language model. BLIP-2 achieves state-of-the-art performance on various vision-language tasks, despite having significantly fewer trainable parameters than existing methods. For example, our model outperforms Flamingo80B by 8.7% on zero-shot VQAv2 with 54x fewer trainable parameters. We also demonstrate the model's emerging capabilities of zero-shot image-to-text generation that can follow natural language instructions.
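The abstract describes the core architectural idea: a small trainable Querying Transformer (Q-Former) bridges a frozen image encoder and a frozen LLM, so only the bridge is updated during pre-training. The following is a minimal PyTorch sketch of that bridging pattern, not the authors' implementation; all module names (QFormerSketch), dimensions, layer counts, and the use of a generic transformer decoder in place of the paper's BERT-style Q-Former with cross-attention are illustrative assumptions.

```python
# Minimal sketch of the BLIP-2 bridging idea from the abstract.
# Assumptions: dimensions, layer counts, and module choices are
# illustrative; the real Q-Former is a BERT-style transformer whose
# learned queries cross-attend to frozen image features.
import torch
import torch.nn as nn

class QFormerSketch(nn.Module):
    """A fixed set of learned query tokens attends to frozen image
    features and is projected into the frozen LLM's embedding space,
    yielding a short 'visual prefix' the LLM can consume."""
    def __init__(self, num_queries=32, dim=768, llm_dim=2560):
        super().__init__()
        self.queries = nn.Parameter(torch.randn(1, num_queries, dim))
        layer = nn.TransformerDecoderLayer(d_model=dim, nhead=12,
                                           batch_first=True)
        self.blocks = nn.TransformerDecoder(layer, num_layers=4)
        self.to_llm = nn.Linear(dim, llm_dim)  # map into LLM token space

    def forward(self, image_feats):
        # Broadcast the shared queries over the batch, then let them
        # cross-attend to the (frozen) image features.
        q = self.queries.expand(image_feats.size(0), -1, -1)
        q = self.blocks(q, image_feats)
        return self.to_llm(q)

# Frozen components: only the Q-Former sketch holds trainable weights.
image_encoder = nn.Identity()  # stand-in for a frozen pre-trained ViT
for p in image_encoder.parameters():
    p.requires_grad = False

qformer = QFormerSketch()
image_feats = torch.randn(2, 197, 768)   # e.g. ViT patch features
vision_tokens = qformer(image_encoder(image_feats))
print(vision_tokens.shape)  # torch.Size([2, 32, 2560]), prefix for a frozen LLM
```

The design choice the sketch highlights is why training stays cheap: gradients flow only through the small query module and projection, while the image encoder and LLM (the bulk of the parameters) remain frozen, matching the abstract's claim of far fewer trainable parameters than end-to-end models such as Flamingo80B.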