Computer science
Modality
Modular design
Flexibility (engineering)
Artificial intelligence
Transformer
Multimodal learning
Machine learning
Deep learning
Human-computer interaction
Social science
Statistics
Physics
Mathematics
Quantum mechanics
Voltage
Sociology
Operating system
Authors
Yaowei Li, Ruijie Quan, Linchao Zhu, Yi Yang
Identifier
DOI:10.1109/cvpr52729.2023.00256
Abstract
Large-scale pre-training has brought unimodal fields such as computer vision and natural language processing to a new era. Following this trend, the size of multimodal learning models constantly increases, leading to an urgent need to reduce the massive computational cost of finetuning these models for downstream tasks. In this paper, we propose an efficient and flexible multimodal fusion method, namely PMF, tailored for fusing unimodally pretrained transformers. Specifically, we first present a modular multimodal fusion framework that exhibits high flexibility and facilitates mutual interactions among different modalities. In addition, we disentangle vanilla prompts into three types in order to learn different optimizing objectives for multimodal learning. It is also worth noting that we propose to add prompt vectors only to the deep layers of the unimodal transformers, thus significantly reducing training memory usage. Experimental results show that our proposed method achieves performance comparable to several other multimodal finetuning methods with less than 3% of the trainable parameters and up to 66% savings in training memory usage.
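The abstract describes the mechanism only at a high level. Below is a minimal PyTorch sketch of the deep-layer prompt idea, not the authors' released PMF code: the class name, layer counts, prompt counts, and the cross-stream prompt exchange are illustrative assumptions (the paper's disentanglement of prompts into three types is omitted here).

```python
import torch
import torch.nn as nn


class PromptedFusionEncoder(nn.Module):
    """Minimal sketch of deep-layer prompt fusion (illustrative, not PMF itself).

    Two frozen unimodal transformer encoders run independently through the
    shallow layers; from `fusion_start` onward, learnable prompt vectors are
    prepended to each sequence so the two streams can exchange information.
    Only the prompts are trained.
    """

    def __init__(self, dim=256, depth=6, fusion_start=4, n_prompts=4, heads=8):
        super().__init__()
        make_layer = lambda: nn.TransformerEncoderLayer(
            d_model=dim, nhead=heads, batch_first=True)
        self.vision_layers = nn.ModuleList(make_layer() for _ in range(depth))
        self.text_layers = nn.ModuleList(make_layer() for _ in range(depth))
        # Freeze the pretrained backbones; prompts are the only trainables,
        # which is how the trainable-parameter count stays small.
        for p in list(self.vision_layers.parameters()) + \
                 list(self.text_layers.parameters()):
            p.requires_grad = False
        self.fusion_start = fusion_start
        # One set of learnable prompts per fused (deep) layer and modality.
        n_fused = depth - fusion_start
        self.vision_prompts = nn.Parameter(0.02 * torch.randn(n_fused, n_prompts, dim))
        self.text_prompts = nn.Parameter(0.02 * torch.randn(n_fused, n_prompts, dim))

    def forward(self, vis, txt):
        # Shallow layers: purely unimodal, no prompts involved.
        for i in range(self.fusion_start):
            vis = self.vision_layers[i](vis)
            txt = self.text_layers[i](txt)
        # Deep layers: prepend prompts, swapped across modalities so each
        # stream attends to tokens carried over from the other one (one
        # plausible fusion scheme; an assumption of this sketch).
        n = self.vision_prompts.shape[1]
        for j, i in enumerate(range(self.fusion_start, len(self.vision_layers))):
            b = vis.size(0)
            vp = self.vision_prompts[j].expand(b, -1, -1)
            tp = self.text_prompts[j].expand(b, -1, -1)
            vis = self.vision_layers[i](torch.cat([tp, vis], dim=1))[:, n:]
            txt = self.text_layers[i](torch.cat([vp, txt], dim=1))[:, n:]
        return vis, txt


# Usage: token shapes are preserved; only the prompt parameters get gradients.
enc = PromptedFusionEncoder()
v = torch.randn(2, 50, 256)   # e.g. 50 vision patch tokens
t = torch.randn(2, 32, 256)   # e.g. 32 text tokens
v_out, t_out = enc(v, t)      # -> (2, 50, 256), (2, 32, 256)
```

The sketch also illustrates the memory claim: because prompts appear only in the deep layers and the backbones are frozen, backpropagation touches only the tail of each network, so far fewer activations must be kept for the backward pass.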