Keywords
Computer science, Pattern, Hallucination, Benchmark (surveying), Artificial intelligence, Coding (set theory), Context (archaeology), Task (project management), Optics (focusing), Ground truth, Reinforcement learning, Machine learning, Language model, Computer vision, Economics, Set (abstract data type), Biology, Management, Geodesy, Programming language, Geography, Paleontology, Sociology, Social science, Physics, Optics
Authors
Zhiqing Sun,Sheng Shen,Shengcao Cao,Haotian Liu,Chunyuan Li,Yikang Shen,Chuang Gan,Liang-Yan Gui,Yu-Xiong Wang,Yiming Yang,Kurt Keutzer,Trevor Darrell
Source
Journal: Cornell University - arXiv
Date: 2023-09-25
Citations: 11
Identifier
DOI: 10.48550/arxiv.2309.14525
Abstract
Large Multimodal Models (LMM) are built across modalities and the misalignment between two modalities can result in "hallucination", generating textual outputs that are not grounded by the multimodal information in context. To address the multimodal misalignment issue, we adapt the Reinforcement Learning from Human Feedback (RLHF) from the text domain to the task of vision-language alignment, where human annotators are asked to compare two responses and pinpoint the more hallucinated one, and the vision-language model is trained to maximize the simulated human rewards. We propose a new alignment algorithm called Factually Augmented RLHF that augments the reward model with additional factual information such as image captions and ground-truth multi-choice options, which alleviates the reward hacking phenomenon in RLHF and further improves the performance. We also enhance the GPT-4-generated training data (for vision instruction tuning) with previously available human-written image-text pairs to improve the general capabilities of our model. To evaluate the proposed approach in real-world scenarios, we develop a new evaluation benchmark MMHAL-BENCH with a special focus on penalizing hallucinations. As the first LMM trained with RLHF, our approach achieves remarkable improvement on the LLaVA-Bench dataset with the 94% performance level of the text-only GPT-4 (while previous best methods can only achieve the 87% level), and an improvement by 60% on MMHAL-BENCH over other baselines. We open-source our code, model, and data at https://llava-rlhf.github.io.
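The core mechanism named in the abstract, Factually Augmented RLHF, conditions the reward model on extra factual context (e.g., the ground-truth image caption or multiple-choice options) so that fluent but ungrounded responses are harder to reward. Below is a minimal sketch of that idea, assuming a generic PyTorch text encoder and the standard pairwise preference loss; the class and argument names are illustrative assumptions, image features are omitted for brevity, and this is not the authors' released implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FactAugmentedRewardModel(nn.Module):
    """Sketch: score a response conditioned on the prompt AND extra factual context."""

    def __init__(self, text_encoder: nn.Module, hidden_dim: int):
        super().__init__()
        # Assumed interface: text_encoder maps token ids -> (batch, seq, hidden_dim).
        self.text_encoder = text_encoder
        self.reward_head = nn.Linear(hidden_dim, 1)

    def forward(self, prompt_ids, fact_ids, response_ids):
        # Concatenate prompt, factual context (e.g. image caption or
        # ground-truth options), and the candidate response, then read a
        # scalar reward from the final hidden state.
        input_ids = torch.cat([prompt_ids, fact_ids, response_ids], dim=1)
        hidden = self.text_encoder(input_ids)
        return self.reward_head(hidden[:, -1]).squeeze(-1)

def preference_loss(rm, prompt_ids, fact_ids, chosen_ids, rejected_ids):
    # Standard pairwise reward-model objective: the human-preferred
    # (less hallucinated) response should score higher than the rejected one.
    r_chosen = rm(prompt_ids, fact_ids, chosen_ids)
    r_rejected = rm(prompt_ids, fact_ids, rejected_ids)
    return -F.logsigmoid(r_chosen - r_rejected).mean()
```

In the loop the abstract describes, a reward model trained this way would then provide the "simulated human rewards" that the vision-language policy is fine-tuned to maximize with reinforcement learning.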