
Calibrated Self-Rewarding Vision Language Models

Subjects: Computer Science, Psychology, Cognitive Psychology, Artificial Intelligence, Cognitive Science
Authors
Yiyang Zhou, Zhiyuan Fan, Dongjie Cheng, Sihan Yang, Zhaorun Chen, Chenhang Cui, Xiyao Wang, Yun Li, Linjun Zhang, Huaxiu Yao
Source
Journal: Cornell University - arXiv
Identifier
DOI: 10.48550/arxiv.2405.14622
Abstract

Large Vision-Language Models (LVLMs) have made substantial progress by integrating pre-trained large language models (LLMs) and vision models through instruction tuning. Despite these advancements, LVLMs often exhibit the hallucination phenomenon, where generated text responses appear linguistically plausible but contradict the input image, indicating a misalignment between image and text pairs. This misalignment arises because the model tends to prioritize textual information over visual input, even when both the language model and visual representations are of high quality. Existing methods leverage additional models or human annotations to curate preference data and enhance modality alignment through preference optimization. These approaches may not effectively reflect the target LVLM's preferences, making the curated preferences easily distinguishable. Our work addresses these challenges by proposing the Calibrated Self-Rewarding (CSR) approach, which enables the model to self-improve by iteratively generating candidate responses, evaluating the reward for each response, and curating preference data for fine-tuning. In the reward modeling, we employ a step-wise strategy and incorporate visual constraints into the self-rewarding process to place greater emphasis on visual input. Empirical results demonstrate that CSR enhances performance and reduces hallucinations across ten benchmarks and tasks, achieving substantial improvements over existing methods by 7.62%. Our empirical results are further supported by rigorous theoretical analysis, under mild assumptions, verifying the effectiveness of introducing visual constraints into the self-rewarding paradigm. Additionally, CSR shows compatibility with different vision-language models and the ability to incrementally improve performance through iterative fine-tuning. Our data and code are available at https://github.com/YiyangZhou/CSR.
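The CSR loop described above can be illustrated with a short sketch: sample several candidate responses, score each with a calibrated reward that adds a visual-grounding term to the model's own self-reward, then keep the best and worst responses as a preference pair for fine-tuning. This is a minimal, hypothetical rendering, not the authors' implementation: the paper uses a step-wise (sentence-level) reward, while the sketch scores whole responses, and `generate`, `lm_reward`, and `clip_score` are placeholder callables standing in for the LVLM's decoder, its self-evaluation, and a CLIP-style image-text similarity.

```python
def csr_iteration(image, prompt, generate, lm_reward, clip_score,
                  n_candidates=4, alpha=0.5):
    """One Calibrated Self-Rewarding iteration (simplified sketch).

    generate(image, prompt)        -> one candidate response string
    lm_reward(image, prompt, resp) -> the model's own reward for resp
    clip_score(image, resp)        -> image-text similarity (visual constraint)
    alpha weights the visual term; all three callables are assumptions,
    standing in for the target LVLM and a vision-language scorer.
    """
    candidates = [generate(image, prompt) for _ in range(n_candidates)]
    # Calibrated reward: blend the self-reward with a visually grounded
    # score, so linguistically fluent but image-contradicting responses
    # (hallucinations) are pushed toward the "rejected" side.
    scored = [(r, (1 - alpha) * lm_reward(image, prompt, r)
                  + alpha * clip_score(image, r))
              for r in candidates]
    scored.sort(key=lambda pair: pair[1], reverse=True)
    # Best and worst candidates form one preference pair for DPO-style
    # fine-tuning; iterating this loop is the self-improvement cycle.
    return {"prompt": prompt,
            "chosen": scored[0][0],
            "rejected": scored[-1][0]}


if __name__ == "__main__":
    # Toy demo with fixed stand-in scores (no real model involved).
    cands = iter(["a cat on a sofa", "a dog on grass", "a red car"])
    lm = {"a cat on a sofa": 0.9, "a dog on grass": 0.8, "a red car": 0.7}
    vis = {"a cat on a sofa": 0.9, "a dog on grass": 0.2, "a red car": 0.1}
    pair = csr_iteration(
        image="img.jpg",
        prompt="Describe the image.",
        generate=lambda img, p: next(cands),
        lm_reward=lambda img, p, r: lm[r],
        clip_score=lambda img, r: vis[r],
        n_candidates=3,
    )
    print(pair["chosen"], "|", pair["rejected"])
```

In the toy run, the visually grounded candidate wins even though the pure self-reward gap between candidates is small, which is the calibration effect the abstract argues for.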