HumanEval-V: Evaluating Visual Understanding and Reasoning Abilities of Large Multimodal Models Through Coding Tasks

Keywords: Coding (social science), Visual reasoning, Computer science, Predictive coding, Cognitive psychology, Cognitive science, Psychology, Artificial intelligence, Mathematics, Statistics
Authors
Fengji Zhang, Lisa Y. Wu, Hui‐Yu Bai, Guancheng Lin, Xiao Li, Xiao Yu, Yue Wang, Bei Chen, Jacky Keung
Source
Venue: arXiv (Cornell University)
Identifier
DOI: 10.48550/arxiv.2410.12381
Abstract

Coding tasks have been valuable for evaluating Large Language Models (LLMs), as they demand the comprehension of high-level instructions, complex reasoning, and the implementation of functional programs -- core capabilities for advancing Artificial General Intelligence. Despite the progress in Large Multimodal Models (LMMs), which extend LLMs with visual perception and understanding capabilities, there remains a notable lack of coding benchmarks that rigorously assess these models, particularly in tasks that emphasize visual reasoning. To address this gap, we introduce HumanEval-V, a novel and lightweight benchmark specifically designed to evaluate LMMs' visual understanding and reasoning capabilities through code generation. HumanEval-V includes 108 carefully crafted, entry-level Python coding tasks derived from platforms like CodeForces and Stack Overflow. Each task is adapted by modifying the context and algorithmic patterns of the original problems, with visual elements redrawn to ensure distinction from the source, preventing potential data leakage. LMMs are required to complete the code solution based on the provided visual context and a predefined Python function signature outlining the task requirements. Every task is equipped with meticulously handcrafted test cases to ensure a thorough and reliable evaluation of model-generated solutions. We evaluate 19 state-of-the-art LMMs using HumanEval-V, uncovering significant challenges. Proprietary models like GPT-4o achieve only 13% pass@1 and 36.4% pass@10, while open-weight models with 70B parameters score below 4% pass@1. Ablation studies further reveal the limitations of current LMMs in vision reasoning and coding capabilities. These results underscore key areas for future research to enhance LMMs' capabilities. We have open-sourced our code and benchmark at https://github.com/HumanEval-V/HumanEval-V-Benchmark.
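Per the abstract, each task pairs a visual context with a predefined Python function signature and is graded by handcrafted test cases. The sketch below is purely illustrative: the function name, signature, and test are invented for this example; only the three ingredients (image, signature, tests) come from the abstract.

```python
from typing import List

def count_enclosed_regions(grid: List[List[int]]) -> int:
    """Return the number of fully enclosed empty regions in the grid
    shown in the task's diagram. The intended rule must be recovered
    from the visual context; the signature alone underspecifies it."""
    raise NotImplementedError  # to be completed by the LMM under test

# Handcrafted test cases (illustrative) then gate the generated solution:
# assert count_enclosed_regions([[1, 1, 1], [1, 0, 1], [1, 1, 1]]) == 1
```

The reported pass@1 and pass@10 scores are conventionally computed with the unbiased pass@k estimator introduced with the original HumanEval benchmark (Chen et al., 2021). Assuming HumanEval-V follows that convention, a minimal sketch:

```python
import math

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator (Chen et al., 2021): the probability
    that at least one of k samples, drawn without replacement from n
    generations of which c are correct, passes all test cases."""
    if n - c < k:
        return 1.0  # every size-k draw must contain a correct sample
    # 1 - C(n-c, k) / C(n, k), computed as a numerically stable product
    return 1.0 - math.prod((n - c - i) / (n - i) for i in range(k))

# Example: 20 samples per task, 2 of them correct
print(pass_at_k(n=20, c=2, k=1))   # 0.1
print(pass_at_k(n=20, c=2, k=10))  # ~0.763
```

Note how pass@10 can be much higher than pass@1 even with few correct samples, which matches the abstract's GPT-4o figures (13% pass@1 vs. 36.4% pass@10).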