Predicting visual attention of human drivers boosts the training speed and performance of Autonomous Vehicles

Keywords: gaze; task (project management); human–computer interaction; context (archaeology); computer science; perception; imitation; situational awareness; visual search; reinforcement learning; artificial intelligence; psychology; engineering; aerospace engineering; paleontology; neuroscience; systems engineering; biology; social psychology
Author
A. Aldo Faisal
Source
Journal: Journal of Vision [Association for Research in Vision and Ophthalmology (ARVO)]
Volume/Issue: 21 (9): 2819-2819. Citations: 1
Identifier
DOI:10.1167/jov.21.9.2819
Abstract

Autonomous driving agents tackle a complex, skilled task for which humans train over long periods, relying heavily on sensory, cognitive, situational, and motor skills developed over years of their lives. Much work in autonomous driving focuses on end-to-end learning of driving commands; however, perception and understanding of the environment remain the most critical challenges when evaluating a situation. This is even more relevant in urban scenes, which contain distractions that act as visual noise and hinder the agent from understanding the situation correctly. Humans develop the skill of visually focusing on and identifying task-relevant objects from an early age. Information extracted from human gaze, together with environmental context, can help the agent with this perception problem: it injects a wealth of information about human decision-making behaviour and helps agents focus on task-relevant features while ignoring irrelevant information. We combine human gaze with features of task-relevant instances to enhance perception systems for autonomous driving. Participants (n=9) drove using a virtual reality headset with built-in eye-trackers, providing us with human driving gaze data. Based on this, we build a predictor of human visual attention during driving. Our integrated object detector identifies relevant instances on the road, while the human visual attention prediction determines which objects are most relevant to the human driving policy. We present results of using this architecture with imitation learning and reinforcement learning driving agents and compare them with baseline end-to-end methods, showing improved performance (28% in imitation learning and 11% in reinforcement learning), accelerated training, and explainable behaviour. Our results highlight the potential of human-in-the-loop approaches for autonomous systems which, in contrast to purely end-to-end approaches, let us use human skills to create AI that closes the loop by augmenting humans in an efficient and explainable manner.
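To make the described pipeline concrete, the sketch below shows one plausible way to fuse a predicted gaze-attention heatmap with object-detector outputs so that a driving policy weights task-relevant instances more heavily. This is a minimal illustration under stated assumptions, not the authors' implementation: the function names, tensor shapes, pooling scheme, and the small policy network are all hypothetical.

```python
# Minimal sketch (assumptions, not the paper's code): fuse a predicted gaze
# heatmap with per-object detector features and feed the result to a small
# imitation-learning policy head. Shapes and names are illustrative only.
import torch
import torch.nn as nn


def object_attention_scores(heatmap: torch.Tensor, boxes: torch.Tensor) -> torch.Tensor:
    """Score each detected object by the mean gaze probability inside its box.

    heatmap: (H, W) predicted gaze-attention map with values in [0, 1]
    boxes:   (N, 4) detections as (x1, y1, x2, y2) in pixel coordinates
    returns: (N,) attention score per object
    """
    scores = []
    for x1, y1, x2, y2 in boxes.long().tolist():
        region = heatmap[y1:y2, x1:x2]
        scores.append(region.mean() if region.numel() > 0 else torch.tensor(0.0))
    return torch.stack(scores)


class GazeWeightedPolicy(nn.Module):
    """Hypothetical policy head over gaze-weighted object features."""

    def __init__(self, feat_dim: int = 128, n_actions: int = 2):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(feat_dim, 64), nn.ReLU(),
            nn.Linear(64, n_actions),  # e.g. steering and throttle
        )

    def forward(self, obj_feats: torch.Tensor, attn: torch.Tensor) -> torch.Tensor:
        # Weight each object's feature vector by its gaze-attention score, then pool.
        weights = torch.softmax(attn, dim=0).unsqueeze(-1)  # (N, 1)
        pooled = (weights * obj_feats).sum(dim=0)           # (feat_dim,)
        return self.mlp(pooled)


if __name__ == "__main__":
    heatmap = torch.rand(240, 320)             # stand-in for the gaze predictor's output
    boxes = torch.tensor([[40, 100, 90, 160],  # stand-in for object-detector boxes
                          [200, 90, 260, 150]], dtype=torch.float32)
    obj_feats = torch.randn(2, 128)            # per-object features from the detector
    attn = object_attention_scores(heatmap, boxes)
    action = GazeWeightedPolicy()(obj_feats, attn)
    print("attention:", attn.tolist(), "action:", action.tolist())
```

In an imitation-learning setting, the policy output would be regressed against recorded human driving commands; in reinforcement learning, the same gaze-weighted features could serve as the state representation for the agent.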
