Transitioning to multi-dimensional estimation of visual distraction and its safety effects under automated driving: A spatiotemporal and directional estimation approach

Topics: Distraction · Computer science · Poison control · Visual inspection · Computer vision · Simulation · Psychology · Cognitive psychology · Medicine · Environmental health
Authors
Song Wang, Zhixia Li, Chao Zeng, Jia Hu
Source
Journal: Transportation Research Part C: Emerging Technologies [Elsevier BV]
Volume 153, Article 104212 · Cited by: 1
Identifier
DOI: 10.1016/j.trc.2023.104212
Abstract

Traditional methodologies for measuring visual distraction have been limited in their approaches, treating distraction as a one-dimensional variable: distraction is either categorized as a binary variable or captured through surrogate measurements, such as reaction time in response to non-driving-related tasks, following procedures for measuring distraction drawn from psychology. Furthermore, because human-vehicle interaction (HVI) under automated driving can simultaneously provide safety information and induce visual distraction, the quantitative relationship between distraction and safety remains uninvestigated, owing to these restrained measurement methodologies. As HVI-induced driver distraction plays a critical role in determining safe driving under automated driving, a methodology is needed to comprehensively measure visual distraction under automated driving so that its impact on safety can be further investigated. Therefore, adhering to the definition of visual distraction, this research aims to (1) improve the existing methodology for measuring HVI-induced driver distraction under automated driving by focusing on visual distraction and describing it mathematically along spatiotemporal and directional dimensions; (2) theoretically investigate how the quantified visual distraction influences driving safety; and (3) quantify the relationship between distraction and safety by introducing a “distraction-safety” ratio. Drivers’ fixation behaviors are used to quantify visual distraction, which is measured by the spatiotemporal and directional relationships between drivers’ visual attention and the attention pattern that indicates “zero distraction”. Three newly added performance measures for quantifying HVI-induced visual distraction are real-time magnitudes, real-time directions, and intensities (the cumulation of magnitudes over time).
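The three measures above (real-time magnitude, real-time direction, and intensity as the cumulation of magnitudes over time) can be sketched in code. This is a minimal illustration, not the paper's actual formulation: the 2-D gaze coordinates, the single "zero distraction" reference point, the Euclidean deviation, and the rectangle-rule accumulation are all assumptions made here for clarity.

```python
import math

def distraction_magnitude(gaze_xy, reference_xy):
    """Hypothetical real-time magnitude: Euclidean deviation of the
    driver's fixation from the "zero distraction" attention point.
    (Assumed form; the paper's spatiotemporal measure may differ.)"""
    return math.hypot(gaze_xy[0] - reference_xy[0],
                      gaze_xy[1] - reference_xy[1])

def distraction_direction(gaze_xy, reference_xy):
    """Hypothetical real-time direction of the fixation deviation,
    expressed as an angle in radians."""
    return math.atan2(gaze_xy[1] - reference_xy[1],
                      gaze_xy[0] - reference_xy[0])

def distraction_intensity(gaze_track, reference_xy, dt):
    """Intensity as the cumulation of magnitudes over time:
    a rectangle-rule sum over fixation samples taken every dt seconds."""
    return sum(distraction_magnitude(g, reference_xy)
               for g in gaze_track) * dt
```

Under these assumptions, a fixation track that dwells far from the reference point accumulates intensity quickly, which is the behavior the abstract's "cumulation of magnitudes over time" describes.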
To validate the proposed methods, a verification study was conducted in which recruited drivers tested automated driving under Level 3 automation. Drivers were required to wear an eye-tracker and go through two scenarios interacting with jaywalkers, where takeover actions were needed under two takeover warnings (“visual-only” and “visual & audible”). Following past studies, takeover time was measured to represent distraction level. This study confirms the validity of the proposed methods by revealing significant, positive correlations between the measured distraction intensity and takeover time. Discussion of the quantified visual distraction from the spatiotemporal and directional perspectives further enhances the understanding of HVI-induced driver distraction under automated driving across multiple dimensions. Furthermore, this research reveals how distracted driving affects safety under automated driving through takeover performance. Thresholds of visual distraction magnitude and degree that lead to traffic conflicts were identified. More importantly, a “distraction-safety” ratio that quantifies the relationship between visual distraction and safety benefits is proposed. The results suggest that the “visual & audible” warning is more effective, significantly enhancing safety while accumulating a significantly smaller amount of visual distraction. The contribution of this research is to redefine the methodological approach to measuring visual distraction by (1) measuring distraction multi-dimensionally and (2) establishing a “distraction-safety” system for quantitatively assessing the balance between safety benefits and the HVI-induced visual distraction magnitude in an automated driving environment.
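One way to read the proposed “distraction-safety” ratio is as a benefit-per-cost comparison between HVI designs. The sketch below is an assumption made for illustration (the abstract does not give the ratio's exact form or inputs): it simply divides a design's safety benefit by the visual distraction intensity it accumulates.

```python
def distraction_safety_ratio(safety_benefit, distraction_intensity):
    """Hypothetical "distraction-safety" ratio: safety benefit gained per
    unit of accumulated visual distraction. A larger value favours an HVI
    design whose safety gain outweighs the distraction it induces.
    (Assumed form; the paper defines its own performance measures.)"""
    if distraction_intensity <= 0:
        raise ValueError("distraction intensity must be positive")
    return safety_benefit / distraction_intensity
```

Read this way, the abstract's finding is that the “visual & audible” warning attains a higher ratio than “visual-only”: a larger safety benefit at a smaller accumulated distraction.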
