
Research on multitask model of object detection and road segmentation in unstructured road scenes

Authors
Chengfei Gao,Fengkui Zhao,Yong Zhang,Maosong Wan
Source
Journal: Measurement Science and Technology [IOP Publishing]
Volume/Issue: 35 (6): 065113 · Cited by: 4
Identifier
DOI:10.1088/1361-6501/ad35dd
Abstract

With the rapid development of artificial intelligence and computer vision technology, autonomous driving has become an area of intense research interest. The driving scenarios of autonomous vehicles can be divided into structured and unstructured scenarios. Compared with structured scenes, unstructured road scenes lack the constraints of lane lines and traffic rules, and the safety awareness of traffic participants is weaker. Environment perception for autonomous vehicles in unstructured road scenes therefore faces new and higher requirements. Current research rarely integrates object detection and road segmentation so that an autonomous vehicle can perform both tasks simultaneously in unstructured road scenes. To address this, a multitask model for object detection and road segmentation in unstructured road scenes is proposed. By sharing and fusing the object detection model and the road segmentation model, the multitask model performs multi-object detection and road segmentation from a single input image. First, MobileNetV2 replaces the backbone network of YOLOv5, and multi-scale feature fusion is used to exchange information between features at different scales. A road segmentation model is then designed based on the DeepLabV3+ algorithm; its main features are that it also uses MobileNetV2 as the backbone network and is optimized with a binary focal loss function. Then, the object detection and road segmentation algorithms are fused over the shared MobileNetV2 network to obtain the multitask model, which is trained on both a public dataset and the self-built dataset NJFU. The training results demonstrate that the multitask model increases execution speed by approximately 10 frames per second while maintaining the accuracy of object detection and road segmentation. Finally, the multitask model is validated on an actual vehicle.
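
The following is a minimal PyTorch sketch of the shared-backbone multitask idea the abstract describes: a single MobileNetV2 encoder feeding both a YOLO-style detection head and a DeepLabV3+-style segmentation head, with a binary focal loss for the road mask. The head designs, channel sizes, class counts, and focal-loss hyperparameters are illustrative assumptions, not the authors' exact architecture.

# Minimal sketch (assumptions noted inline), not the paper's exact implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision.models import mobilenet_v2


class SharedBackboneMultitask(nn.Module):
    def __init__(self, num_det_classes=3, num_anchors=3):  # class/anchor counts are assumed
        super().__init__()
        # Shared MobileNetV2 feature extractor (1280-channel output, stride 32).
        self.backbone = mobilenet_v2(weights=None).features
        # Illustrative YOLO-style detection head: per-cell box, objectness, class scores.
        self.det_head = nn.Conv2d(1280, num_anchors * (5 + num_det_classes), kernel_size=1)
        # Illustrative DeepLabV3+-style segmentation head for a binary road / non-road mask.
        self.seg_head = nn.Sequential(
            nn.Conv2d(1280, 256, kernel_size=3, padding=1),
            nn.BatchNorm2d(256),
            nn.ReLU(inplace=True),
            nn.Conv2d(256, 1, kernel_size=1),
        )

    def forward(self, x):
        feats = self.backbone(x)            # shared features, computed once per image
        det_out = self.det_head(feats)      # detection grid
        seg_logits = self.seg_head(feats)   # coarse road logits
        seg_out = F.interpolate(seg_logits, size=x.shape[-2:],
                                mode="bilinear", align_corners=False)
        return det_out, seg_out


def binary_focal_loss(logits, targets, alpha=0.25, gamma=2.0):
    # Binary focal loss, the kind of class-imbalance-aware objective the abstract
    # mentions for road segmentation; alpha and gamma are common defaults, not the paper's.
    bce = F.binary_cross_entropy_with_logits(logits, targets, reduction="none")
    p_t = torch.exp(-bce)                               # probability of the true class
    alpha_t = alpha * targets + (1 - alpha) * (1 - targets)
    return (alpha_t * (1 - p_t) ** gamma * bce).mean()


if __name__ == "__main__":
    model = SharedBackboneMultitask()
    image = torch.randn(1, 3, 384, 640)                 # arbitrary input resolution
    det_out, seg_out = model(image)
    mask = torch.randint(0, 2, seg_out.shape).float()   # dummy road mask
    print(det_out.shape, seg_out.shape, binary_focal_loss(seg_out, mask).item())

Sharing one backbone is what the reported speed gain would come from: the expensive feature extraction is computed once per image and reused by both task heads, instead of running two separate networks.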