Topics
Computer science
Segmentation
Artificial intelligence
Inference
Convolutional neural network
Task (project management)
Intersection (aeronautics)
Advanced driver-assistance systems
Computer vision
Feature (linguistics)
Latency (audio)
Pattern recognition (psychology)
Engineering
Telecommunications
Philosophy
Aerospace engineering
Systems engineering
Linguistics
Authors
Tong Luo, Yanyan Chen, Tianyu Luan, Baixiang Cai, Long Chen, Hai Wang
Source
Journal: IEEE Transactions on Transportation Electrification
Date: 2024-03-01
Volume/Issue: 10 (1): 1454-1464
Cited by: 2
Identifier
DOI: 10.1109/tte.2023.3293495
Abstract
High accuracy and quick response of environmental perception systems are crucial for the driving stability and safety of intelligent vehicles. In road scenes, dynamic traffic objects and static pavement information are essential components of perception systems. The current state of the art entails constructing a separate network for each task and integrating outputs under multiple frameworks through post-processing, which results in high energy consumption and latency in perception systems. In this study, a unified and novel multi-task network (IDS-MODEL) for road scenes is proposed, which can simultaneously perform high-precision instance segmentation and drivable area segmentation in real time. The proposed network uses an end-to-end convolutional neural network, which mainly consists of an optimized shared backbone and specific decoders designed for different tasks. Residual and attention mechanisms are first introduced to improve the feature extraction capability of the backbone. Depthwise separable convolution is then utilized to reduce the number of parameters and increase computational efficiency. In addition, two parallel decoders with feature fusion modules and corresponding prediction heads are designed to simultaneously and efficiently extract important dynamic and static information from road scenes. Experimental results on the autonomous driving dataset BDD100K demonstrate that the proposed multi-task model achieves 18.74% mAP (mean average precision) on instance masks and 83.63% mIoU (mean intersection over union) on drivable areas, with an inference speed of 36 FPS. Qualitative results from real-vehicle experiments indicate that the proposed method adapts well to real-world scenes and delivers high accuracy in real time.
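The abstract credits depthwise separable convolution with reducing parameter count in the shared backbone. A minimal sketch of why this helps, comparing the weight counts of a standard convolution against a depthwise-plus-pointwise factorization; the layer sizes below are illustrative assumptions, not values from the paper:

```python
def standard_conv_params(c_in: int, c_out: int, k: int) -> int:
    """Weights of a standard k x k convolution (bias omitted):
    one k x k x c_in filter per output channel."""
    return c_in * c_out * k * k

def depthwise_separable_params(c_in: int, c_out: int, k: int) -> int:
    """Depthwise k x k convolution (one k x k filter per input channel)
    followed by a 1 x 1 pointwise convolution mixing the channels."""
    depthwise = c_in * k * k
    pointwise = c_in * c_out
    return depthwise + pointwise

if __name__ == "__main__":
    c_in, c_out, k = 256, 256, 3  # hypothetical backbone layer
    std = standard_conv_params(c_in, c_out, k)
    sep = depthwise_separable_params(c_in, c_out, k)
    print(std, sep, round(std / sep, 1))  # → 589824 67840 8.7
```

For this hypothetical 256-channel 3x3 layer, the factorized form uses roughly 8.7x fewer weights, which is the kind of saving that makes a single shared backbone feasible for real-time multi-task inference.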