End-to-end principle
Computer science
Artificial intelligence
Deep learning
Sensor fusion
Benchmark (surveying)
Artificial neural network
Generalization
Computer vision
Geodesy
Mathematics
Mathematical analysis
Geography
Authors
Zhiyu Huang, Chen Lv, Yang Xing, Jingda Wu
Source
Journal: IEEE Sensors Journal
[Institute of Electrical and Electronics Engineers]
Date: 2020-06-17
Volume/issue: 21 (10): 11781-11790
Citations: 161
Identifier
DOI: 10.1109/jsen.2020.3003121
Abstract
This study aims to improve the performance and generalization capability of
end-to-end autonomous driving with scene understanding, leveraging deep learning
and multimodal sensor fusion techniques. The designed end-to-end deep neural
network takes the visual image and the associated depth information as input at
an early fusion level, and concurrently outputs pixel-wise semantic segmentation
as scene understanding together with vehicle control commands. The end-to-end
deep-learning-based autonomous driving model is tested in high-fidelity
simulated urban driving conditions and compared against the CoRL2017 and
NoCrash benchmarks. The testing results show that the proposed approach achieves
better performance and generalization ability, reaching a 100% success rate in
static navigation tasks in both training and unobserved situations, as well as
higher success rates in the other tasks than prior models. A further ablation
study shows that removing either multimodal sensor fusion or scene understanding
degrades the model in new environments because of false perception. The results
verify that the performance of our model is improved by the synergy of
multimodal sensor fusion with the scene-understanding subtask, demonstrating the
feasibility and effectiveness of the developed deep neural network with
multimodal sensor fusion.
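The abstract describes early fusion, i.e. combining the camera image and its depth map into a single multi-channel input before the network processes them. A minimal sketch of that fusion step, using NumPy arrays as stand-ins for sensor frames (the function name and shapes are illustrative, not taken from the paper):

```python
import numpy as np

def early_fusion(rgb: np.ndarray, depth: np.ndarray) -> np.ndarray:
    """Concatenate an RGB image (H, W, 3) with a depth map (H, W)
    into one 4-channel array (H, W, 4), so a single network backbone
    can consume both modalities from its first layer onward."""
    if rgb.shape[:2] != depth.shape[:2]:
        raise ValueError("RGB and depth must share spatial dimensions")
    depth = depth[..., np.newaxis]          # (H, W) -> (H, W, 1)
    return np.concatenate([rgb, depth], axis=-1)

# Example: fuse a small RGB frame with its aligned depth map.
rgb = np.zeros((4, 4, 3), dtype=np.float32)
depth = np.ones((4, 4), dtype=np.float32)
fused = early_fusion(rgb, depth)
print(fused.shape)  # (4, 4, 4)
```

In an early-fusion design like the one the abstract outlines, this 4-channel tensor would be fed to a shared encoder whose features branch into the segmentation head and the control-command head; later-fusion variants instead merge modality-specific features deeper in the network.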