Computer science
Artificial intelligence
Motion planning
Planner
Generative model
Reinforcement learning
Convolutional neural network
Machine learning
Generative grammar
Robot
Authors
Long Chen, Xuemin Hu, Wei Tian, Hong Wang, Dongpu Cao, Fei-Yue Wang
Identifier
DOI:10.1109/jas.2018.7511186
Abstract
Motion planning is one of the most significant technologies for autonomous driving. To enable motion planning models to learn from the environment and to deal with emergency situations, a new motion planning framework called "parallel planning" is proposed in this paper. To generate sufficient and varied training samples, artificial traffic scenes are first constructed based on knowledge from reality. A deep planning model that combines a convolutional neural network (CNN) with a long short-term memory (LSTM) module is developed to make planning decisions in an end-to-end manner. This model can learn from both real and artificial traffic scenes and imitate the driving style of human drivers. Moreover, a parallel deep reinforcement learning approach is presented to improve the robustness of the planning model and reduce its error rate. To handle emergency situations, a hybrid generative model consisting of a variational auto-encoder (VAE) and a generative adversarial network (GAN) is used to learn from virtual emergencies generated in the artificial traffic scenes. While an autonomous vehicle is moving, the hybrid generative model generates multiple video clips in parallel, each corresponding to a different potential emergency scenario. Simultaneously, the deep planning model makes planning decisions for both the virtual and the current real scenes. The final planning decision is determined by analyzing the real observations. Leveraging the parallel planning approach, the planner can make rational decisions without a heavy computational burden when an emergency occurs.
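The abstract describes an end-to-end planner that pairs a CNN frame encoder with an LSTM over the frame sequence. The following is a minimal sketch of that kind of architecture, not the authors' code: the layer sizes, the 10-frame clip length, and the 2-dimensional command output (e.g. steering and acceleration) are illustrative assumptions.

```python
# Hypothetical CNN+LSTM end-to-end planner sketch (PyTorch).
# A CNN encodes each camera frame, an LSTM aggregates the sequence,
# and a linear head emits a planning command for the current step.
import torch
import torch.nn as nn

class CNNLSTMPlanner(nn.Module):
    def __init__(self, hidden_size=128, num_actions=2):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),  # -> (batch*time, 64)
        )
        self.lstm = nn.LSTM(input_size=64, hidden_size=hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, num_actions)

    def forward(self, frames):
        # frames: (batch, time, 3, H, W) image sequence from the driving scene
        b, t = frames.shape[:2]
        feats = self.cnn(frames.flatten(0, 1)).view(b, t, -1)  # per-frame features
        out, _ = self.lstm(feats)                               # temporal aggregation
        return self.head(out[:, -1])                            # command at last step

# Example: plan from a batch of 8 clips, each 10 frames of 3x96x96 images.
planner = CNNLSTMPlanner()
commands = planner(torch.randn(8, 10, 3, 96, 96))  # -> (8, 2)
```

In the parallel planning framework, such a model would be run both on the current real scene and on the virtual emergency clips produced by the VAE/GAN generator, with the final decision chosen from the real observations.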