Reinforcement learning
Collision avoidance
Computer science
Artificial intelligence
Collision
Simulation
Computer security
Authors
Xiangkun He, Chen Lv, Xuewu Ji, Yahui Liu
Identifier
DOI:10.1109/cvci56766.2022.9964555
Abstract
The challenging task of "intelligent vehicles" opens up a new frontier for enhancing traffic safety. However, how to determine driving behavior in a timely and effective manner is one of the most crucial concerns, as it directly affects the vehicle's collision avoidance capability and dynamics stability, particularly in emergency scenarios. This paper presents a novel model-based reinforcement learning (RL) solution for driving behavior decision-making of autonomous vehicles in emergency situations. Firstly, in order to generate initial training data, a rule-based expert system (ES) is designed by analyzing human drivers' emergency collision avoidance maneuvers and tire dynamics characteristics. Secondly, an imitative learning (IL) algorithm is developed to clone the ES's driving behavior using a softmax classifier and the mini-batch stochastic gradient descent (MSGD) method. Thirdly, a model-prediction-based Q(λ)-learning with function approximation is presented to determine the driving policy online, which integrates the vehicle system model and the neural network model from IL. Finally, the results of both simulation and experiment show that our approach can effectively coordinate multiple motion control systems to improve collision avoidance capability and vehicle dynamics stability at or close to the driving limits.
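To make the IL step in the abstract concrete, below is a minimal sketch of behavior cloning with a softmax classifier trained by mini-batch stochastic gradient descent, as described in the second step. The state dimension, the number of discrete driving behaviors, and all function and variable names (e.g. train_behavior_cloner, STATE_DIM) are illustrative assumptions, not the authors' implementation; the synthetic data stands in for the expert system's demonstrations.

```python
import numpy as np

# Assumed setup: the expert system (ES) labels each vehicle state
# (e.g. speed, yaw rate, relative distance, ...) with a discrete
# driving behavior such as braking, steering, or combined control.
STATE_DIM = 6        # assumed number of state features
NUM_BEHAVIORS = 3    # assumed number of discrete driving behaviors


def softmax(z):
    """Numerically stable softmax over the last axis."""
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)


def train_behavior_cloner(states, labels, lr=0.05, epochs=50,
                          batch_size=32, seed=0):
    """Clone the ES policy with a softmax classifier via mini-batch SGD."""
    rng = np.random.default_rng(seed)
    W = 0.01 * rng.standard_normal((STATE_DIM, NUM_BEHAVIORS))
    b = np.zeros(NUM_BEHAVIORS)
    n = states.shape[0]
    for _ in range(epochs):
        order = rng.permutation(n)
        for start in range(0, n, batch_size):
            idx = order[start:start + batch_size]
            x, y = states[idx], labels[idx]
            probs = softmax(x @ W + b)            # class probabilities
            probs[np.arange(len(idx)), y] -= 1.0  # softmax cross-entropy gradient
            W -= lr * (x.T @ probs) / len(idx)
            b -= lr * probs.mean(axis=0)
    return W, b


def predict_behavior(W, b, state):
    """Return the behavior index the cloned policy selects for one state."""
    return int(np.argmax(softmax(state @ W + b)))


if __name__ == "__main__":
    # Synthetic placeholder data standing in for ES demonstrations.
    rng = np.random.default_rng(1)
    demo_states = rng.standard_normal((500, STATE_DIM))
    demo_labels = rng.integers(0, NUM_BEHAVIORS, size=500)
    W, b = train_behavior_cloner(demo_states, demo_labels)
    print(predict_behavior(W, b, demo_states[0]))
```

In the paper's pipeline, the cloned network would then serve as the initial policy (and as a model component) for the model-prediction-based Q(λ)-learning that refines the driving behavior online.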