Reinforcement learning
Acceleration
Range (aeronautics)
Function (biology)
Control (management)
Simulation
Computer science
Engineering
Automotive engineering
Reinforcement
Artificial intelligence
Structural engineering
Aerospace engineering
Classical mechanics
Evolutionary biology
Biology
Physics
Authors
Meixin Zhu, Yinhai Wang, Ziyuan Pu, Jingyun Hu, Xuesong Wang, Ruimin Ke
Identifier
DOI:10.1016/j.trc.2020.102662
Abstract
A velocity-control model for car following was proposed based on deep reinforcement learning (RL). To fulfill the multiple objectives of car following, a reward function reflecting driving safety, efficiency, and comfort was constructed. With this reward function, the RL agent learns to control vehicle speed so as to maximize cumulative reward, through trial and error in a simulation environment. A total of 1,341 car-following events extracted from the Next Generation Simulation (NGSIM) dataset were used to train the model. Car-following behavior produced by the model was compared with that observed in the empirical NGSIM data to demonstrate the model's ability to follow a lead vehicle safely, efficiently, and comfortably. Results show that the model achieves safe, efficient, and comfortable velocity control in that it 1) has a smaller percentage (8%) of dangerous minimum time-to-collision values (< 5 s) than human drivers in the NGSIM data (35%); 2) maintains efficient and safe headways in the range of 1 s to 2 s; and 3) follows the lead vehicle comfortably, with smooth acceleration. These results indicate that reinforcement learning methods could contribute to the development of autonomous driving systems.
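The abstract describes a reward function that combines driving safety, efficiency, and comfort. As a rough illustration of how such a multi-objective reward might be composed, the sketch below penalizes low time-to-collision (safety), deviation from a 1–2 s headway band (efficiency), and large jerk (comfort). All function names, weights, and thresholds here are illustrative assumptions, not the paper's actual formulation.

```python
import math

def reward(gap_m, ego_speed, lead_speed, jerk):
    """Hypothetical car-following reward: safety + efficiency + comfort.

    Weights and thresholds are illustrative assumptions only.
    """
    # Safety: penalize low time-to-collision (TTC) when closing in on the lead.
    closing_speed = ego_speed - lead_speed
    if closing_speed > 0:
        ttc = gap_m / closing_speed
        # Log penalty grows as TTC drops below an assumed 4 s threshold.
        safety = math.log(ttc / 4.0) if ttc < 4.0 else 0.0
    else:
        safety = 0.0  # not closing in: no safety penalty

    # Efficiency: reward time headways near an assumed 1.5 s target.
    headway = gap_m / max(ego_speed, 0.1)
    efficiency = -abs(headway - 1.5)

    # Comfort: penalize large jerk (rate of change of acceleration).
    comfort = -(jerk ** 2)

    # Illustrative weighting of the three terms.
    return safety + 0.5 * efficiency + 0.1 * comfort
```

A state at the target headway with no closing speed and zero jerk scores the maximum (zero), while tailgating a slower lead vehicle is penalized by both the safety and efficiency terms.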