Reinforcement Learning
Computer Science
Engineering
Automotive Engineering
Control Engineering
Artificial Intelligence
Authors
J. Fan, Xiaodong Wu, Jie Li, Min Xu
Source
Journal: IEEE Transactions on Vehicular Technology
[Institute of Electrical and Electronics Engineers]
Date: 2024-01-25
Volume/Issue: 73 (4): 4621-4635
Cited by: 4
Identifiers
DOI:10.1109/tvt.2024.3358299
Abstract
With vehicle-to-everything (V2X) information, a connected and automated vehicle (CAV) eco-driving strategy allows the vehicle to plan its speed and choose the optimal lane based on actual conditions, resulting in improved driving performance. This study presents a novel eco-driving strategy framework based on deep reinforcement learning (DRL) techniques for CAVs driving in urban scenarios. The framework integrates longitudinal speed planning with lateral lane-change decision-making and aims to co-optimize energy efficiency, driving safety, and travel efficiency. By leveraging traffic information and multi-objective reward functions, the twin delayed deep deterministic policy gradient (TD3) algorithm is employed to train the actor-critic (AC) network, which generates both longitudinal and lateral control commands based on its estimate of the lane preference score. The proposed strategy is tested in a complex scenario built in Simulation of Urban MObility (SUMO) that reflects real urban traffic conditions. Experimental results indicate that the longitudinal speed planning module of the proposed strategy can shorten travel time by up to 7.94% or reduce electricity consumption by 18.15%, depending on how heavily the TD3 agent weights economy. By integrating the lateral lane decision module, the proposed strategy further shortens travel time by 5.7% and reduces energy consumption by 1.75%.
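The abstract describes an actor that produces both a continuous longitudinal command and per-lane preference scores, trained with TD3 against a multi-objective reward. The sketch below is a minimal, hypothetical illustration of that structure, not the authors' implementation: the network layout, the lane-score head, and the reward weights and thresholds are all assumptions introduced here for clarity.

```python
# Hypothetical sketch (not the paper's code): a TD3-style actor that outputs a
# longitudinal acceleration command plus per-lane preference scores, and a
# multi-objective reward weighting energy, safety, and travel efficiency.
import torch
import torch.nn as nn


class EcoDrivingActor(nn.Module):
    """Maps the V2X/traffic observation to longitudinal and lateral commands."""

    def __init__(self, obs_dim: int, n_lanes: int, hidden: int = 256):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Linear(obs_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        # Longitudinal head: normalized acceleration in [-1, 1], scaled outside the net.
        self.accel_head = nn.Sequential(nn.Linear(hidden, 1), nn.Tanh())
        # Lateral head: one preference score per candidate lane.
        self.lane_head = nn.Linear(hidden, n_lanes)

    def forward(self, obs: torch.Tensor):
        h = self.backbone(obs)
        accel = self.accel_head(h)        # continuous longitudinal command
        lane_scores = self.lane_head(h)   # lane preference scores
        return accel, lane_scores


def multi_objective_reward(energy_kwh: float, ttc: float, speed: float, v_ref: float,
                           w_energy: float = 1.0, w_safety: float = 1.0,
                           w_time: float = 1.0) -> float:
    """Illustrative reward: penalize energy use and short time-to-collision (TTC),
    and penalize deviation from a reference speed. Weights and the 2 s TTC
    threshold are assumptions, not values from the paper."""
    r_energy = -w_energy * energy_kwh
    r_safety = -w_safety * max(0.0, 2.0 - ttc)
    r_time = -w_time * abs(speed - v_ref) / max(v_ref, 1e-6)
    return r_energy + r_safety + r_time
```

In this sketch, the lane with the highest preference score would be taken as the lateral decision while the continuous head drives acceleration; under TD3, both heads would be trained jointly with twin critics and delayed policy updates, and the relative reward weights control the trade-off between economy and travel time reported in the abstract.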