Subjects
Platoon, Control (management), Aeronautics, Computer science, Engineering, Artificial intelligence
Authors
Junru Yang, Duanfeng Chu, Liping Lu, Zhenghua Meng, Kun Deng
Identifiers
DOI: 10.1177/09544070241240037
Abstract
This paper proposes a model-data-driven control method for a human-leading vehicle platoon, comprising a human-driven vehicle (HDV) as the leader and connected automated vehicles (CAVs) as followers. Initially, a representative trajectory of HDVs is constructed using principal component analysis and the K-means clustering algorithm, which is used as the training dataset. Subsequently, we propose a novel platooning method, named deep reinforcement learning with model-based guidance (DRLMG). The output of model predictive control (MPC) is integrated into the input state and reward function of the deep reinforcement learning (DRL) algorithm. The DRL algorithm benefits from the guidance provided by MPC, leading to better decision-making. To ensure safety and stability, a safety filter is designed using a control barrier function and a control Lyapunov function. Simulation experiments with real-world driving data show that DRLMG outperforms MPC, reducing speed error, spacing error, and acceleration change rate by 17.9%, 53.7%, and 47.1%, respectively. In comparison to pure DRL, DRLMG increases spacing error by 6.5% but reduces speed error by 15.4% and acceleration change rate by 14.3%. The proposed method enhances DRL's generalization capability, dampens traffic oscillations caused by the leading HDV, and guarantees driving safety and stability.
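The representative-trajectory step the abstract describes (PCA for dimensionality reduction, then K-means clustering over HDV trajectories) can be illustrated with a minimal sketch. The function below is an assumption-laden illustration, not the paper's implementation: it assumes speed trajectories have already been resampled to a common length, and the component count, cluster count, and the choice of "member nearest the largest cluster's centroid" are all hypothetical parameters chosen for the example.

```python
# Minimal sketch of the representative-trajectory step: reduce fixed-length
# HDV speed trajectories with PCA, cluster the embeddings with K-means, and
# return the trajectory nearest the largest cluster's centroid. All names
# and parameter values are illustrative, not taken from the paper.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

def representative_trajectory(trajectories: np.ndarray,
                              n_components: int = 5,
                              n_clusters: int = 4,
                              seed: int = 0) -> np.ndarray:
    """trajectories: (n_trajectories, n_timesteps) array of HDV speeds,
    resampled to a common length beforehand."""
    # Project each trajectory onto its principal components.
    pca = PCA(n_components=n_components)
    z = pca.fit_transform(trajectories)

    # Group similar driving patterns in the reduced space.
    km = KMeans(n_clusters=n_clusters, random_state=seed, n_init=10)
    labels = km.fit_predict(z)

    # Take the most populated cluster and return the member trajectory
    # closest to its centroid as the representative HDV trajectory.
    major = np.bincount(labels).argmax()
    members = np.where(labels == major)[0]
    dists = np.linalg.norm(z[members] - km.cluster_centers_[major], axis=1)
    return trajectories[members[dists.argmin()]]

# Example with synthetic data: 200 random-walk speed traces, 100 steps each.
rng = np.random.default_rng(0)
demo = 15.0 + 0.15 * rng.standard_normal((200, 100)).cumsum(axis=1)
rep = representative_trajectory(demo)
```

A representative trajectory selected this way would then serve as the leader's speed profile when training the DRLMG follower policy, which is the role the abstract assigns to the clustered HDV data.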