Reinforcement learning
Computer science
Routing (electronic design automation)
Routing algorithm
Computer network
Algorithm
Artificial intelligence
Distributed computing
Machine learning
Routing protocol
Identifier
DOI:10.1109/tits.2024.3353258
Abstract
The rapid growth of the Internet of Vehicles (IoV) has generated significant interest in routing techniques for vehicular ad hoc networks (VANETs) in both academic and industrial communities. To address the complexity of urban environments and dynamic vehicle mobility, we propose a hierarchical Q-learning-based routing algorithm with grouped roadside units (RSUs) for VANETs. RSUs are grouped, and a Q-vector containing group information is exchanged through vehicle-to-everything (V2X) communications. Q-vector-based road-segment (QVRS) control messages are periodically broadcast to refresh the V2X evaluation metric, which considers vehicle positions, velocities, directions, and communication conditions. To adapt to the nonstationary vehicular environment, a multi-agent reinforcement learning (RL) algorithm runs on the RSUs at each intersection, enabling distributed learning and local decisions. The hierarchical Q-learning algorithm trains a group Q-table and a local Q-table individually on each RSU for reaching destinations. Optimal data routing is conducted with the two separate Q-tables, using the integrated V2X metric as the reward function. Simulation results demonstrate that our proposed method reduces broadcasting overhead, prolongs path lifetime, and maintains a high packet delivery ratio and low average end-to-end delay. The incorporation of the group design in our method accelerates the learning process, which facilitates more efficient communication in VANETs.
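The two-table scheme described above can be sketched as tabular Q-learning applied at two levels: a coarse group Q-table (which RSU group to route toward) and a fine local Q-table (which neighboring next hop to use), both updated with the same reward. This is a minimal illustrative sketch, not the paper's algorithm: the class name, state encodings, hyperparameters, and the placeholder reward are all assumptions, and the actual QVRS metric and update rules are defined in the paper itself.

```python
import random
from collections import defaultdict

# Assumed hyperparameters for illustration only.
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1

class HierarchicalQRouter:
    """Hypothetical sketch of an RSU agent holding two Q-tables:
    a group-level table (destination group -> group action) and a
    local table (destination RSU -> next-hop action)."""

    def __init__(self, group_actions, local_actions):
        self.group_q = defaultdict(float)   # key: (dest_group, group_action)
        self.local_q = defaultdict(float)   # key: (dest_rsu, local_action)
        self.group_actions = group_actions
        self.local_actions = local_actions

    def _greedy(self, table, state, actions):
        return max(actions, key=lambda a: table[(state, a)])

    def choose(self, dest_group, dest_rsu):
        # Epsilon-greedy at both levels: pick a target group first,
        # then a concrete next hop among local neighbors.
        if random.random() < EPSILON:
            return (random.choice(self.group_actions),
                    random.choice(self.local_actions))
        return (self._greedy(self.group_q, dest_group, self.group_actions),
                self._greedy(self.local_q, dest_rsu, self.local_actions))

    def update(self, dest_group, dest_rsu, g_act, l_act, reward,
               next_group, next_rsu):
        # Standard Q-learning update applied to each table separately,
        # with the (here, placeholder) V2X metric as the shared reward.
        for table, state, act, nxt, acts in (
            (self.group_q, dest_group, g_act, next_group, self.group_actions),
            (self.local_q, dest_rsu, l_act, next_rsu, self.local_actions),
        ):
            best_next = max(table[(nxt, a)] for a in acts)
            table[(state, act)] += ALPHA * (
                reward + GAMMA * best_next - table[(state, act)])
```

Training the two tables independently keeps each table small (group-level states are far fewer than RSU-level states), which is consistent with the abstract's claim that the group design accelerates learning.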