Reinforcement learning
Computer science
Robustness (evolution)
Scalability
Estimator
Convergence (economics)
Mathematical optimization
Signal (programming language)
Artificial intelligence
Mathematics
Gene
Economic growth
Biochemistry
Chemistry
Statistics
Database
Economics
Programming language
Authors
Xiaoqiang Wang,Liangjun Ke,Zhimin Qiao,Xinghua Chai
Identifier
DOI:10.1109/tcyb.2020.3015811
Abstract
Finding the optimal signal timing strategy is a difficult task for large-scale traffic signal control (TSC). Multi-agent reinforcement learning (MARL) is a promising method for this problem, but existing approaches leave room for improvement both in scaling to large problems and in letting each individual agent model the behavior of the other agents. In this paper, a new MARL algorithm, called Cooperative double Q-learning (Co-DQL), is proposed, which has several prominent features. It uses a highly scalable independent double Q-learning method based on double estimators and the UCB policy, which eliminates the overestimation problem of traditional independent Q-learning while still ensuring exploration. It uses mean-field approximation to model the interaction among agents, thereby helping the agents learn a better cooperative strategy. To improve the stability and robustness of the learning process, we introduce a new reward allocation mechanism and a local state sharing method. In addition, we analyze the convergence properties of the proposed algorithm. Co-DQL is applied to TSC and tested on a multi-traffic-signal simulator. In several traffic scenarios, Co-DQL outperforms several state-of-the-art decentralized MARL algorithms and effectively shortens the average waiting time of vehicles in the whole road system.
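The per-agent learner the abstract describes combines two classic ideas: double Q-learning (two estimators, one selecting the greedy action and the other evaluating it, which removes the positive bias of the single-estimator max) and a UCB exploration rule. Below is a minimal tabular sketch in Python of just that building block; the class name, hyperparameters, and environment interface are illustrative assumptions, not the paper's actual implementation.

```python
import math
import random
from collections import defaultdict

class DoubleQUCBAgent:
    """Illustrative sketch: tabular double Q-learning with UCB action
    selection. Hyperparameter values are assumptions, not from the paper."""

    def __init__(self, n_actions, alpha=0.1, gamma=0.95, ucb_c=2.0):
        self.n_actions = n_actions
        self.alpha, self.gamma, self.ucb_c = alpha, gamma, ucb_c
        self.qa = defaultdict(lambda: [0.0] * n_actions)    # estimator A
        self.qb = defaultdict(lambda: [0.0] * n_actions)    # estimator B
        self.counts = defaultdict(lambda: [0] * n_actions)  # visit counts
        self.t = 0  # total decisions made, used by the UCB bonus

    def act(self, state):
        """UCB rule: mean of the two estimators plus an exploration bonus
        that shrinks as a state-action pair is visited more often."""
        self.t += 1

        def score(a):
            n = self.counts[state][a]
            if n == 0:
                return float("inf")  # try every action at least once
            bonus = self.ucb_c * math.sqrt(math.log(self.t) / n)
            return 0.5 * (self.qa[state][a] + self.qb[state][a]) + bonus

        action = max(range(self.n_actions), key=score)
        self.counts[state][action] += 1
        return action

    def update(self, s, a, r, s_next):
        """Double Q update: one table picks the argmax at s_next, the
        other evaluates it, avoiding the overestimation of a single max."""
        if random.random() < 0.5:
            q_sel, q_eval = self.qa, self.qb
        else:
            q_sel, q_eval = self.qb, self.qa
        a_star = max(range(self.n_actions), key=lambda x: q_sel[s_next][x])
        target = r + self.gamma * q_eval[s_next][a_star]
        q_sel[s][a] += self.alpha * (target - q_sel[s][a])
```

In the full Co-DQL setting, the mean-field approximation would additionally condition each agent's value estimates on the average action of its neighbors, reducing the many-agent interaction to a tractable pairwise one; that extension, along with the paper's reward allocation mechanism and local state sharing, is omitted from this sketch.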