Reinforcement learning
Computer science
Multi-agent systems
Control (management)
Artificial intelligence
Control engineering
Engineering
Authors
Linfang Yan, Xia Chen, Yin Chen, Jinyu Wen
Identifier
DOI: 10.1109/TII.2022.3152218
Abstract
The growth of electric vehicles (EVs) significantly increases residential electricity demand and can lead to overload of the transformer in the distribution grid. To coordinate the charging control of EVs, this article formulates the EV charging problem as a Markov game with an unknown transition function and proposes a cooperative charging control strategy based on multi-agent deep reinforcement learning. The uncertainties from the dynamic electricity price, non-EV residential load consumption, and drivers' individual behaviors are considered to construct the dynamic charging environment. Each agent contains a collective-policy model and an independent learner. The collective-policy model is introduced to model the other agents' behaviors by approximating their power consumption. The independent learner is used to learn the optimal charging strategy by interacting with the environment. The soft actor-critic framework is adopted to train the independent learner, enabling the proposed method to handle continuous states and actions. Agents are trained with only local observations and approximations, so the proposed approach is fully decentralized and scalable to problems with many agents. Finally, several numerical studies based on real-world data demonstrate the effectiveness and scalability of the proposed approach.
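The abstract describes a per-agent architecture in which each EV agent combines an independent (SAC-trained) learner with a collective-policy model that approximates the other agents' aggregate charging power, all under uncertainty from dynamic prices, non-EV load, and driver behavior. The following is a minimal structural sketch of such a decentralized setup, not the authors' implementation: the class names, parameters (e.g. `TRANSFORMER_CAP`, `P_MAX`), and the heuristic placeholder policy standing in for the SAC learner are all hypothetical assumptions introduced only for illustration.

```python
import numpy as np

# Hypothetical sketch of the decentralized EV-charging Markov game described in the
# abstract. Each agent observes only local information plus an approximation of the
# other agents' aggregate charging power (the "collective-policy model") and selects
# a continuous charging power. The SAC learner is replaced by a heuristic placeholder;
# all names and numbers below are illustrative assumptions, not from the paper.

N_AGENTS = 3            # number of EVs (assumed)
P_MAX = 7.0             # kW, assumed per-charger limit
TRANSFORMER_CAP = 30.0  # kW, assumed transformer rating
HORIZON = 24            # hourly steps over one day

rng = np.random.default_rng(0)

class CollectivePolicyModel:
    """Approximates the other agents' total charging power from past observations."""
    def __init__(self):
        self.estimate = 0.0
    def update(self, observed_other_power):
        # exponential smoothing as a simple stand-in for the learned approximation
        self.estimate = 0.8 * self.estimate + 0.2 * observed_other_power
    def predict(self):
        return self.estimate

class IndependentLearner:
    """Placeholder for the SAC-trained policy: maps local state to charging power."""
    def act(self, price, non_ev_load, soc, others_power_est):
        # heuristic stand-in: charge more when price is low and headroom is large
        headroom = max(TRANSFORMER_CAP - non_ev_load - others_power_est, 0.0)
        power = min(P_MAX, headroom / N_AGENTS) * (1.0 - price) * (1.0 - soc)
        return max(power, 0.0)

# Simulate one day of the dynamic charging environment
soc = np.full(N_AGENTS, 0.2)                   # state of charge in [0, 1]
models = [CollectivePolicyModel() for _ in range(N_AGENTS)]
policies = [IndependentLearner() for _ in range(N_AGENTS)]

for t in range(HORIZON):
    price = rng.uniform(0.2, 1.0)              # dynamic electricity price (normalized)
    non_ev_load = rng.uniform(5.0, 15.0)       # non-EV residential load, kW
    actions = np.array([
        policies[i].act(price, non_ev_load, soc[i], models[i].predict())
        for i in range(N_AGENTS)
    ])
    total_load = non_ev_load + actions.sum()
    soc = np.minimum(soc + actions / (P_MAX * HORIZON) * 4.0, 1.0)
    for i in range(N_AGENTS):                  # each agent updates its local approximation
        models[i].update(actions.sum() - actions[i])
    print(f"t={t:02d} price={price:.2f} total_load={total_load:.1f} kW soc={np.round(soc, 2)}")
```

In the paper's actual method, the placeholder policy above would be replaced by a soft actor-critic agent trained on local observations and the collective-power approximation, which is what makes the approach fully decentralized and scalable as the number of agents grows.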