Reinforcement learning
Computer science
Grid
Distributed generation
Flexibility (engineering)
AC power
Electric power system
Mathematical optimization
Voltage
Control theory
Distributed computing
Engineering
Renewable energy
Control (management)
Power (physics)
Artificial intelligence
Physics
Geometry
Electrical engineering
Statistics
Quantum mechanics
Mathematics
Authors
Ruoheng Wang, Siqi Bu, C. Y. Chung
Identifier
DOI: 10.1109/TSG.2023.3302155
Abstract
The increasing scale of distributed energy resources (DERs) in the active distribution network (ADN) offers valuable opportunities for distribution system operators (DSOs) to assist transmission system operators (TSOs) in regulating operational issues and reducing overall operational costs. This paper proposes a multi-agent deep reinforcement learning (MADRL)-based TSO-DSO coordination framework for jointly regulating the frequency and voltage measured at the grid supply point (GSP) of the transmission network (TN), making full use of the control flexibility of converter-based DERs. To facilitate MADRL, a simple grid partitioning method is employed to partition ADNs into balanced sub-regions, with a connectivity constraint imposed on each sub-region. On this basis, a MADRL algorithm is designed by combining QMIX with the twin delayed deep deterministic policy gradient (TD3) algorithm for effective optimization of the decentralized regulation scheme. The proposed QMIX-TD3 is equipped with a graph convolutional network (GCN), whose temporal-spatial learning ability helps tackle the complicated transmission and distribution (T/D) system dynamics. In addition, a policy smooth regularization (PSR) loss is proposed to damp action oscillations and enhance sample efficiency. Experiments on an integrated T/D system demonstrate that the proposed framework effectively mitigates the impact of system disturbances and thereby benefits system operation.
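The abstract does not spell out the exact form of the PSR loss, so the following is only a minimal sketch of the general idea behind a policy-smoothing regularizer in a TD3-style actor update: penalizing the change in the deterministic policy's output between consecutive observations to damp action oscillations. The class and function names (Actor, smoothing_loss), the quadratic penalty form, and the weight value are illustrative assumptions, not the authors' formulation.

# Minimal sketch of a policy-smoothing penalty added to a TD3-style actor
# objective (assumed form; NOT the paper's exact PSR loss).
import torch
import torch.nn as nn


class Actor(nn.Module):
    """Deterministic policy mapping observations to bounded actions."""

    def __init__(self, obs_dim: int, act_dim: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, act_dim), nn.Tanh(),  # actions scaled to [-1, 1]
        )

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        return self.net(obs)


def smoothing_loss(actor: Actor,
                   obs: torch.Tensor,
                   next_obs: torch.Tensor,
                   weight: float = 0.1) -> torch.Tensor:
    """Assumed penalty: weight * E[ || pi(o_t) - pi(o_{t+1}) ||^2 ],
    discouraging large action swings between consecutive observations."""
    return weight * ((actor(obs) - actor(next_obs)) ** 2).sum(dim=-1).mean()


if __name__ == "__main__":
    obs_dim, act_dim, batch = 10, 3, 32
    actor = Actor(obs_dim, act_dim)
    # Toy stand-in for a learned critic Q(o, a); a real TD3 agent would use
    # the first critic network here.
    critic_q = lambda o, a: -((a - 0.5) ** 2).sum(dim=-1)

    obs = torch.randn(batch, obs_dim)
    next_obs = obs + 0.05 * torch.randn(batch, obs_dim)  # nearby next states

    # TD3-style actor objective (maximize Q) plus the smoothing penalty.
    actor_loss = (-critic_q(obs, actor(obs)).mean()
                  + smoothing_loss(actor, obs, next_obs))
    actor_loss.backward()
    print(f"actor loss: {actor_loss.item():.4f}")

In practice the smoothing weight trades off regulation responsiveness against actuator wear: a larger weight suppresses oscillations more strongly but can slow the policy's reaction to disturbances.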