Keywords
Economic dispatch, Robustness (evolution), Reinforcement learning, Lagrange multiplier, Computer science, Renewable energy, Power systems, Mathematical optimization, AC power, Randomness, Control theory (sociology), Engineering, Control engineering, Artificial intelligence, Power (physics), Control (management), Electrical engineering, Voltage, Mathematics, Biochemistry, Physics, Chemistry, Statistics, Quantum mechanics, Gene
Authors
Xiaoyun Han, Chaoxu Mu, Jun Yan, Zeyuan Niu
Identifier
DOI: 10.1016/j.ijepes.2022.108686
Abstract
Large-scale renewable energy integration has brought challenges to energy management in modern power systems. Due to the strong randomness and volatility of renewable energy, traditional model-based methods may become insufficient for optimal active power dispatch. To tackle this challenge, this paper proposes an autonomous control method based on soft actor–critic (SAC), a recently developed deep reinforcement learning (DRL) strategy, which provides an optimal solution for active power dispatch without a mathematical model while improving the renewable energy consumption rate under stable operation. A Lagrange multiplier is introduced into SAC (LM-SAC) to improve algorithm performance in optimal active power dispatch. A pre-training scheme based on imitation learning (IL-SAC) is also designed to further improve the training efficiency and robustness of the DRL agent. Simulations on the IEEE 118-bus system with the open platform Grid2Op verify that the proposed algorithm achieves a higher renewable energy consumption rate and better robustness than existing DRL algorithms.
Highlights
• SAC is applied to power systems to realize real-time optimal active power dispatch.
• LM-SAC, based on the Lagrange multiplier method, is proposed to improve SAC.
• A pre-training scheme based on imitation learning is designed, yielding IL-SAC.
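The abstract describes LM-SAC only at a high level. As a rough illustration of how a Lagrange multiplier can be attached to the SAC objective for constrained dispatch, the PyTorch-style sketch below keeps a learnable dual variable that grows when an operational constraint cost exceeds its limit and shrinks otherwise, while the policy update is penalized by the current multiplier. All names, losses, and hyperparameters here are assumptions for illustration, not the authors' implementation.

import torch

# Learnable log-multiplier; exponentiation keeps the Lagrange multiplier non-negative.
log_lam = torch.zeros(1, requires_grad=True)
lam_optimizer = torch.optim.Adam([log_lam], lr=3e-4)

def actor_loss(q_value, log_prob, constraint_cost, alpha=0.2):
    # Entropy-regularized SAC actor term plus a Lagrangian penalty on the
    # estimated constraint cost; the multiplier is detached so the policy
    # step treats it as a constant.
    lam = log_lam.exp().detach()
    return (alpha * log_prob - q_value + lam * constraint_cost).mean()

def multiplier_loss(constraint_cost, limit):
    # Dual ascent on lambda: minimizing this loss raises lambda when the
    # average constraint cost exceeds its limit and lowers it otherwise.
    violation = (constraint_cost - limit).mean().detach()
    return -(log_lam.exp() * violation)

In training, multiplier_loss would be stepped with lam_optimizer after each policy/critic update, so the penalty weight adapts during learning instead of being hand-tuned; this adaptivity is the usual motivation for Lagrangian variants of SAC.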