Reinforcement learning
Computer science
Microgrid
Scalability
Context
Distributed computing
Artificial intelligence
Machine learning
Control (management)
Database
Authors
Yujian Ye,Hongru Wang,Peiling Chen,Yi Tang,Goran Štrbac
Identifiers
DOI:10.1109/tsg.2023.3243170
Abstract
Microgrids (MGs) have recently attracted great interest as an effective solution to the challenging problem of distributed energy resource management in distribution networks. In this context, although deep reinforcement learning (DRL) constitutes a well-suited model-free, data-driven methodological framework, its application to MG energy management remains challenging, owing to its limitations in environment status perception and constraint satisfaction. In this paper, the MG energy management problem is formulated as a Constrained Markov Decision Process and solved with the state-of-the-art interior-point policy optimization (IPO) method. In contrast to conventional DRL approaches, IPO facilitates efficient learning in multi-dimensional, continuous state and action spaces, while promoting satisfaction of the complex operational constraints of the distribution network. The generalization capability of IPO is further enhanced by extracting spatial-temporal correlation features from the original MG operating status, combining the strengths of an edge-conditioned convolutional network and a long short-term memory network. Case studies based on IEEE 15-bus and 123-bus test feeders with real-world data demonstrate the superior performance of the proposed method in improving MG cost effectiveness, safeguarding secure network operation, and adapting to uncertainty, through benchmarking against model-based and DRL-based baseline methods. Finally, the case studies also analyze the computational and scalability performance of the proposed and baseline methods.
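The "interior-point" character of IPO comes from augmenting the usual clipped policy-gradient surrogate with a logarithmic barrier on each cumulative constraint cost, so that gradient ascent keeps the policy strictly inside the feasible region rather than penalizing violations after the fact. A minimal sketch of that barrier term under this assumption (the names `log_barrier`, `limit`, and the hyperparameter `t` are illustrative, not taken from the paper):

```python
import numpy as np

def log_barrier(cost, limit, t=20.0):
    """Logarithmic barrier on a cumulative constraint cost.

    Approximates the indicator 'cost <= limit': the term is near zero
    deep inside the feasible region, diverges to -inf at the boundary,
    and sharpens as the hyperparameter t grows.
    """
    slack = limit - cost          # must stay strictly positive (interior)
    if slack <= 0:
        return -np.inf            # infeasible: barrier diverges
    return np.log(slack) / t

# Barrier-augmented objective (sketch): maximize the clipped surrogate
# plus one barrier term per network constraint.
surrogate = 1.0                   # hypothetical clipped-surrogate value
objective = surrogate + log_barrier(cost=0.8, limit=1.0)
```

In practice one such term would be added for every constrained quantity (e.g. voltage and line-flow limits of the distribution network), with `t` annealed upward so the barrier tightens toward the hard constraint as training progresses.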