Keywords
Reinforcement learning; Computer science; Controller (irrigation); A priori and a posteriori; Adaptation (eye); Grid; Process (computing); Electric power system; Control engineering; Identification (biology); Control (management); Power (physics); Artificial intelligence; Engineering; Operating system; Philosophy; Physics; Optics; Geometry; Epistemology; Biology; Quantum mechanics; Botany; Mathematics; Agronomy
Authors
Daniel Weber,Maximilian Schenke,Oliver Wallscheid
Identifier
DOI:10.1109/fes57669.2023.10182718
Abstract
Data-driven approaches such as reinforcement learning (RL) allow a controller design without a priori system knowledge and with minimal human effort, as well as seamless self-adaptation to varying system characteristics. However, RL does not inherently consider input and state constraints, i.e., satisfying safety-relevant system limits during training and testing. This is challenging in power electronic systems, where overcurrents and overvoltages must be avoided. To overcome this issue, a standard RL algorithm is extended by a combination of constrained optimal control and online model identification to ensure safety during and after the learning process. On an exemplary three-level voltage source inverter for islanded electrical power grid applications, it is shown that the approach not only significantly improves safety but also improves the overall learning-based control performance.
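The core idea of such a safeguarding layer can be illustrated with a minimal sketch: the RL agent proposes an action, and a supervisory layer projects it onto a constraint-satisfying set before it reaches the plant, using a one-step prediction from an (assumed) identified linear model. All names and the linear model structure below are illustrative assumptions, not the authors' actual implementation.

```python
import numpy as np

class SafetyLayer:
    """Illustrative safeguard for an RL controller (hypothetical sketch).

    Assumes a discrete-time linear model x' = A x + B u, e.g. obtained
    by online identification, plus box limits on the input u (actuator
    voltage) and state x (e.g. inverter current).
    """

    def __init__(self, A, B, u_max, x_max):
        self.A = np.asarray(A, dtype=float)
        self.B = np.asarray(B, dtype=float)
        self.u_max = float(u_max)  # input (voltage) limit
        self.x_max = float(x_max)  # state (current) limit

    def project(self, x, u):
        # 1) Input constraint: clip the proposed action to actuator limits.
        u = np.clip(np.asarray(u, dtype=float), -self.u_max, self.u_max)
        # 2) State constraint: shrink the action until the one-step model
        #    prediction stays inside the safe set (zero action assumed safe).
        for _ in range(20):
            x_next = self.A @ x + self.B @ u
            if np.all(np.abs(x_next) <= self.x_max):
                break
            u = 0.5 * u  # back off toward the safe zero action
        return u
```

A usage example: with A = [[0.9]], B = [[1.0]], limits u_max = x_max = 1 and state x = [0.5], a proposed action of 5.0 is first clipped to 1.0, then halved once because the predicted next state 1.45 would exceed the limit, so 0.5 is applied. A real implementation would replace this heuristic back-off with the constrained optimal control problem described in the abstract.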