PID controller
Reinforcement learning
Robustness (evolution)
Computer science
Control theory (sociology)
Benchmark (surveying)
Control engineering
Probabilistic logic
Process (computing)
Robust control
Control system
Artificial intelligence
Engineering
Control (management)
Temperature control
Biochemistry
Chemistry
Electrical engineering
Geodesy
Gene
Geography
Operating system
Authors
Hozefa Jesawada, Amol Yerudkar, Carmen Del Vecchio, Navdeep Singh
Identifier
DOI: 10.1109/cdc51059.2022.9993381
Abstract
The Proportional-Integral-Derivative (PID) controller is widely used across industrial process control applications because of its straightforward implementation. In practice, however, it can be challenging to fine-tune the PID parameters to achieve robust performance. This paper proposes a model-based reinforcement learning (RL) framework for tuning PID controllers that leverages the probabilistic inference for learning control (PILCO) method. In particular, the optimal policy given by PILCO is transformed into a set of robust PID tuning parameters for underactuated mechanical systems. The robustness of the devised controller is verified in simulation studies on a benchmark cart-pole system under severe disturbances and system parameter uncertainties.
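The abstract describes PILCO-derived PID gains evaluated on a cart-pole under disturbances and parameter perturbations. The sketch below only illustrates those ingredients, not the paper's method: a textbook discrete PID update plus a toy robustness check on a crudely linearized pole-angle model. The dynamics, the gains kp/ki/kd, and the disturbance values are hypothetical placeholders, not the PILCO-tuned parameters from the paper.

```python
def pid_step(error, state, kp, ki, kd, dt):
    """One update of a textbook discrete PID controller.

    state = (integral, previous_error); returns (control, new_state).
    """
    integral, prev_error = state
    integral += error * dt
    derivative = (error - prev_error) / dt
    u = kp * error + ki * integral + kd * derivative
    return u, (integral, error)


# Toy robustness check: regulate the pole angle of a crudely linearized
# cart-pole model (theta_ddot ~= (g/l)*theta + u) under an impulse
# disturbance and a perturbed pole length. Everything here is an
# illustrative assumption, not the paper's model or tuned gains.
g, dt = 9.81, 0.02
kp, ki, kd = 40.0, 1.0, 8.0            # hypothetical gains

for length in (0.5, 0.6):              # nominal vs. perturbed pole length [m]
    theta, theta_dot = 0.05, 0.0       # initial angle offset [rad]
    pid_state = (0.0, 0.0 - theta)     # (integral, previous error)
    for k in range(500):
        error = 0.0 - theta            # regulate angle to upright (0 rad)
        u, pid_state = pid_step(error, pid_state, kp, ki, kd, dt)
        if k == 100:
            theta_dot += 0.5           # impulse disturbance on the pole
        theta_ddot = (g / length) * theta + u
        theta_dot += theta_ddot * dt   # explicit Euler integration
        theta += theta_dot * dt
    print(f"pole length {length:.2f} m -> final angle {theta:+.4f} rad")
```

With these placeholder gains the closed loop remains stable for both pole lengths, which is the kind of disturbance/parameter-uncertainty check the abstract refers to, here reduced to a minimal simulation.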