Abstract

The increasing scale of industrial systems has raised the demands on the accuracy and real-time performance of control systems. Event-triggered model predictive control (MPC) improves overall system efficiency by reducing unnecessary control computations; however, it relies heavily on accurate system models and extensive domain knowledge. This article proposes a novel event-triggered MPC framework based on hierarchical reinforcement learning (HRL), which mitigates the dependency on precise models and prior knowledge through a data-driven approach. The proposed method decomposes the event-triggered MPC task into two interconnected components: high-level triggering decisions and low-level control input generation. The high-level policy learns a generalizable triggering mechanism from a global reward signal to dynamically determine when control updates should be executed. The low-level policy, guided by the high-level decisions and internal rewards, learns an approximate control strategy. This hierarchical structure captures the layered relationship and dynamic interaction between triggering and control, thereby improving both the interpretability and the training efficiency of the learned policies. Moreover, the article introduces a stability verification and policy improvement mechanism that constructs Lyapunov functions directly from data. Experimental results on both linear and nonlinear systems demonstrate the proposed method's significant advantages in learning efficiency, control performance, and system stability.
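To make the hierarchical decomposition concrete, the following minimal Python sketch separates the two roles the abstract describes: a high-level policy that decides when a control update should be triggered, and a low-level policy that generates the control input, which is then held between events. All names, the threshold-based trigger rule, the fixed linear gain, and the toy double-integrator plant are illustrative assumptions and do not reproduce the authors' learned policies.

    # Minimal sketch of a hierarchical event-triggered control loop.
    # The trigger rule, the linear gain, and the plant are toy stand-ins,
    # not the learned policies described in the article.
    import numpy as np

    class HighLevelPolicy:
        """Event-trigger rule: request an update when the state has drifted
        far from the state seen at the last control update."""
        def __init__(self, threshold=0.05):
            self.threshold = threshold   # stands in for a learned trigger policy
        def trigger(self, x, x_at_last_update):
            return np.linalg.norm(x - x_at_last_update) > self.threshold

    class LowLevelPolicy:
        """State feedback standing in for the learned approximate MPC controller."""
        def __init__(self, gain):
            self.gain = gain
        def act(self, x):
            return -self.gain @ x

    def rollout(A, B, x0, high, low, steps=100):
        """Run the event-triggered closed loop and count control updates."""
        x, x_last, u = x0.copy(), x0.copy(), np.zeros(B.shape[1])
        updates = 0
        for _ in range(steps):
            if high.trigger(x, x_last):   # high level: decide WHEN to update
                u = low.act(x)            # low level: decide WHAT input to apply
                x_last = x.copy()
                updates += 1
            x = A @ x + B @ u             # input is held between triggering events
        return x, updates

    if __name__ == "__main__":
        A = np.array([[1.0, 0.1], [0.0, 1.0]])        # discretized double integrator
        B = np.array([[0.0], [0.1]])
        low = LowLevelPolicy(np.array([[1.0, 1.5]]))  # stabilizing gain (toy)
        x_final, n_updates = rollout(A, B, np.array([1.0, 0.5]),
                                     HighLevelPolicy(), low)
        print("final state:", x_final, "| control updates:", n_updates)

Counting the updates alongside the final state shows the trade-off the framework targets: fewer control computations while the state is still driven toward the origin.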
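The idea of constructing Lyapunov functions directly from data can likewise be illustrated with a simple sketch. Here a linear closed loop is identified from recorded state transitions by least squares, a quadratic candidate V(x) = x^T P x is obtained from the discrete Lyapunov equation, and the decrease condition is then checked empirically on the same transitions. The article's actual data-driven construction is not specified here, so this is only an assumed, simplified stand-in.

    # Illustrative sketch: fit a quadratic Lyapunov candidate from trajectory
    # data via least-squares identification (an assumed simplification of the
    # article's data-driven stability verification).
    import numpy as np
    from scipy.linalg import solve_discrete_lyapunov

    def fit_closed_loop(X, X_next):
        # Least-squares estimate of A_cl in x_{k+1} ~= A_cl x_k (states as columns).
        return X_next @ np.linalg.pinv(X)

    def lyapunov_from_data(X, X_next, Q=None):
        A_cl = fit_closed_loop(X, X_next)
        Q = np.eye(A_cl.shape[0]) if Q is None else Q
        # Solve A_cl^T P A_cl - P = -Q for the candidate V(x) = x^T P x.
        return solve_discrete_lyapunov(A_cl.T, Q), A_cl

    def verify_decrease(P, X, X_next):
        # Empirical check that V decreases along every observed transition.
        v_now = np.einsum("ij,ij->j", X, P @ X)
        v_next = np.einsum("ij,ij->j", X_next, P @ X_next)
        return bool(np.all(v_next < v_now))

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        A_true = np.array([[0.9, 0.2], [-0.1, 0.8]])  # stable toy closed loop
        X = rng.normal(size=(2, 200))
        X_next = A_true @ X
        P, A_hat = lyapunov_from_data(X, X_next)
        print("decrease verified on data:", verify_decrease(P, X, X_next))

When the empirical decrease check fails, a natural next step, consistent with the policy improvement mechanism mentioned above, would be to feed the violating transitions back into policy training; that step is omitted from this sketch.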