Model predictive control
Controller (irrigation)
Computer science
Inverted pendulum
Control theory (sociology)
Tree (set theory)
Construct (Python library)
Focus (optics)
Trajectory
State space
Function (biology)
Pendulum
Task (project management)
Artificial intelligence
Control (management)
Engineering
Mathematics
Systems engineering
Programming language
Optics
Nonlinear system
Mathematical analysis
Physics
Astronomy
Statistics
Biology
Mechanical engineering
Evolutionary biology
Quantum mechanics
Agronomy
Authors
Ioanna Mitsioni,Pouria Tajvar,Danica Kragić,Jana Tůmová,Christian Pek
Identifier
DOI: 10.1109/TRO.2023.3266995
Abstract
In this article, we address the task and safety performance of data-driven model predictive controllers (DD-MPC) for systems with complex dynamics, i.e., temporally or spatially varying dynamics that may also be discontinuous. The three challenges we focus on are the accuracy of learned models, the receding horizon-induced myopic predictions of DD-MPC, and the active encouragement of safety. To learn accurate models for DD-MPC, we cautiously, yet effectively, explore the dynamical system with rapidly exploring random trees (RRT) to collect a uniform distribution of samples in the state-input space and overcome the common distribution shift in model learning. The learned model is further used to construct an RRT tree that estimates how close the model's predictions are to the desired target. This information is used in the cost function of the DD-MPC to minimize the short-sighted effect of its receding horizon nature. To promote safety, we approximate sets of safe states using demonstrations of exclusively safe trajectories, i.e., without unsafe examples, and encourage the controller to generate trajectories close to the sets. As a running example, we use a broken version of an inverted pendulum where the friction abruptly changes in certain regions. Furthermore, we showcase the adaptation of our method to a real-world robotic application with complex dynamics: robotic food-cutting. Our results show that our proposed control framework effectively avoids unsafe states with higher success rates than baseline controllers that employ models from controlled demonstrations and even random actions.
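To make the control structure described in the abstract more concrete, below is a minimal Python sketch of a sampling-based data-driven MPC whose cost combines per-step penalties on distance to an approximated safe set with a terminal RRT-style cost-to-go that counteracts the myopia of the receding horizon. This is not the authors' implementation: the pendulum model with region-dependent friction, the helpers `rrt_cost_to_go` and `safe_set_distance`, and all weights and limits are hypothetical stand-ins chosen only for illustration.

```python
# Illustrative sketch only; the model, helpers, and weights are assumptions,
# not the method or code from the paper.
import numpy as np

def pendulum_step(x, u, dt=0.02):
    """Simplified inverted pendulum whose friction changes abruptly in part
    of the state space (a toy stand-in for the paper's 'broken' pendulum)."""
    theta, omega = x
    g, l, m = 9.81, 1.0, 1.0
    # Hypothetical discontinuity: much higher friction when the pole is far from upright.
    b = 0.5 if abs(theta) > 1.0 else 0.05
    omega_dot = (g / l) * np.sin(theta) - (b / (m * l**2)) * omega + u / (m * l**2)
    return np.array([theta + dt * omega, omega + dt * omega_dot])

def mpc_cost(u_seq, x0, model, rrt_cost_to_go, safe_set_distance,
             w_goal=1.0, w_safe=10.0, w_u=0.01):
    """Receding-horizon cost: control effort, a penalty on distance to the
    approximated safe set, and a terminal RRT-based cost-to-go estimate."""
    x, cost = np.asarray(x0, dtype=float), 0.0
    for u in u_seq:
        x = model(x, u)                         # learned (here: true) one-step model
        cost += w_safe * safe_set_distance(x)   # encourage staying near the safe set
        cost += w_u * float(u) ** 2             # control effort
    cost += w_goal * rrt_cost_to_go(x)          # terminal estimate against myopia
    return cost

def random_shooting_mpc(x0, model, rrt_cost_to_go, safe_set_distance,
                        horizon=15, n_samples=256, u_max=2.0, rng=None):
    """Return the first action of the lowest-cost sampled input sequence."""
    rng = rng or np.random.default_rng(0)
    candidates = rng.uniform(-u_max, u_max, size=(n_samples, horizon))
    costs = [mpc_cost(u_seq, x0, model, rrt_cost_to_go, safe_set_distance)
             for u_seq in candidates]
    return candidates[int(np.argmin(costs))][0]

if __name__ == "__main__":
    # Toy stand-ins: in the paper, the cost-to-go comes from an RRT built with
    # the learned model and the safe set from safe-only demonstrations.
    goal = np.array([0.0, 0.0])
    rrt_cost_to_go = lambda x: float(np.linalg.norm(x - goal))
    safe_set_distance = lambda x: max(0.0, abs(x[1]) - 3.0)  # assumed safe set |omega| <= 3
    x = np.array([np.pi, 0.0])
    for _ in range(100):
        u = random_shooting_mpc(x, pendulum_step, rrt_cost_to_go, safe_set_distance)
        x = pendulum_step(x, u)
```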