Keywords
Backstepping
Control theory
Lyapunov function
Parametric uncertainty
Convergence
Mathematical optimization
Nonlinear systems
Adaptive control
Optimization problem
Mathematical proof
Multi-agent systems
Stability theory
Strict-feedback form
Adaptive optimization
Distributed control
Authors
Zhengyan Qin, Tengfei Liu, Zhong-Ping Jiang
Source
Journal: Automatica (Elsevier)
Date: 2022-07-01
Volume/Article: 141, 110304
Citations: 6
Identifier
DOI:10.1016/j.automatica.2022.110304
Abstract
This paper presents an adaptive backstepping approach to distributed optimization for a class of nonlinear multi-agent systems in which each agent is represented by the parametric strict-feedback form. In particular, the paper does not assume that the gradient functions of the local objective functions are known, and instead uses gradient values measured at the agents' real-time outputs. A stepwise method is presented to derive novel distributed adaptive optimization algorithms that steer the outputs of all the agents to the optimal solution of the total objective function. First, a distributed adaptive optimization algorithm is developed for first-order nonlinear uncertain multi-agent systems, supported by stability analysis and convergence proofs using Lyapunov arguments. Second, by means of Lyapunov arguments in the spirit of backstepping, a distributed adaptive optimization algorithm is presented for high-order strict-feedback systems with parametric uncertainty. Extensions of the main result to practically important classes of systems with unknown virtual control coefficients, output feedback, and relative-measurement feedback are also discussed.
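To illustrate the kind of problem the paper addresses, the sketch below simulates a standard consensus-based distributed optimization dynamic (a PI-consensus gradient flow) for single-integrator agents with quadratic local objectives. This is not the paper's adaptive backstepping algorithm; it is a minimal, well-known baseline for the first-order case, with hypothetical objective values and a ring communication graph chosen for the example. Each agent only evaluates its own gradient at its current output, mirroring the measured-gradient setting of the abstract.

```python
import numpy as np

# Hypothetical local objectives f_i(x) = (x - c_i)^2; the minimizer of
# sum_i f_i is the mean of the c_i (here 4.0).
c = np.array([1.0, 3.0, 5.0, 7.0])
n = len(c)

def grad(x):
    # Each agent measures only its own gradient at its current output.
    return 2.0 * (x - c)

# Laplacian of an undirected ring graph over the 4 agents.
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
Lap = np.diag(A.sum(axis=1)) - A

x = np.zeros(n)   # agent outputs (single-integrator states)
v = np.zeros(n)   # integral consensus variables
dt = 0.01
for _ in range(20000):
    # PI-consensus gradient dynamics:
    #   x_dot = -grad(x) - L x - L v,   v_dot = L x
    # At equilibrium L x = 0 forces consensus, and summing the x-equation
    # over agents forces sum_i grad f_i = 0, i.e. the global optimum.
    x_dot = -grad(x) - Lap @ x - Lap @ v
    v_dot = Lap @ x
    x, v = x + dt * x_dot, v + dt * v_dot

print(np.round(x, 3))  # all agents converge near mean(c) = 4.0
```

The integral variable `v` is what lets the agents agree on the global optimizer rather than each settling at its own local minimum; the paper's contribution is extending this kind of behavior to uncertain, high-order strict-feedback dynamics via backstepping.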