Keywords: Convergence (economics), Artificial neural network, Computer science, Acceleration, Computation, Nonlinear system, Lyapunov function, Lyapunov stability, Mathematical optimization, Theory (learning stability), Consensus, Equilibrium point, Control theory (sociology), Multi-agent system, Algorithm, Mathematics, Artificial intelligence, Machine learning, Operating system, Quantum mechanics, Physics, Economic growth, Economics, Control (management)
Authors
Xiao Lin, Lei Jia, Jianhua Dai, Yingkun Cao, Yiwei Li, Quanxin Zhu, Jichun Li, Min Liu
Source
Journal: IEEE Transactions on Neural Networks and Learning Systems
[Institute of Electrical and Electronics Engineers]
Date: 2022-01-01
Pages: 1-10
Identifiers
DOI: 10.1109/tnnls.2022.3193429
Abstract
In this article, a novel distributed gradient neural network (DGNN) with predefined-time convergence (PTC) is proposed to solve consensus problems widely existing in multiagent systems (MASs). Compared with previous gradient neural networks (GNNs) for optimization and computation, the proposed DGNN model works in a nonfully connected way, in which each neuron only needs the information of neighbor neurons to converge to the equilibrium point. The convergence and asymptotic stability of the DGNN model are proved according to Lyapunov theory. In addition, based on a relatively loose condition, three novel nonlinear activation functions are designed to speed up the DGNN model to PTC, which is proved by rigorous theory. Numerical results further verify the effectiveness, especially the PTC, of the proposed nonlinearly activated DGNN model in solving various consensus problems of MASs. Finally, a practical case of directional consensus is presented to show the feasibility of the DGNN model, and a corresponding connectivity-testing example is given to verify the influence of connectivity on the convergence speed.
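To illustrate the neighbor-only update scheme the abstract describes, the sketch below simulates a generic nonlinearly activated consensus flow, x_i' = -Σ_j a_ij φ(x_i - x_j), where each agent i reads only the states of its graph neighbors. The activation φ here (a signed-power term plus a linear term) is a common accelerating choice in the fixed-/predefined-time consensus literature, not the paper's actual DGNN model or its three activation functions, which are not given in this record.

```python
import numpy as np

def phi(e, alpha=0.5):
    # Hypothetical accelerating activation: odd signed-power term plus a
    # linear term. Chosen for illustration; the paper's functions differ.
    return np.sign(e) * np.abs(e) ** alpha + e

def simulate_consensus(x0, adj, dt=0.01, steps=2000):
    # Forward-Euler integration of x_i' = -sum_j a_ij * phi(x_i - x_j).
    # Each agent's update uses only its neighbors' states, mirroring the
    # "nonfully connected" information structure described in the abstract.
    x = np.array(x0, dtype=float)
    n = len(x)
    for _ in range(steps):
        dx = np.zeros_like(x)
        for i in range(n):
            for j in range(n):
                if adj[i][j]:
                    dx[i] -= phi(x[i] - x[j])
        x += dt * dx
    return x

# Four agents on an undirected ring graph (each sees two neighbors).
adj = [[0, 1, 0, 1],
       [1, 0, 1, 0],
       [0, 1, 0, 1],
       [1, 0, 1, 0]]
x = simulate_consensus([3.0, -1.0, 5.0, 0.0], adj)
print(x)  # all entries close to the initial average 1.75
```

With a symmetric graph and an odd φ, the state average is invariant, so the agents agree on the mean of their initial values; the signed-power term strengthens the drive when disagreements are small, which is the intuition behind accelerated (finite-/predefined-time) consensus.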