Computer science
Convolutional neural network
Inference
Graph
Artificial intelligence
Scalability
Flocking (behavior)
Controller
Distributed computing
Machine learning
Perception
Theoretical computer science
Agronomy
Biology
Composite material
Neuroscience
Materials science
Database
Authors
Ting-Kuei Hu,Fernando Gama,Tianlong Chen,Wenqing Zheng,Zhangyang Wang,Alejandro Ribeiro,Brian M. Sadler
Source
Journal: IEEE Transactions on Signal and Information Processing over Networks
Date: 2021-12-31
Volume/Issue: 8: 12-24
Citations: 12
Identifier
DOI:10.1109/tsipn.2021.3139336
Abstract
In this paper, we present a perception-action-communication loop design using Vision-based Graph Aggregation and Inference (VGAI). This multi-agent decentralized learning-to-control framework maps raw visual observations to agent actions, aided by local communication among neighboring agents. Our framework is implemented by a cascade of a convolutional and a graph neural network (CNN/GNN), addressing agent-level visual perception and feature learning, as well as swarm-level communication, local information aggregation and agent action inference, respectively. By jointly training the CNN and GNN, image features and communication messages are learned in conjunction to better address the specific task. We use imitation learning to train the VGAI controller in an offline phase, relying on a centralized expert controller. This results in a learned VGAI controller that can be deployed in a distributed manner for online execution. Additionally, the controller exhibits good scaling properties, with training in smaller teams and application in larger teams. Through a multi-agent flocking application, we demonstrate that VGAI yields performance comparable to or better than other decentralized controllers, using only the visual input modality and without accessing precise location or motion state information.
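The abstract describes swarm-level communication and local information aggregation performed by the GNN stage of the cascade. A common way to realize such aggregation is a K-tap graph filter, where each extra tap corresponds to one more round of message exchange with neighbors. The sketch below is an illustrative numpy version under assumed names and shapes (`S` as the communication graph, `X` as per-agent CNN features, `H` as filter taps); it is not the authors' implementation.

```python
import numpy as np

def graph_filter(S, X, H):
    """K-tap graph filter: Y = sum_k (S^k X) H_k.

    S: (N, N) adjacency of the communication graph among N agents.
    X: (N, F) per-agent feature vectors (e.g., CNN outputs).
    H: list of K arrays of shape (F, G), one filter tap per hop.
    Returns (N, G) aggregated features, one row per agent.
    """
    Z = X.copy()                       # 0-hop information (each agent's own features)
    Y = np.zeros((X.shape[0], H[0].shape[1]))
    for Hk in H:
        Y += Z @ Hk                    # combine the current k-hop aggregate
        Z = S @ Z                      # one more exchange with 1-hop neighbors
    return Y

# Tiny example: a 3-agent path graph 0 - 1 - 2.
S = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])
X = np.arange(6, dtype=float).reshape(3, 2)
H = [np.eye(2), np.eye(2)]             # identity taps for K = 2 hops
Y = graph_filter(S, X, H)              # equals X + S @ X with these taps
```

Because each agent only needs `S @ Z`, i.e., the features of its direct neighbors, the same computation can run in a distributed manner at execution time, which is what allows the offline-trained controller to be deployed online without a central node.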