Keywords
computer science, inference, graph, exploitation, convolutional neural network, gradient descent, network topology, theoretical computer science, set (abstract data type), optimization problem, convergence (economics), training set, computation, artificial intelligence, machine learning, mathematical optimization, artificial neural network, algorithm, mathematics, economics, operating system, programming language, economic growth, computer security
Authors
Simone Scardapane, Indro Spinelli, Paolo Di Lorenzo
Source
Journal: IEEE Transactions on Signal and Information Processing over Networks
Date: 2020-12-22
Volume: 7, Pages: 87-100
Citations: 11
Identifier
DOI: 10.1109/tsipn.2020.3046237
Abstract
The aim of this work is to develop a fully-distributed algorithmic framework for training graph convolutional networks (GCNs). The proposed method is able to exploit the meaningful relational structure of the input data, which are collected by a set of agents that communicate over a sparse network topology. After formulating the centralized GCN training problem, we first show how to perform inference in a distributed scenario where the underlying data graph is split among different agents. Then, we propose a distributed gradient descent procedure to solve the GCN training problem. The resulting model distributes computation along three lines: during inference, during back-propagation, and during optimization. Convergence to stationary solutions of the GCN training problem is also established under mild conditions. Finally, we propose an optimization criterion to design the communication topology between agents in order to match the graph describing data relationships. A wide set of numerical results validates our proposal. To the best of our knowledge, this is the first work combining graph convolutional neural networks with distributed optimization.
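To make the abstract's recipe concrete, below is a minimal NumPy sketch of the general idea: agents holding blocks of a data graph run decentralized gradient descent on a one-layer ReLU graph convolution, mixing parameters with neighbors over a sparse communication topology. This is a hypothetical illustration, not the paper's actual algorithm: all names, the synthetic data, the squared-error loss, and the Metropolis-weighted ring topology are assumptions of this sketch, and the feature matrix X is assumed globally known, which simplifies the paper's fully-distributed inference.

```python
import numpy as np

rng = np.random.default_rng(0)

# ---- Synthetic problem: each agent owns one block of rows of the data graph ----
n_agents, nodes_per_agent, n_feat, n_out = 4, 8, 5, 2
n = n_agents * nodes_per_agent

A = (rng.random((n, n)) < 0.2).astype(float)
A = np.maximum(A, A.T)                      # undirected data graph
np.fill_diagonal(A, 1.0)                    # add self-loops
d = A.sum(axis=1)
S = A / np.sqrt(np.outer(d, d))             # symmetric normalization D^-1/2 A D^-1/2

X = rng.standard_normal((n, n_feat))
W_true = rng.standard_normal((n_feat, n_out))
Y = np.maximum(S @ X @ W_true, 0.0)         # targets generated by a "true" GCN layer

def local_grad(S_blk, X, Y_blk, W):
    """Gradient of 0.5 * ||ReLU(S_blk X W) - Y_blk||^2 with respect to W."""
    Z = S_blk @ X @ W
    H = np.maximum(Z, 0.0)
    return (S_blk @ X).T @ ((H - Y_blk) * (Z > 0))

# Agent i sees only the rows of S and Y for its own nodes (an assumption:
# X is shared globally here, unlike the paper's fully-distributed setting).
blocks = [slice(i * nodes_per_agent, (i + 1) * nodes_per_agent)
          for i in range(n_agents)]

# ---- Sparse communication topology: a ring with Metropolis mixing weights ----
C = np.zeros((n_agents, n_agents))
for i in range(n_agents):
    for j in ((i - 1) % n_agents, (i + 1) % n_agents):
        C[i, j] = 1.0 / 3.0                 # max(deg_i, deg_j) + 1 = 3 on a ring
    C[i, i] = 1.0 - C[i].sum()              # rows sum to 1; C symmetric => doubly stochastic

# ---- Decentralized gradient descent: mix with neighbors, then step locally ----
W = [rng.standard_normal((n_feat, n_out)) for _ in range(n_agents)]
alpha = 0.01
for it in range(500):
    mixed = [sum(C[i, j] * W[j] for j in range(n_agents)) for i in range(n_agents)]
    W = [mixed[i] - alpha * local_grad(S[blocks[i]], X, Y[blocks[i]], W[i])
         for i in range(n_agents)]

disagreement = max(np.linalg.norm(W[i] - W[0]) for i in range(n_agents))
print("consensus disagreement:", disagreement)
```

The fixed ring here merely stands in for any connected, doubly-stochastic mixing scheme; the paper goes further and optimizes the communication topology itself so that it matches the graph describing the data relationships.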