Keywords
Federated learning; Edge devices; Edge computing; Distributed computing; Distributed learning; Model aggregation; Parameter server; Scalability; Bottleneck; Computer networks; Cloud computing
Authors
Chuang Hu, Huanghuang Liang, Xiao Han, Boan Liu, Dazhao Cheng, Dan Wang
Identifier
DOI: 10.1145/3545008.3545030
Abstract
Federated learning (FL) is a distributed machine learning paradigm that enables machine learning on edge devices. One unique feature of FL is that the edge devices belong to individuals; because they are not "owned" by the FL coordinator but merely "federated", the number of participating devices can be huge. In the prevailing distributed ML architecture, the parameter server (PS) architecture, model aggregation is centralized. When facing a large number of edge devices, this centralized model aggregation becomes the bottleneck and fundamentally restricts system scalability.
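To make the bottleneck concrete, the following is a minimal sketch of centralized PS-style aggregation in the FedAvg style (the function name and toy data are illustrative, not from the paper): every round, the server must receive and average one full model update per client, so its bandwidth and compute cost grow linearly with the number of federated devices.

```python
import numpy as np

def aggregate(client_updates, client_weights):
    """Centralized FedAvg-style aggregation at the parameter server.

    client_updates: list of model parameter vectors (np.ndarray)
    client_weights: per-client sample counts used as averaging weights
    """
    total = float(sum(client_weights))
    agg = np.zeros_like(client_updates[0], dtype=float)
    for update, w in zip(client_updates, client_weights):
        # The server touches every client's full model each round:
        # O(N * model_size) traffic and compute at a single node.
        agg += (w / total) * update
    return agg

# Three toy clients, each reporting a 4-parameter model.
updates = [np.ones(4) * i for i in range(1, 4)]
weights = [10, 20, 30]  # samples per client
global_model = aggregate(updates, weights)
print(global_model)
```

With N clients, the single server performs N receives and N weighted additions per round; this linear-in-N load at one node is exactly the scalability limit the abstract describes.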