Computer science
Federated learning
Overhead (engineering)
Pruning
Baseline (sea)
Software deployment
Task (project management)
Enhanced Data Rates for GSM Evolution (EDGE)
Artificial intelligence
Machine learning
Train
Data mining
Distributed computing
Software engineering
Operating system
Agronomy
Cartography
Geography
Management
Economics
Geology
Oceanography
Biology
Authors
Yang Liu, Yi Zhao, Guangmeng Zhou, Ke Xu
Source
Journal: Communications in Computer and Information Science
Date: 2021-01-01
Pages: 430-437
Citations: 4
Identifier
DOI: 10.1007/978-3-030-92307-5_50
Abstract
Federated learning (FL) has been widely deployed in edge computing scenarios. However, FL-related technologies are still facing severe challenges while evolving rapidly. Among them, statistical heterogeneity (i.e., non-IID data) seriously hinders the wide deployment of FL. In our work, we propose a new framework for communication-efficient and personalized federated learning, namely FedPrune. More specifically, under the newly proposed FL framework, each client trains a converged model locally to obtain the critical parameters and substructure that guide the pruning of the network participating in FL. FedPrune is able to achieve high accuracy while greatly reducing communication overhead. Moreover, each client learns a personalized model in FedPrune. Experimental results demonstrate that FedPrune achieves the best accuracy on the image recognition task, with varying degrees of reduced communication cost compared to the three baseline methods.
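To make the idea in the abstract concrete, the following is a minimal, illustrative sketch of pruning-guided communication reduction in federated learning. The pruning criterion (parameter magnitude), the keep ratio, and all helper names (`critical_mask`, `client_update`, `server_aggregate`, `local_train`) are assumptions for illustration only; the abstract does not specify FedPrune's exact rule or aggregation scheme.

```python
# Sketch: each client derives a "critical" substructure from a locally trained
# model and uploads only those parameters, reducing communication overhead.
# This is NOT the authors' implementation; the criterion below is an assumption.
import numpy as np

def critical_mask(weights: np.ndarray, keep_ratio: float) -> np.ndarray:
    """Keep the largest-magnitude fraction of weights (assumed criterion)."""
    k = max(1, int(keep_ratio * weights.size))
    threshold = np.partition(np.abs(weights).ravel(), -k)[-k]
    return np.abs(weights) >= threshold

def local_train(weights: np.ndarray, data) -> np.ndarray:
    """Placeholder for a real optimizer loop on the client's (non-IID) data."""
    return weights + 0.01 * np.random.randn(*weights.shape)

def client_update(global_weights: np.ndarray, data, keep_ratio: float = 0.2):
    """One client round: train locally, then upload only the critical weights."""
    local_weights = local_train(global_weights, data)      # locally converged model
    mask = critical_mask(local_weights, keep_ratio)         # substructure guiding pruning
    # Only the masked (critical) parameters are communicated to the server.
    return local_weights * mask, mask

def server_aggregate(updates):
    """Average sparse updates entry-wise over the clients that kept each weight."""
    weight_sum = sum(w for w, _ in updates)
    keep_count = sum(m.astype(float) for _, m in updates)
    return np.divide(weight_sum, np.maximum(keep_count, 1.0))

if __name__ == "__main__":
    global_w = np.zeros((16, 16))
    updates = [client_update(global_w, data=None) for _ in range(3)]
    global_w = server_aggregate(updates)  # next round's global model
```

Because each client retains its own locally trained weights and mask, the model it actually uses can differ from the aggregated global model, which is one plausible way to read the "personalized model" aspect described above.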