Author
Xiaomao Zhou,Qingmin Jia,Renchao Xie
Identifier
DOI:10.1145/3495243.3558248
Abstract
In this paper, we present NestFL, a learning-efficient federated learning (FL) framework for edge computing that jointly improves training efficiency and achieves personalization. Specifically, NestFL takes the runtime resources of edge devices into account and assigns each device a sparse-structured subnetwork by progressively performing structured pruning. During training, only the updates of these subnetworks are transmitted to the central server. Moreover, the generated subnetworks adopt a structure- and parameter-sharing mechanism that nests them inside a multi-capacity global model. In doing so, the overall communication and computation costs are significantly reduced, and each device learns a personalized model without introducing extra parameters. Finally, a weighted aggregation mechanism is designed to improve training performance while maximally preserving personalization.
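The abstract describes subnetworks that share structure and parameters by being nested inside one multi-capacity global model, with a weighted aggregation step on the server. The sketch below illustrates that nesting idea only; all names, shapes, the "keep the first k channels" pruning rule, and the capacity-proportional weights are our assumptions for illustration, not the paper's actual implementation.

```python
import numpy as np

# Hypothetical sketch of the nested-subnetwork idea: a global layer has 8
# output channels, and each device keeps only its first k channels, so every
# smaller subnetwork is parameter-shared with (nested inside) the larger ones.
rng = np.random.default_rng(0)
global_weight = rng.standard_normal((8, 4))  # 8 output channels, 4 inputs

def subnetwork(w, k):
    # Structured "pruning" stand-in: keep the first k output-channel rows.
    return w[:k].copy()

# Devices with different runtime capacities get differently sized subnets.
capacities = {"dev_a": 2, "dev_b": 4, "dev_c": 8}
subnets = {d: subnetwork(global_weight, k) for d, k in capacities.items()}

# Simulate one round of local training: each device perturbs its subnetwork
# and would transmit only this subnetwork-sized update to the server.
updates = {d: w + 0.1 * rng.standard_normal(w.shape) for d, w in subnets.items()}

# Weighted aggregation (assumed capacity-proportional weights): each channel
# row is averaged only over the devices that actually hold that row.
weights = {d: float(k) for d, k in capacities.items()}
new_global = np.zeros_like(global_weight)
norm = np.zeros(global_weight.shape[0])
for d, upd in updates.items():
    k = upd.shape[0]
    new_global[:k] += weights[d] * upd
    norm[:k] += weights[d]
new_global /= norm[:, None]
```

Because the subnetworks are prefixes of one shared weight tensor, the server stores a single multi-capacity model, and rows held by fewer devices are simply averaged over fewer contributors.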