Keywords: Computer science, Latency (audio), Overhead (engineering), Edge computing, Edge device, Enhanced Data Rates for GSM Evolution, Distributed computing, Computation, Energy consumption, Efficient energy use, Convergence (economics), Computer engineering, Artificial intelligence, Cloud computing, Algorithm, Electrical engineering, Operating system, Engineering, Biology, Economics, Telecommunications, Economic growth, Ecology
Authors
Peichun Li, Guoliang Cheng, Xumin Huang, Jiawen Kang, Rong Yu, Yuan Wu, Miao Pan
Identifier
DOI: 10.1109/infocom53939.2023.10229017
Abstract
In this work, we investigate the challenging problem of on-demand federated learning (FL) over heterogeneous edge devices with diverse resource constraints. We propose a cost-adjustable FL framework, named AnycostFL, that enables diverse edge devices to efficiently perform local updates under a wide range of efficiency constraints. To this end, we design model shrinking to support local model training with elastic computation cost, and gradient compression to allow parameter transmission with dynamic communication overhead. An enhanced parameter aggregation is then conducted in an element-wise manner to improve the model performance. Building on AnycostFL, we further propose an optimization design that minimizes the global training loss under personalized latency and energy constraints. Drawing on theoretical insights from the convergence analysis, we deduce personalized training strategies for different devices to match their locally available resources. Experimental results indicate that, compared to state-of-the-art efficient FL algorithms, our learning framework reduces the training latency and energy consumption by up to 1.9 times while reaching a comparable global testing accuracy. Moreover, the results demonstrate that our approach significantly improves the converged global accuracy.
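To make the abstract's two mechanisms concrete, the sketch below illustrates one plausible reading of them: per-device gradient compression via top-k magnitude sparsification (each device keeps a fraction of gradient entries matched to its communication budget), and element-wise aggregation in which each parameter is averaged only over the devices that actually transmitted it. This is a minimal illustration under assumed details, not the paper's exact AnycostFL algorithm; the function names and the count-normalized averaging rule are assumptions for exposition.

```python
# Sketch: top-k gradient compression + element-wise aggregation.
# Assumptions (not from the paper): magnitude-based top-k sparsification,
# and per-element averaging over contributing devices only.
import numpy as np

def compress_topk(grad: np.ndarray, keep_ratio: float):
    """Keep only the largest-magnitude entries of `grad`; zero the rest.

    Returns the sparse gradient and a boolean mask of transmitted entries.
    """
    k = max(1, int(grad.size * keep_ratio))
    flat = grad.ravel()
    idx = np.argpartition(np.abs(flat), -k)[-k:]  # indices of the top-k entries
    sparse = np.zeros_like(flat)
    sparse[idx] = flat[idx]
    mask = np.zeros(flat.shape, dtype=bool)
    mask[idx] = True
    return sparse.reshape(grad.shape), mask.reshape(grad.shape)

def aggregate_elementwise(sparse_grads: np.ndarray, masks: np.ndarray):
    """Average each element only over the devices that transmitted it."""
    total = np.sum(sparse_grads, axis=0)
    counts = np.maximum(np.sum(masks, axis=0), 1)  # avoid division by zero
    return total / counts

rng = np.random.default_rng(0)
true_grad = rng.normal(size=(4, 4))

# Heterogeneous devices: each keep ratio stands in for a different
# per-device communication budget.
keep_ratios = [0.25, 0.5, 1.0]
pairs = [compress_topk(true_grad + 0.1 * rng.normal(size=true_grad.shape), r)
         for r in keep_ratios]
sparse_grads = np.array([p[0] for p in pairs])
masks = np.array([p[1] for p in pairs])

agg = aggregate_elementwise(sparse_grads, masks)
print("element-wise aggregated gradient:\n", agg)
```

Normalizing by per-element contribution counts (rather than by the total number of devices) keeps rarely transmitted parameters from being biased toward zero when devices compress at different ratios, which is one way an "element-wise" aggregation can improve on naive averaging of sparse updates.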