Keywords: computer science; cloud computing; quantization (signal processing); server; edge device; distributed computing; upload; leverage (statistics); exploitation; Enhanced Data Rates for GSM Evolution; computer network; theoretical computer science; machine learning; algorithm; artificial intelligence; computer security; operating system
Authors
Lumin Liu,Jun Zhang,Shenghui Song,Khaled B. Letaief
Identifier
DOI: 10.1109/twc.2022.3190512
Abstract
Federated learning (FL) is a powerful distributed machine learning framework in which a server aggregates models trained by different clients without accessing their private data. Hierarchical FL, with a client-edge-cloud aggregation hierarchy, can effectively leverage both the cloud server's access to many clients' data and the edge servers' closeness to the clients to achieve high communication efficiency. Neural network quantization can further reduce the communication overhead during model uploading. To fully exploit the advantages of hierarchical FL, an accurate convergence analysis with respect to the key system parameters is needed. Unfortunately, existing analysis is loose and does not consider model quantization. In this paper, we derive a tighter convergence bound for hierarchical FL with quantization. The convergence result leads to practical guidelines for important design problems such as the client-edge aggregation and edge-client association strategies. Based on the obtained analytical results, we optimize the two aggregation intervals and show that the client-edge aggregation interval should slowly decay, while the edge-cloud aggregation interval needs to adapt to the ratio of the client-edge and edge-cloud propagation delays. Simulation results verify the design guidelines and demonstrate the effectiveness of the proposed aggregation strategy.
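The client-edge-cloud hierarchy described in the abstract can be illustrated with a minimal sketch: clients run local updates for a client-edge interval, upload quantized models that the edge server averages, and after several edge aggregations (the edge-cloud interval) the cloud averages the edge models. This is an assumption-laden toy, not the paper's algorithm: `local_sgd` optimizes a placeholder quadratic objective, and `quantize` is a simple uniform quantizer rather than the specific scheme analyzed in the paper.

```python
import numpy as np

def local_sgd(w, steps, lr=0.1):
    # Placeholder client update: gradient steps on the toy loss ||w - 1||^2.
    target = np.ones_like(w)
    for _ in range(steps):
        w = w - lr * 2.0 * (w - target)
    return w

def quantize(w, num_bits=8):
    # Simple uniform quantizer over [min, max] (illustrative assumption only).
    lo, hi = float(w.min()), float(w.max())
    if hi == lo:
        return w.copy()
    step = (hi - lo) / (2 ** num_bits - 1)
    return lo + np.round((w - lo) / step) * step

def hierarchical_round(w_cloud, clients_per_edge, tau_ce, tau_ec):
    # One cloud round of hierarchical FL with quantized uploads.
    #   tau_ce: client-edge interval (local steps between edge aggregations)
    #   tau_ec: edge-cloud interval (edge aggregations between cloud aggregations)
    edge_models = []
    for num_clients in clients_per_edge:
        w_edge = w_cloud.copy()
        for _ in range(tau_ec):
            uploads = [quantize(local_sgd(w_edge.copy(), steps=tau_ce))
                       for _ in range(num_clients)]
            w_edge = sum(uploads) / len(uploads)   # edge aggregation
        edge_models.append(w_edge)
    return sum(edge_models) / len(edge_models)     # cloud aggregation
```

For instance, starting from `w = np.zeros(4)` and repeating `hierarchical_round(w, clients_per_edge=[2, 3], tau_ce=5, tau_ec=2)` drives the model toward the toy optimum at the all-ones vector, while all client-to-edge traffic passes through the quantizer.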