Computer science
Server
Asynchronous communication
Enhanced Data Rates for GSM Evolution (EDGE)
Edge device
Latency (audio)
Independent and identically distributed random variables
Distributed computing
Convergence (economics)
Computer network
Distributed database
Artificial intelligence
Operating system
Economics
Random variable
Statistics
Cloud computing
Telecommunications
Economic growth
Mathematics
Authors
Yuchang Sun, Jiawei Shao, Yuyi Mao, Jessie Hui Wang, Jun Zhang
Identifier
DOI: 10.1109/TNSM.2023.3252818
Abstract
Federated edge learning (FEEL) emerges as a privacy-preserving paradigm to effectively train deep learning models from distributed data in 6G networks. Nevertheless, the limited coverage of a single edge server results in an insufficient number of participating client nodes, which may impair the learning performance. In this paper, we investigate a novel FEEL framework, namely semi-decentralized federated edge learning (SD-FEEL), where multiple edge servers collectively coordinate a large number of client nodes. By exploiting the low-latency communication among edge servers for efficient model sharing, SD-FEEL incorporates more training data while enjoying lower latency than conventional federated learning. We detail the training algorithm for SD-FEEL, which proceeds in three steps: local model updates, intra-cluster model aggregation, and inter-cluster model aggregation. We prove the convergence of this algorithm on non-independent and identically distributed (non-IID) data, which reveals the effects of key parameters and provides design guidelines. Meanwhile, the heterogeneity of edge devices may cause the straggler effect and deteriorate the convergence speed of SD-FEEL. To resolve this issue, we propose an asynchronous training algorithm with a staleness-aware aggregation scheme, whose convergence is also analyzed. Simulation results demonstrate the effectiveness and efficiency of the proposed algorithms for SD-FEEL and corroborate our analysis.
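The abstract outlines the structure of one SD-FEEL round: clients run local updates, each edge server aggregates its own cluster, and edge servers then mix models with their neighbors, with a staleness-aware weight handling asynchronous stragglers. The following is a minimal sketch of such a round, not the authors' implementation: it assumes flat NumPy parameter vectors, a least-squares surrogate loss, a doubly stochastic inter-server mixing matrix, and a power-law staleness discount; all function names (`local_update`, `intra_cluster_aggregate`, `inter_cluster_aggregate`, `staleness_discount`) are hypothetical.

```python
# Minimal sketch of one SD-FEEL training round (assumptions as stated above).
import numpy as np

def local_update(model, data, lr=0.01, steps=5):
    """Step 1: a client refines the model with a few local SGD steps on its own data."""
    x, y = data
    for _ in range(steps):
        grad = x.T @ (x @ model - y) / len(y)  # least-squares gradient as a stand-in loss
        model = model - lr * grad
    return model

def intra_cluster_aggregate(client_models, weights):
    """Step 2: an edge server averages the models uploaded by its own clients."""
    return sum(w * m for w, m in zip(weights, client_models))

def inter_cluster_aggregate(edge_models, mix):
    """Step 3: edge servers exchange models over low-latency links and mix them.
    mix[i][j] is the weight server i assigns to server j's model."""
    n = len(edge_models)
    return [sum(mix[i][j] * edge_models[j] for j in range(n)) for i in range(n)]

def staleness_discount(tau, alpha=0.5):
    """Staleness-aware weight for the asynchronous variant: a model that is
    `tau` rounds old is discounted (the power-law form is our assumption)."""
    return (1.0 + tau) ** (-alpha)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    d = 4
    # Two edge clusters, each coordinating two clients with locally generated data.
    clusters = [[(rng.normal(size=(20, d)), rng.normal(size=20)) for _ in range(2)]
                for _ in range(2)]
    edge_models = [np.zeros(d) for _ in clusters]
    mix = [[0.5, 0.5], [0.5, 0.5]]  # doubly stochastic mixing matrix between servers
    for _ in range(10):  # training rounds
        for i, clients in enumerate(clusters):
            updated = [local_update(edge_models[i].copy(), data) for data in clients]
            edge_models[i] = intra_cluster_aggregate(updated, [0.5, 0.5])
        edge_models = inter_cluster_aggregate(edge_models, mix)
```

The mixing matrix stands in for the low-latency inter-server model sharing the abstract describes; in the synchronous setting every model has staleness zero, so the discount reduces to uniform weighting, whereas an asynchronous server would scale a straggler's contribution by `staleness_discount(tau)` before aggregation.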