Authors
Jianchun Liu, Hongli Xu, Lun Wang, Yang Xu, Chen Qian, Jinyang Huang, He Huang
Identifier
DOI:10.1109/tmc.2021.3096846
Abstract
Federated learning (FL) has been widely adopted to train machine learning models over massive data in edge computing. However, machine learning faces critical challenges, e.g., data imbalance, edge dynamics, and resource constraints, in edge computing. The existing FL solutions cannot well cope with data imbalance or edge dynamics, and may cause high resource cost. In this paper, we propose an adaptive asynchronous federated learning (AAFL) mechanism. To deal with edge dynamics, a certain fraction $\alpha$ of all local updates will be aggregated by their arrival order at the parameter server in each epoch. Moreover, the system can intelligently vary the number of local updated models for global model aggregation in different epochs with network situations. We then propose experience-driven algorithms based on deep reinforcement learning (DRL) to adaptively determine the optimal value of $\alpha$ in each epoch for two cases of AAFL, single learning task and multiple learning tasks, so as to achieve less completion time of training under resource constraints. Extensive experiments on the classical models and datasets show high effectiveness of the proposed algorithms. Specifically, AAFL can reduce the completion time by about 70 percent and improve the learning accuracy by about 28 percent under resource constraints, compared with the state-of-the-art solutions.
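The core aggregation rule described above — in each epoch, the parameter server waits only for the first fraction α of local updates to arrive and aggregates those — can be sketched as follows. This is an illustrative sketch only: the function name, the plain unweighted averaging rule, and the arrival-order list are assumptions for demonstration, not the authors' exact AAFL implementation (which additionally tunes α per epoch with a DRL agent).

```python
import numpy as np

def aggregate_first_alpha(updates_in_arrival_order, alpha):
    """Aggregate only the first ceil(alpha * N) local updates to arrive.

    Sketch of the alpha-fraction asynchronous aggregation idea from the
    abstract. `updates_in_arrival_order` is a list of model-parameter
    arrays, ordered by arrival time at the parameter server.
    """
    n = len(updates_in_arrival_order)
    # number of local updates to wait for in this epoch (at least one)
    k = max(1, int(np.ceil(alpha * n)))
    selected = updates_in_arrival_order[:k]   # earliest k arrivals
    # simple unweighted average of the selected model parameters
    return np.mean(np.stack(selected), axis=0)

# usage: 5 workers' parameter vectors; aggregate the first 60% to arrive,
# i.e. the 3 earliest updates, so stragglers do not stall the epoch
updates = [np.full(3, float(i)) for i in range(5)]  # arrival order 0..4
new_global = aggregate_first_alpha(updates, alpha=0.6)
```

Because slow workers beyond the first k are simply not waited for, a smaller α shortens each epoch under edge dynamics, while a larger α incorporates more local data per aggregation; the paper's DRL algorithms adaptively choose this trade-off per epoch.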