Computer science
Pruning
Computation
Wireless
Scheduling (production processes)
Edge device
Convergence (economics)
Block (permutation group theory)
Distributed computing
Mobile edge computing
Mathematical optimization
Reinforcement learning
Enhanced Data Rates for GSM Evolution (EDGE)
Artificial intelligence
Algorithm
Telecommunications
Mathematics
Operating system
Economics
Geometry
Biology
Cloud computing
Economic growth
Agronomy
Authors
Zhixiong Chen, Wenqiang Yi, Hyundong Shin, Arumugam Nallanathan
Identifier
DOI: 10.1109/twc.2023.3342626
Abstract
Most existing wireless federated learning (FL) studies have focused on homogeneous model settings, where devices train identical local models. In this setting, devices with poor communication and computation capabilities may delay the global model update and degrade FL performance. Moreover, the scale of the global model is restricted by the device with the lowest capability. To tackle these challenges, this work proposes an adaptive model pruning-based FL (AMP-FL) framework, in which the edge server dynamically generates sub-models by pruning the global model for devices' local training, adapting to their heterogeneous computation capabilities and time-varying channel conditions. Since involving diverse sub-model structures in the global model update may harm training convergence, we propose compensating for the gradients of pruned model regions with devices' historical gradients. We then introduce an age of information (AoI) metric to characterize the staleness of local gradients and theoretically analyze the convergence behaviour of AMP-FL. The convergence bound suggests scheduling devices whose gradients have large AoI and pruning, for each device, the model regions with small AoI to improve learning performance. Inspired by this, we define a new objective function, the average AoI of local gradients, to transform the implicit global loss minimization problem into a tractable one for device scheduling, model pruning, and resource block (RB) allocation design. Through detailed analysis, we derive the optimal model pruning strategy and transform the RB allocation problem into an equivalent linear program that can be solved efficiently. Experimental results demonstrate the effectiveness and superiority of the proposed approaches: AMP-FL achieves 1.9x and 1.6x speedups for FL on the MNIST and CIFAR-10 datasets, respectively, compared with FL schemes using homogeneous model settings.
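The interplay of pruning, historical-gradient compensation, and AoI bookkeeping described above can be illustrated with a toy sketch. The Python below is a minimal, hypothetical illustration under simplifying assumptions, not the paper's implementation: it treats the model as a flat parameter vector, prunes for each device the regions whose gradient AoI is smallest (so fresh computation goes to the stalest regions, matching the convergence bound's guidance), fills pruned regions with the device's last stored gradient, and resets AoI where a fresh gradient arrives. All names (KEEP_FRACTION, local_gradient, etc.) are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

DIM, NUM_DEVICES, LR = 20, 4, 0.1
KEEP_FRACTION = 0.5  # fraction of parameters each sub-model retains (hypothetical)

global_model = rng.normal(size=DIM)
# Historical gradients used to compensate pruned regions (one copy per device).
hist_grad = np.zeros((NUM_DEVICES, DIM))
# AoI per (device, parameter): rounds since that region was last
# updated with a fresh gradient from that device.
aoi = np.zeros((NUM_DEVICES, DIM))

def local_gradient(model, device):
    """Stand-in for a device's true local gradient; here, the gradient
    of a per-device quadratic loss ||model - target_d||^2 / 2."""
    target = np.full(DIM, float(device))
    return model - target

for rnd in range(50):
    agg = np.zeros(DIM)
    for d in range(NUM_DEVICES):
        # Keep the parameters with the largest AoI for this device;
        # prune (skip recomputing) the regions with small AoI.
        k = int(KEEP_FRACTION * DIM)
        keep = np.zeros(DIM, dtype=bool)
        keep[np.argsort(aoi[d])[-k:]] = True

        g = local_gradient(global_model, d)
        fresh = np.where(keep, g, 0.0)
        # Compensate pruned regions with the device's historical gradient.
        agg += fresh + np.where(keep, 0.0, hist_grad[d])

        hist_grad[d][keep] = g[keep]  # store the freshly computed pieces
        aoi[d] += 1
        aoi[d][keep] = 0              # freshly updated regions reset to age 0

    global_model -= LR * agg / NUM_DEVICES

print("final average AoI:", aoi.mean())
```

In the paper, the pruning masks, device scheduling, and RB allocation are chosen jointly by minimizing the average AoI objective derived from the convergence bound, with the RB allocation solved as a linear program; this toy omits scheduling, channels, and resource constraints entirely.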