Computer science
Quantization (signal processing)
Mathematical optimization
Karush–Kuhn–Tucker conditions
Convex optimization
Wireless
Upper and lower bounds
Boosting (machine learning)
Algorithm
Regular polygon
Artificial intelligence
Mathematics
Telecommunications
Geometry
Mathematical analysis
Authors
Muhang Lan, Qing Ling, Song Xiao, Wenyi Zhang
Identifiers
DOI: 10.1109/twc.2023.3262350
Abstract
Federated learning (FL) enables multiple clients to collaborate on a common learning task by exchanging only model updates. With the progressive improvement of deep learning models, communication is becoming a primary bottleneck of FL. Quantizing model updates before transmission is an effective technique to reduce communication overhead. Most prior literature assumes lossless transmission, but in practice, quantized model updates are distorted by wireless channels due to the variation of client locations. Therefore, this paper focuses on the analysis and design of personalized model-update quantization that explicitly incorporates channel diversity in wireless FL. We present a novel convergence analysis of quantized FL, which encompasses full and partial client participation, single and multiple local training iterations, and convex and non-convex loss functions. This analysis explicitly captures the impact of personalized quantization error, channel diversity, and model aggregation in FL, and also elucidates their tradeoff in tightening a convergence-rate upper bound. An optimization framework is proposed that seeks an optimal allocation scheme, given a total budget of quantization bits, by minimizing this upper bound with respect to channel quality. A nearly optimal solution to this non-convex integer programming problem is derived by analytically solving the Karush–Kuhn–Tucker (KKT) optimality conditions and performing a linear search. From the perspective of outlier detection, this channel-aware allocation scheme is also extended to robust model aggregation against client dropouts. Comprehensive numerical evaluation demonstrates the performance improvement of the proposed scheme over the vanilla allocation scheme with equal quantization bits, particularly in terms of training stability, test accuracy, and robustness.
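To make the quantization and bit-budget ideas concrete, below is a minimal Python sketch of (i) an unbiased stochastic uniform quantizer for a model-update vector and (ii) a toy channel-aware allocation of a total quantization-bit budget across clients. The function names, the per-client minimum, and the greedy score `gains / bits` are illustrative assumptions for this sketch only; the paper instead obtains its allocation by analytically solving the KKT conditions of its convergence-bound minimization followed by a linear search.

```python
import numpy as np

def stochastic_quantize(x, num_bits):
    """Unbiased stochastic uniform quantizer: maps x onto 2**num_bits levels
    spanning [min(x), max(x)], with randomized rounding so that E[q(x)] = x."""
    levels = 2 ** num_bits - 1
    lo, hi = x.min(), x.max()
    if hi == lo:
        return x.copy()
    scaled = (x - lo) / (hi - lo) * levels        # map to [0, levels]
    floor = np.floor(scaled)
    prob_up = scaled - floor                      # probability of rounding up
    q = floor + (np.random.rand(*x.shape) < prob_up)
    return lo + q / levels * (hi - lo)

def channel_aware_bit_allocation(channel_gains, total_bits, min_bits=1):
    """Toy channel-aware allocation of a total bit budget across clients:
    start from a per-client minimum, then greedily give each remaining bit
    to the client with the best channel relative to bits already assigned.
    Purely illustrative, not the paper's KKT-based scheme."""
    gains = np.asarray(channel_gains, dtype=float)
    n = len(gains)
    assert total_bits >= n * min_bits, "budget must cover the per-client minimum"
    bits = np.full(n, min_bits, dtype=int)
    for _ in range(total_bits - bits.sum()):
        bits[np.argmax(gains / bits)] += 1        # greedy linear search over clients
    return bits

# Example usage (hypothetical numbers): 4 clients sharing a 16-bit budget.
gains = [0.9, 0.4, 1.2, 0.7]
bits = channel_aware_bit_allocation(gains, total_bits=16)
update = np.random.randn(1000)                    # a client's model update
q_update = stochastic_quantize(update, num_bits=int(bits[0]))
```

The sketch only illustrates the general mechanism of trading quantization resolution against channel quality under a fixed bit budget; the paper's actual allocation is driven by minimizing its derived convergence-rate upper bound.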