Keywords
Computer science
Robustness
Wireless
MNIST database
Distributed learning
Machine learning
Edge device
Federated learning
Distributed computing
Edge computing
Shared resource
Computation
Noise
Server
Wireless network
Artificial intelligence
Artificial noise
Computer network
Deep learning
Algorithm
Cloud computing
Telecommunications
Image
Operating system
Physical layer
Authors
Zubair Shaban, Ranjitha Prasad
Identifier
DOI: 10.1145/3632410.3632496
Abstract
To harness the benefits of machine learning (ML), users often face the challenge of sharing their private data with a central entity for model training. However, data sharing can be impractical due to privacy concerns, data size, wireless resource limitations, and other factors. Federated learning (FL) offers an efficient solution: edge devices, users, or clients independently train ML models locally and iteratively share their model parameters with a central entity or server. The server aggregates these parameters into a global model, which is then distributed to all clients for the next round of training. Since the clients communicate with the server over wireless channels, over-the-air (OTA) computation has emerged as an efficient solution for aggregating the local models in FL. We investigate the intrinsic noise introduced during OTA computation, focusing on the detrimental effects of impulsive noise on OTA-FL performance. Through a combination of theoretical analysis and experimental validation, we quantify the adverse impact of impulsive noise on convergence, and we introduce an algorithm designed to mitigate these effects. Our empirical results, obtained on the CIFAR-10 and MNIST datasets, illustrate both the impact of impulsive noise on OTA-FL and the efficacy of our proposed solution.
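The abstract describes the OTA-FL loop: local training on each client, simultaneous analog transmission, and server-side aggregation corrupted by impulsive channel noise. Below is a minimal Python sketch of that loop under stated assumptions: a toy least-squares task, a Bernoulli-Gaussian model for the impulsive noise, and simple norm clipping of the transmitted signals as a generic robustness heuristic. The function names (`local_update`, `ota_aggregate`) and the noise parameters `sigma`, `p`, `kappa` are illustrative choices, and the clipping step is not the mitigation algorithm proposed in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def local_update(w, X, y, lr=0.1):
    """One local gradient step on a client's private least-squares data."""
    grad = X.T @ (X @ w - y) / len(y)
    return w - lr * grad

def impulsive_noise(shape, sigma=0.01, p=0.05, kappa=100.0):
    """Bernoulli-Gaussian model: Gaussian background noise, plus rare
    high-power impulses (variance kappa * sigma^2) occurring w.p. p."""
    background = sigma * rng.standard_normal(shape)
    impulses = np.sqrt(kappa) * sigma * rng.standard_normal(shape)
    return np.where(rng.random(shape) < p, impulses, background)

def ota_aggregate(client_models, clip=None):
    """Over-the-air aggregation: all clients transmit at once, so the
    server receives the analog sum of the signals plus channel noise."""
    if clip is not None:
        # Hypothetical mitigation: bound each transmitted signal's norm.
        client_models = [w * min(1.0, clip / (np.linalg.norm(w) + 1e-12))
                         for w in client_models]
    received = np.sum(client_models, axis=0) + impulsive_noise(client_models[0].shape)
    return received / len(client_models)

# Toy federation: 10 clients observing the same linear ground truth.
d, n_clients = 5, 10
w_true = rng.standard_normal(d)
clients = []
for _ in range(n_clients):
    X = rng.standard_normal((20, d))
    clients.append((X, X @ w_true + 0.01 * rng.standard_normal(20)))

w_global = np.zeros(d)
for _ in range(50):
    local_models = [local_update(w_global.copy(), X, y) for X, y in clients]
    w_global = ota_aggregate(local_models, clip=10.0)

print("distance to ground truth:", np.linalg.norm(w_global - w_true))
```

Running the sketch with `clip=None` disables the clipping heuristic, which makes it easy to compare convergence under impulsive noise with and without a mitigation step.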