Computer science
Server
Bottleneck
Computer network
Bandwidth (computing)
Wireless
Mobile device
Provisioning
Distributed computing
Telecommunications
Operating system
Embedded system
Identification
DOI:10.1109/vtc2023-spring57618.2023.10199823
Abstract
While federated learning is widely studied as a distributed machine learning technique that effectively uses distributed datasets for training, a number of challenges remain. In particular, for datasets that reside on mobile devices (or mobile phone users), the limited bandwidth becomes a bottleneck as the number of devices increases, because the local gradient vectors must be transmitted over the scarce wireless channel. Thus, the notion of over-the-air (OTA) computation has been considered to avoid bandwidth expansion as the number of devices grows, by exploiting the superposition property of radio channels. In this paper, we consider a generalization of OTA to the case of multiple servers, where each server has its own model. Using co-phase OTA aggregation, it is shown that a shared channel can be used for all the different servers/models. We also propose a digital OTA approach for multi-server federated learning with randomized transmissions.
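The core idea behind OTA aggregation in the abstract is that when devices transmit simultaneously, the wireless channel itself sums their signals, so the server receives an aggregate of the local gradients in a single channel use rather than one transmission per device. The sketch below is a minimal, hypothetical NumPy simulation of this (noiseless, single server): each device pre-compensates the phase of its fading coefficient (the "co-phase" step) and inverts its magnitude via power control, so the superimposed received signal equals the exact sum of the local gradients. Variable names and the noiseless/ideal-power-control assumptions are illustrative, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
num_devices, dim = 8, 4

# Hypothetical local gradient vectors, one per device
grads = rng.normal(size=(num_devices, dim))

# Complex fading coefficient h_k seen by each device's uplink
h = rng.normal(size=num_devices) + 1j * rng.normal(size=num_devices)

# Co-phase pre-compensation and channel-magnitude inversion:
# device k transmits (g_k / |h_k|) * e^{-j * angle(h_k)}, so that
# h_k * tx_k = g_k exactly (ideal power control, assumed here)
tx = (grads / np.abs(h)[:, None]) * np.exp(-1j * np.angle(h))[:, None]

# The radio channel superimposes all transmissions in one channel use
rx = np.sum(h[:, None] * tx, axis=0)

# The server recovers the average gradient from the superposition
avg = np.real(rx) / num_devices
assert np.allclose(avg, grads.mean(axis=0))
```

With realistic channel noise and power constraints the recovery is only approximate; the paper's multi-server extension additionally shares this one channel across several server models, which this single-server sketch does not attempt to reproduce.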