Computer science
Upload
Quantization (signal processing)
Overhead (engineering)
Cloud computing
Wireless
Distributed computing
Computer network
Federated learning
Wireless network
Real-time computing
Algorithm
Telecommunications
Operating system
Authors
Shuaiqi Shen, Chong Yu, Kuan Zhang, Xi Chen, Huimin Chen, Song Ci
Identifier
DOI:10.1109/iwcmc51323.2021.9498677
Abstract
With the upcoming next-generation wireless network, vehicles are expected to be empowered by artificial intelligence (AI). By connecting vehicles and a cloud server via wireless communication, federated learning (FL) allows vehicles to collaboratively train deep learning models to support intelligent services, such as autonomous driving. However, the large number of vehicles and the increasing size of model parameters bring challenges to FL-empowered connected vehicles. Since communication bandwidth is insufficient to upload full-precision local models from numerous vehicles, model compression is usually conducted to reduce the transmitted data size. Nevertheless, conventional model compression methods may not be practical for resource-constrained vehicles due to the increased computational overhead they add to FL training. The overhead of downloading the global model is also overlooked by existing methods, since they were originally designed for centralized learning rather than FL. In this paper, we propose a ternary quantization based model compression method for communication-efficient FL on resource-constrained connected vehicles. Specifically, we first propose a ternary quantization based local model training algorithm that optimizes quantization factors and parameters simultaneously. Then, we design a communication-efficient FL approach that reduces overhead for both upstream and downstream communications. Finally, simulation results validate that the proposed method incurs the lowest communication and computational overhead for FL training, while maintaining the desired model accuracy compared to existing model compression methods.
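To illustrate the compression idea the abstract describes, the sketch below ternarizes a weight tensor to {-α, 0, +α} using the classic Ternary Weight Networks heuristic (threshold ≈ 0.7 × mean |w|). Note this is only an assumed baseline for illustration: the paper's own algorithm optimizes the quantization factors jointly with the model parameters during local training, which this sketch does not reproduce.

```python
import numpy as np

def ternary_quantize(w, delta_factor=0.7):
    """Quantize a weight tensor to the three values {-alpha, 0, +alpha}.

    Uses the Ternary Weight Networks heuristic (delta = 0.7 * mean|w|);
    the paper instead learns the quantization factors during training.
    """
    delta = delta_factor * np.mean(np.abs(w))       # pruning threshold
    mask = np.abs(w) > delta                        # positions kept non-zero
    # Scaling factor: mean magnitude of the surviving weights
    alpha = np.abs(w[mask]).mean() if mask.any() else 0.0
    ternary = np.zeros_like(w)
    ternary[mask] = alpha * np.sign(w[mask])        # -alpha, 0, or +alpha
    return ternary, alpha

# Example: a (hypothetical) local model update compressed before upload
w = np.array([0.9, -0.05, 0.4, -0.8, 0.02])
q, alpha = ternary_quantize(w)
```

Each ternarized weight can be encoded in 2 bits plus one shared full-precision scale per tensor, which is the kind of upstream communication saving the abstract refers to.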