Keywords: MNIST database, computer science, artificial noise, upload, noise (video), convergence (economics), information privacy, interference (communication), machine learning, artificial intelligence, private information retrieval, deep learning, distributed computing, computer network, computer security, wireless, telecommunications, image (mathematics), operating system, channel (broadcasting), economy, economic growth, physical layer
Authors
Zihao Peng, Boyuan Li, Le Li, Shengbo Chen, Guanghui Wang, Hong Rao, Cong Shen
Identifier
DOI: 10.1109/TCCN.2023.3284548
Abstract
Data security is a critical issue in federated learning. While federated learning allows clients to collaboratively participate in global model training without sharing their private data, external eavesdroppers may intercept the model uploaded by a client to the server, revealing sensitive information. Noise interference, i.e., adding noise to the client model before transmission, is an effective and efficient privacy-preserving method, but it simultaneously degrades the learning performance of the system. In this paper, to address the performance degradation caused by noise interference, we propose the FedNoise algorithm, which adopts two separate learning rates at the client and the server. By carefully tuning these learning rates, the global model can converge to the optimum. We provide theoretical proofs of the convergence of FedNoise for both strongly convex and non-convex loss functions and conduct simulations on real tasks. Numerical results demonstrate that, under the same privacy-protection level, FedNoise significantly outperforms the state-of-the-art scheme on the MNIST, Fashion-MNIST, and CIFAR-10 datasets.
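The abstract describes the mechanism only at a high level. The sketch below illustrates one such communication round, assuming PyTorch; the additive-Gaussian noise model and every name and hyperparameter here (client_update, server_update, client_lr, server_lr, noise_std, local_steps) are illustrative assumptions, not details taken from the paper.

```python
import copy
from itertools import cycle

import torch

def client_update(global_model, loader, loss_fn, client_lr=0.05,
                  local_steps=10, noise_std=0.1):
    """Run local SGD with the client learning rate, then return the
    noise-perturbed model delta that would be uploaded to the server."""
    model = copy.deepcopy(global_model)
    opt = torch.optim.SGD(model.parameters(), lr=client_lr)
    data = cycle(loader)  # assume the loader may be shorter than local_steps
    for _ in range(local_steps):
        x, y = next(data)
        opt.zero_grad()
        loss_fn(model(x), y).backward()
        opt.step()
    # Delta = local model minus global model, with Gaussian noise added
    # before transmission, so an eavesdropper only sees the noisy delta.
    return [
        (p_local.detach() - p_global.detach())
        + noise_std * torch.randn_like(p_global)
        for p_local, p_global in zip(model.parameters(),
                                     global_model.parameters())
    ]

def server_update(global_model, client_deltas, server_lr=1.0):
    """Average the noisy client deltas and apply them with a separate
    server learning rate."""
    with torch.no_grad():
        for i, p in enumerate(global_model.parameters()):
            avg = torch.stack([d[i] for d in client_deltas]).mean(dim=0)
            p.add_(server_lr * avg)

# One round over a list of per-client loaders (usage sketch):
# deltas = [client_update(global_model, ld, torch.nn.functional.cross_entropy)
#           for ld in client_loaders]
# server_update(global_model, deltas)
```

The separate server learning rate is what distinguishes this from plain FedAvg-style averaging: per the abstract, jointly tuning the client and server rates is what allows the global model to converge to the optimum despite the injected noise.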