Differential privacy
Computer science
Merge (version control)
Convergence (economics)
Federated learning
Distributed learning
Differential (mechanical device)
Information privacy
Artificial intelligence
Algorithm
Computer security
Information retrieval
Psychology
Pedagogy
Aerospace engineering
Engineering
Economics
Economic growth
Authors
Yaling Zhang, Dongtai Tang
Identifier
DOI:10.1109/cis58238.2022.00033
Abstract
Federated Learning (FL) is a distributed machine learning setting in which many clients jointly train a model under the coordination of a central server, and differential privacy (DP) can provide a privacy guarantee for FL. However, federated learning converges more slowly than centralized learning, and differential privacy exacerbates this trend. In this paper, we propose a novel FL algorithm named DP-FedADMM to address these problems. We merge differential privacy into the FedADMM algorithm and propose a method for handling noisy gradients. Guided by recent results in differential privacy theory, we provide a privacy proof for DP-FedADMM. Extensive experiments demonstrate that DP-FedADMM outperforms the currently popular DP-FedAvg algorithm in terms of model convergence speed.
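To make the idea concrete, below is a minimal sketch of one DP-FedADMM-style training loop: each client runs a few local steps on an ADMM-augmented objective using clipped, Gaussian-perturbed gradients, and the server performs the consensus and dual-ascent updates. The abstract does not specify the paper's update rules or hyperparameters, so the least-squares task, the penalty RHO, clipping bound C, noise multiplier SIGMA, learning rate, and the noisy_clipped_grad helper are all illustrative assumptions, not the authors' exact method.

```python
import numpy as np

rng = np.random.default_rng(0)

D = 5            # model dimension
K = 4            # number of clients
RHO = 1.0        # ADMM penalty parameter (assumed value)
C = 1.0          # gradient clipping norm (assumed value)
SIGMA = 0.5      # Gaussian noise multiplier for DP (assumed value)
LR = 0.05        # local learning rate (assumed value)
LOCAL_STEPS = 10
ROUNDS = 30

# Synthetic per-client least-squares data: y = X @ w_true + noise
w_true = rng.normal(size=D)
clients = []
for _ in range(K):
    X = rng.normal(size=(50, D))
    y = X @ w_true + 0.1 * rng.normal(size=50)
    clients.append((X, y))

z = np.zeros(D)                           # global consensus model
duals = [np.zeros(D) for _ in range(K)]   # per-client ADMM dual variables

def noisy_clipped_grad(w, X, y):
    """Local loss gradient, clipped to norm C, perturbed with Gaussian noise."""
    g = 2.0 * X.T @ (X @ w - y) / len(y)
    g /= max(1.0, np.linalg.norm(g) / C)             # clip to norm C
    return g + rng.normal(scale=SIGMA * C, size=D)   # add calibrated noise

for _ in range(ROUNDS):
    local_ws = []
    for k, (X, y) in enumerate(clients):
        w = z.copy()
        for _ in range(LOCAL_STEPS):
            # Gradient of the ADMM-augmented local objective:
            #   f_k(w) + <dual_k, w - z> + (RHO / 2) * ||w - z||^2
            g = noisy_clipped_grad(w, X, y) + duals[k] + RHO * (w - z)
            w -= LR * g
        local_ws.append(w)
    # Server: consensus update, then each client's dual-ascent step.
    z = np.mean([w + d / RHO for w, d in zip(local_ws, duals)], axis=0)
    for k in range(K):
        duals[k] += RHO * (local_ws[k] - z)

print("distance to w_true:", np.linalg.norm(z - w_true))
```

The proximal term (RHO / 2) * ||w - z||^2 is what distinguishes an ADMM-style local update from the plain local SGD of DP-FedAvg: it anchors each client's iterate to the global model, which is the mechanism the paper credits for faster convergence under DP noise.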