Differential privacy
Computer science
Gradient descent
Stochastic gradient descent
Noise (video)
Information leakage
Artificial intelligence
Pruning
Artificial noise
Noise measurement
Data mining
Algorithm
Machine learning
Computer security
Noise reduction
Artificial neural network
Computer network
Transmitter
Biology
Image (mathematics)
Channel (broadcasting)
Agronomy
Authors
Wenqi Wei, Ling Liu, Jingya Zhou, Ka-Ho Chow, Yanzhao Wu
Source
Journal: IEEE Transactions on Parallel and Distributed Systems
[Institute of Electrical and Electronics Engineers]
Date: 2023-07-01
Volume/Issue: 34 (7): 2040-2054
Citations: 4
Identifier
DOI: 10.1109/tpds.2023.3273490
Abstract
This paper presents a holistic approach to gradient leakage resilient distributed Stochastic Gradient Descent (SGD). First, we analyze two types of strategies for privacy-enhanced federated learning: (i) gradient pruning with random selection or low-rank filtering and (ii) gradient perturbation with additive random noise or differential privacy noise. We analyze the inherent limitations of these approaches and their underlying impact on privacy guarantee, model accuracy, and attack resilience. Next, we present a gradient leakage resilient approach to securing distributed SGD in federated learning, using differential privacy controlled noise as the tool. Unlike conventional methods with per-client noise injection and a fixed noise parameter strategy, our approach keeps track of the trend of per-example gradient updates, keeping adaptive noise injection closely aligned with that trend throughout federated model training. Finally, we provide an empirical privacy analysis of the privacy guarantee, model utility, and attack resilience of the proposed approach. Extensive evaluation using five benchmark datasets demonstrates that our gradient leakage resilient approach can outperform state-of-the-art methods with competitive accuracy, a strong differential privacy guarantee, and high resilience against gradient leakage attacks.
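The abstract's core tool, differential-privacy-controlled noise in distributed SGD, builds on the standard DP-SGD recipe: clip each example's gradient to a fixed norm bound, average, and add calibrated Gaussian noise before the parameter update. Below is a minimal sketch of that generic recipe, assuming NumPy; it illustrates the baseline the paper improves upon, not the paper's adaptive per-example noise scheme, and the function name and parameters are illustrative:

```python
import numpy as np

def dp_sgd_step(per_example_grads, clip_norm, noise_multiplier, lr, params, rng):
    """One differentially private SGD step (generic DP-SGD sketch):
    clip each per-example gradient to clip_norm, average the clipped
    gradients, add Gaussian noise scaled by noise_multiplier * clip_norm,
    then apply the gradient-descent update."""
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        # Scale down any gradient whose L2 norm exceeds the clipping bound.
        clipped.append(g * min(1.0, clip_norm / (norm + 1e-12)))
    avg = np.mean(clipped, axis=0)
    # Noise standard deviation is calibrated to the clipping bound
    # (sensitivity) divided by the batch size.
    sigma = noise_multiplier * clip_norm / len(per_example_grads)
    noise = rng.normal(0.0, sigma, size=avg.shape)
    return params - lr * (avg + noise)
```

Fixing `clip_norm` and `noise_multiplier` for the whole run is exactly the "fixed noise parameter strategy" the abstract contrasts with; the paper's approach instead adapts the injected noise to the trend of per-example gradient updates over the course of training.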