Shield Against Gradient Leakage Attacks: Adaptive Privacy-Preserving Federated Learning

Keywords: Computer science, Differential privacy, Gradient descent, Convergence, Information leakage, Information privacy, Private information retrieval, Federated learning, Computer security, Data mining, Artificial intelligence, Artificial neural networks
Authors
Jiahui Hu, Zhibo Wang, Shen Yong-sheng, Bohan Lin, Peng Sun, Xiaoyi Pang, Jian Liu, Kui Ren
Source
Journal: IEEE/ACM Transactions on Networking [Institute of Electrical and Electronics Engineers]
Volume/Issue: pp. 1-16
Identifier
DOI:10.1109/tnet.2023.3317870
Abstract

Federated learning (FL) requires frequent uploading and updating of model parameters, which makes it naturally vulnerable to gradient leakage attacks (GLAs) that reconstruct private training data from gradients. Although some works incorporate differential privacy (DP) into FL to mitigate such privacy issues, their performance is unsatisfactory because they overlook that GLAs incur heterogeneous risks of privacy leakage (RoPL) with respect to gradients from different communication rounds and clients. In this paper, we propose an Adaptive Privacy-Preserving Federated Learning (Adp-PPFL) framework to achieve satisfactory privacy protection against GLAs while ensuring good performance in terms of model accuracy and convergence speed. Specifically, a leakage-risk-aware privacy decomposition mechanism is proposed to provide adaptive privacy protection to different communication rounds and clients by dynamically allocating the privacy budget according to the quantified RoPL. In particular, we design a round-level and a client-level RoPL quantification method to measure the risk of GLAs breaching privacy through gradients in different communication rounds and clients, respectively, using only the limited information available in general FL settings. Furthermore, to improve FL training performance (i.e., convergence speed and global model accuracy), we propose an adaptive privacy-preserving local training mechanism that dynamically clips the gradients and decays the noise added to the clipped gradients during local training. Extensive experiments show that our framework outperforms existing differentially private FL schemes in model accuracy, convergence, and attack resistance.
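The two adaptive ideas in the abstract (risk-dependent budget allocation, and clipping with noise decay) can be sketched as follows. This is a minimal illustration under stated assumptions, not the paper's algorithm: the inverse-proportional allocation rule, the decay schedules, and all names (`allocate_budget`, `dp_local_update`, `decay`, etc.) are hypothetical, and the paper's RoPL quantification is not reproduced here.

```python
import numpy as np

def allocate_budget(eps_total, risk_scores):
    """Split a total privacy budget across rounds (or clients) so that
    higher-risk entries receive a smaller per-round budget, i.e. stronger
    noise. The inverse-proportional rule is an illustrative assumption."""
    inv = 1.0 / np.asarray(risk_scores, dtype=float)
    return eps_total * inv / inv.sum()

def dp_local_update(grad, round_t, total_rounds,
                    clip_init=1.0, sigma_init=1.0, decay=0.05, rng=None):
    """One DP-SGD-style local step with a shrinking clipping bound and
    round-wise noise decay (illustrative only)."""
    rng = rng or np.random.default_rng()
    clip = clip_init / (1.0 + decay * round_t)   # tighter clipping in later rounds
    norm = np.linalg.norm(grad)
    clipped = grad if norm == 0 else grad * min(1.0, clip / norm)
    sigma = sigma_init * clip * (1.0 - round_t / total_rounds)  # noise decays
    return clipped + rng.normal(0.0, sigma, size=grad.shape)
```

Setting `sigma_init=0` reduces the update to plain norm clipping, which makes the clipping bound easy to verify in isolation.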