Computer science
Pruning
Differential privacy
Process (computing)
Inversion (geology)
Information sensitivity
Artificial intelligence
Data mining
Computer security
Machine learning
Agronomy
Biology
Structural basin
Operating system
Paleontology
Authors
Zhiqiu Zhang, Tianqing Zhu, Wei Ren, Ping Xiong, Kim-Kwang Raymond Choo
Identifier
DOI:10.1016/j.cose.2022.103039
Abstract
In federated learning, the server trains a global model from gradient information shared by multiple clients, thereby protecting client data privacy. However, it has been shown that training data can be reconstructed from the shared gradients (so-called gradient inversion attacks), which can result in serious privacy breaches. Popular privacy-preserving methods are perturbation-based, such as differential privacy, but these can incur high utility loss. In this paper, we reveal that large-magnitude gradients play an important role in the image reconstruction process, and we therefore propose two pruning-based defense mechanisms (SLGP and RLGP) for different model architectures. Because only very few gradients are affected, utility is maintained. To demonstrate effectiveness, we evaluate the impact of our mechanisms on preventing the reconstruction of input images across various model architectures and datasets using state-of-the-art attack methods. Images reconstructed from gradients processed by our method are unrecognizable, while the original performance of the models is preserved.
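The abstract's core idea is to prune a small number of large-magnitude gradients before they are shared, degrading gradient inversion attacks while leaving most of the gradient intact. A minimal NumPy sketch of magnitude-based gradient pruning follows; it is an illustration of the general technique only, since the exact SLGP and RLGP procedures are not specified here, and the function name and `prune_ratio` parameter are assumptions.

```python
import numpy as np

def prune_large_gradients(grad, prune_ratio=0.01):
    """Zero out the largest-magnitude entries of a gradient tensor.

    Hypothetical illustration of magnitude-based gradient pruning;
    not the paper's exact SLGP/RLGP mechanisms.
    """
    flat = grad.flatten()  # copy; the caller's array is untouched
    k = max(1, int(prune_ratio * flat.size))
    # indices of the k largest-magnitude entries
    idx = np.argpartition(np.abs(flat), -k)[-k:]
    flat[idx] = 0.0
    return flat.reshape(grad.shape)

# Example: prune the top 25% of a random 4x4 "gradient"
rng = np.random.default_rng(0)
g = rng.normal(size=(4, 4))
pruned = prune_large_gradients(g, prune_ratio=0.25)
```

In a federated setting, a client would apply such a function to each layer's gradient before uploading; because only a small fraction of entries are zeroed, the aggregated update (and hence model utility) is largely unaffected.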