Differential privacy
Cover (algebra)
Computer security
Differential (mechanical device)
Internet privacy
Computer science
Business
Data mining
Engineering
Mechanical engineering
Aerospace engineering
Authors
Haibin Zheng, Jinyin Chen, Tao Liu, Yao Cheng, Zhao Wang, Yun Wang, Lan Gao, Shouling Ji, Xuhong Zhang
Abstract
Federated learning (FL) enables resource-constrained node devices to learn a shared model while keeping the training data local. Since recent research has demonstrated multiple privacy leakage attacks in FL, e.g., gradient inference attacks and membership inference attacks, differential privacy (DP) is applied as one of the most effective privacy protection mechanisms. Despite the benefit DP brings, we observe that introducing DP also brings random changes to client updates, which affect the robust aggregation algorithms. We reveal a novel poisoning attack under the cover of DP, named the DP-Poison attack in FL. Specifically, the DP-Poison attack is designed to achieve four goals: 1) maintaining the main task performance; 2) launching a successful attack; 3) escaping the robust aggregation algorithms in FL; and 4) preserving the effectiveness of DP's privacy protection. To achieve these goals, we design multiple optimization objectives and generate the DP noise through a genetic algorithm. The optimization ensures that, while the benign updates change randomly, the malicious updates shift towards the global model after the DP noise is added, so that they are more easily accepted by the robust aggregation algorithms. Extensive experiments show that DP-Poison achieves a nearly 100% attack success rate while satisfying all four goals.
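The abstract does not specify the optimization in detail. As a minimal sketch of the core idea only (a genetic algorithm searching for DP-scale noise that pulls a malicious update toward the global model), the following toy code assumes elitist selection, uniform crossover, and Gaussian mutation; all function names, the fitness definition, and the scale penalty are illustrative assumptions, not the paper's actual method.

```python
import numpy as np

rng = np.random.default_rng(0)

def fitness(noise, mal_update, global_update, sigma):
    # Higher fitness when the noised malicious update lies closer to the
    # global model update; penalize noise whose spread deviates from the
    # nominal DP scale sigma (so the noise still looks like DP noise).
    dist = np.linalg.norm(mal_update + noise - global_update)
    scale_penalty = abs(np.std(noise) - sigma)
    return -(dist + scale_penalty)

def evolve_dp_noise(mal_update, global_update, sigma, pop=30, gens=50):
    """Toy genetic search for a DP-noise vector (illustrative only)."""
    dim = mal_update.shape[0]
    population = rng.normal(0.0, sigma, (pop, dim))  # start from plain DP noise
    for _ in range(gens):
        scores = np.array([fitness(n, mal_update, global_update, sigma)
                           for n in population])
        elite = population[np.argsort(scores)[-pop // 2:]]       # selection
        parents = elite[rng.integers(0, len(elite), (pop, 2))]
        mask = rng.random((pop, dim)) < 0.5                      # uniform crossover
        children = np.where(mask, parents[:, 0], parents[:, 1])
        children += rng.normal(0.0, 0.1 * sigma, children.shape)  # mutation
        population = children
    scores = np.array([fitness(n, mal_update, global_update, sigma)
                       for n in population])
    return population[np.argmax(scores)]

if __name__ == "__main__":
    mal = np.ones(10)      # hypothetical malicious update
    glob = np.zeros(10)    # hypothetical global model update
    noise = evolve_dp_noise(mal, glob, sigma=0.5)
    print("before:", np.linalg.norm(mal - glob))
    print("after: ", np.linalg.norm(mal + noise - glob))
```

Under these assumptions, the evolved noise moves the noised malicious update closer to the global model than the raw malicious update, which is the acceptance property the abstract describes; the real attack additionally balances attack success, main-task accuracy, and the DP guarantee.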