Subjects
Computer Science; Inference; Distillation; Artificial Intelligence; Computer Security
Authors
Zilu Yang, Yanchao Zhao, Jiale Zhang
Identifier
DOI:10.1007/978-3-031-25201-3_28
Abstract
With the enhanced sensing and computing power of mobile devices, emerging technologies and applications generate massive amounts of data at the edge of the network, imposing new requirements on privacy and security. Federated learning marks an advance in protecting the privacy of intelligent IoT applications. However, recent research reveals an inherent vulnerability of federated learning: an adversary can recover private training data from the publicly shared gradients. Although model distillation is considered a state-of-the-art technique for mitigating gradient leakage by hiding gradients, our experiments show that it still risks leaking membership information at the client level. In this paper, we propose a novel client-based membership inference attack on federated distillation learning. Specifically, we first comprehensively analyze the defensive capability of model distillation against deep gradient leakage. Then, exploiting the fact that an adversary can learn other participants' model structure and behavior during federated distillation, we design membership inference attacks against other participants based on the private models of malicious clients. Experimental results demonstrate the superiority of our proposed method, both in its efficiency in resolving deep leakage from gradients and in the high accuracy of its membership inference attack in federated distillation learning.
Keywords
Federated learning; Model distillation; Membership inference attack
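The abstract invokes two mechanisms worth unpacking. First, the gradient-leakage vulnerability it cites is typically demonstrated by Deep Leakage from Gradients (DLG)-style optimization: an attacker fits dummy data so that its gradient matches the gradient a client shared. Below is a minimal PyTorch sketch of that idea under toy assumptions (a one-layer model and a single private example); it is illustrative, not the paper's setup.

```python
# DLG-style gradient inversion sketch. Toy model, toy data; illustrative only.
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))

def soft_cross_entropy(logits, target_probs):
    # Cross entropy against a probability vector (lets the label be optimized).
    return -(target_probs * logits.log_softmax(dim=1)).sum()

# The victim's private example and the gradient it would share in plain FedAvg.
x_true = torch.rand(1, 1, 28, 28)
y_true = torch.zeros(1, 10)
y_true[0, 3] = 1.0
shared_grads = torch.autograd.grad(
    soft_cross_entropy(model(x_true), y_true), model.parameters())
shared_grads = [g.detach() for g in shared_grads]

# Attacker: optimize dummy data and label until their gradient matches.
x_dummy = torch.rand(1, 1, 28, 28, requires_grad=True)
y_dummy = torch.randn(1, 10, requires_grad=True)
opt = torch.optim.LBFGS([x_dummy, y_dummy])

def closure():
    opt.zero_grad()
    loss = soft_cross_entropy(model(x_dummy), y_dummy.softmax(dim=1))
    # create_graph=True so the gradient-matching loss is itself differentiable.
    grads = torch.autograd.grad(loss, model.parameters(), create_graph=True)
    grad_diff = sum(((g - sg) ** 2).sum() for g, sg in zip(grads, shared_grads))
    grad_diff.backward()
    return grad_diff

for _ in range(20):
    opt.step(closure)
print("reconstruction MSE:", ((x_dummy - x_true) ** 2).mean().item())
```

Second, the client-level membership inference the paper proposes is not detailed in the abstract. A common baseline for membership inference is confidence thresholding, which exploits the tendency of models to predict more confidently on training members than on unseen points. The sketch below uses synthetic, hypothetical confidence scores to show only the mechanics of such an attack.

```python
# Confidence-thresholding membership inference sketch (synthetic data).
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical softmax outputs: peaked for training members, flatter otherwise.
member_probs = rng.dirichlet(alpha=[8.0, 1.0, 1.0], size=1000)
nonmember_probs = rng.dirichlet(alpha=[2.0, 2.0, 2.0], size=1000)

scores = np.concatenate([member_probs.max(axis=1),
                         nonmember_probs.max(axis=1)])
labels = np.concatenate([np.ones(1000), np.zeros(1000)])  # 1 = member

# Attack rule: predict "member" whenever top-1 confidence exceeds tau.
tau = 0.8
preds = (scores > tau).astype(float)
print(f"attack accuracy at tau={tau}: {(preds == labels).mean():.3f}")
```

In both sketches every name, constant, and data distribution is an assumption made for illustration; the paper's attack instead leverages the private models of malicious clients in federated distillation, which the abstract does not specify in enough detail to reproduce here.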