Computer science
Inference
Distillation
Artificial intelligence
Machine learning
Computer security
Chemistry
Organic chemistry
Identifier
DOI:10.1109/cscwd57460.2023.10152831
Abstract
Federated learning, which aggregates the model parameters of numerous IoT devices without accessing user data, is used to train high-quality models and to enhance the quantity and quality of available data. Federated distillation learning strategies have been proposed to address the difficulties that federated learning faces in Non-IID environments. However, recent research has demonstrated that federated learning falls short of complete privacy protection and remains vulnerable to inference attacks by malicious adversaries. In this paper, we investigate a malicious attacker's membership inference attack in a federated distillation learning setting. We first focus on federated distillation learning in the Non-IID environment, and then propose MIA-FedDL, a membership inference attack launched by a malicious client that can successfully infer the membership privacy of other clients without obtaining additional information. Comprehensive experimental results demonstrate the effectiveness of MIA-FedDL and quantify user privacy leakage in federated distillation learning.
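To make the threat model concrete, the sketch below shows a generic confidence-based membership inference baseline, not the paper's MIA-FedDL method: the attacker guesses that samples on which a model is highly confident were part of its training set. The synthetic logits, threshold value, and confidence gap are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    # Numerically stable softmax over the class axis.
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

# Synthetic logits (illustrative assumption): training-set "members" get
# sharper predictions (+4 on the true class) than "non-members" (+1.5),
# mimicking the overfitting gap that membership inference exploits.
member_logits = rng.normal(0, 1, (500, 10)) + 4.0 * np.eye(10)[rng.integers(0, 10, 500)]
nonmember_logits = rng.normal(0, 1, (500, 10)) + 1.5 * np.eye(10)[rng.integers(0, 10, 500)]

def infer_membership(logits, threshold=0.8):
    # Flag a sample as "member" when the max softmax confidence
    # exceeds the (hypothetical) threshold.
    return softmax(logits).max(axis=1) > threshold

tpr = infer_membership(member_logits).mean()     # members correctly flagged
fpr = infer_membership(nonmember_logits).mean()  # non-members wrongly flagged
print(f"TPR={tpr:.2f}, FPR={fpr:.2f}")
```

A gap between the true-positive and false-positive rates is what "quantifying privacy leakage" measures; a defended system would drive the two rates together.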