Computer science
Upload
Federated learning
Partition (number theory)
Raw data
Server
Data modeling
Machine learning
Artificial intelligence
Computer security
Task (project management)
Database
Computer network
Operating system
Engineering
Mathematics
Systems engineering
Combinatorics
Programming language
Authors
Siman Huang, Yan Bai, Zehua Wang, Peng Liu
Identifier
DOI:10.1109/icccr54399.2022.9790094
Abstract
Federated learning is an emerging, distributed machine learning methodology in which participants train on their own local data and submit only the resulting model parameters; the server aggregates the parameters submitted by users to form a federated learning model. Because participants send local learning results rather than raw data, the server never observes the data itself, and this opacity protects user privacy. However, since the raw data is never sent to the server, its quality and integrity cannot be guaranteed: malicious attackers can inject poisoning attacks into their own data or into the uploaded parameters. Federated learning is therefore very vulnerable to distributed poisoning attacks, which degrade model performance significantly, and designing a robust federated learning scheme is a challenging task. In this paper, we propose an isolation forest-based federated learning defense model, IFFed, to eliminate the poisoned model parameters of attackers. In each iteration of federated learning, we use isolation forests to partition the data space of model parameters and compute an anomaly probability for each participant. Further, we use auxiliary data to design dynamic thresholds that exclude attackers while minimizing the impact of the defense on benign users. Experiments show that IFFed can automatically defend against poisoning attacks, and that the performance of the defending model can be significantly improved by adjusting the pre-trained auxiliary model to ensure proper training of the global model.
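The core idea in the abstract — score each client's submitted parameter vector with an isolation forest, drop the ones that isolate too quickly, and average the rest — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the tiny hand-rolled isolation forest, the fixed threshold of 0.6 (the paper derives dynamic thresholds from auxiliary data), and all function names here are assumptions for demonstration.

```python
import math
import random

def build_tree(X, depth, max_depth, rng):
    # One isolation tree: recursive random axis-aligned splits.
    if depth >= max_depth or len(X) <= 1:
        return ("leaf", len(X))
    d = rng.randrange(len(X[0]))          # random split dimension
    vals = [x[d] for x in X]
    lo, hi = min(vals), max(vals)
    if lo == hi:
        return ("leaf", len(X))
    s = rng.uniform(lo, hi)               # random split value
    left = [x for x in X if x[d] < s]
    right = [x for x in X if x[d] >= s]
    return ("node", d, s,
            build_tree(left, depth + 1, max_depth, rng),
            build_tree(right, depth + 1, max_depth, rng))

def c(n):
    # Expected path length of an unsuccessful BST search over n points,
    # the standard isolation-forest normalization term.
    if n <= 1:
        return 0.0
    return 2.0 * (math.log(n - 1) + 0.5772156649) - 2.0 * (n - 1) / n

def path_len(tree, x, depth=0):
    if tree[0] == "leaf":
        return depth + c(tree[1])
    _, d, s, left, right = tree
    return path_len(left if x[d] < s else right, x, depth + 1)

def anomaly_scores(X, n_trees=100, seed=0):
    # Isolation-forest anomaly score in (0, 1); higher = more anomalous.
    rng = random.Random(seed)
    max_depth = math.ceil(math.log2(max(len(X), 2)))
    trees = [build_tree(X, 0, max_depth, rng) for _ in range(n_trees)]
    cn = c(len(X))
    return [2 ** (-sum(path_len(t, x) for t in trees) / n_trees / cn)
            for x in X]

def filtered_average(updates, threshold=0.6):
    # Score all client updates, drop those above the threshold,
    # then take a plain FedAvg over the surviving clients.
    scores = anomaly_scores(updates)
    kept = [u for u, s in zip(updates, scores) if s < threshold]
    dim = len(updates[0])
    return [sum(u[i] for u in kept) / len(kept) for i in range(dim)], scores

# Demo: 9 benign clients clustered near 1.0, one poisoned outlier.
rng = random.Random(1)
benign = [[1.0 + rng.gauss(0, 0.05) for _ in range(4)] for _ in range(9)]
poisoned = [[8.0, -8.0, 8.0, -8.0]]
agg, scores = filtered_average(benign + poisoned)
```

Because the poisoned update sits far from the benign cluster in parameter space, random splits isolate it in very few steps, giving it the highest anomaly score; the aggregate is then computed only from the benign updates.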