Backdoor
Computer science
MNIST database
Shuffling
Cluster analysis
Upload
Robustness (evolution)
Information privacy
Artificial intelligence
Computer security
Machine learning
Data mining
Deep learning
Biochemistry
Gene
Operating system
Chemistry
Programming language
Authors
Zekai Chen, Shengxing Yu, Mingyuan Fan, Ximeng Liu, Robert H. Deng
Identifier
DOI: 10.1109/TIFS.2023.3326983
Abstract
Federated learning (FL) allows multiple clients to train deep learning models collaboratively while protecting sensitive local datasets. In practical application scenarios, however, FL is highly susceptible to security threats from federated backdoor attacks (FBA), which inject triggers into uploaded models, and to privacy threats from potential data leakage out of those models. Existing FBA defense strategies consider only specific and limited attacker models, and injecting a sufficient amount of noise can merely mitigate rather than eliminate the attack. To address these deficiencies, we introduce a Robust Federated Backdoor Defense Scheme (RFBDS) and a Privacy-preserving RFBDS (PrivRFBDS) to ensure the elimination of adversarial backdoors. Our RFBDS against FBA consists of amplified magnitude sparsification, adaptive OPTICS clustering, and adaptive clipping. We evaluate RFBDS on three benchmark datasets and compare it extensively with state-of-the-art studies. The results demonstrate the promising defense performance of RFBDS, which reduces the average FBA success rate over MNIST, FMNIST, and CIFAR10 by 31.75% ~ 73.75% compared with clustering defense methods, and by up to 0.03% ~ 56.90% under Non-IID settings. Besides, our privacy-preserving shuffling in PrivRFBDS is $7.83\times10^{-5} \sim 0.42\times$ that of state-of-the-art works.
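The abstract names three server-side components for RFBDS: amplified magnitude sparsification, adaptive OPTICS clustering, and adaptive clipping. Below is a minimal sketch of how one aggregation round combining these steps might look, assuming the server holds one flattened update vector per client; the function names, the keep_ratio and min_samples values, the cosine metric, and the median-norm clipping bound are illustrative assumptions, not the paper's actual design.

# Minimal sketch of an RFBDS-style aggregation round (illustrative, not the
# paper's implementation). Requires numpy and scikit-learn.
import numpy as np
from sklearn.cluster import OPTICS

def sparsify_by_magnitude(update: np.ndarray, keep_ratio: float = 0.1) -> np.ndarray:
    """Keep only the largest-magnitude coordinates of an update
    (a simple stand-in for magnitude sparsification; keep_ratio is assumed)."""
    k = max(1, int(keep_ratio * update.size))
    threshold = np.partition(np.abs(update), -k)[-k]
    return np.where(np.abs(update) >= threshold, update, 0.0)

def clip_update(update: np.ndarray, clip_norm: float) -> np.ndarray:
    """Scale an update so its L2 norm is at most clip_norm
    (clipping against a data-derived bound; the bound choice is assumed)."""
    norm = np.linalg.norm(update)
    return update * min(1.0, clip_norm / (norm + 1e-12))

def rfbds_aggregate(client_updates: list[np.ndarray]) -> np.ndarray:
    """Cluster sparsified updates with OPTICS, keep the majority cluster
    as benign, clip the survivors, and average them."""
    sparse = np.stack([sparsify_by_magnitude(u) for u in client_updates])
    labels = OPTICS(min_samples=2, metric="cosine").fit_predict(sparse)
    valid = labels[labels >= 0]  # OPTICS marks outliers with label -1
    if valid.size == 0:
        benign_idx = np.arange(len(client_updates))  # fallback: keep everyone
    else:
        majority = np.bincount(valid).argmax()
        benign_idx = np.where(labels == majority)[0]
    # Clip surviving updates to the median norm of the benign set, then average.
    clip_norm = float(np.median([np.linalg.norm(client_updates[i]) for i in benign_idx]))
    clipped = [clip_update(client_updates[i], clip_norm) for i in benign_idx]
    return np.mean(clipped, axis=0)

# Example: aggregate five random client updates of dimension 1000.
rng = np.random.default_rng(0)
global_delta = rfbds_aggregate([rng.normal(size=1000) for _ in range(5)])

In a real FL round, client_updates would be the per-round model deltas uploaded by clients, and the returned vector would be applied to the global model; the hypothetical majority-cluster rule here simply assumes that benign clients outnumber backdoored ones.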