Keywords
Backdoor
Computer science
Adaptability
Robustness (evolution)
Artificial intelligence
Machine learning
Weighting
Euclidean distance
Metric (unit)
Data mining
Computer security
Engineering
Gene
Biology
Ecology
Radiology
Medicine
Biochemistry
Chemistry
Operations management
Authors
Siquan Huang, Yijiang Li, Chong Chen, Ying Gao, Xiping Hu
Identifier
DOI: 10.1109/TPAMI.2025.3581555
Abstract
Federated learning (FL), recognized for its decentralized and privacy-preserving nature, is vulnerable to backdoor attacks that aim to manipulate the model's behavior on attacker-chosen inputs. Most existing defenses based on statistical differences are effective only against specific attacks. This limitation becomes especially pronounced when malicious gradients closely resemble benign ones or when the data is non-IID, rendering such defenses ineffective against stealthy attacks. This paper revisits distance-based defense methods and uncovers two critical insights: first, Euclidean distance becomes meaningless in high dimensions; second, a single metric cannot identify malicious gradients with diverse characteristics. As a remedy, we propose FedID, a simple yet effective strategy that employs multiple metrics with dynamic weighting for adaptive backdoor detection. In addition, we present a modified z-score approach to select the gradients used for aggregation. Notably, FedID does not rely on predefined assumptions about attack settings or data distributions and has minimal impact on benign performance. We conduct extensive experiments on various datasets and attack scenarios to assess its effectiveness. FedID consistently outperforms previous defenses, particularly excelling in challenging Edge-case PGD scenarios. Our experiments highlight its robustness against adaptive attacks tailored to break the proposed defense, as well as its adaptability to a wide range of non-IID data distributions without compromising benign performance.
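The abstract names the ingredients of FedID (multiple distance metrics, dynamic weighting, and a modified z-score filter) without giving the exact formulation. Below is a minimal sketch of how such a filter could be assembled, assuming flattened client updates as NumPy arrays. The particular metrics (Euclidean, Manhattan, cosine), the variance-based weighting, the 3.5 cutoff, and all function names are illustrative assumptions, not the paper's actual design.

```python
# Illustrative sketch only: metrics, weighting, and threshold are assumptions,
# not the formulation from the FedID paper.
import numpy as np

def modified_z_scores(values: np.ndarray) -> np.ndarray:
    """Modified z-score based on the median and MAD (robust to outliers)."""
    med = np.median(values)
    mad = np.median(np.abs(values - med))
    if mad == 0:
        return np.zeros_like(values)
    return 0.6745 * (values - med) / mad

def metric_scores(updates: np.ndarray) -> np.ndarray:
    """Score each client update under several metrics (one row per metric)."""
    center = np.median(updates, axis=0)          # robust reference update
    diff = updates - center
    euclidean = np.linalg.norm(diff, axis=1)     # L2 distance to the center
    manhattan = np.abs(diff).sum(axis=1)         # L1 distance to the center
    # Cosine distance to the center (1 - cosine similarity).
    denom = np.linalg.norm(updates, axis=1) * np.linalg.norm(center) + 1e-12
    cosine = 1.0 - (updates @ center) / denom
    return np.stack([euclidean, manhattan, cosine])

def select_benign(updates: np.ndarray, threshold: float = 3.5) -> np.ndarray:
    """Return indices of client updates kept for aggregation.

    Each metric's scores are standardized with a modified z-score, weighted
    by the raw score variance (a stand-in for the paper's dynamic weighting),
    and clients whose combined score exceeds the threshold are dropped.
    """
    scores = metric_scores(updates)
    z = np.stack([modified_z_scores(s) for s in scores])
    weights = scores.var(axis=1)                 # hypothetical dynamic weights
    weights = weights / (weights.sum() + 1e-12)
    combined = weights @ z                       # weighted per-client score
    return np.where(combined < threshold)[0]

# Toy usage: 8 benign updates plus 2 scaled, malicious-looking ones.
rng = np.random.default_rng(0)
updates = rng.normal(size=(10, 1000))
updates[8:] *= 10.0
print(select_benign(updates))                    # typically keeps clients 0-7
```

In this sketch the scaled updates receive large Euclidean and Manhattan z-scores and are filtered out, while the aggregation rule applied to the surviving updates (e.g., plain averaging) is left to the caller.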