Computer science
Computer security
Convergence (economics)
Similarity (geometry)
Edge
Sine
Model attack
Artificial intelligence
Geometry
Mathematics
Economics
Image (mathematics)
Economic growth
Authors
Harsh Kasyap, Somanath Tripathy
Identifier
DOI: 10.1109/tdsc.2024.3353317
Abstract
Federated learning is a collaborative learning paradigm that brings the model to the edge for training over the participants' local data under the orchestration of a trusted server. Although this paradigm protects data privacy, the aggregator has no control over the local data or models at the edge, so malicious participants can perturb their locally held data or model to post an insidious update that degrades global model accuracy. Recent Byzantine-robust aggregation rules can defend against data poisoning attacks, while model poisoning attacks have become more ingenious and adaptive to existing defenses; however, these attacks are crafted against specific aggregation rules. This work presents a generic model poisoning attack framework named Sine (Similarity is not enough), which harnesses vulnerabilities in cosine similarity to increase the impact of poisoning attacks by 20-30%. Sine makes convergence unachievable by keeping the attack persistent. Further, we propose an effective defense technique called FLTC (FL Trusted Coordinates) to mitigate such attacks. FLTC selects the trusted coordinates and aggregates them based on the change in their direction and magnitude with respect to a trusted base model update. FLTC successfully defends against poisoning attacks, including adaptive model poisoning attacks, restricting the attack impact to 2-4%.
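The abstract gives no implementation details, so the following is only a minimal, hypothetical Python sketch of the idea it describes: an aggregator that screens client updates by cosine similarity alone can be fooled by a malicious update that preserves the benign direction while scaling its magnitude, whereas a coordinate-wise check on both direction and magnitude relative to a trusted base update (in the spirit of FLTC) filters such coordinates out. All function names, thresholds, and shapes are illustrative assumptions, not the authors' code.

```python
# Hypothetical sketch (not the authors' code): cosine similarity alone is not
# enough, and a coordinate-wise direction-and-magnitude filter in the spirit
# of FLTC. Names and thresholds are illustrative assumptions.
import numpy as np

def cosine_similarity(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12))

def malicious_update(base_update, scale=10.0):
    # Scaling a benign-looking direction keeps cosine similarity ~1.0
    # while greatly amplifying the perturbation pushed into the global model.
    return scale * base_update

def fltc_like_aggregate(client_updates, base_update, mag_tol=2.0):
    """Keep only 'trusted' coordinates: those whose sign agrees with the
    trusted base update and whose magnitude stays within mag_tol times the
    base magnitude; average the surviving coordinates across clients."""
    base = np.asarray(base_update)
    updates = np.stack(client_updates)           # shape: (n_clients, dim)
    same_direction = np.sign(updates) == np.sign(base)
    bounded = np.abs(updates) <= mag_tol * np.abs(base) + 1e-12
    trusted = same_direction & bounded           # per-client, per-coordinate mask
    counts = np.maximum(trusted.sum(axis=0), 1)  # avoid division by zero
    return (updates * trusted).sum(axis=0) / counts

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    base = rng.normal(size=8)                    # trusted base model update
    benign = [base + 0.1 * rng.normal(size=8) for _ in range(4)]
    attack = malicious_update(base, scale=10.0)  # high cosine similarity, huge norm
    print("cosine(attack, base) =", round(cosine_similarity(attack, base), 4))
    agg = fltc_like_aggregate(benign + [attack], base)
    print("aggregate close to base?", np.allclose(agg, base, atol=1.0))
```

In this toy setup the scaled malicious update still scores a cosine similarity of about 1.0 against the base update, which is the gap a Sine-style attack exploits; the coordinate-wise magnitude bound is what excludes its oversized coordinates from the aggregate.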