Authors
Liyue Shen, Yanjun Zhang, Jingwei Wang, Guangdong Bai
Identifier
DOI: 10.1145/3564625.3564658
Abstract
Manipulation of local training data and local updates, i.e., the Byzantine poisoning attack, is the main threat arising from the collaborative nature of the federated learning (FL) paradigm. Many Byzantine-robust aggregation algorithms (AGRs) have been proposed to filter out or moderate suspicious local updates uploaded by Byzantine participants at the central aggregator. However, they largely suffer from model quality degradation due to the over-removal of local updates, and/or from inefficiency caused by the expensive analysis of high-dimensional local updates.
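As background for the abstract's mention of Byzantine-robust AGRs, the following is a minimal Python sketch of one classic such rule, the coordinate-wise median (studied, e.g., by Yin et al., 2018). It illustrates the general filtering idea only; it is not the method proposed in this paper, and the function and variable names are illustrative.

import numpy as np

def coordinate_wise_median(updates):
    """Aggregate client updates by taking the per-coordinate median.

    A classic Byzantine-robust AGR: as long as fewer than half of the
    clients are Byzantine, the median of each coordinate cannot be
    pulled arbitrarily far by poisoned updates.
    """
    # updates: list of 1-D numpy arrays, one flattened update per client
    stacked = np.stack(updates)          # shape: (num_clients, num_params)
    return np.median(stacked, axis=0)    # robust per-coordinate aggregate

# Toy round: three honest clients and one Byzantine client that
# uploads a wildly scaled update to poison the global model.
honest = [np.array([0.10, -0.20, 0.05]),
          np.array([0.12, -0.18, 0.07]),
          np.array([0.09, -0.22, 0.04])]
byzantine = [np.array([100.0, 100.0, -100.0])]

agg = coordinate_wise_median(honest + byzantine)
print(agg)  # stays near the honest updates: [0.11, -0.19, 0.045]

Even this simple rule hints at the efficiency concern the abstract raises: the aggregator must process every one of the model's parameters across all clients each round, which becomes expensive for high-dimensional local updates.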