Keywords
Auditing, Computer Science, Robustness, Artificial Intelligence, Byzantine Fault Tolerance, Distributed Learning, Machine Learning, Federated Learning, Computer Security, Distributed Computing, Accounting, Business, Psychology, Gene, Chemistry, Fault Tolerance, Biochemistry, Education
Authors
Zhuangzhuang Zhang, Libing Wu, Debiao He, Jianxin Li, Na Lu, Xuejiang Wei
Identifiers
DOI:10.1109/tsusc.2024.3379440
Abstract
Federated Learning (FL), as a distributed machine learning technique, holds promise for training models on distributed data in the Artificial Intelligence of Things (AIoT). However, FL is vulnerable to Byzantine attacks from diverse participants. While numerous Byzantine-robust FL solutions have been proposed, most of them deploy defenses at either the aggregation server or the participants, significantly altering the original FL process. Moreover, such defenses impose an extra computational burden on the server or the participants, which is especially unsuitable for the resource-constrained AIoT domain. To resolve these concerns, we propose FL-Auditor, a Byzantine-robust FL approach based on public auditing. Its core idea is to use a Third-Party Auditor (TPA) to audit samples from the FL training process and analyze the trustworthiness of different participants, thereby helping FL obtain a more robust global model. In addition, we design a lazy update mechanism to reduce the negative impact of the sampling audit on the performance of the global model. Extensive experiments demonstrate the effectiveness of FL-Auditor in terms of accuracy, robustness against attacks, and flexibility. In particular, compared with existing methods, FL-Auditor reduces the computation time on the aggregation server by 8×-17×.
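To make the high-level idea in the abstract concrete, the following is a minimal sketch of sampling-based auditing with trust-weighted aggregation and a lazy score update. It is not FL-Auditor's actual algorithm: the audit rule (distance to the coordinate-wise median), the scoring function, the `lazy_every` schedule, and helper names such as `audit_trust_scores` are illustrative assumptions introduced here.

```python
# Illustrative sketch only: a third-party auditor samples participant updates,
# assigns trust scores, and the server aggregates updates weighted by those
# scores. All concrete rules below are assumptions, not the paper's mechanism.
import numpy as np

def audit_trust_scores(updates, prev_scores, sample_frac=0.3, lazy_every=5, round_idx=0):
    """Return per-participant trust scores; re-audit only every `lazy_every` rounds."""
    if round_idx % lazy_every != 0:          # lazy update: reuse previous scores
        return prev_scores
    n = len(updates)
    sampled = np.random.choice(n, size=max(1, int(sample_frac * n)), replace=False)
    scores = prev_scores.copy()
    median_update = np.median(np.stack(updates), axis=0)
    for i in sampled:                        # audit only the sampled participants
        dist = np.linalg.norm(updates[i] - median_update)
        scores[i] = 1.0 / (1.0 + dist)       # closer to the median -> more trusted
    return scores

def aggregate(updates, scores):
    """Trust-weighted average of participant updates."""
    w = np.asarray(scores, dtype=float)
    w = w / w.sum()
    return np.sum([wi * u for wi, u in zip(w, updates)], axis=0)

# Toy usage: 5 participants, one sending an outlier (Byzantine-style) update.
# sample_frac=1.0 audits everyone in this demo so the outlier is always scored.
updates = [np.ones(4) for _ in range(4)] + [np.full(4, 50.0)]
scores = np.ones(5)
scores = audit_trust_scores(updates, scores, sample_frac=1.0, round_idx=0)
print(aggregate(updates, scores))            # close to the honest all-ones update
```

Under these assumptions, the lazy update simply reuses the previous round's trust scores between audit rounds, which is one way to limit how often the auditor's sampling perturbs aggregation; the paper's own lazy update mechanism may differ.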