Computer science
Benchmark (surveying)
MNIST database
Artificial intelligence
Retraining
Scheme (mathematics)
Deep learning
Mathematics
Law
Mathematical analysis
Geodesy
Geography
Political science
Authors
Xintong Guo,Pengfei Wang,Sen Qiu,Wei Song,Qiang Zhang,Xiaopeng Wei,Dongsheng Zhou
Identifier
DOI:10.1109/tnse.2023.3343117
Abstract
The emergence of the right to be forgotten has sparked interest in federated unlearning. Researchers utilize federated unlearning to remove user contributions from trained models. However, practical implementation is challenging when malicious clients are involved. In this paper, we propose FAST (adopting Federated unleArning to eliminate maliciouS Terminals at the server side), a framework with three main components: 1) eliminating contributions of malicious clients: the central server records the updates of malicious clients and subtracts them from the global model; 2) judging unlearning efficiency: we model a mechanism to assess unlearning efficiency and prevent over-unlearning; and 3) remedying unlearned-model performance: the central server utilizes a benchmark dataset to remedy model bias introduced during unlearning. Experimental results demonstrate that FAST achieves 96.98% accuracy on the MNIST dataset with 40% malicious clients, offering a 16x speedup over retraining from scratch. Meanwhile, it recovers model utility with high efficiency, and extensive evaluations on four real-world datasets demonstrate the validity of the proposed scheme.
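The first component of the abstract (the server recording malicious clients' updates and subtracting them from the global model) can be illustrated with a minimal sketch. This is not the paper's implementation: it assumes a simple FedAvg-style aggregation over weight vectors, and all class and function names here (`UnlearningServer`, `aggregate_round`, `unlearn`) are hypothetical.

```python
import numpy as np

def fedavg(updates):
    """FedAvg-style aggregation: average the client updates."""
    return np.mean(updates, axis=0)

class UnlearningServer:
    """Hypothetical server that logs per-client contributions so they
    can later be subtracted from the global model (record-and-subtract
    unlearning, as sketched in the abstract's component 1)."""

    def __init__(self, init_weights):
        self.weights = init_weights.astype(float).copy()
        self.history = {}  # client_id -> cumulative contribution applied

    def aggregate_round(self, client_updates):
        """Apply the averaged update and record each client's share of it."""
        n = len(client_updates)
        for cid, upd in client_updates.items():
            # Each client's contribution to the averaged update is upd / n.
            self.history[cid] = self.history.get(cid, 0.0) + upd / n
        self.weights += fedavg(list(client_updates.values()))

    def unlearn(self, malicious_ids):
        """Subtract the recorded contributions of the named clients."""
        for cid in malicious_ids:
            self.weights -= self.history.pop(cid, 0.0)
        return self.weights
```

In this toy version, unlearning a client exactly cancels its recorded share of every averaged update; the paper's second and third components (judging unlearning efficiency and remedying bias with a benchmark dataset) are not modeled here.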