Keywords
Federated learning, Computer science, Artificial intelligence, Architecture, Machine learning, Transfer learning, Distillation
Authors
Fangchao Yu, Lina Wang, Bo Zeng, Kai Zhao, Rongwei Yu
Source
Journal: Neurocomputing [Elsevier BV]
Date: 2024-01-17
Volume 575, Article 127290
Citations: 4
Identifier
DOI:10.1016/j.neucom.2024.127290
Abstract
Federated learning is a distributed learning framework in which all participants jointly train a global model while preserving data privacy. In existing federated learning frameworks, all clients share the same global model and cannot customize the model architecture to their needs. In this paper, we propose FLKD (federated learning with knowledge distillation), a personalized and privacy-enhanced federated learning framework. In FLKD, the global model serves as a medium for knowledge transfer, and each client can customize its local model and train it alongside the global model through mutual learning. Furthermore, the participation of heterogeneous local models changes the training strategy of the global model, giving FLKD a natural immunity to gradient leakage attacks. We conduct extensive empirical experiments to train and evaluate our framework. The results show that FLKD effectively solves the problem of model heterogeneity and defends against gradient leakage attacks.
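The mutual-learning mechanism the abstract describes (a client's heterogeneous local model and the shared global model teaching each other through their predictions) can be sketched roughly as below. This is a minimal illustration assuming a deep-mutual-learning-style objective of cross-entropy plus a temperature-scaled KL term; the loss weight alpha, the temperature, and the update order are illustrative assumptions, not details taken from the paper.

    # Illustrative sketch only: FLKD's exact loss and schedule are not
    # specified here; this assumes a DML-style CE + KL objective.
    import torch
    import torch.nn.functional as F

    def mutual_learning_step(global_model, local_model, batch, labels,
                             opt_g, opt_l, temperature=2.0, alpha=0.5):
        """One mutual-learning update: each model fits the labels and
        mimics the other's temperature-softened predictions."""
        # --- Update the global model against the local model's outputs ---
        logits_g = global_model(batch)
        logits_l = local_model(batch)
        log_p_g = F.log_softmax(logits_g / temperature, dim=1)
        p_l = F.softmax(logits_l / temperature, dim=1).detach()
        loss_g = (F.cross_entropy(logits_g, labels)
                  + alpha * (temperature ** 2)
                  * F.kl_div(log_p_g, p_l, reduction="batchmean"))
        opt_g.zero_grad()
        loss_g.backward()
        opt_g.step()

        # --- Update the local model against the (updated) global model ---
        logits_g = global_model(batch).detach()
        logits_l = local_model(batch)
        log_p_l = F.log_softmax(logits_l / temperature, dim=1)
        p_g = F.softmax(logits_g / temperature, dim=1)
        loss_l = (F.cross_entropy(logits_l, labels)
                  + alpha * (temperature ** 2)
                  * F.kl_div(log_p_l, p_g, reduction="batchmean"))
        opt_l.zero_grad()
        loss_l.backward()
        opt_l.step()
        return loss_g.item(), loss_l.item()

Under this reading, each client would run such a step per batch and only the global model's parameters would be aggregated server-side, which is consistent with the abstract's point that heterogeneous local models never need to leave the client.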