Authors
Jinbo Wang, Ruijin Wang, Xikai Pei
Identifier
DOI:10.1145/3651671.3651704
Abstract
Federated learning is a distributed machine learning paradigm that enables multiple parties to collaboratively train a neural network model without transferring their local data, thereby strengthening data privacy. A key challenge in federated learning is statistical heterogeneity: local data across the participating parties are non-independent and identically distributed (Non-IID), which can cause inconsistent optimization of the individual local models. Although prior work has attempted to address the problems arising from heterogeneous data, our findings indicate that these attempts have not yielded high-performance neural network models. To confront this fundamental challenge, this paper introduces FedRL, a framework for efficient federated learning through review learning. The core idea of FedRL is to use the knowledge representations produced by the layers of the global and local models to perform periodic, layer-by-layer comparative learning in a reciprocal manner, correcting local model training and improving its outcome. Experimental results and analysis show that FedRL improves model accuracy on image classification tasks while remaining robust to statistical heterogeneity across all participating parties.
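To make the abstract's core idea concrete, the following is a minimal sketch, in PyTorch, of how a client might perform layer-by-layer comparison between local and global representations during local training. The network architecture, the MSE-based alignment term, and the `review_weight` coefficient are all illustrative assumptions; the paper's exact review-learning formulation is not given in the abstract.

```python
# Hypothetical sketch of layer-wise "review learning" on one client.
# Assumes a FedAvg-style setup; not the authors' exact method.
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

class SmallNet(nn.Module):
    """Toy classifier that also returns per-layer representations."""
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.layers = nn.ModuleList([
            nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 256), nn.ReLU()),
            nn.Sequential(nn.Linear(256, 128), nn.ReLU()),
        ])
        self.head = nn.Linear(128, num_classes)

    def forward(self, x):
        feats = []  # knowledge representation of each layer
        for layer in self.layers:
            x = layer(x)
            feats.append(x)
        return self.head(x), feats

def local_update(global_model, data_loader, review_weight=0.1, lr=0.01):
    """One client round: task loss plus a layer-by-layer review term
    that pulls each local representation toward the frozen global one."""
    local_model = copy.deepcopy(global_model)
    global_model.eval()
    opt = torch.optim.SGD(local_model.parameters(), lr=lr)
    for x, y in data_loader:
        logits, local_feats = local_model(x)
        with torch.no_grad():
            _, global_feats = global_model(x)
        loss = F.cross_entropy(logits, y)
        # Layer-wise comparison: penalize drift of each local layer's
        # representation from the global model's (MSE as a stand-in metric).
        for lf, gf in zip(local_feats, global_feats):
            loss = loss + review_weight * F.mse_loss(lf, gf)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return local_model.state_dict()
```

In a full round, the server would aggregate the returned client state dicts (e.g. by FedAvg-style weighted averaging) into the next global model; the "periodic" aspect described in the abstract could be realized by applying the review term only every few rounds, though that schedule is likewise an assumption here.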