Federated learning
Computer science
Software deployment
Training set
Labeled data
Artificial intelligence
Performance improvement
Machine learning
Operating system
Operations management
Economics
Authors
Xiao-Xiang Wei, Hua Huang
Identifier
DOI:10.1109/tnnls.2022.3233093
Abstract
Federated semi-supervised learning (FSSL) aims to train models with both labeled and unlabeled data in federated settings, enabling performance improvement and easier deployment in realistic scenarios. However, non-independently and identically distributed (non-IID) data across clients leads to imbalanced model training due to unfair learning effects on different classes. As a result, the federated model exhibits inconsistent performance not only across different classes but also across different clients. This article presents a balanced FSSL method with a fairness-aware pseudo-labeling (FAPL) strategy to tackle this fairness issue. Specifically, the strategy globally balances the total number of unlabeled data samples that can participate in model training. The global numerical restrictions are then decomposed into personalized local restrictions for each client to assist local pseudo-labeling. Consequently, the method derives a fairer federated model for all clients and achieves better performance. Experiments on image classification datasets demonstrate the superiority of the proposed method over state-of-the-art FSSL methods.
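The abstract describes two steps: globally balancing how many unlabeled samples of each class may be pseudo-labeled, then decomposing those global limits into per-client restrictions. The sketch below illustrates one plausible reading of that scheme, assuming per-client counts of confident predictions are available at the server; the function names, the proportional decomposition, and the confidence threshold `tau` are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

def global_class_quotas(client_class_counts, budget):
    """Balance the total number of pseudo-labeled samples per class.
    client_class_counts: (n_clients, n_classes) counts of confident
    unlabeled predictions per class (an assumed server-side statistic).
    Equalizing on the smallest class total keeps classes balanced."""
    totals = client_class_counts.sum(axis=0)        # global count per class
    quota = min(budget, int(totals.min()))          # same quota for every class
    return np.full(totals.shape, quota, dtype=int)

def local_quotas(client_class_counts, global_quota):
    """Decompose global per-class quotas into per-client restrictions,
    here proportionally to each client's share of confident samples."""
    totals = client_class_counts.sum(axis=0).clip(min=1)
    share = client_class_counts / totals            # each client's fraction per class
    return np.floor(share * global_quota).astype(int)

def select_pseudo_labels(probs, quotas, tau=0.95):
    """On one client: keep at most quotas[c] pseudo-labels of class c,
    choosing the most confident predictions above threshold tau."""
    preds = probs.argmax(axis=1)
    conf = probs.max(axis=1)
    keep = np.zeros(len(probs), dtype=bool)
    for c, q in enumerate(quotas):
        idx = np.where((preds == c) & (conf >= tau))[0]
        idx = idx[np.argsort(-conf[idx])][:q]       # top-q most confident of class c
        keep[idx] = True
    return keep
```

Under this sketch, a majority class on a data-rich client can no longer flood training with pseudo-labels: its contribution is capped by the globally balanced quota and by the client's local share of it.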