Computer science
MNIST database
Federated learning
Class (philosophy)
Artificial intelligence
Machine learning
Architecture
Data mining
Deep learning
Art
Visual arts
Authors
Zhihe Zhao, Feng Yang, Guirong Liang
Identifier
DOI:10.1007/978-981-99-8546-3_18
Abstract
Federated learning is a distributed machine learning paradigm that allows model training without centralizing sensitive data in a single place. However, non-independent and identically distributed (non-IID) data can degrade learning performance in federated learning. Data augmentation schemes have been proposed to address this issue, but they often require sharing clients’ original data, which poses privacy risks. To address these challenges, we propose FedDDA, a data-augmentation-based federated learning architecture that uses diffusion models to generate data conforming to the global class distribution, thereby alleviating the non-IID data problem. In FedDDA, a diffusion model is first trained through federated learning and then used for data augmentation, mitigating the degree of non-IID skew without disclosing clients’ original data. Our experiments on non-IID settings with various configurations show that FedDDA significantly outperforms FedAvg, with up to 43.04% improvement on the CIFAR-10 dataset and up to 20.05% improvement on the Fashion-MNIST dataset. Additionally, we find that even relatively low-quality generated samples that conform to the global class distribution still improve federated learning performance considerably.
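The abstract describes a two-stage pipeline: first a diffusion model is trained federatively, then each client samples class-conditional synthetic data so its local class histogram matches the global class distribution. Below is a minimal Python sketch of the FedAvg aggregation step and that augmentation step. All names (`fedavg`, `sample_from_diffusion`, `augment_to_global_distribution`) and the add-only rebalancing rule are assumptions inferred from the abstract, not the paper's actual implementation.

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """FedAvg: size-weighted average of per-client parameter lists.

    client_weights: list of clients, each a list of numpy arrays.
    client_sizes:   number of local samples per client.
    """
    total = sum(client_sizes)
    return [
        sum(w[i] * (n / total) for w, n in zip(client_weights, client_sizes))
        for i in range(len(client_weights[0]))
    ]

def sample_from_diffusion(diffusion_model, class_label, n):
    """Hypothetical stand-in for conditional sampling from the federated
    diffusion model; here it just returns random 32x32x3 'images'."""
    rng = np.random.default_rng(class_label)
    return rng.standard_normal((n, 32, 32, 3))

def augment_to_global_distribution(local_counts, global_dist, diffusion_model):
    """Generate just enough synthetic samples per class so the client's
    class histogram matches the global class distribution (add-only:
    real data is never discarded)."""
    # Smallest total N such that p_c * N covers every existing class count.
    target_total = max(local_counts.get(c, 0) / p for c, p in global_dist.items())
    synthetic = {}
    for c, p in global_dist.items():
        deficit = round(p * target_total) - local_counts.get(c, 0)
        if deficit > 0:
            synthetic[c] = sample_from_diffusion(diffusion_model, c, deficit)
    return synthetic

if __name__ == "__main__":
    local_counts = {0: 500, 1: 40, 2: 10}         # heavily skewed (non-IID) client
    global_dist = {0: 1 / 3, 1: 1 / 3, 2: 1 / 3}  # uniform global class mix
    extra = augment_to_global_distribution(local_counts, global_dist, None)
    print({c: x.shape[0] for c, x in extra.items()})  # {1: 460, 2: 490}
```

The add-only rule sketched here only generates samples for underrepresented classes rather than discarding real data, which is one plausible reading of "generate data conforming to the global class distribution"; the paper may balance classes differently.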