Computer science
Overhead (engineering)
Federated learning
Personalization
Autoencoder
Independent and identically distributed random variables
Bandwidth (computing)
Data modeling
Artificial intelligence
Training set
Data mining
Machine learning
Computer network
Database
Deep learning
Statistics
Mathematics
World Wide Web
Random variable
Operating system
Authors
Haomiao Yang,Mengyu Ge,Kunlan Xiang,Xuejun Bai,Hongwei Li
Source
Journal: IEEE Systems Journal
[Institute of Electrical and Electronics Engineers]
Date: 2023-05-24
Volume/Issue: 17 (3): 4798-4808
Cited by: 9
Identifier
DOI:10.1109/jsyst.2023.3274197
Abstract
Federated learning (FL), which collaboratively trains a shared global model without exchanging or centralizing local data, provides a promising solution for privacy preservation. However, it faces two main challenges: first, high communication cost, and second, low model quality due to imbalanced or nonindependent and identically distributed (non-IID) data. In this article, we propose FedVAE, an FL framework based on the variational autoencoder (VAE) for remote patient monitoring. FedVAE contains two lightweight VAEs: one for projecting data onto a lower dimensional space with a similar distribution, so as to alleviate the excessive communication overhead and slow convergence caused by non-IID data, and the other for avoiding training bias due to imbalanced data distribution by generating minority class samples. In general, the proposed FedVAE can improve the overall performance of FL models while consuming only a small amount of communication bandwidth. The experimental results show that the area under the curve (AUC) of FedVAE can reach 0.9937, which is even higher than that of the traditional centralized model (0.9931). Besides, fine-tuning the global model with personalization raises the average AUC to 0.9947. Moreover, compared with vanilla FL, FedVAE shows a 0.87% improvement in AUC while reducing communication traffic by at least 95%.
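The abstract does not give implementation details of FedVAE, but the generic VAE building block it relies on (an encoder mapping data to a low-dimensional latent space, a reparameterized sample, and a decoder) can be sketched in a forward pass. The following NumPy code is only an illustration of that building block under assumed, hypothetical dimensions and weight initializations; it is not the authors' implementation and includes no training loop.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0.0, x)

class TinyVAE:
    """Forward-pass sketch of a VAE: encode to a low-dim latent,
    reparameterize, then decode back to the input space."""

    def __init__(self, d_in, d_latent, d_hidden=16):
        # Randomly initialized weights stand in for trained parameters.
        self.We = rng.normal(0, 0.1, (d_in, d_hidden))      # encoder
        self.Wmu = rng.normal(0, 0.1, (d_hidden, d_latent)) # latent mean head
        self.Wlv = rng.normal(0, 0.1, (d_hidden, d_latent)) # latent log-var head
        self.Wd = rng.normal(0, 0.1, (d_latent, d_in))      # decoder

    def encode(self, x):
        h = relu(x @ self.We)
        return h @ self.Wmu, h @ self.Wlv  # mean, log-variance

    def reparameterize(self, mu, logvar):
        # z = mu + sigma * eps, with eps ~ N(0, I)
        eps = rng.standard_normal(mu.shape)
        return mu + np.exp(0.5 * logvar) * eps

    def decode(self, z):
        return z @ self.Wd

    def forward(self, x):
        mu, logvar = self.encode(x)
        z = self.reparameterize(mu, logvar)
        return self.decode(z), z

# Hypothetical example: 8 records with 32 features each, compressed
# to a 4-dimensional latent representation.
vae = TinyVAE(d_in=32, d_latent=4)
x = rng.standard_normal((8, 32))
x_hat, z = vae.forward(x)
print(z.shape)      # (8, 4): the low-dimensional representation
print(x_hat.shape)  # (8, 32): the reconstruction
```

In this sketch, the latent code `z` plays the role of the compact representation that reduces communication volume, while sampling fresh `z` vectors and decoding them is the mechanism a generative VAE uses to synthesize additional (e.g., minority class) samples.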