Keywords
MNIST database, computer science, information bottleneck method, bottleneck, robustness (evolution), ground truth, outlier, subgradient method, function (biology), convergence (economics), speedup, mutual information, artificial intelligence, algorithm, mathematical optimization, machine learning, deep learning, mathematics, economics, gene, operating system, embedded system, economic growth, evolutionary biology, chemistry, biochemistry, biology
Authors
Md Palash Uddin, Yong Xiang, Xuequan Lu, John Yearwood, Longxiang Gao
Identifier
DOI: 10.1109/tsc.2022.3187962
Abstract
Existing Federated Learning (FL) algorithms generally suffer from high communication costs and data heterogeneity because they use a conventional loss function for the local model update and give equal consideration to every local model during global model aggregation. In this paper, we propose a novel FL approach that addresses both issues. For the local model update, we propose a loss function based on a disentangled Information Bottleneck (IB) principle. For the global model aggregation, we propose a model selection strategy based on Mutual Information (MI). Specifically, we design a Lagrangian-based loss function that uses the IB principle and "disentanglement" to maximize the MI between the ground truth and the model prediction while minimizing the MI between the intermediate representations. To select effective models for aggregation, we compute the ratio of the MI between the ground truth and the model prediction to the MI between the original input and the ground truth. We analyze the theoretically optimal cost of the loss function, establish the optimal convergence rate, and quantify the outlier robustness of the aggregation scheme. Experiments demonstrate the superiority of the proposed FL approach in terms of testing performance and communication speedup (3.00-14.88 times for IID MNIST, 2.5-50.75 times for non-IID MNIST, 1.87-18.40 times for IID CIFAR-10, and 1.24-2.10 times for non-IID MIMIC-III).
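The abstract does not state the local loss in closed form. As a hedged sketch only: writing Y for the ground truth, Y-hat for the model prediction, Z_1, ..., Z_L for the intermediate representations, and beta for an assumed Lagrange multiplier, one plausible form of such a disentangled IB objective to be minimized is

```latex
\mathcal{L}_{\mathrm{IB}}
  \;=\; -\, I\bigl(Y;\hat{Y}\bigr)
  \;+\; \beta \sum_{i \neq j} I\bigl(Z_i; Z_j\bigr)
```

where the first term rewards MI between the ground truth and the prediction and the second penalizes MI among the intermediate representations; the exact pairing of representations and the value of beta are assumptions, not details given in the abstract.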
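Similarly, the MI-ratio model selection can be illustrated with a small Python sketch. This is not the authors' implementation: the threshold tau, the helper select_models, and the use of scikit-learn's mutual_info_score on discretized inputs are all assumptions made for illustration.

```python
# Hedged sketch of MI-ratio-based model selection (not the paper's code).
from sklearn.metrics import mutual_info_score

def select_models(y_true, preds_per_client, x_discrete, tau=0.5):
    """Return indices of clients whose MI ratio
    I(Y; Y_hat_k) / I(X; Y) meets the (assumed) threshold tau.

    y_true: discrete ground-truth labels.
    preds_per_client: list of per-client predicted label arrays.
    x_discrete: inputs reduced to discrete bins so that a plug-in
                MI estimate is possible (an assumption of this sketch).
    """
    base = mutual_info_score(x_discrete, y_true)  # estimate of I(X; Y)
    selected = []
    for k, y_pred in enumerate(preds_per_client):
        ratio = mutual_info_score(y_true, y_pred) / max(base, 1e-12)
        if ratio >= tau:
            selected.append(k)
    return selected

# Toy usage: client 1 predicts a constant, carries zero MI, and is dropped.
y = [0, 1, 1, 0, 1, 0]
x_bins = [0, 1, 1, 0, 1, 1]
preds = [[0, 1, 1, 0, 1, 0], [0, 0, 0, 0, 0, 0]]
print(select_models(y, preds, x_bins, tau=0.5))  # -> [0]
```

A FedAvg-style server would then average only the selected clients' model weights for the global update.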