Computer science
Constraint (computer-aided design)
Federated learning
Computation
Baseline (sea)
Reduction (mathematics)
Data stream
Machine learning
Artificial intelligence
Distributed learning
Distributed computing
Algorithm
Telecommunications
Engineering
Mathematics
Pedagogy
Mechanical engineering
Geometry
Oceanography
Geology
Psychology
Authors
Xin Yao, Chaofeng Huang, Lifeng Sun
Identifiers
DOI:10.1109/vcip.2018.8698609
Abstract
Federated learning addresses the problem of training machine learning models over distributed networks consisting of a massive number of modern smart devices. It overcomes the challenges of privacy preservation and of unbalanced, non-IID data distributions, while striving to reduce the number of required communication rounds. However, communication cost remains the principal constraint compared to other factors such as computation cost. In this paper, we adopt a two-stream model with an MMD (Maximum Mean Discrepancy) constraint, instead of the single model trained on devices in standard federated learning settings. Experiments show that the proposed model outperforms baseline methods, especially on non-IID data distributions, and achieves a reduction of more than 20% in required communication rounds.
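The abstract's two-stream idea is to regularize the locally trained model with an MMD penalty between its features and those of the received global model. Below is a minimal PyTorch sketch of such a loss term; the Gaussian kernel, the biased MMD estimator, and the mmd_weight hyperparameter are illustrative assumptions, not the paper's exact formulation.

import torch
import torch.nn.functional as F

def gaussian_kernel(x, y, sigma=1.0):
    # RBF kernel matrix between the row vectors of x and y.
    sq_dist = torch.cdist(x, y, p=2) ** 2
    return torch.exp(-sq_dist / (2.0 * sigma ** 2))

def mmd2(features_a, features_b, sigma=1.0):
    # Biased empirical estimator of the squared Maximum Mean Discrepancy
    # between two batches of feature vectors.
    k_aa = gaussian_kernel(features_a, features_a, sigma).mean()
    k_bb = gaussian_kernel(features_b, features_b, sigma).mean()
    k_ab = gaussian_kernel(features_a, features_b, sigma).mean()
    return k_aa + k_bb - 2.0 * k_ab

def two_stream_loss(local_logits, local_feats, global_feats, labels,
                    mmd_weight=0.1):
    # Task loss on the local stream plus an MMD penalty that pulls the
    # local stream's features toward the global stream's features.
    task_loss = F.cross_entropy(local_logits, labels)
    return task_loss + mmd_weight * mmd2(local_feats, global_feats)

During local training, the global stream would typically be kept frozen, so the MMD term only draws the local representation toward the global one rather than the reverse.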