Computer science
Convergence (economics)
Federated learning
Sample (material)
Selection (genetic algorithm)
Sampling (signal processing)
Function (biology)
Machine learning
Distributed computing
Data mining
Artificial intelligence
Filter (signal processing)
Economics
Evolutionary biology
Biology
Computer vision
Chromatography
Economic growth
Chemistry
Authors
Lingshuang Cai, Di Lin, Jiale Zhang, Shui Yu
Identifier
DOI:10.1109/icc40277.2020.9148586
Abstract
Federated learning is a state-of-the-art technology in fog computing that enables distributed training over cross-device data while achieving efficient performance. Many existing works have optimized federated learning algorithms for homogeneous networks. However, in practical distributed learning scenarios, data is generated independently by each device, and this non-homologous data exhibits different distribution characteristics. The data each device uses for local learning is therefore unbalanced and non-IID, and this heterogeneity degrades the performance of federated learning and slows down convergence. In this paper, we present a dynamic sample selection optimization algorithm, FedSS, to tackle heterogeneous data in federated learning. FedSS dynamically selects the training sample size during gradient iterations based on the locally available data size, avoiding expensive evaluations of the local objective function over massive datasets. We theoretically analyze the convergence and present complexity estimates for our framework when learning large-scale data from unbalanced distributions. Our experimental results show that dynamic sampling effectively improves the convergence speed under heterogeneous data and keeps computational costs low while achieving the desired accuracy.
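The abstract's key idea, growing the mini-batch size across local gradient iterations and capping it at each client's locally available data size, can be sketched as follows. This is a minimal illustration, not the paper's actual FedSS algorithm: the geometric growth schedule, the least-squares objective, and all constants here are assumptions chosen for readability.

```python
import numpy as np

def dynamic_sample_size(iteration, local_n, base=32, growth=1.5):
    """Hypothetical schedule: grow the mini-batch geometrically with the
    local iteration index, capped by the locally available data size."""
    return min(local_n, int(base * growth ** iteration))

def local_update(w, X, y, local_iters=5, lr=0.1, rng=None):
    """Local SGD on a least-squares objective, drawing a dynamically
    sized mini-batch each iteration instead of the full local dataset."""
    rng = rng or np.random.default_rng(0)
    n = len(y)
    for t in range(local_iters):
        k = dynamic_sample_size(t, n)
        idx = rng.choice(n, size=k, replace=False)
        grad = X[idx].T @ (X[idx] @ w - y[idx]) / k
        w = w - lr * grad
    return w

def fed_avg(client_weights, client_sizes):
    """Standard FedAvg aggregation: average client models weighted
    by their local data sizes."""
    return np.average(client_weights, axis=0, weights=np.asarray(client_sizes, float))

# Toy simulation: three clients with unbalanced local dataset sizes,
# all drawn around the same ground-truth weights.
rng = np.random.default_rng(42)
true_w = np.array([2.0, -1.0])
clients = []
for n in (40, 200, 1000):  # unbalanced data sizes
    X = rng.normal(size=(n, 2))
    y = X @ true_w + 0.01 * rng.normal(size=n)
    clients.append((X, y))

w = np.zeros(2)
for _ in range(20):  # communication rounds
    local_models = [local_update(w.copy(), X, y, rng=rng) for X, y in clients]
    w = fed_avg(local_models, [len(y) for _, y in clients])

print(np.round(w, 2))  # recovers roughly [2., -1.]
```

Note that small clients (here, the one with 40 samples) hit their cap after a couple of iterations and fall back to full-batch gradients, while large clients keep sampling, which is the cost-saving the abstract describes.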