Keywords
Computer science, Overhead (engineering), Bottleneck, Speedup, Quadratic growth, Aggregate (composite), Computer network, Distributed computing, Algorithm, Embedded system, Operating system, Composite material, Materials science
Authors
Jinhyun So,Başak Güler,A. Salman Avestimehr
Source
Journal: IEEE Journal on Selected Areas in Information Theory
Publisher: Institute of Electrical and Electronics Engineers
Date: 2021-01-26
Volume/Issue: 2 (1): 479-489
Citations: 273
Identifier
DOI: 10.1109/jsait.2021.3054610
Abstract
Federated learning is a distributed framework for training machine learning models over data residing on mobile devices while protecting the privacy of individual users. A major bottleneck in scaling federated learning to a large number of users is the overhead of secure model aggregation across many users. In particular, the overhead of the state-of-the-art protocols for secure model aggregation grows quadratically with the number of users. In this article, we propose the first secure aggregation framework, named Turbo-Aggregate, that in a network with N users achieves a secure aggregation overhead of O(N log N), as opposed to O(N²), while tolerating a user dropout rate of up to 50%. Turbo-Aggregate employs a multi-group circular strategy for efficient model aggregation, and leverages additive secret sharing and novel coding techniques to inject aggregation redundancy, handling user dropouts while guaranteeing user privacy. We experimentally demonstrate that Turbo-Aggregate achieves a total running time that grows almost linearly in the number of users, and provides up to a 40× speedup over the state-of-the-art protocols with up to N=200 users. Our experiments also demonstrate the impact of model size and bandwidth on the performance of Turbo-Aggregate.
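The protocol builds on additive secret sharing, in which randomly masked shares of each user's update cancel in the sum. The Python sketch below illustrates only this cancellation property, under the assumption that updates are quantized to integers modulo a prime; the field size PRIME and the helper make_shares are hypothetical names for illustration, and the sketch omits Turbo-Aggregate's multi-group circular topology and the coded redundancy that provides its dropout tolerance.

```python
import secrets

# A minimal sketch of additive secret sharing for secure aggregation.
# Assumption: model updates are already quantized to integers mod PRIME.
# This is NOT the Turbo-Aggregate protocol itself.
PRIME = 2**61 - 1  # illustrative field size, not taken from the paper

def make_shares(update: int, n: int) -> list[int]:
    """Split one user's update into n additive shares mod PRIME."""
    shares = [secrets.randbelow(PRIME) for _ in range(n - 1)]
    shares.append((update - sum(shares)) % PRIME)
    return shares

# Toy run: three users with scalar updates 5, 7, 11.
updates = [5, 7, 11]
n = len(updates)

# shares[i][j] is the share user i sends to user j.
shares = [make_shares(u, n) for u in updates]

# Each user j sums the shares it received (one per user) and reports
# only that partial sum; each partial sum is uniformly random on its own.
partial_sums = [sum(shares[i][j] for i in range(n)) % PRIME for j in range(n)]

# The server adds the partial sums: per-user masks cancel, recovering
# the aggregate without exposing any individual update.
assert sum(partial_sums) % PRIME == sum(updates) % PRIME
print(sum(partial_sums) % PRIME)  # -> 23
```

Because every partial sum is masked by fresh randomness from the other users, the server learns only the aggregate; per the abstract, Turbo-Aggregate's contribution is organizing this exchange with a multi-group circular strategy and coded redundancy so the overhead grows as O(N log N) rather than O(N²) while surviving dropouts.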