Model Fragmentation, Shuffle and Aggregation to Mitigate Model Inversion in Federated Learning
Computer science
Fragmentation (computing)
Distributed computing
Authors
H. Masuda, Keisuke Kita, Yohei Koizumi, Junji Takemasa, Takeshi Hasegawa
Source
Venue: Workshop on Local and Metropolitan Area Networks (LANMAN); Date: 2021-07-12; Cited by: 1
Identifier
DOI:10.1109/lanman52105.2021.9478813
Abstract
Federated learning is a privacy-preserving learning system in which participants locally update a shared model with their own training data. Although training data are never sent to a server, there remains a risk that a state-of-the-art model inversion attack, possibly mounted by the server itself, infers training data from the models updated by the participants, referred to as individual models. One defense against such attacks is differential privacy, where each participant adds noise to its individual model before sending it to the server. Differential privacy, however, sacrifices the quality of the shared model in exchange for the guarantee that participants' training data are not leaked. This paper proposes a federated learning system that is resistant to model inversion attacks without sacrificing the quality of the shared model. The core idea is that each participant divides its individual model into model fragments, which are shuffled and aggregated to prevent adversaries from inferring training data. A further benefit of the proposed system is that the resulting shared model is identical to the shared model produced by naive federated learning.
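The following is a minimal sketch of the fragment-shuffle-aggregate idea described in the abstract, not the authors' exact protocol: it assumes the individual model is a flat parameter vector and that aggregation is position-wise averaging. The names fragment_model, N_FRAGMENTS, and the like are illustrative and do not appear in the paper.

# Toy illustration: fragment each individual model, shuffle all fragments so
# they cannot be attributed to a participant, then aggregate. Hypothetical
# names; the real protocol's shuffling/routing details follow the paper.
import random
import numpy as np

N_PARTICIPANTS = 3
N_FRAGMENTS = 4          # fragments per individual model (assumed parameter)
MODEL_SIZE = 8           # toy model: a flat parameter vector

def fragment_model(model, n_fragments):
    """Split a flat parameter vector into (index, values) fragments."""
    chunks = np.array_split(np.arange(model.size), n_fragments)
    return [(idx, model[idx]) for idx in chunks]

# Each participant trains locally, then fragments its individual model.
individual_models = [np.random.randn(MODEL_SIZE) for _ in range(N_PARTICIPANTS)]
fragments = [frag for m in individual_models
             for frag in fragment_model(m, N_FRAGMENTS)]

# Shuffle all fragments so the aggregator cannot link a fragment back to a
# participant -- the linkage a model inversion attack would need.
random.shuffle(fragments)

# Aggregate: average fragment values position-wise.
shared = np.zeros(MODEL_SIZE)
for idx, values in fragments:
    shared[idx] += values
shared /= N_PARTICIPANTS

# Position-wise averaging is order-independent, so the shuffle does not
# change the result: the shared model equals naive federated averaging.
assert np.allclose(shared, np.mean(individual_models, axis=0))

Because averaging is commutative per coordinate, the final assertion holds for any shuffle order, which mirrors the abstract's claim that the resulting shared model is identical to the one produced by naive federated learning.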