Computer Science
Upload
Adversary
Scheme (mathematics)
Computer security
Artificial intelligence
Machine learning
World Wide Web
Mathematics
Mathematical analysis
Authors
Jingcheng Song, Weizheng Wang, Thippa Reddy Gadekallu, Jianyu Cao, Yining Liu
Identifier
DOI:10.1109/tnse.2022.3153519
Abstract
Federated learning (FL) is a kind of privacy-aware machine learning in which models are trained on the users' side and only the model updates are transmitted to the server for aggregation. Because data owners need not upload their data, FL is a privacy-preserving machine learning paradigm. However, FL is vulnerable to reverse attacks, in which an adversary can recover users' data by analyzing the models they upload. Motivated by this, in this paper, based on secret sharing, we design EPPDA, an efficient privacy-preserving data aggregation mechanism for FL that resists reverse attacks by aggregating users' trained models secretly without leaking any individual user's model. Moreover, EPPDA provides efficient fault tolerance for user disconnection: even if a large number of users disconnect while the protocol runs, EPPDA still executes normally. Analysis shows that EPPDA can provide the sum of locally trained models to the server without leaking any single user's model, and that an adversary cannot obtain any non-public information from the communication channel. Efficiency verification shows that EPPDA not only protects users' privacy but also requires less computation and communication.
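The abstract's core idea is that secret sharing lets a server learn only the sum of users' model updates, never an individual update. The sketch below is a minimal illustration of additive secret sharing for private sum aggregation, not the paper's EPPDA protocol: each user splits its (quantized, integer) update into random shares that sum to the update modulo a large prime; any party seeing a single share learns nothing, yet summing all shares recovers exactly the aggregate. The prime modulus and 3-share setup are illustrative assumptions; in a real deployment each share would travel to a different helper party, and only partial sums would reach the server.

```python
# Minimal sketch of additive secret sharing for private sum aggregation.
# NOT the EPPDA protocol from the paper; an assumed, simplified illustration.
import secrets

PRIME = 2**61 - 1  # illustrative large prime modulus for the share arithmetic


def share_vector(update, n_shares):
    """Split an integer vector into n_shares additive shares mod PRIME.

    The first n_shares-1 shares are uniformly random; the last one is chosen
    so that all shares sum (element-wise, mod PRIME) to the original update.
    """
    shares = [[secrets.randbelow(PRIME) for _ in update] for _ in range(n_shares - 1)]
    last = [(x - sum(col)) % PRIME for x, col in zip(update, zip(*shares))]
    shares.append(last)
    return shares


def aggregate(all_user_shares):
    """Sum every received share element-wise mod PRIME.

    Only the total (the sum of all users' updates) is recoverable; no single
    share reveals anything about an individual user's update. In a real
    protocol each share would be held by a different party and only partial
    sums forwarded to the server; here everything is summed in one place
    purely to show that the shares reconstruct the aggregate.
    """
    length = len(all_user_shares[0][0])
    total = [0] * length
    for user_shares in all_user_shares:
        for share in user_shares:
            for i, v in enumerate(share):
                total[i] = (total[i] + v) % PRIME
    return total


# Example: three users, each holding a 4-element quantized model update.
updates = [[1, 2, 3, 4], [10, 20, 30, 40], [100, 200, 300, 400]]
shares = [share_vector(u, n_shares=3) for u in updates]
print(aggregate(shares))  # -> [111, 222, 333, 444], the element-wise sum only
```

Handling dropped users, as EPPDA's fault tolerance requires, needs additional machinery (e.g. thresholds on how many shares suffice for reconstruction), which this sketch deliberately omits.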