Differential privacy
Computer science
Federated learning
Workflow
Machine learning
Information privacy
Artificial intelligence
Computer security
Data mining
Database
Authors
Yi Zhang,Yunfan Lu,Fengxia Liu
Source
Journal: Journal of Information Security
[Scientific Research Publishing, Inc.]
Date: 2023-01-01
Volume/Issue: 14 (02): 111-135
Cited by: 9
Identifier
DOI:10.4236/jis.2023.142008
Abstract
Federated learning is a distributed machine learning technique that trains a global model by exchanging model parameters or intermediate results among multiple data sources. Although federated learning achieves physical isolation of data, the local data of federated learning clients remain at risk of leakage under attack by malicious adversaries. For this reason, combining data protection techniques (e.g., differential privacy) with federated learning is an effective way to further improve the data security of federated learning models. In this survey, we review recent advances in the research of differentially-private federated learning models. First, we introduce the workflow of federated learning and the theoretical basis of differential privacy. Then, we review three differentially-private federated learning paradigms: central differential privacy, local differential privacy, and distributed differential privacy. After this, we review the algorithmic optimization and communication cost optimization of federated learning models with differential privacy. Finally, we review the applications of federated learning models with differential privacy in various domains. By systematically summarizing the existing research, we propose future research opportunities.
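To illustrate the local differential privacy paradigm the abstract mentions, the sketch below shows one common pattern (not taken from the paper itself): each client clips its model update to bound its sensitivity and adds Gaussian noise before the update ever leaves the client, and the server simply averages the noisy updates. The function names, the clipping threshold, and the noise scale are all illustrative assumptions, not the authors' algorithm.

```python
import math
import random

def privatize_update(update, clip_norm=1.0, noise_std=0.5, rng=None):
    """Client side: clip the update's L2 norm to bound sensitivity,
    then add Gaussian noise locally before transmission (local DP sketch)."""
    rng = rng or random.Random(0)
    norm = math.sqrt(sum(x * x for x in update))
    scale = min(1.0, clip_norm / max(norm, 1e-12))  # leaves small updates intact
    return [x * scale + rng.gauss(0.0, noise_std) for x in update]

def aggregate(updates):
    """Server side: federated averaging of the already-noisy client updates."""
    n = len(updates)
    return [sum(col) / n for col in zip(*updates)]

# Three hypothetical clients contribute privatized updates to one round.
clients = [[3.0, 4.0], [0.1, -0.2], [-1.0, 2.0]]
global_update = aggregate([privatize_update(u) for u in clients])
```

Under central differential privacy, by contrast, clients would send raw (clipped) updates and a trusted server would add the noise once after aggregation; the clipping-plus-noise structure stays the same, only where the noise is injected changes.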