Differentially Private Federated Learning: A Client Level Perspective

Keywords: Federated learning, Differential privacy, Computer science, Context (archaeology), Set (abstract data type), Perspective (graphical), Differential (mechanical device), Private information retrieval, Protocol (science), Information privacy, Computer security, Data mining, Artificial intelligence, Medicine, Paleontology, Alternative medicine, Pathology, Engineering, Biology, Programming language, Aerospace engineering
Authors: R. Geyer, Tassilo Klein, Moin Nabi
Source: arXiv preprint (Cornell University)
Date: 2017-12-20
Citations: 621
DOI: 10.48550/arxiv.1712.07557
Abstract
Federated learning is a recent advance in privacy protection. In this setting, a trusted curator aggregates parameters optimized in a decentralized fashion by multiple clients. The resulting model is then distributed back to all clients, ultimately converging to a joint representative model without the clients ever having to share their data explicitly. However, the protocol is vulnerable to differential attacks, which could originate from any party contributing during federated optimization: by analyzing the distributed model, an attacker can reveal a client's contribution during training and information about that client's dataset. We tackle this problem and propose an algorithm for client-sided, differential-privacy-preserving federated optimization. The aim is to hide clients' contributions during training, balancing the trade-off between privacy loss and model performance. Empirical studies suggest that, given a sufficiently large number of participating clients, our proposed procedure can maintain client-level differential privacy at only a minor cost in model performance.