Computer science
Federated learning
Leakage (economics)
Hyperparameter
Information leakage
Edge device
Computer security
Machine learning
Artificial intelligence
Data mining
Cloud computing
Operating system
Macroeconomics
Economics
Authors
Wenqi Wei, Ling Liu, Margaret L. Loper, Ka-Ho Chow, Mehmet Emre Gürsoy, Stacey Truex, Yanzhao Wu
Identifier
DOI:10.1007/978-3-030-58951-6_27
Abstract
Federated learning (FL) is an emerging distributed machine learning framework for collaborative model training with a network of clients (edge devices). FL offers default client privacy by allowing clients to keep their sensitive data on local devices and to share only local training parameter updates with the federated server. However, recent studies have shown that even sharing local parameter updates from a client to the federated server may be susceptible to gradient leakage attacks that compromise the client's privacy with respect to its training data. In this paper, we present a principled framework for evaluating and comparing different forms of client privacy leakage attacks. We first provide formal and experimental analysis to show how adversaries can reconstruct the private local training data by simply analyzing the shared parameter update from local training (e.g., the local gradient or weight update vector). We then analyze how different hyperparameter configurations in federated learning and different settings of the attack algorithm may impact both attack effectiveness and attack cost. Our framework also measures, evaluates, and analyzes the effectiveness of client privacy leakage attacks under different gradient compression ratios when communication-efficient FL protocols are used. Our experiments additionally include some preliminary mitigation strategies to highlight the importance of providing a systematic attack evaluation framework for an in-depth understanding of the various forms of client privacy leakage threats in federated learning and for developing theoretical foundations for attack mitigation.
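The reconstruction described in the abstract follows the general gradient-matching idea: the attacker initializes a dummy input and label, computes the gradient the model would produce on them, and optimizes the dummy data until that gradient matches the update shared by the client. The sketch below illustrates this in PyTorch on a toy linear model; the architecture, optimizer settings, and tensor shapes are illustrative assumptions, not the paper's exact attack configuration.

```python
# Minimal sketch of a gradient-matching (DLG-style) leakage attack.
# Toy model and hyperparameters are assumptions for illustration only.
import torch
import torch.nn as nn

torch.manual_seed(0)

model = nn.Sequential(nn.Flatten(), nn.Linear(32 * 32 * 3, 10))  # toy classifier
criterion = nn.CrossEntropyLoss()

# --- Client side: the gradient that would be shared with the federated server ---
x_true = torch.rand(1, 3, 32, 32)   # private training image (hypothetical)
y_true = torch.tensor([3])          # private label (hypothetical)
loss = criterion(model(x_true), y_true)
true_grads = [g.detach() for g in torch.autograd.grad(loss, tuple(model.parameters()))]

# --- Attacker side: recover (x, y) by matching the shared gradient ---
x_dummy = torch.rand_like(x_true, requires_grad=True)   # random image init
y_dummy = torch.randn(1, 10, requires_grad=True)        # soft-label logits init
optimizer = torch.optim.LBFGS([x_dummy, y_dummy])

def closure():
    optimizer.zero_grad()
    # Cross-entropy with soft dummy labels so the label can be optimized too
    dummy_loss = torch.sum(
        torch.softmax(y_dummy, dim=-1) * (-torch.log_softmax(model(x_dummy), dim=-1))
    )
    dummy_grads = torch.autograd.grad(
        dummy_loss, tuple(model.parameters()), create_graph=True
    )
    # L2 distance between the attacker's gradient and the client's shared gradient
    grad_diff = sum(((dg - tg) ** 2).sum() for dg, tg in zip(dummy_grads, true_grads))
    grad_diff.backward()
    return grad_diff

for step in range(50):
    diff = optimizer.step(closure)
    if step % 10 == 0:
        print(f"step {step}: gradient distance {diff.item():.4f}")

# After optimization, x_dummy approximates the client's private input x_true.
```

In the compression-ratio experiments the abstract refers to, the shared gradient would be sparsified or quantized (e.g., keeping only the largest-magnitude entries) before the attacker sees it, so varying the compression ratio controls how much of `true_grads` remains available for reconstruction.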