Keywords
Stackelberg competition, computer science, differential privacy, convergence, payment, upload, server, set (abstract data type), incentive, game theory, computer security, computer networks, data mining, World Wide Web, mathematics, mathematical economics, microeconomics, economics, programming languages, economic growth
Authors
Guangjing Huang, Qiong Wu, Peng Sun, Qian Ma, Xu Chen
Identifier
DOI:10.1109/tpds.2024.3354713
Abstract
As a privacy-preserving distributed learning paradigm, federated learning (FL) enables multiple client devices to train a shared model without uploading their local data. To further enhance the privacy protection of FL, differential privacy (DP) has been successfully incorporated into FL systems to defend against privacy attacks from adversaries. In FL with DP, stimulating efficient client collaboration is vital for the FL server due to the privacy-preserving nature of DP and the heterogeneity of the participating clients' costs (e.g., computation cost). However, this kind of collaboration remains largely unexplored in existing works. To fill this gap, we propose a novel analytical framework based on a Stackelberg game to model the collaboration behaviors among clients and the server, with reward allocation as the incentive in FL with DP. We first conduct a rigorous convergence analysis of FL with DP and reveal how clients' multidimensional attributes affect the convergence performance of the FL model. Accordingly, we solve the Stackelberg game and derive the collaboration strategies for both the clients and the server. We further devise an approximately optimal algorithm for the server to efficiently conduct the joint optimization of the client-set selection, the number of global iterations, and the reward payment for the clients. Numerical evaluations on real-world datasets validate our theoretical analysis and corroborate the superior performance of the proposed solution.
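The DP mechanism the abstract refers to is commonly realized by clipping each client's model update to bound its sensitivity and then adding Gaussian noise before it leaves the device. The sketch below illustrates this standard clip-and-noise pattern; the function name, parameter values, and simple SGD step are illustrative assumptions, not the paper's actual algorithm.

```python
import numpy as np

def dp_client_update(weights, grad, clip_norm=1.0, noise_mult=1.1, lr=0.1, rng=None):
    """Illustrative DP-protected local step (not the paper's exact method):
    clip the gradient to bound its L2 sensitivity, then add Gaussian noise."""
    rng = np.random.default_rng() if rng is None else rng
    # Clip: rescale so the gradient's L2 norm is at most clip_norm.
    norm = np.linalg.norm(grad)
    clipped = grad / max(1.0, norm / clip_norm)
    # Gaussian mechanism: noise std proportional to the sensitivity (clip_norm).
    noisy = clipped + rng.normal(0.0, noise_mult * clip_norm, size=grad.shape)
    # Local SGD step on the noised gradient.
    return weights - lr * noisy

# Example: a gradient of L2 norm 5 is clipped to norm 1 before noising.
w = np.zeros(3)
g = np.array([3.0, 4.0, 0.0])
w_new = dp_client_update(w, g, rng=np.random.default_rng(0))
```

Only the noised update `w_new` would be uploaded to the server, so the raw data and even the exact gradient stay on the client.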