Authors
Fatima Zahra Errounda,Yan Liu
Identifier
DOI:10.1016/j.future.2023.07.033
Abstract
Differential privacy is the de facto technique for protecting the individuals in the training dataset and the learning models in deep learning. However, the technique presents two limitations when applied to vertical federated learning, where several organizations collaborate to train a common global model. First, it treats all the training dataset features alike, regardless of the organizations' heterogeneous privacy requirements. Second, most existing works distribute the privacy budget uniformly across training steps, disregarding the impact of the dynamic changes of local gradients on the model's privacy–utility balance. This paper proposes the Adaptive differential privacy for Vertical Federated Learning (AdaVFL) protocol to tackle these limitations. We estimate each organization's feature impact on the global model and design two weighting strategies that adaptively assign privacy budgets to each organization for heterogeneously protecting its features. Moreover, we carefully adjust the privacy budget to the model's convergence at each training iteration using a closed feedback loop to improve the learning model's utility. We experimentally evaluate AdaVFL on two public datasets (Bike New York and Yelp reviews) with a vertical federated learning framework for mobility forecasting in PyTorch. We show that the feature-level budget initialization improves resilience to a state-of-the-art feature privacy attack by up to 25%. Furthermore, the experimental evaluation demonstrates that the adaptive privacy budget increases accuracy by up to 15% on average compared to state-of-the-art budget allocation strategies.
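The two ideas in the abstract — impact-weighted per-organization budget allocation and a closed feedback loop that adapts the per-iteration budget to convergence — can be illustrated with a minimal sketch. This is not the authors' AdaVFL implementation (the abstract does not specify the weighting strategies or the feedback rule); the proportional weighting, the 10% adjustment step, and the function names are illustrative assumptions. Only the Gaussian-mechanism noise scale is the standard textbook formula.

```python
import numpy as np

def allocate_budgets(feature_impacts, total_epsilon):
    """Split a total privacy budget across organizations in proportion
    to their estimated feature impact on the global model.
    (Hypothetical weighting; the paper's strategies may differ.)"""
    impacts = np.asarray(feature_impacts, dtype=float)
    weights = impacts / impacts.sum()
    return weights * total_epsilon

def adjust_budget(epsilon_t, prev_loss, curr_loss, factor=1.1):
    """Closed-feedback adjustment of the per-iteration budget:
    spend more budget (less noise) when the loss stalls, less while
    it is still improving. The 10% step is an assumed value."""
    if curr_loss >= prev_loss:      # convergence is stalling
        return epsilon_t * factor
    return epsilon_t / factor

def gaussian_noise_scale(epsilon, delta, sensitivity):
    """Standard Gaussian-mechanism noise std for (epsilon, delta)-DP:
    a larger budget yields a smaller noise scale."""
    return sensitivity * np.sqrt(2.0 * np.log(1.25 / delta)) / epsilon

# Example: two organizations, the second holding features estimated
# to be three times as influential, so it receives a larger share of
# the budget (and thus adds less noise to its local gradients).
budgets = allocate_budgets([1.0, 3.0], total_epsilon=4.0)
sigmas = [gaussian_noise_scale(e, delta=1e-5, sensitivity=1.0)
          for e in budgets]
```

Note the design tension the feedback loop manages: a larger ε per iteration lowers the noise and helps utility, but consumes the total budget faster, so the loop spends aggressively only when convergence measurably stalls.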