Computer science
Leverage (statistics)
Usability
Adversary
Preprocessor
Deep learning
Artificial intelligence
Popularity
Machine learning
Transformation (genetics)
Data transformation
Data preprocessing
Data mining
Computer security
Human–computer interaction
Biochemistry
Chemistry
Data warehouse
Gene
Psychology
Social psychology
Authors
Wei Gao, Xu Zhang, Shangwei Guo, Tianwei Zhang, Tao Xiang, Han Qiu, Yonggang Wen, Yang Liu
Identifier
DOI:10.1109/tpami.2023.3262813
Abstract
Collaborative learning has gained great popularity due to its benefit of data privacy protection: participants can jointly train a deep learning model without sharing their training sets. However, recent works discovered that an adversary can fully recover the sensitive training samples from the shared gradients. Such reconstruction attacks pose severe threats to collaborative learning, so effective mitigation solutions are urgently desired. In this paper, we systematically analyze existing reconstruction attacks and propose to leverage data augmentation to defeat them: by preprocessing sensitive images with carefully selected transformation policies, it becomes infeasible for the adversary to extract training samples from the corresponding gradients. We first design two new metrics to quantify the impacts of transformations on data privacy and model usability. With the two metrics, we design a novel search method to automatically discover qualified policies from a given data augmentation library. Our defense method can further be combined with existing collaborative training systems without modifying the training protocols. We conduct comprehensive experiments on various system settings. Evaluation results demonstrate that the policies discovered by our method can defeat state-of-the-art reconstruction attacks in collaborative learning, with high efficiency and negligible impact on model performance.
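The core idea of the defense can be sketched as follows: each participant applies an augmentation policy (a fixed sequence of image transformations) to a sensitive sample *before* local training, so that the gradient it shares is computed only on the transformed image. This is a minimal illustrative sketch, not the authors' implementation — the specific transforms (flip, rotation, pixel shift), the toy linear model, and the function names are assumptions; the paper searches a full augmentation library for policies scored by its privacy and usability metrics.

```python
import numpy as np

# Hypothetical augmentation policy: an ordered list of transforms.
# The paper's search method would select such a policy from a library;
# these three transforms are placeholders for illustration only.
def hflip(img):
    return img[:, ::-1]            # horizontal flip

def rot90(img):
    return np.rot90(img)           # 90-degree rotation

def shift(img, dx=2):
    return np.roll(img, dx, axis=1)  # cyclic pixel shift

POLICY = [hflip, rot90, shift]

def apply_policy(img, policy=POLICY):
    """Preprocess a sensitive image before it enters local training."""
    for transform in policy:
        img = transform(img)
    return img

def shared_gradient(img, weights, target=1.0):
    """Toy gradient of a linear model's squared loss, computed on the
    *augmented* image. In collaborative learning, only this gradient is
    shared, so a reconstruction attack can at best recover the
    transformed image, not the original sample."""
    x = apply_policy(img).ravel()
    pred = weights @ x
    return 2.0 * (pred - target) * x

img = np.arange(16, dtype=float).reshape(4, 4)   # stand-in "sensitive" image
grad = shared_gradient(img, np.ones(16) / 16.0)
```

The key design point visible even in this sketch is that the defense is purely a client-side preprocessing step: the gradient computation and training protocol are unchanged, which is why it composes with existing collaborative training systems.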