Authors
Zhaohua Li, Le Wang, Zhaoquan Gu, Yang Lv, Zhihong Tian
Source
Journal: IEEE Internet of Things Journal (Institute of Electrical and Electronics Engineers)
Date: 2024-02-15
Volume/Issue: 11(4): 6007-6019
Identifier
DOI: 10.1109/jiot.2023.3309992
Abstract
Federated learning (FL) is widely studied for local privacy protection: clients exchange model parameters rather than raw data. However, gradient attacks (GAs) allow a malicious client or the parameter server in FL to infer the local data of other clients based only on the exchanged model parameters. Within FL frameworks and processes, it is important to understand which features provide the heuristic information that enables raw-data inference, and how best to defend against GAs; the academic community is actively investigating this problem. In this study, we demonstrate that the labels of input samples play a key role in the success of GAs. We analyze the rank of the coefficient matrix of the non-homogeneous linear equations relating gradients to input samples, and propose an approach that performs special operations on the repetition and order of labels. The approach achieves a better defense against GAs without using a differential privacy (DP) framework. Our experimental results show that GAs fail (i.e., leak no valid information about local data) throughout the entire training process of a deep convolutional network in FL, and the network's accuracy is less affected than under DP. The code is available at https://github.com/zhaohuali/Label-based-Defense.
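A minimal sketch (not the authors' code; all names are illustrative) of why labels are central to gradient attacks: for softmax cross-entropy on a single sample, the gradient with respect to the last-layer logits is `softmax(z) - one_hot(y)`, so its only negative entry sits at the true label. A party who sees shared gradients can therefore recover the label directly, which is the kind of heuristic information the abstract refers to.

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over a 1-D logit vector.
    e = np.exp(z - z.max())
    return e / e.sum()

# Hypothetical single-sample setup with 10 classes.
rng = np.random.default_rng(0)
logits = rng.normal(size=10)
true_label = 3

# Gradient of cross-entropy loss w.r.t. the logits:
# d(loss)/d(z) = softmax(z) - one_hot(true_label).
grad = softmax(logits)
grad[true_label] -= 1.0

# All entries are positive except the one at the true label,
# so the label leaks from the gradient alone.
inferred = int(np.argmin(grad))
print(inferred)  # -> 3
```

Batching samples that share a label, or permuting label order, blurs this one-to-one correspondence, which is the intuition behind the label-based defense described above.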