Authors
Gaoyang Liu,Zehao Tian,Jian Chen,Chen Wang,Jiangchuan Liu
Identifier
DOI:10.1109/tifs.2023.3303718
Abstract
Federated learning (FL) is a privacy-preserving machine learning paradigm that enables multiple clients to train a unified model without disclosing their private data. However, FL models are susceptible to membership inference attacks (MIAs) because they naturally tend to overfit the training data during training, allowing MIAs to exploit subtle differences in the model's parameters, activations, or predictions between training and testing data to infer membership information. Notably, most if not all existing MIAs against FL require access to the model's internal information or modification of the training process, rendering them impractical to perform. In this paper, we present TEAR and provide the first evidence that an honest-but-curious federated client can perform an MIA against an FL system by exploring the Temporal Evolution of the Adversarial Robustness between training and non-training data. We design a novel adversarial example generation method to quantify a target sample's adversarial robustness, from which membership features are obtained to train the inference model in a supervised manner. Extensive experimental results on five realistic datasets demonstrate that TEAR achieves strong inference performance compared with two existing MIAs and is able to escape the protection of two representative defenses.