Keywords
Computer Science; Exploit; Adversary; Inference; Model Attack; Machine Learning; Artificial Intelligence; Memorization; Sample; Feature; Set (abstract data type); Identity; Data Modeling; Dataset; Training Set
Authors
Depeng Chen, Xiao Liu, Jie Cui, Hong Zhong
Identifier
DOI:10.1145/3576915.3624384
Abstract
Because machine learning models are often trained on limited data sets, a model sees the same data samples multiple times during training, which causes it to memorize much of the training set. Membership Inference Attacks (MIAs) exploit this behavior to determine whether a given data sample was used to train a machine learning model. In realistic scenarios, however, it is difficult for an adversary to obtain enough qualified samples with accurate membership labels, especially since most samples in real-world applications are non-members. To address this limitation, this paper proposes a new attack method called CLMIA, which uses unsupervised contrastive learning to train an attack model. CLMIA requires only a small amount of data with known membership status to fine-tune the attack model. We evaluated the attack using ROC curves, which show a higher TPR at low FPR compared with other schemes.
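To illustrate the memorization effect that MIAs exploit, the following is a minimal sketch, not the paper's CLMIA method: it uses entirely synthetic loss distributions (the gamma parameters are assumptions for illustration) and a simple loss-threshold attack, then reports the TPR at a fixed low FPR, the metric used in the evaluation above.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic per-sample losses (hypothetical distributions, not from the paper):
# members were seen during training, so the model memorizes them and
# assigns them lower loss on average than non-members.
member_loss = rng.gamma(shape=2.0, scale=0.2, size=1000)
nonmember_loss = rng.gamma(shape=2.0, scale=0.6, size=1000)

def tpr_at_fpr(member_losses, nonmember_losses, target_fpr=0.01):
    """Threshold attack: predict 'member' when the loss is low enough.

    The threshold is chosen so that at most ~target_fpr of the
    non-members are falsely flagged; the return value is the true
    positive rate (fraction of members correctly flagged)."""
    threshold = np.quantile(nonmember_losses, target_fpr)
    return float(np.mean(member_losses <= threshold))

# Report TPR at a low FPR, as in the ROC comparison above.
print(f"TPR at 1% FPR: {tpr_at_fpr(member_loss, nonmember_loss, 0.01):.3f}")
```

CLMIA differs from this baseline in that it learns sample representations with unsupervised contrastive learning and then fine-tunes the attack model with only a few membership-labeled samples, rather than fixing a single global loss threshold.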