Computer science
Inference
Black box
Construct (Python library)
Artificial intelligence
Autoencoder
Machine learning
A priori and a posteriori
Key (lock)
Data mining
Data modeling
Deep learning
Computer security
Database
Epistemology
Programming language
Philosophy
Authors
Gaoyang Liu,Tianlong Xu,Rui Zhang,Zixiong Wang,Chen Wang,Ling Liu
Identifier
DOI:10.1109/tifs.2023.3324772
Abstract
Machine Learning (ML) techniques have been applied to many real-world applications to perform a wide range of tasks. In practice, ML models are typically deployed as black-box APIs to protect the model owner's interests and/or to defend against various privacy attacks. In this paper, we present Gradient-Leaks as the first evidence that membership inference attacks (MIAs), which aim to determine whether a data record was used to train a given target ML model, can be performed with mere black-box access. The key idea of Gradient-Leaks is to construct a local ML model around the given record that approximates the target model's prediction behavior in the record's neighborhood. By extracting the membership information of the given record from the gradient of this substitute local model with an intentionally modified autoencoder, Gradient-Leaks can breach the membership privacy of the target model's training data in an unsupervised manner, without any prior knowledge about the target model's internals or its training data. Extensive experiments on different types of ML models with real-world datasets show that Gradient-Leaks achieves better performance than state-of-the-art attacks.
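The following is a minimal sketch of the attack pipeline as described in the abstract, not the authors' implementation. The surrogate architecture, the Gaussian perturbation scheme, the query budget, the autoencoder sizes, and the use of reconstruction error as the membership score are all illustrative assumptions; only the overall flow (query the black box locally, fit a substitute model, take its gradient at the record, and score that gradient with an autoencoder) comes from the abstract.

```python
# Hedged sketch of the Gradient-Leaks idea, under the assumptions stated above.
import torch
import torch.nn as nn
import torch.nn.functional as F


def local_gradient_features(target_predict, record, n_queries=200, sigma=0.1):
    """Fit a local surrogate around `record` (a 1-D feature tensor) and return
    the gradient of its loss at `record`, flattened into one feature vector.

    `target_predict` stands in for the black-box API: it maps a batch of
    inputs to a batch of class-probability vectors.
    """
    d = record.numel()
    # 1. Query the black box on perturbed copies of the record.
    xs = record.unsqueeze(0) + sigma * torch.randn(n_queries, d)
    with torch.no_grad():
        ys = target_predict(xs)                      # soft labels from the API

    # 2. Train a small surrogate that mimics the target's local behavior.
    surrogate = nn.Sequential(nn.Linear(d, 32), nn.ReLU(),
                              nn.Linear(32, ys.shape[1]))
    opt = torch.optim.Adam(surrogate.parameters(), lr=1e-2)
    for _ in range(200):
        opt.zero_grad()
        loss = F.mse_loss(torch.softmax(surrogate(xs), dim=1), ys)
        loss.backward()
        opt.step()

    # 3. Gradient of the surrogate's loss evaluated at the record itself;
    #    this per-record gradient vector carries the membership signal.
    with torch.no_grad():
        y_rec = target_predict(record.unsqueeze(0))
    surrogate.zero_grad()
    loss = F.mse_loss(torch.softmax(surrogate(record.unsqueeze(0)), dim=1), y_rec)
    loss.backward()
    return torch.cat([p.grad.flatten() for p in surrogate.parameters()])


class GradientAutoencoder(nn.Module):
    """Autoencoder over gradient vectors. In this sketch, records whose
    gradients are reconstructed with low error are flagged as members; the
    decision threshold would be picked in an unsupervised way (an assumption,
    e.g. by clustering the scores), mirroring the unsupervised setting."""

    def __init__(self, dim, latent=16):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(dim, 64), nn.ReLU(),
                                 nn.Linear(64, latent))
        self.dec = nn.Sequential(nn.Linear(latent, 64), nn.ReLU(),
                                 nn.Linear(64, dim))

    def membership_score(self, g):
        # Higher score (lower reconstruction error) -> more likely a member
        # (assumed scoring rule for this sketch).
        return -F.mse_loss(self.dec(self.enc(g)), g)
```

In use, an attacker would compute `local_gradient_features` for each candidate record using only black-box queries, train the autoencoder on the resulting gradient vectors, and rank records by `membership_score`; how the paper's "intentionally modified" autoencoder differs from this vanilla one is not specified in the abstract.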