Discriminant
Computer science
Leverage (statistics)
High fidelity
Machine learning
Artificial intelligence
Fidelity
Generative grammar
Inversion (geology)
Adversarial system
Representation (politics)
Feature learning
Software deployment
Training set
External data representation
Telecommunications
Paleontology
Structural basin
Politics
Law
Political science
Electrical engineering
Biology
Engineering
Operating system
Authors
Gege Qi, Yuefeng Chen, Xiaofeng Mao, Binyuan Hui, Xiaodan Li, Rong Zhang, Hui Xue
Identifier
DOI: 10.1145/3581783.3612072
Abstract
Model Inversion (MI) attacks aim to recover the private training data from the target model, which has raised security concerns about the deployment of DNNs in practice. Recent advances in generative adversarial models have rendered them particularly effective in MI attacks, primarily due to their ability to generate high-fidelity and perceptually realistic images that closely resemble the target data. In this work, we propose a novel Dynamic Memory Model Inversion Attack (DMMIA) that leverages historically learned knowledge, which interacts with samples during training to induce diverse generations. DMMIA constructs two types of prototypes to inject information about historically learned knowledge: an Intra-class Multicentric Representation (IMR), which represents target-related concepts with multiple learnable prototypes, and an Inter-class Discriminative Representation (IDR), which characterizes the memorized samples as learned prototypes to capture more privacy-related information. As a result, DMMIA has a more informative representation, which yields more diverse and discriminative generated results. Experiments on multiple benchmarks show that DMMIA performs better than state-of-the-art MI attack methods.
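The abstract only describes the two prototype memories at a high level. As a rough illustration of how prototype-based memory terms of this kind can be attached to a GAN-based inversion objective, here is a minimal PyTorch-style sketch. It is not the authors' implementation: the class and parameter names (PrototypeMemory, protos_per_class, the cosine pull/push terms) are assumptions, and it collapses IMR and IDR into a single bank of learnable per-class prototypes rather than the paper's separate intra-class and inter-class representations.

```python
import torch
import torch.nn.functional as F

class PrototypeMemory(torch.nn.Module):
    """Hypothetical bank of K learnable prototype vectors per target class."""

    def __init__(self, num_classes: int, protos_per_class: int, feat_dim: int):
        super().__init__()
        # (C, K, D): K prototypes for each of C classes, updated during the attack
        self.prototypes = torch.nn.Parameter(
            torch.randn(num_classes, protos_per_class, feat_dim))

    def forward(self, feats: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
        """feats: (B, D) features of generated samples; labels: (B,) target classes."""
        protos = F.normalize(self.prototypes, dim=-1)        # (C, K, D)
        feats = F.normalize(feats, dim=-1)                   # (B, D)
        sims = torch.einsum('bd,ckd->bck', feats, protos)    # cosine similarities (B, C, K)

        # Intra-class term: pull each sample toward its nearest same-class prototype,
        # so different samples can attach to different prototypes (multi-centric behaviour).
        target_sims = sims[torch.arange(feats.size(0)), labels]   # (B, K)
        intra_loss = -target_sims.max(dim=-1).values.mean()

        # Inter-class term: push samples away from prototypes of all other classes,
        # encouraging class-discriminative generations.
        own_class = F.one_hot(labels, num_classes=sims.size(1)).bool().unsqueeze(-1)  # (B, C, 1)
        inter_loss = sims.masked_fill(own_class, float('-inf')).logsumexp(dim=(1, 2)).mean()

        return intra_loss + inter_loss

# Illustrative usage (sizes and the feature encoder are placeholders):
memory = PrototypeMemory(num_classes=1000, protos_per_class=5, feat_dim=512)
feats = torch.randn(8, 512)              # features of generated images from some encoder
labels = torch.randint(0, 1000, (8,))    # target identities being inverted
loss = memory(feats, labels)             # added alongside the usual identity loss on the target model
loss.backward()
```

In an attack loop of this shape, the memory loss would be optimized jointly with the target classifier's identity loss over the generator's latent codes, which is the role the abstract assigns to the prototype representations.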