Adversarial system
Transferability
Affine transformation
Computer science
Transformation (genetics)
Black box
Deep learning
Artificial intelligence
Image (mathematics)
Perspective (graphical)
Machine learning
Deep neural network
Process (computing)
Data mining
Pattern recognition (psychology)
Mathematics
Biochemistry
Chemistry
Reuter
Pure mathematics
Gene
Operating system
Authors
X. Wang, Chunguang Huang, Hai Cheng
Identifier
DOI:10.1016/j.csi.2022.103693
Abstract
Image classification models based on deep neural networks have achieved great improvements on various tasks, but they remain vulnerable to adversarial examples that can induce misclassification. Various methods have been proposed to generate adversarial examples under white-box settings and have achieved high success rates. However, most existing adversarial attacks transfer poorly when attacking other, unknown models under black-box settings. In this paper, we propose a new method that generates adversarial examples based on an affine-shear transformation applied from the perspective of the deep model's input layer, maximizing the loss function at each iteration. This method improves the input diversity and thus the transferability of adversarial examples, and we further optimize the generation process with Nesterov accelerated gradient. Extensive experiments on the ImageNet dataset indicate that our proposed method exhibits higher transferability and achieves higher attack success rates in both single-model and ensemble-model settings. It can also be combined with other gradient-based and image transformation-based methods to build even more powerful attacks.
• The societal and security issues posed by the existence of adversarial examples are analyzed.
• Existing white-box and black-box attack methods and some defense methods are surveyed.
• The proposed AST can be integrated into NI-FGSM to build more powerful attacks.
• It also improves the transferability of adversarial examples under black-box settings.
• We combine AST with other image transformation-based methods, denoted AST-NI-TI-DIM.
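To make the described pipeline concrete, below is a minimal PyTorch sketch of a Nesterov-accelerated iterative attack with affine-shear input diversity, in the spirit of the AST-NI-FGSM combination the abstract mentions. This is an illustration under stated assumptions, not the paper's implementation: the shear range, transformation probability, step size, and momentum factor are placeholder values, and `random_affine_shear` is a hypothetical stand-in for the paper's exact AST transformation.

```python
import torch
import torch.nn.functional as F

def random_affine_shear(x, max_shear=0.2, prob=0.7):
    """Apply a random affine-shear transform to a batch with probability `prob`.
    Illustrative stand-in for the paper's AST input transformation; the shear
    range and probability here are assumptions."""
    if torch.rand(1).item() > prob:
        return x
    n = x.size(0)
    shear = (torch.rand(n, 2, device=x.device) * 2 - 1) * max_shear
    # Build per-image 2x3 affine matrices: identity plus off-diagonal shear terms.
    theta = torch.zeros(n, 2, 3, device=x.device)
    theta[:, 0, 0] = 1.0
    theta[:, 1, 1] = 1.0
    theta[:, 0, 1] = shear[:, 0]
    theta[:, 1, 0] = shear[:, 1]
    grid = F.affine_grid(theta, list(x.size()), align_corners=False)
    return F.grid_sample(x, grid, align_corners=False)

def ast_ni_fgsm(model, x, y, eps=16 / 255, steps=10, mu=1.0):
    """Sketch of an AST-NI-FGSM-style attack: Nesterov look-ahead gradients
    with momentum, computed on affine-shear-transformed inputs."""
    alpha = eps / steps          # per-step size, assumed eps/steps
    g = torch.zeros_like(x)      # accumulated momentum
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        # Nesterov look-ahead point along the momentum direction.
        x_nes = x_adv + alpha * mu * g
        # Maximize the classification loss on the transformed input.
        loss = F.cross_entropy(model(random_affine_shear(x_nes)), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        # Momentum accumulation with an L1-normalized gradient.
        g = mu * g + grad / grad.abs().mean(dim=(1, 2, 3), keepdim=True)
        # Gradient-sign step, then project into the eps-ball and valid range.
        x_adv = x_adv.detach() + alpha * g.sign()
        x_adv = x + torch.clamp(x_adv - x, -eps, eps)
        x_adv = torch.clamp(x_adv, 0.0, 1.0)
    return x_adv.detach()
```

In a black-box transfer setting, `model` would be a local surrogate classifier (in `eval()` mode), and the returned `x_adv` would then be evaluated against the unseen target models.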