Adversarial system
Computer science
Pascal (unit)
Exploit
Artificial intelligence
Transferability
Machine learning
Black box
Deep neural network
Boosting (machine learning)
Artificial neural network
Pattern recognition (psychology)
Computer security
Reuter
Programming language
Authors
Yaoyuan Zhang, Yu-an Tan, Ming-Feng Lu, Tian Chen, Yuanzhang Li, Quanxin Zhang
Abstract
Deep neural networks are highly vulnerable to adversarial examples, and these adversarial examples remain malicious when transferred to other neural networks. Many works exploit this transferability of adversarial examples to mount black-box attacks. However, most existing adversarial attack methods rarely consider cross-task black-box attacks, which are closer to real-world scenarios. In this paper, we propose a class of random blur-based iterative methods (RBMs) to improve the success rates of cross-task black-box attacks. By integrating random erasing and Gaussian blur into iterative gradient-based attacks, the proposed RBM augments the diversity of the adversarial perturbation and alleviates the marginal effect caused by iterative gradient-based methods, generating adversarial examples with stronger transferability. Experimental results on the ImageNet and PASCAL VOC datasets show that the proposed RBM generates more transferable adversarial examples on image classification models, thereby successfully attacking cross-task black-box object detection models.
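The abstract describes integrating random erasing and Gaussian blur into an iterative gradient-based attack. The snippet below is a minimal sketch of what such an attack loop could look like, assuming the common input-transformation pattern of iterative FGSM-style attacks; the function name, hyperparameters, and the exact order of transforms are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of a random blur-based iterative attack: at each step the
# current adversarial image is randomly erased and Gaussian-blurred before the
# gradient is computed, to diversify the perturbation. Not the paper's code.
import torch
import torch.nn.functional as F
from torchvision import transforms


def rbm_attack(model, x, y, eps=8 / 255, alpha=2 / 255, steps=10, p=0.5):
    """Iterative FGSM-style attack with random erasing + Gaussian blur applied
    to the input before each gradient step (illustrative parameters)."""
    random_erase = transforms.RandomErasing(p=p, scale=(0.02, 0.2))
    gaussian_blur = transforms.GaussianBlur(kernel_size=3, sigma=(0.5, 1.5))

    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        # Randomly transformed copy used only for the gradient computation.
        x_t = gaussian_blur(random_erase(x_adv))
        loss = F.cross_entropy(model(x_t), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        # Standard sign-gradient update, projected back into the L_inf ball.
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1).detach()
    return x_adv
```

The transformed copy feeds only the loss; the update is applied to the untransformed adversarial image, which is one plausible way to realize the "diversified perturbation" idea the abstract mentions.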