Authors
Yating Ma,Ruiqi Zha,Zhichao Lian
Identifier
DOI:10.1109/smartworld-uic-atc-scalcom-digitaltwin-pricomp-metaverse56740.2022.00028
Abstract
Perception algorithms are widely used in ubiquitous computing. Deep neural networks have achieved remarkable performance in the computer vision field, yet they were found to be vulnerable to adversarial examples that cause misclassification. This issue has attracted great attention to exploring methods for defending against adversarial examples. Detecting adversarial examples is fundamental to improving the robustness of models. However, how to exploit an efficient feature to distinguish clean examples from adversarial ones remains a challenging problem. In this paper, we investigate adversarial examples in the frequency domain and propose a framework that detects and classifies adversarial attacks by combining frequency-domain features with an attention mechanism. We first demonstrate that adversarial perturbations are more perceptible in the frequency domain and exploit this property to achieve strong detection results. Then, leveraging the advantages of the attention mechanism, an integrated network is proposed to achieve a more refined classification of attack methods. Based on the ResNet18 architecture, our method achieves the best detection rate and a high classification accuracy of 73% across FGSM, PGD-$\ell_{2}$, PGD-$\ell_{\infty}$, DeepFool and MI-FGSM.
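The abstract's core observation, that adversarial perturbations stand out in the frequency domain, can be illustrated with a minimal sketch. This is not the paper's detector; it merely shows one simple frequency feature (the fraction of 2D-FFT energy outside a low-frequency band, a hypothetical statistic chosen for illustration) separating a smooth image from a noise-perturbed one that stands in for an adversarial example.

```python
import numpy as np

def highfreq_energy_ratio(img, cutoff=8):
    # Fraction of spectral energy outside a centered low-frequency square.
    # Small, noise-like perturbations spread energy into high frequencies,
    # so this ratio tends to rise for perturbed images.
    spec = np.abs(np.fft.fftshift(np.fft.fft2(img))) ** 2
    h, w = spec.shape
    cy, cx = h // 2, w // 2
    low = spec[cy - cutoff:cy + cutoff, cx - cutoff:cx + cutoff].sum()
    return 1.0 - low / spec.sum()

rng = np.random.default_rng(0)
clean = np.full((32, 32), 0.5)                 # smooth image: energy at DC only
noise = 0.05 * rng.standard_normal((32, 32))   # stand-in for an adversarial perturbation
perturbed = clean + noise

print(highfreq_energy_ratio(clean))            # ~0.0 for a constant image
print(highfreq_energy_ratio(perturbed))        # noticeably larger
```

A real detector in the spirit of the paper would feed such spectral features (of whole images, per the authors' design) into a classifier rather than thresholding a single scalar.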