Adversarial system
Artificial intelligence
Computer science
Deep learning
Radar
Perturbation (astronomy)
Binary number
Binary classification
Algorithm
Machine learning
Pattern recognition (psychology)
Mathematics
Support vector machine
Physics
Arithmetic
Telecommunications
Quantum mechanics
Authors
Teng Huang, Yongfeng Chen, Bing‐Jian Yao, Bifen Yang, Xianmin Wang, Ya Li
Identifier
DOI: 10.1016/j.ins.2020.03.066
Abstract
Target recognition based on high-resolution range profiles (HRRPs) has long been a research hotspot in the radar signal interpretation field, and deep learning has become an important approach to HRRP target recognition. However, recent research has shown that optical-image target recognition methods based on deep learning are vulnerable to adversarial samples, and whether deep-learning-based HRRP target recognition methods can be attacked in the same way remains an open question. In this paper, four methods for generating adversarial perturbations are proposed. Algorithm 1 generates a nontargeted fine-grained perturbation based on the binary search method. Algorithm 2 generates a targeted fine-grained perturbation based on a multiple-iteration method. Algorithm 3 generates a nontargeted universal adversarial perturbation (UAP) by aggregating several fine-grained perturbations. Algorithm 4 generates a targeted universal perturbation by scaling one fine-grained perturbation. These perturbations are used to craft adversarial samples that attack deep-learning-based HRRP target recognition methods under both white-box and black-box settings. Experiments conducted on actual radar data show that the HRRP adversarial samples can effectively mislead the recognition models, indicating that HRRP target recognition methods based on deep learning carry potential security risks.
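To make the idea of a multiple-iteration targeted perturbation concrete, the sketch below shows what such an attack can look like against a 1-D HRRP classifier, in the spirit of iterative FGSM/PGD. It is not a reproduction of the paper's Algorithm 2: the toy network, range-cell count, step size, and perturbation budget are hypothetical stand-ins for illustration only.

```python
# Minimal sketch (assumed setup, not the paper's code): a targeted,
# multiple-iteration adversarial perturbation against a toy 1-D HRRP model.
import torch
import torch.nn as nn

class ToyHRRPNet(nn.Module):
    """Hypothetical 1-D CNN standing in for an HRRP recognition model."""
    def __init__(self, n_classes=3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=7, padding=3), nn.ReLU(),
            nn.AdaptiveAvgPool1d(8),
        )
        self.classifier = nn.Linear(16 * 8, n_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

def targeted_iterative_perturbation(model, hrrp, target_class,
                                    eps=0.05, step=0.01, n_iter=20):
    """Iteratively nudge the range profile toward the target class while
    keeping the perturbation inside an L-infinity ball of radius eps."""
    delta = torch.zeros_like(hrrp, requires_grad=True)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(n_iter):
        loss = loss_fn(model(hrrp + delta), target_class)
        loss.backward()
        with torch.no_grad():
            # Targeted attack: descend the loss w.r.t. the desired label.
            delta -= step * delta.grad.sign()
            delta.clamp_(-eps, eps)
        delta.grad.zero_()
    return delta.detach()

if __name__ == "__main__":
    model = ToyHRRPNet()
    hrrp = torch.rand(1, 1, 256)   # one synthetic range profile
    target = torch.tensor([2])     # desired (wrong) class label
    delta = targeted_iterative_perturbation(model, hrrp, target)
    print(model(hrrp).argmax(1), model(hrrp + delta).argmax(1))
```

Under this kind of scheme, a universal perturbation in the sense of Algorithms 3 and 4 would be derived from such fine-grained perturbations (e.g., by aggregating several of them or scaling a single one) rather than computed per sample.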