Keywords
Computer science
Adversarial attack
Segmentation
Artificial intelligence
Robustness
Computer vision
Deep neural network
Cityscape
Image segmentation
Benchmark
Perception
Artificial neural network
Machine learning
Authors
Xing Xu, Jingran Zhang, Yujie Li, Yichuan Wang, Yang Yang, Heng Tao Shen
Identifier
DOI:10.1109/tii.2020.3024643
Abstract
Understanding the surrounding environment is crucial for autonomous vehicles to make correct driving decisions. In particular, urban scene segmentation is an integral module commonly equipped in the perception system of autonomous vehicles to understand the real scene like a human. Any missegmentation of the driving scenario can potentially lead to uncontrollable consequences, such as serious accidents or failures of the perception system. In this article, we investigate the vulnerability of popular scene segmentation models built on deep neural network (DNN) backbones, which have been shown to be sensitive to adversarial attacks. Specifically, we propose an iterative projected gradient-based attack method that can effectively fool several DNN-based segmentation models with a remarkably higher attack success rate and much smaller adversarial perturbations. Moreover, we develop an adversarial training algorithm in a min-max optimization style to strengthen the robustness of the scene segmentation models. Extensive experiments on the Cityscapes benchmark dataset, which consists of large-scale urban scene images for autonomous vehicles, demonstrate the effectiveness of our proposed attack method as well as the benefit of the adversarial training scheme for the scene segmentation models.
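To make the idea of an iterative projected gradient-based attack concrete, below is a minimal PyTorch-style sketch of an L-infinity PGD attack against a semantic segmentation network. It is an illustrative assumption, not the authors' exact implementation: the helper name `pgd_attack_segmentation` and the values of `eps`, `alpha`, and `steps` are hypothetical, and the model is assumed to return per-pixel class logits.

```python
import torch
import torch.nn.functional as F

def pgd_attack_segmentation(model, images, labels, eps=8/255, alpha=2/255, steps=10):
    """Iterative projected-gradient (PGD-style) attack on a segmentation model.

    Assumptions (illustrative, not from the paper): `model(images)` returns
    per-pixel logits of shape (N, C, H, W), `labels` holds integer class
    indices of shape (N, H, W), and pixel values lie in [0, 1].
    """
    images = images.clone().detach()
    labels = labels.clone().detach()
    adv = images.clone().detach()

    for _ in range(steps):
        adv.requires_grad_(True)
        logits = model(adv)
        # Per-pixel cross-entropy; increasing this loss pushes pixels
        # toward misclassification.
        loss = F.cross_entropy(logits, labels)
        grad = torch.autograd.grad(loss, adv)[0]

        with torch.no_grad():
            # Gradient-ascent step on the sign of the gradient.
            adv = adv + alpha * grad.sign()
            # Project back into the eps-ball around the clean images
            # and into the valid pixel range.
            adv = torch.clamp(adv, images - eps, images + eps)
            adv = torch.clamp(adv, 0.0, 1.0)

    return adv.detach()
```

In a min-max adversarial training scheme of the kind the abstract describes, an inner attack like this would perturb each minibatch first, and the model parameters would then be updated to minimize the segmentation loss on the perturbed images.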