Keywords
Computer science
Robustness (evolution)
Adversarial system
Artificial intelligence
Hyperspectral imaging
Deep learning
Contextual image classification
Context (archaeology)
Machine learning
Pattern recognition (psychology)
Pixel
Image (mathematics)
Paleontology
Gene
Biology
Chemistry
Biochemistry
Authors
Bing Tu,Wangquan He,Qianming Li,Yishu Peng,Antonio Plaza
Identifier
DOI:10.1109/tgrs.2023.3250450
Abstract
Deep neural networks play a significant role in hyperspectral image (HSI) processing, yet they are easily fooled by adversarial samples (generated by adding tiny perturbations to clean samples). These perturbations are invisible to the human eye, but can easily lead to misclassification by the deep learning model. Recent research on defense against adversarial samples in HSI classification has improved the robustness of deep networks by exploiting global contextual information. However, available methods do not distinguish between different classes of contextual information, which makes the global context unreliable and increases the success rate of attacks. To solve this problem, we propose a robust context-aware network able to defend against adversarial samples in HSI classification. The proposed model generates a global contextual representation by aggregating the features learned via dilated convolution, and then explicitly models intraclass and interclass contextual information by constructing a class context-aware learning module (including affinity loss) to further refine the global context. The module helps pixels obtain more reliable long-range dependencies and improves the overall robustness of the model against adversarial attacks. Experiments on several benchmark HSI datasets demonstrate that the proposed method is more robust and exhibits better generalization than other advanced techniques.
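The sketch below illustrates, in PyTorch, the two ingredients named in the abstract: aggregating features learned by parallel dilated convolutions into a global contextual representation, and an affinity-style loss that pulls same-class pixel features together (intraclass) while pushing different-class features apart (interclass). It is not the authors' implementation; the names DilatedContextAggregator and affinity_loss, the dilation rates, channel sizes, and patch shapes are all illustrative assumptions.

# Minimal sketch of dilated-convolution context aggregation plus an affinity-style
# loss, assuming PyTorch. Names and hyperparameters are illustrative, not the paper's.
import torch
import torch.nn as nn
import torch.nn.functional as F


class DilatedContextAggregator(nn.Module):
    """Fuses multi-scale features from parallel dilated convolutions."""

    def __init__(self, in_ch: int, out_ch: int, dilations=(1, 2, 4)):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=d, dilation=d)
            for d in dilations
        ])
        self.fuse = nn.Conv2d(out_ch * len(dilations), out_ch, kernel_size=1)

    def forward(self, x):
        feats = [F.relu(branch(x)) for branch in self.branches]
        # Concatenate the multi-scale responses and fuse them into one
        # global contextual representation.
        return F.relu(self.fuse(torch.cat(feats, dim=1)))


def affinity_loss(features, labels):
    """Penalizes dissimilarity within a class and similarity across classes.

    features: (N, C) per-pixel feature vectors; labels: (N,) class indices.
    """
    feats = F.normalize(features, dim=1)
    sim = feats @ feats.t()                                   # pairwise cosine similarity
    same = (labels.unsqueeze(0) == labels.unsqueeze(1)).float()
    intra = (1.0 - sim) * same                                # pull same-class pairs together
    inter = F.relu(sim) * (1.0 - same)                        # push different-class pairs apart
    return (intra.sum() + inter.sum()) / (labels.numel() ** 2)


if __name__ == "__main__":
    x = torch.randn(2, 16, 9, 9)                              # e.g. band-reduced HSI patches
    ctx = DilatedContextAggregator(16, 32)(x)
    flat = ctx.permute(0, 2, 3, 1).reshape(-1, 32)            # per-pixel context features
    labels = torch.randint(0, 5, (flat.size(0),))
    print(ctx.shape, affinity_loss(flat, labels).item())

In a training loop, a loss of this form would typically be added to the usual classification loss with a weighting coefficient, so that the refined context features stay discriminative between classes while remaining consistent within each class.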