Authors
Peican Zhu,Fan Zhao,Sensen Guo,Keke Tang,Xingyu Li
Identifier
DOI:10.1016/j.cose.2023.103674
Abstract
Many works have shown that adversarial examples generated on a known substitute model can mislead other, unknown black-box models, a property that has attracted widespread attention. Recently, many model augmentation methods have been proposed to boost the transferability of adversarial examples by transforming the input images to simulate diverse models during the attack. However, existing model augmentation methods focus on transformations within a single domain, which may restrict the diversity of the simulated models. To overcome this limitation, we present a novel model augmentation method named the Hybrid Augmentation Method (HAM). Our approach comprises two components: channel-wise scaling (CS) and spectrum masking (SM). Specifically, we first transform the images with CS in the spatial domain, which enhances the diversity of the transformed images by randomly scaling each channel. We then apply SM to randomly remove some frequency information from the images in the frequency domain, further increasing the diversity of the transformed images. Instead of confining the transformations to a single domain, we apply transformations in the spatial and frequency domains simultaneously. This yields more varied transformed images and greatly increases the diversity of the simulated models, producing more powerful adversarial examples. Extensive experiments demonstrate the superiority of our method on both undefended and defended models, where it substantially outperforms the compared attacks. Moreover, our method can be integrated with other attacks to further enhance adversarial transferability.
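The two components described in the abstract — random per-channel scaling in the spatial domain, followed by random removal of frequency components in the frequency domain — can be illustrated with a short NumPy sketch. This is a hypothetical reconstruction from the abstract alone, not the authors' implementation; the scaling range `(0.8, 1.2)` and the drop probability `drop_prob` are assumed hyperparameters.

```python
import numpy as np

def channel_wise_scaling(img, low=0.8, high=1.2, rng=None):
    """CS: multiply each channel of an HxWxC image in [0, 1] by a random
    factor (spatial-domain transform). The range [low, high] is an assumption."""
    if rng is None:
        rng = np.random.default_rng()
    scales = rng.uniform(low, high, size=(1, 1, img.shape[-1]))
    return np.clip(img * scales, 0.0, 1.0)

def spectrum_masking(img, drop_prob=0.1, rng=None):
    """SM: zero out a random subset of each channel's 2-D Fourier
    coefficients (frequency-domain transform). drop_prob is an assumption."""
    if rng is None:
        rng = np.random.default_rng()
    out = np.empty_like(img)
    for c in range(img.shape[-1]):
        spec = np.fft.fft2(img[..., c])
        keep = rng.random(spec.shape) >= drop_prob  # random binary mask
        out[..., c] = np.real(np.fft.ifft2(spec * keep))
    return np.clip(out, 0.0, 1.0)

def hybrid_augment(img, rng=None):
    """HAM-style pipeline: CS in the spatial domain, then SM in the
    frequency domain, yielding one simulated-model input transform."""
    if rng is None:
        rng = np.random.default_rng()
    return spectrum_masking(channel_wise_scaling(img, rng=rng), rng=rng)

# Example: augment a random 32x32 RGB image.
img = np.random.default_rng(0).random((32, 32, 3))
aug = hybrid_augment(img, rng=np.random.default_rng(1))
```

In a transfer attack, one would draw several such transformed copies per iteration, average the substitute model's gradients over them, and update the adversarial perturbation — the randomness of CS and SM is what simulates a diverse set of models.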