Keywords
Transferability, Adversarial attack, Computer science, Data mining, Artificial intelligence, Machine learning, Computer security
Authors
Jinbang Hong, Keke Tang, Chao Gao, Songxin Wang, Sensen Guo, Peican Zhu
Identifier
DOI:10.1007/978-3-031-10989-8_39
Abstract
In the real world, black-box attacks are widespread because detailed information about the model under attack is unavailable. It is therefore desirable to obtain adversarial examples with high transferability, which facilitates practical adversarial attacks. Instead of adopting traditional input transformation approaches, we propose a mechanism that derives masked images by removing regions from the initial input images; in this work, the removed regions are spatially uniformly distributed squares. For comparison, several transferable attack methods are adopted as baselines. Extensive empirical evaluations on the standard ImageNet dataset validate the effectiveness of the proposed GM-Attack: it crafts more transferable adversarial examples than other input transformation methods, and the attack success rate on Inc-v4 is improved by 6.5% over state-of-the-art methods.
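The masking step described in the abstract (removing spatially uniformly distributed square regions from an input image) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name, grid layout, and parameter values are hypothetical, and the paper's exact placement and number of squares may differ.

```python
import numpy as np

def remove_uniform_squares(image, grid=4, square=8):
    """Return a masked copy of `image` (H x W x C array) with a
    grid x grid pattern of square regions set to zero.

    The squares are centered on a uniform spatial grid, approximating
    the masking described in the abstract. Parameters `grid` and
    `square` are illustrative choices, not values from the paper.
    """
    masked = image.copy()
    h, w = image.shape[:2]
    half = square // 2
    for i in range(grid):
        for j in range(grid):
            # Center of the (i, j)-th grid cell.
            cy = int((i + 0.5) * h / grid)
            cx = int((j + 0.5) * w / grid)
            # Clip the square to the image boundary and zero it out.
            y0, y1 = max(cy - half, 0), min(cy + half, h)
            x0, x1 = max(cx - half, 0), min(cx + half, w)
            masked[y0:y1, x0:x1] = 0
    return masked
```

In a transfer-attack pipeline, gradients would be computed on such masked copies (possibly averaged over several random maskings) instead of the raw input, analogous to other input-transformation attacks.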