Adversarial system
Computer science
Shadow (psychology)
Computer security
Artificial intelligence
Psychology
Psychotherapist
Authors
Jiatong Liu,Mingcheng Zhang,Jianpeng Ke,Lina Wang
Identifier
DOI:10.1109/icassp48485.2024.10448251
Abstract
With the emergence of techniques called DeepFakes, there has been a notable proliferation of DeepFake detectors rooted in deep learning. These detectors aim to expose subtle distinctions between genuine and counterfeit facial images across spatial, frequency, and physiological domains. Unfortunately, these detectors are susceptible to adversarial attacks. In this study, we introduce a novel transferable adversarial attack named AdvShadow, designed to attack DeepFake detectors by leveraging natural shadows that occur in real life. The proposed AdvShadow comprises three components: a random shadow generator, a shadow overlay network, and adversarial shadow generation. We first construct a randomly shadowed facial dataset and use an additional shadow overlay network to produce adversarial samples for training. We then generate adversarial shadows for DeepFake datasets, mitigating luminance disparities between real and synthesized images. Through extensive experiments, we demonstrate the effectiveness and transferability of AdvShadow when attacking DeepFake detectors under black-box settings.
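To make the idea of a shadow-based adversarial perturbation concrete, the following is a minimal sketch of optimizing a differentiable shadow overlay against a surrogate detector. It is only an illustration under stated assumptions: the abstract does not specify AdvShadow's architectures or training procedure, so the surrogate model, the elliptical shadow parameterization, and all function and parameter names below (SurrogateDetector, soft_shadow_mask, apply_shadow, craft_adv_shadow) are hypothetical and not the authors' implementation.

```python
# Hypothetical sketch: optimize a natural-looking shadow so a surrogate
# DeepFake detector misclassifies a synthesized face. This is NOT the
# paper's AdvShadow pipeline; all components here are illustrative.
import torch
import torch.nn as nn


class SurrogateDetector(nn.Module):
    """Stand-in binary real/fake classifier (assumption, not the paper's model)."""

    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 2),
        )

    def forward(self, x):
        return self.net(x)


def soft_shadow_mask(h, w, center, radius, sharpness=20.0):
    """Differentiable elliptical mask in [0, 1]; ~1 inside the shadow region."""
    ys = torch.linspace(-1, 1, h).view(h, 1).expand(h, w)
    xs = torch.linspace(-1, 1, w).view(1, w).expand(h, w)
    dist = ((xs - center[0]) ** 2 + (ys - center[1]) ** 2).sqrt()
    return torch.sigmoid(sharpness * (radius - dist))


def apply_shadow(img, mask, darkness):
    """Attenuate luminance under the mask, mimicking a natural cast shadow."""
    return img * (1.0 - darkness * mask)


def craft_adv_shadow(img, detector, steps=200, lr=0.05, target_label=0):
    """Optimize shadow position, extent, and darkness toward `target_label` (e.g. 'real')."""
    center = torch.zeros(2, requires_grad=True)        # shadow center in [-1, 1]^2
    radius = torch.tensor(0.5, requires_grad=True)     # shadow extent
    darkness = torch.tensor(0.3, requires_grad=True)   # luminance attenuation
    opt = torch.optim.Adam([center, radius, darkness], lr=lr)
    ce = nn.CrossEntropyLoss()
    target = torch.tensor([target_label])

    for _ in range(steps):
        mask = soft_shadow_mask(img.shape[-2], img.shape[-1], center, radius)
        shadowed = apply_shadow(img, mask, darkness.clamp(0.0, 0.8))
        loss = ce(detector(shadowed), target)          # push prediction toward target
        opt.zero_grad()
        loss.backward()
        opt.step()

    with torch.no_grad():
        mask = soft_shadow_mask(img.shape[-2], img.shape[-1], center, radius)
        return apply_shadow(img, mask, darkness.clamp(0.0, 0.8))


if __name__ == "__main__":
    detector = SurrogateDetector().eval()
    fake_face = torch.rand(1, 3, 64, 64)               # placeholder for a DeepFake image
    adv = craft_adv_shadow(fake_face, detector)
    print(detector(adv).softmax(dim=-1))               # surrogate's real/fake scores
```

In a transfer setting, the shadow would be optimized against one or more surrogate detectors like this and then evaluated against unseen black-box detectors; because the perturbation is a plausible lighting change rather than high-frequency noise, it tends to survive that transfer better, which is the intuition the abstract appeals to.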