Towards Retentive Proactive Defense Against DeepFakes
Authors
Tao Jiang, Hongyi Yu, Wenjuan Meng, Peihan Qi
Identifier
DOI:10.1007/978-3-031-51399-2_8
Abstract
In recent years, with the development of artificial intelligence, many facial-manipulation methods based on deep neural networks, collectively known as DeepFakes, have emerged. Unfortunately, DeepFakes are often used maliciously, and if their spread cannot be controlled in a timely manner, they pose a threat to both society and individuals. Researchers have studied DeepFake detection, but detection is a form of post-hoc forensics: by the time a forgery is detected, some harm has already been done. We therefore propose a retentive, proactive defense that protects images against DeepFakes before any malicious manipulation occurs. The main idea is to train a perturbation generator end to end and add the generated perturbation to an image, making the image adversarial and thus immune to DeepFake manipulation. White-box experiments on a typical DeepFake manipulation method (facial attribute editing) demonstrate the effectiveness of the proposed method, and a comparison with the adversarial attack PGD shows its superiority in terms of image similarity and inference efficiency.
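The abstract's core idea, a generator trained end to end to produce bounded adversarial perturbations that disrupt a DeepFake model, can be sketched as follows. This is a minimal illustration, not the paper's actual architecture or loss: the names (`PerturbationGenerator`, `train_step`, `eps`), the surrogate DeepFake model, and the negative-MSE disruption loss are all assumptions for the sake of the example.

```python
import torch
import torch.nn as nn

class PerturbationGenerator(nn.Module):
    """Illustrative generator: maps an image to an L_inf-bounded perturbation."""
    def __init__(self, eps: float = 0.05):
        super().__init__()
        self.eps = eps  # perturbation budget (assumed value)
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 3, 3, padding=1), nn.Tanh(),  # output in [-1, 1]
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.eps * self.net(x)  # scale so |delta| <= eps elementwise

def train_step(gen, fake_model, x, opt):
    """One end-to-end update: make the protected image break the DeepFake model.

    `fake_model` stands in for the (white-box) manipulation network; the loss
    pushes its output on the protected image away from its output on the
    clean image, so the manipulation result is visibly corrupted.
    """
    delta = gen(x)
    x_adv = (x + delta).clamp(0.0, 1.0)        # protected (immunized) image
    loss = -nn.functional.mse_loss(fake_model(x_adv), fake_model(x))
    opt.zero_grad()
    loss.backward()
    opt.step()
    return x_adv, loss.item()
```

Compared with per-image iterative attacks such as PGD, a trained generator produces a perturbation in a single forward pass, which is the inference-efficiency advantage the abstract refers to.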