Adversarial system
Computer science
Facial recognition system
Artificial intelligence
Classifier (UML)
Computer vision
Face Recognition Grand Challenge
Face (sociological concept)
Computer security
Pattern recognition (psychology)
Face detection
Social science
Sociology
Authors
Mikhail Pautov,Grigorii Melnikov,Edgar Kaziakhmedov,Klim Kireev,Aleksandr Petiushko
Source
Journal: Cornell University - arXiv
Date: 2019-10-01
Cited by: 17
Identifier
DOI: 10.1109/sibircon48586.2019.8958134
Abstract
Recent works showed the vulnerability of image classifiers to adversarial attacks in the digital domain. However, the majority of attacks involve adding a small perturbation to an image to fool the classifier. Unfortunately, such procedures cannot be used to conduct a real-world attack, where adding an adversarial attribute to the photo is a more practical approach. In this paper, we study the problem of real-world attacks on face recognition systems. We examine the security of one of the best public face recognition systems, LResNet100E-IR with ArcFace loss, and propose a simple method to attack it in the physical world. The method suggests creating an adversarial patch that can be printed, added as a face attribute, and photographed; the photo of a person with such an attribute is then passed to the classifier so that the classifier's recognized class changes from the correct one to the desired one. The proposed generation procedure allows projecting adversarial patches not only onto different areas of the face, such as the nose or forehead, but also onto a wearable accessory, such as eyeglasses.
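For intuition only, the sketch below shows the generic adversarial-patch idea the abstract describes: optimizing the pixels of a masked face region by gradient descent so that the embedding of the patched photo moves toward a target identity. This is not the authors' implementation; `model`, `mask`, and `target_emb` are hypothetical placeholders, and the paper's physical-world machinery (printing, projection onto curved face regions or eyeglasses) is omitted.

```python
import torch
import torch.nn.functional as F

def attack_patch(model, image, mask, target_emb, steps=200, lr=0.01):
    """Minimal digital-domain patch attack (illustrative assumption,
    not the paper's method).

    model:      face-embedding network, e.g. an ArcFace-style model
    image:      (1, 3, H, W) tensor in [0, 1], the attacker's face photo
    mask:       (1, 1, H, W) binary tensor marking the patch region
                (e.g. nose, forehead, or an eyeglasses frame)
    target_emb: (1, D) embedding of the identity to impersonate
    """
    model.eval()
    for p in model.parameters():       # freeze the network; only the
        p.requires_grad_(False)        # patch pixels are optimized
    patch = torch.rand_like(image, requires_grad=True)
    opt = torch.optim.Adam([patch], lr=lr)
    target = F.normalize(target_emb, dim=1)
    for _ in range(steps):
        # Paste the (clamped) patch into the masked region only.
        adv = image * (1 - mask) + patch.clamp(0, 1) * mask
        emb = F.normalize(model(adv), dim=1)
        # Minimizing (1 - cosine similarity) pulls the embedding of the
        # patched photo toward the target identity.
        loss = 1 - (emb * target).sum(dim=1).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return image * (1 - mask) + patch.detach().clamp(0, 1) * mask
```

A physical-world version of this idea would additionally optimize over expected transformations (pose, lighting, printing distortions) so the patch survives being printed and photographed, which is the gap the paper's generation procedure addresses.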