Adversarial system
Computer science
Exploit
Artificial intelligence
Transferability
Source code
Face (sociological concept)
Normalization (linguistics)
Facial recognition system
Adversary
Noise (video)
Computer security
Optimization problem
Bounded function
Information privacy
Semantics (computer science)
Machine learning
Data mining
Encoding (memory)
Key (lock)
Theoretical computer science
Coding (set theory)
Domain (mathematical analysis)
Deep learning
Benchmarking
Authors
Yuanbo Li, Cong Hu, Xiao-Jun Wu
Identifier
DOI:10.1109/tifs.2025.3607244
Abstract
The widespread application of deep learning-based face recognition (FR) systems poses significant challenges to the privacy of facial images on social media, as unauthorized FR systems can exploit these images to mine user data. Recent studies have utilized adversarial attack techniques to protect facial privacy against malicious FR systems by generating adversarial examples. However, existing noise-based and makeup-based methods produce adversarial examples with noticeable noise or undesired makeup attributes, and suffer from low transferability. In this paper, we propose a novel stealthy approach, named Dual-latent Adaptive Diffusion Protection (DADP), which uses a diffusion model to generate transferable stealthy adversarial examples consistent with the source images, thereby protecting facial privacy. DADP effectively harnesses adversarial information within both the semantic and diffusion latent spaces to explore adversarial latent representations. Unlike traditional methods that rely on bounded constraints and sign-gradient optimization, DADP employs adaptive optimization to maximize the utilization of adversarial gradient information and introduces latent regularization to constrain the adaptive optimization process, ensuring that the protected faces maintain high privacy and a natural appearance. Extensive qualitative and quantitative experiments on the public CelebA-HQ and LADN datasets demonstrate that the proposed method crafts more natural-looking stealthy adversarial examples with superior black-box transferability compared to state-of-the-art methods. The code is released at https://github.com/LiYuanBoJNU/DADP.
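The abstract contrasts bounded sign-gradient attacks with adaptive optimization plus latent regularization. As a rough illustration only (not the authors' DADP implementation), the sketch below applies Adam-style adaptive updates to a latent vector against a hypothetical linear surrogate embedding `W`, while an L2 latent-regularization term anchors the latent to the source latent `z0`; all names and the toy quadratic objective are assumptions made so the gradient is analytic.

```python
import numpy as np

def adam_protect(z0, W, e_src, steps=300, lr=0.05, reg=0.5,
                 beta1=0.9, beta2=0.999, eps=1e-8):
    """Toy adaptive latent attack: minimize
        -||W z - e_src||^2 + reg * ||z - z0||^2,
    i.e. push the surrogate embedding of z away from the source identity
    embedding e_src, while latent regularization keeps z near z0."""
    # tiny random offset: at z = z0 the attack gradient vanishes exactly
    z = z0 + 1e-3 * np.random.default_rng(1).standard_normal(z0.shape)
    m = np.zeros_like(z)
    v = np.zeros_like(z)
    for t in range(1, steps + 1):
        grad = -2.0 * W.T @ (W @ z - e_src) + 2.0 * reg * (z - z0)
        m = beta1 * m + (1.0 - beta1) * grad         # first-moment estimate
        v = beta2 * v + (1.0 - beta2) * grad ** 2    # second-moment estimate
        m_hat = m / (1.0 - beta1 ** t)               # bias correction
        v_hat = v / (1.0 - beta2 ** t)
        z = z - lr * m_hat / (np.sqrt(v_hat) + eps)  # adaptive step, no sign()
    return z

rng = np.random.default_rng(0)
W = rng.standard_normal((16, 32)) / np.sqrt(32)  # stand-in FR embedding map
z0 = rng.standard_normal(32)                     # "source image" latent
e_src = W @ z0                                   # source identity embedding
z_adv = adam_protect(z0, W, e_src)
print(np.linalg.norm(W @ z_adv - e_src))  # identity embedding pushed away
print(np.linalg.norm(z_adv - z0))         # latent displacement stays bounded
```

In the paper's setting the surrogate would be a deep FR model and the latent would live in the diffusion and semantic latent spaces; the linear surrogate here only serves to show how adaptive moment estimates replace bounded sign-gradient steps, with the `reg` term playing the role of latent regularization.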