Author
Ching‐Ting Tu, Kuan-Lin Chen
Identifier
DOI: 10.1109/TAFFC.2023.3327118
Abstract
This paper proposes a data-driven approach for generating personalized smile-style images from neutral-expression inputs, aiming to produce diverse smile styles while preserving individual features. Unlike other generator models that require expensive manual facial attribute labeling, we design an auxiliary expression attention Siamese network (EASN) to extract identity-irrelevant facial expression attention regions and guide the proposed two-stage style-expression generative adversarial network (style-exprGAN). The first generator stage synthesizes the overall facial geometry and virtual smile features, while the second stage refines image quality. Additionally, we incorporate traditional geometry warping on registered neutral-expression images to achieve consistent transformation and realistic texture fusion. Results show that the proposed method effectively synthesizes realistic and diverse smile styles while preserving individual features. Furthermore, we demonstrate the potential of our data-driven approach by applying the generated personalized smile-style images to image augmentation tasks, improving the stability and robustness of facial recognition models.
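The geometry-warping step mentioned in the abstract aligns a registered neutral-expression image with the generated smile geometry before texture fusion. The paper's exact warping method is not given here; a minimal pure-Python sketch of one common ingredient, estimating a 2-D affine transform from three landmark correspondences and applying it to a point, could look like this (the landmark coordinates below are hypothetical, not from the paper):

```python
def solve3(A, b):
    # Gauss-Jordan elimination with partial pivoting for a 3x3 system.
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(3):
        p = max(range(c, 3), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(3):
            if r != c:
                f = M[r][c] / M[c][c]
                M[r] = [M[r][k] - f * M[c][k] for k in range(4)]
    return [M[i][3] / M[i][i] for i in range(3)]

def affine_from_landmarks(src, dst):
    # src, dst: three (x, y) landmark pairs. Returns (a, b, tx, c, d, ty)
    # such that x' = a*x + b*y + tx and y' = c*x + d*y + ty maps src -> dst.
    A = [[x, y, 1.0] for x, y in src]
    a, b, tx = solve3(A, [x for x, _ in dst])
    c, d, ty = solve3(A, [y for _, y in dst])
    return a, b, tx, c, d, ty

def warp_point(T, p):
    a, b, tx, c, d, ty = T
    x, y = p
    return (a * x + b * y + tx, c * x + d * y + ty)

# Hypothetical mouth-corner and nose-tip landmarks on a neutral vs. a
# smiling face; the transform maps each neutral landmark onto its target.
neutral = [(30.0, 60.0), (70.0, 60.0), (50.0, 45.0)]
smiling = [(27.0, 58.0), (73.0, 58.0), (50.0, 45.0)]
T = affine_from_landmarks(neutral, smiling)
print(warp_point(T, neutral[0]))  # -> (27.0, 58.0), the first target landmark
```

In practice, face warping typically uses many landmarks with a piecewise-affine warp over a triangulation (one such transform per triangle), of which this is the single-triangle building block.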