Expression (computer science)
Computer science
Artificial intelligence
Facial expression
Pattern recognition (psychology)
Feature (linguistics)
Swap (finance)
Facial expression recognition
Image (mathematics)
Consistency (knowledge bases)
Computer vision
Facial recognition system
Philosophy
Linguistics
Finance
Economics
Programming language
Authors
Zhiwen Shao,Hengliang Zhu,Junshu Tang,Xuequan Lu,Lizhuang Ma
Source
Journal: IEEE Transactions on Image Processing
[Institute of Electrical and Electronics Engineers]
Date: 2021-01-01
Volume/Pages: 30: 4610-4621
Citations: 2
Identifier
DOI: 10.1109/tip.2021.3073857
Abstract
Facial expression transfer between two unpaired images is a challenging problem, as fine-grained expression is typically tangled with other facial attributes. Most existing methods treat expression transfer as an application of expression manipulation, and use predicted global expression, landmarks or action units (AUs) as a guidance. However, the prediction may be inaccurate, which limits the performance of transferring fine-grained expression. Instead of using an intermediate estimated guidance, we propose to explicitly transfer facial expression by directly mapping two unpaired input images to two synthesized images with swapped expressions. Specifically, considering AUs semantically describe fine-grained expression details, we propose a novel multi-class adversarial training method to disentangle input images into two types of fine-grained representations: AU-related feature and AU-free feature. Then, we can synthesize new images with preserved identities and swapped expressions by combining AU-free features with swapped AU-related features. Moreover, to obtain reliable expression transfer results of the unpaired input, we introduce a swap consistency loss to make the synthesized images and self-reconstructed images indistinguishable. Extensive experiments show that our approach outperforms the state-of-the-art expression manipulation methods for transferring fine-grained expressions while preserving other attributes including identity and pose.
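To make the abstract's pipeline concrete, here is a minimal pure-Python sketch of the swap-and-combine idea: each image is disentangled into an AU-related (expression) feature and an AU-free (identity/pose) feature, new images are synthesized by recombining AU-free features with swapped AU-related features, and a swap consistency term checks that swapping twice recovers the inputs. Everything here is illustrative, not the paper's implementation: images are toy dictionaries, the encoder/decoder are hand-written placeholders rather than learned networks, and the multi-class adversarial training that actually drives the disentanglement is not modeled.

```python
def encode(image):
    # Hypothetical disentangling encoder: splits an image representation into
    # an AU-related part (expression) and an AU-free part (identity, pose).
    au_related = {"expression": image["expression"]}
    au_free = {"identity": image["identity"], "pose": image["pose"]}
    return au_related, au_free

def decode(au_related, au_free):
    # Hypothetical decoder: combines the two representations into an image.
    return {"identity": au_free["identity"],
            "pose": au_free["pose"],
            "expression": au_related["expression"]}

def swap_expressions(img_a, img_b):
    # Map two unpaired inputs directly to two outputs with swapped expressions:
    # each AU-free feature is paired with the *other* image's AU-related feature.
    au_a, free_a = encode(img_a)
    au_b, free_b = encode(img_b)
    return decode(au_b, free_a), decode(au_a, free_b)

def swap_consistency_loss(img_a, img_b, distance):
    # Swapping twice should reproduce the self-reconstructed inputs; the loss
    # penalizes any difference between double-swapped images and the originals.
    swapped_a, swapped_b = swap_expressions(img_a, img_b)
    recon_a, recon_b = swap_expressions(swapped_a, swapped_b)
    return distance(recon_a, img_a) + distance(recon_b, img_b)
```

With these placeholder maps the swap is exact, so the consistency loss is zero by construction; in the learned setting the loss instead trains the networks toward that behavior, which is what makes the unpaired transfer reliable.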