Artificial intelligence
Computer science
Facial expression
Optical flow
Motion (physics)
Benchmark
Pattern recognition (psychology)
Transformer
Deep learning
Action recognition
Feature extraction
Computer vision
Machine learning
Speech recognition
Image (mathematics)
Engineering
Geodesy
Voltage
Electrical engineering
Geography
Class (philosophy)
Authors
Xinqi Fan, Xueli Chen, Mingjie Jiang, Ali Raza Shahid, Hong Yan
Identifier
DOI:10.1109/cvpr52729.2023.01329
Abstract
Facial micro-expressions (MEs) are brief, spontaneous facial movements that can reveal a person's genuine emotions. They are valuable in lie detection, criminal analysis, and other areas. While deep learning-based ME recognition (MER) methods have achieved impressive success, they typically require pre-processing with conventional optical flow-based methods to extract facial motion as input. To overcome this limitation, we propose SelfME, a novel MER framework that uses self-supervised learning to extract facial motion for MEs. To the best of our knowledge, this is the first work to use an automatically self-learned motion technique for MER. However, the self-supervised motion learning method may ignore symmetrical facial actions on the left and right sides of the face when extracting fine features. To address this issue, we developed a symmetric contrastive vision transformer (SCViT) that constrains the learning of similar facial action features for the left and right halves of the face. Experiments on two benchmark datasets show that our method achieves state-of-the-art performance, and ablation studies demonstrate its effectiveness.
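To illustrate the idea of encouraging similar features on the two halves of the face, below is a minimal sketch of a symmetry-based contrastive term in PyTorch. It assumes a generic (B, C, H, W) feature map and uses the hypothetical function name symmetric_contrastive_loss; it is an illustration of the general technique, not the authors' SCViT, which operates on vision-transformer token features.

import torch
import torch.nn.functional as F

def symmetric_contrastive_loss(feature_map: torch.Tensor) -> torch.Tensor:
    # feature_map: (B, C, H, W) features from a backbone (assumed layout for
    # illustration; the paper's SCViT works on transformer tokens instead).
    w = feature_map.shape[-1]
    left = feature_map[..., : w // 2]                            # left half of the face
    right = torch.flip(feature_map[..., -(w // 2):], dims=[-1])  # mirrored right half
    # Flatten spatial positions and L2-normalize each channel vector.
    left = F.normalize(left.flatten(2), dim=1)                   # (B, C, H * (W // 2))
    right = F.normalize(right.flatten(2), dim=1)
    # Cosine similarity at corresponding mirrored positions; minimizing the
    # loss pushes the two halves toward similar facial action features.
    sim = (left * right).sum(dim=1)                              # (B, H * (W // 2))
    return (1.0 - sim).mean()

In training, such a term would typically be added to the recognition loss with a weighting coefficient; the weight and the exact similarity measure here are illustrative design choices, not details taken from the paper.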