Facial expression
Facial muscles
Linear discriminant analysis
Support vector machine
Pattern recognition (psychology)
Computer science
Artificial intelligence
Face (sociological concept)
Discriminant
Hierarchical clustering
Speech recognition
Cluster analysis
Psychology
Social science
Communication
Sociology
Authors
Jordan Vice, Masood Mehmood Khan, Tele Tan, Iain Murray, Svetlana Yanushkevich
Source
Journal: IEEE Transactions on Computational Social Systems
[Institute of Electrical and Electronics Engineers]
Date: 2024-01-01
Pages: 1-14
Identifiers
DOI: 10.1109/tcss.2023.3334823
Abstract
Macrolevel facial muscle variations, as used for building models of seven discrete facial expressions, suffice when distinguishing between macrolevel human affective states but will not discretise continuous and dynamic microlevel variations in facial expressions. We present a hierarchical separation and classification network (HSCN) for discovering dynamic, continuous, macro- and microlevel variations in facial expressions of affective states. In the HSCN, we first invoke an unsupervised cosine similarity-based separation method on continuous facial expression data to extract twenty-one dynamic facial expression classes from the seven common discrete affective states. The between-clusters separation is then optimized for discovering the macrolevel changes resulting from facial muscle activations. A subsequent step in the HSCN separates the upper and lower facial regions for realizing changes pertaining to upper and lower facial muscle activations. Data from the two separated facial regions are then clustered in a linear discriminant space using similarities in muscular activation patterns. Next, the actual dynamic expression data are mapped onto discriminant features to develop a rule-based expert system that facilitates classifying twenty-one upper and twenty-one lower microexpressions. The random forest algorithm classified the twenty-one macrolevel facial expressions with 76.11% accuracy. A support vector machine (SVM), applied to the upper and lower facial regions in tandem, classified them with respective accuracies of 73.63% and 87.68%. This work demonstrates a novel and effective method for the dynamic assessment of affective states. The HSCN further demonstrates that facial muscle variations gathered from the upper, lower, or full face suffice for classifying affective states. We also provide new insight into the discovery of microlevel facial muscle variations and their utilization in the dynamic assessment of facial expressions of affective states.
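The abstract does not specify how the unsupervised cosine similarity-based separation step is implemented. As one plausible illustration only (not the authors' method), a greedy "leader" clustering over cosine similarity can split continuous feature vectors into classes; the feature vectors and the similarity threshold below are hypothetical:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def leader_cluster(samples, threshold=0.95):
    """Greedy leader clustering: assign each sample to the first
    cluster whose leader is cosine-similar above `threshold`,
    otherwise start a new cluster. Illustrative stand-in for the
    paper's separation step; the threshold value is an assumption."""
    leaders, labels = [], []
    for s in samples:
        for i, lead in enumerate(leaders):
            if cosine_similarity(s, lead) >= threshold:
                labels.append(i)
                break
        else:
            leaders.append(s)
            labels.append(len(leaders) - 1)
    return labels

# Toy feature vectors: two nearly collinear directions and one distinct one.
data = [[1.0, 0.0], [0.99, 0.05], [0.0, 1.0]]
print(leader_cluster(data))  # → [0, 0, 1]
```

With real facial-expression features, the number of resulting clusters would depend on the threshold rather than being fixed at twenty-one; the sketch only shows the similarity-driven grouping idea.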