Human torso
Gesture
Computer science
Leverage (statistics)
Discriminator
Speech recognition
Artificial intelligence
Orientation (vector space)
Gesture recognition
Computer vision
Mathematics
Medicine
Telecommunications
Geometry
Detector
Anatomy
Authors
Hexiang Wang, Fengqi Liu, Ran Yi, Lizhuang Ma
Identifier
DOI:10.1109/icip49359.2023.10222259
Abstract
Co-speech gesture generation is the task of synthesizing gesture sequences synchronized with an input audio signal. Previous methods estimate the upper-body gesture as a whole, ignoring the different mapping relations between audio and different body parts, which leads to poor overall results and especially bad hand shapes. In this paper, we propose a novel three-branch co-speech gesture generation framework to obtain better results. In particular, we propose a Torso2Hand Prior Learning (T2HPL) module that leverages torso information as an extra prior to enhance hand pose prediction, and we carefully design a hand shape discriminator to improve the authenticity of the generated hand shapes. In addition, an arm orientation loss encourages the network to generate a torso part with better semantic expressiveness. Experiments on a dataset of four different speakers demonstrate the superiority of our method over state-of-the-art approaches.
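The core idea of the torso-as-prior design can be illustrated with a minimal sketch: one branch maps audio features to a torso pose, and the hand branch receives the torso features as an extra conditioning input. All dimensions, layer choices, and names below are hypothetical placeholders, not the paper's actual T2HPL implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def linear(x, w, b):
    """A single affine layer standing in for a full network branch."""
    return x @ w + b

# Hypothetical dimensions: T frames of audio features, torso and hand pose sizes.
T, D_audio, D_torso, D_hand = 8, 64, 16, 42

audio = rng.standard_normal((T, D_audio))

# Torso branch: predict torso pose features from audio alone.
W_t, b_t = rng.standard_normal((D_audio, D_torso)), np.zeros(D_torso)
torso = linear(audio, W_t, b_t)

# Hand branch: condition on audio AND the predicted torso features,
# mirroring the idea of using torso information as an extra prior.
W_h = rng.standard_normal((D_audio + D_torso, D_hand))
hand = linear(np.concatenate([audio, torso], axis=1), W_h, np.zeros(D_hand))

print(torso.shape, hand.shape)  # (8, 16) (8, 42)
```

In the actual framework the branches would be trained networks and the hand output would additionally be scored by the hand shape discriminator; this sketch only shows the data flow of the torso-to-hand conditioning.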