Topics: Computer science; Artificial intelligence; Optical flow; Graphics; Exploitation; Optics (focus); Reading (process); Pattern recognition (psychology); Computer vision; Speech recognition; Image (mathematics); Theoretical computer science; Physics; Computer security; Law; Political science; Optics
Authors
Changchong Sheng,Xinzhong Zhu,Huiying Xu,Matti Pietikäinen,Li Liu
Identifier
DOI:10.1109/tmm.2021.3102433
Abstract
The goal of this work is to recognize words, phrases, and sentences spoken by a talking face without access to the audio. Current deep learning approaches to lip reading focus on exploring the appearance and optical-flow information of videos. However, these methods do not fully exploit the characteristics of lip motion. In addition to appearance and optical flow, the deformation of the mouth contour usually conveys significant information that is complementary to the other cues. However, the modeling of dynamic mouth contours has received far less attention than that of appearance and optical flow. In this work, we propose a novel model of dynamic mouth contours, called the Adaptive Semantic-Spatio-Temporal Graph Convolution Network (ASST-GCN), which goes beyond previous methods by automatically learning both spatial and temporal information from videos. To combine the complementary information from appearance and mouth contours, a two-stream visual front-end network is proposed. Experimental results demonstrate that the proposed method significantly outperforms state-of-the-art lip reading methods on several large-scale lip reading benchmarks.
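To make the core idea of a spatio-temporal graph convolution over mouth-contour landmarks concrete, the sketch below applies one such layer to a toy sequence of contour points. This is a minimal illustration under assumed shapes, not the paper's ASST-GCN: the landmark count, adjacency (a simple ring over contour points), weight matrix, and temporal smoothing are all hypothetical.

```python
# Minimal sketch of one spatio-temporal graph convolution layer over
# mouth-contour landmarks. NOT the paper's ASST-GCN; all names and
# shapes here are illustrative assumptions.
import numpy as np

def normalize_adjacency(A):
    """Symmetric normalization D^{-1/2} (A + I) D^{-1/2} with self-loops."""
    A_hat = A + np.eye(A.shape[0])
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return D_inv_sqrt @ A_hat @ D_inv_sqrt

def st_gcn_layer(X, A, W, temporal_k=3):
    """X: (T, N, C) landmark features over T frames and N contour points;
    A: (N, N) contour adjacency; W: (C, C_out) learnable weights.
    Spatial graph convolution per frame, then temporal smoothing."""
    A_norm = normalize_adjacency(A)
    # Spatial aggregation: (T, N, C) -> (T, N, C_out)
    Y = np.einsum('nm,tmc,cd->tnd', A_norm, X, W)
    # Simple temporal mixing: moving average of width temporal_k
    pad = temporal_k // 2
    Yp = np.pad(Y, ((pad, pad), (0, 0), (0, 0)), mode='edge')
    Y_t = np.stack([Yp[t:t + temporal_k].mean(axis=0)
                    for t in range(Y.shape[0])])
    return np.maximum(Y_t, 0.0)  # ReLU nonlinearity

# Toy input: a ring graph over 8 contour points, 5 frames, 2-D coordinates
N, T, C = 8, 5, 2
A = np.zeros((N, N))
for i in range(N):
    A[i, (i + 1) % N] = A[(i + 1) % N, i] = 1.0
X = np.random.default_rng(0).standard_normal((T, N, C))
W = np.random.default_rng(1).standard_normal((C, 4))
out = st_gcn_layer(X, A, W)
print(out.shape)  # (5, 8, 4)
```

In the paper's full model, the adjacency would be learned adaptively rather than fixed, and the contour stream would be fused with an appearance stream in the two-stream visual front end.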