Computer science
Hyperparameter
Margin (machine learning)
Speech recognition
Training set
Artificial intelligence
Machine learning
Audiovisual
Deep learning
Natural language processing
Multimedia
Authors
Pingchuan Ma, Stavros Petridis, Maja Pantić
Identifier
DOI:10.1038/s42256-022-00550-z
Abstract
Visual speech recognition (VSR) aims to recognize the content of speech based on lip movements, without relying on the audio stream. Advances in deep learning and the availability of large audio-visual datasets have led to the development of much more accurate and robust VSR models than ever before. However, these advances are usually due to larger training sets rather than model design. Here we demonstrate that designing better models is equally important as using larger training sets. We propose the addition of prediction-based auxiliary tasks to a VSR model, and highlight the importance of hyperparameter optimization and appropriate data augmentations. We show that such a model works for different languages and outperforms all previous methods trained on publicly available datasets by a large margin. It even outperforms models that were trained on non-publicly available datasets containing up to 21 times more data. We show, furthermore, that using additional training data, even in other languages or with automatically generated transcriptions, results in further improvement.
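The abstract describes training a VSR model with prediction-based auxiliary tasks alongside the main recognition objective. The usual way such multi-task setups are trained is by minimizing a weighted sum of the individual losses; the sketch below illustrates that pattern only. The function name, the specific losses, and the weights are illustrative assumptions, not the paper's actual configuration.

```python
# Hypothetical sketch of a multi-task training objective: a main
# recognition loss combined with weighted prediction-based auxiliary
# losses, i.e. L_total = L_main + sum_i(w_i * L_aux_i).
# Loss values and weights below are made-up illustrations.

def combined_loss(main_loss, aux_losses, aux_weights):
    """Return the weighted multi-task loss.

    main_loss   -- scalar loss of the primary recognition task
    aux_losses  -- list of scalar auxiliary-task losses
    aux_weights -- one weight per auxiliary loss (a tunable hyperparameter)
    """
    if len(aux_losses) != len(aux_weights):
        raise ValueError("each auxiliary loss needs exactly one weight")
    return main_loss + sum(w * l for w, l in zip(aux_weights, aux_losses))

# Example: main loss 2.0, two auxiliary losses weighted 0.1 and 0.2.
total = combined_loss(2.0, aux_losses=[0.5, 1.0], aux_weights=[0.1, 0.2])
print(total)  # 2.0 + 0.1*0.5 + 0.2*1.0 = 2.25
```

The auxiliary weights act like any other hyperparameter, which is consistent with the abstract's emphasis on hyperparameter optimization: too large and the auxiliary tasks dominate, too small and they contribute nothing.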