Transformer
Computer science
Language model
Embedding
Speech recognition
Hidden Markov model
Artificial neural network
Benchmark (surveying)
Deep neural network
Artificial intelligence
Engineering
Geodesy
Voltage
Geography
Electrical engineering
Authors
Yongqiang Wang, Abdelrahman Mohamed, Dieu Ngan Le, Chunxi Liu, Alex Xiao, Jay Mahadeokar, Hongzhao Huang, Andros Tjandra, Xiaohui Zhang, Frank Zhang, Christian Fuegen, Geoffrey Zweig, Michael L. Seltzer
Identifier
DOI: 10.1109/icassp40776.2020.9054345
Abstract
We propose and evaluate transformer-based acoustic models (AMs) for hybrid speech recognition. Several modeling choices are discussed in this work, including various positional embedding methods and an iterated loss to enable training deep transformers. We also present a preliminary study of using limited right context in transformer models, which makes streaming applications possible. We demonstrate that on the widely used Librispeech benchmark, our transformer-based AM outperforms the best published hybrid result by 19% to 26% relative when the standard n-gram language model (LM) is used. Combined with a neural network LM for rescoring, our proposed approach achieves state-of-the-art results on Librispeech. Our findings are also confirmed on a much larger internal dataset.
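The limited right context mentioned in the abstract bounds how far each acoustic frame may look into the future, which caps latency for streaming. One common way to realize such a constraint (a minimal sketch, not the authors' implementation; the function name and the use of a boolean numpy mask are assumptions for illustration) is to mask the self-attention scores so frame i attends to its full left context but at most R future frames:

```python
import numpy as np

def limited_right_context_mask(seq_len: int, right_context: int) -> np.ndarray:
    """Boolean mask where mask[i, j] is True iff frame i may attend to frame j.

    Each frame sees its entire left context and at most `right_context`
    future frames; right_context=0 gives purely causal attention.
    """
    idx = np.arange(seq_len)
    # Allow j <= i + right_context via broadcasting.
    return idx[None, :] <= idx[:, None] + right_context

def masked_attention_scores(scores: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """Set disallowed positions to -inf so softmax assigns them zero weight."""
    return np.where(mask, scores, -np.inf)

# Example: 5 frames, each allowed to peek 1 frame ahead.
mask = limited_right_context_mask(5, 1)
```

With `right_context=1`, frame 0 can attend to frames 0 and 1 but not frame 2, so the model's lookahead (and hence added latency) is bounded by one frame per attention layer.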