Recurrent neural network
Computer science
Initialization
Encoder
Speech recognition
Word error rate
Language model
Leverage (statistics)
End-to-end principle
Artificial intelligence
Connectionism
Artificial neural network
Operating system
Programming language
Authors
Hu Hu, Rui Zhao, Jinyu Li, Liang Lu, Yifan Gong
Identifier
DOI:10.1109/icassp40776.2020.9054663
Abstract
Recently, the recurrent neural network transducer (RNN-T) architecture has become an emerging trend in end-to-end automatic speech recognition research due to its advantage of being capable of online streaming speech recognition. However, RNN-T training is made difficult by its huge memory requirements and complicated neural structure. A common solution to ease RNN-T training is to employ a connectionist temporal classification (CTC) model along with an RNN language model (RNNLM) to initialize the RNN-T parameters. In this work, we conversely leverage external alignments to seed the RNN-T model. Two different pre-training solutions are explored, referred to as encoder pre-training and whole-network pre-training, respectively. Evaluated on 65,000 hours of Microsoft anonymized production data with personally identifiable information removed, our proposed methods obtain significant improvements. In particular, the encoder pre-training solution achieved a 10% and an 8% relative word error rate reduction when compared with random initialization and the widely used CTC+RNNLM initialization strategy, respectively. Our solutions also significantly reduce the RNN-T model latency from the baseline.
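To make the encoder pre-training idea concrete, here is a minimal PyTorch sketch, not the authors' code: the RNN-T encoder is first trained as a frame-level classifier against external alignment labels with a cross-entropy loss, and the resulting weights then seed the encoder of the full RNN-T. The module names, label set size, and dimensions are all hypothetical illustrations.

```python
# Hypothetical sketch of encoder pre-training with external alignments.
# Stage 1 trains the encoder on frame-level alignment labels (cross-entropy);
# stage 2 reuses those weights to initialize the RNN-T encoder.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Streaming-style LSTM acoustic encoder shared between both stages."""
    def __init__(self, feat_dim=80, hidden=512, layers=4):
        super().__init__()
        self.lstm = nn.LSTM(feat_dim, hidden, num_layers=layers, batch_first=True)

    def forward(self, feats):          # feats: (B, T, feat_dim)
        out, _ = self.lstm(feats)      # out: (B, T, hidden)
        return out

num_senones = 9000                     # hypothetical alignment label inventory
encoder = Encoder()
align_head = nn.Linear(512, num_senones)   # auxiliary head, dropped after stage 1
ce_loss = nn.CrossEntropyLoss()
opt = torch.optim.Adam(list(encoder.parameters()) + list(align_head.parameters()))

def pretrain_step(feats, frame_labels):
    """feats: (B, T, 80); frame_labels: (B, T) senone ids from an external alignment."""
    logits = align_head(encoder(feats))                    # (B, T, num_senones)
    loss = ce_loss(logits.reshape(-1, num_senones), frame_labels.reshape(-1))
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

# Stage 2 (sketch): the pre-trained encoder initializes the RNN-T; align_head is
# discarded and training continues end-to-end with the transducer loss.
```

Because the encoder already maps frames to alignment-consistent representations before transducer training begins, this kind of seeding can both speed up convergence and, as the abstract notes, tighten the model's output timing relative to a randomly initialized baseline.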