Keywords
Computer science
Speech recognition
Speech enhancement
Microphone
Reverberation
Discriminative
Speech processing
Artificial neural network
Benchmark
Acoustic model
Voice activity detection
Deep learning
Word error rate
Artificial intelligence
Noise reduction
Acoustics
Telecommunications
Physics
Geodesy
Sound pressure
Geography
Authors
Bo Wu, Kehuang Li, Fengpei Ge, Zhen Huang, Minglei Yang, Sabato Marco Siniscalchi, Chin-Hui Lee
Source
Journal: IEEE Journal of Selected Topics in Signal Processing
Publisher: Institute of Electrical and Electronics Engineers
Date: 2017-09-26
Volume/Issue: 11 (8): 1289-1300
Citations: 73
Identifier
DOI: 10.1109/jstsp.2017.2756439
Abstract
We propose an integrated end-to-end automatic speech recognition (ASR) paradigm by joint learning of the front-end speech signal processing and back-end acoustic modeling. We believe that "only good signal processing can lead to top ASR performance" in challenging acoustic environments. This notion leads to a unified deep neural network (DNN) framework for distant speech processing that can achieve both high-quality enhanced speech and high-accuracy ASR simultaneously. Our goal is accomplished by two techniques, namely: (i) a reverberation-time-aware DNN based speech dereverberation architecture that can handle a wide range of reverberation times to enhance speech quality of reverberant and noisy speech, followed by (ii) DNN-based multicondition training that takes both clean-condition and multicondition speech into consideration, leveraging data acquired and processed with multichannel microphone arrays, to improve ASR performance. The final end-to-end system is established by a joint optimization of the speech enhancement and recognition DNNs. The recent REverberant Voice Enhancement and Recognition Benchmark (REVERB) Challenge task is used as a test bed for evaluating our proposed framework. We first report objective measures of enhanced speech superior to those listed in the 2014 REVERB Challenge Workshop on the simulated data test set. Moreover, we obtain the best single-system word error rate (WER) of 13.28% on the 1-channel REVERB simulated data with the proposed DNN-based pre-processing algorithm and clean-condition training. Leveraging joint training with more discriminative ASR features and improved neural network based language models, a low single-system WER of 4.46% is attained. Next, a new multi-channel-condition joint learning and testing scheme delivers a state-of-the-art WER of 3.76% on the 8-channel simulated data with a single ASR system. Finally, we also report on preliminary yet promising experiments with the REVERB real test data.
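The abstract describes a front-end dereverberation DNN whose input is augmented with an estimated reverberation time, stacked with a back-end acoustic-model DNN and optimized jointly. The PyTorch-style sketch below illustrates that joint-optimization idea only; the module names, layer widths, feature dimensions, and loss weighting are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

# Illustrative dimensions (assumptions, not taken from the paper):
FEAT_DIM, CONTEXT, NUM_SENONES = 257, 11, 3000  # spectrum bins, frame context, senone targets

class DereverbDNN(nn.Module):
    """Reverberation-time-aware enhancer: maps a context window of reverberant
    log-power spectra, augmented with an estimated T60 value, to one clean frame."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(FEAT_DIM * CONTEXT + 1, 2048), nn.ReLU(),
            nn.Linear(2048, 2048), nn.ReLU(),
            nn.Linear(2048, FEAT_DIM),
        )
    def forward(self, reverb_feats, t60):
        return self.net(torch.cat([reverb_feats, t60], dim=-1))

class AcousticDNN(nn.Module):
    """Back-end acoustic model: senone classifier over enhanced frames."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(FEAT_DIM, 2048), nn.ReLU(),
            nn.Linear(2048, 2048), nn.ReLU(),
            nn.Linear(2048, NUM_SENONES),
        )
    def forward(self, enhanced_feats):
        return self.net(enhanced_feats)

enhancer, recognizer = DereverbDNN(), AcousticDNN()
optimizer = torch.optim.Adam(
    list(enhancer.parameters()) + list(recognizer.parameters()), lr=1e-4)
mse, ce = nn.MSELoss(), nn.CrossEntropyLoss()

def joint_step(reverb_feats, t60, clean_feats, senone_labels, alpha=0.5):
    """One joint-optimization step: weighted sum of the enhancement (MSE) loss
    and the ASR (cross-entropy) loss; ASR gradients flow back into the enhancer."""
    enhanced = enhancer(reverb_feats, t60)
    loss = alpha * mse(enhanced, clean_feats) \
         + (1 - alpha) * ce(recognizer(enhanced), senone_labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

Training both networks with a single optimizer is what realizes the "joint optimization of the speech enhancement and recognition DNNs" mentioned in the abstract; the multicondition and multichannel aspects of the actual system are omitted from this sketch.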