Transformer
Computer science
Overlap-add method
Speech recognition
Convolution (computer science)
Artificial intelligence
Natural language processing
Mathematics
Electrical engineering
Fourier transform
Engineering
Voltage
Artificial neural network
Fractional Fourier transform
Mathematical analysis
Fourier analysis
Authors
Anmol Gulati, James Qin, Chung-Cheng Chiu, Niki Parmar, Yu Zhang, Jiahui Yu, Wei Han, Shibo Wang, Zhengdong Zhang, Yonghui Wu, Ruoming Pang
Identifier
DOI: 10.21437/interspeech.2020-3015
Abstract
Recently, Transformer and convolution neural network (CNN) based models have shown promising results in Automatic Speech Recognition (ASR), outperforming recurrent neural networks (RNNs). Transformer models are good at capturing content-based global interactions, while CNNs exploit local features effectively. In this work, we achieve the best of both worlds by studying how to combine convolution neural networks and transformers to model both local and global dependencies of an audio sequence in a parameter-efficient way. To this end, we propose the convolution-augmented transformer for speech recognition, named Conformer. Conformer significantly outperforms the previous Transformer and CNN based models, achieving state-of-the-art accuracies. On the widely used LibriSpeech benchmark, our model achieves a WER of 2.1%/4.3% without using a language model and 1.9%/3.9% with an external language model on test/test-other. We also observe competitive performance of 2.7%/6.3% with a small model of only 10M parameters.
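The core idea in the abstract is pairing a self-attention module (global interactions) with a convolution module (local features) inside one residual block. Below is a minimal, non-authoritative sketch of that block layout, assuming the "macaron" arrangement the Conformer paper describes: two half-step feed-forward residuals sandwiching self-attention and convolution. The sub-modules here are toy stand-ins on plain float lists, not real neural layers; names like `ffn`, `mhsa`, and `conv_module` are illustrative only.

```python
# Toy stand-ins for the three sub-modules; a real model would use
# learned feed-forward, multi-head self-attention, and depthwise
# convolution layers instead.

def ffn(x):
    # Stand-in for the feed-forward module: a simple pointwise transform.
    return [0.1 * v for v in x]

def mhsa(x):
    # Stand-in for self-attention: every position mixes with a global
    # summary of the sequence (crude "content-based global interaction").
    mean = sum(x) / len(x)
    return [mean for _ in x]

def conv_module(x):
    # Stand-in for the convolution module: a 3-tap local average,
    # capturing only neighboring positions ("local features").
    padded = [x[0]] + x + [x[-1]]
    return [(padded[i] + padded[i + 1] + padded[i + 2]) / 3
            for i in range(len(x))]

def conformer_block(x):
    # Half-step feed-forward residual, then attention and convolution
    # residuals, then a second half-step feed-forward residual.
    x = [a + 0.5 * b for a, b in zip(x, ffn(x))]
    x = [a + b for a, b in zip(x, mhsa(x))]        # global dependencies
    x = [a + b for a, b in zip(x, conv_module(x))] # local dependencies
    x = [a + 0.5 * b for a, b in zip(x, ffn(x))]
    return x
```

The point of the sketch is the ordering: attention and convolution each get their own residual branch, so neither has to model both global and local structure alone.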