Keywords
Computer Science
Speech Recognition
Speech Processing
Voice Activity Detection
Speech Enhancement
Speech Coding
Sequence Labeling
Benchmark
Artificial Intelligence
Transformer
Denoising
Task
Authors
Sanyuan Chen, Chengyi Wang, Zhengyang Chen, Yu Wu, Shujie Liu, Zhuo Chen, Jinyu Li, Naoyuki Kanda, Takuya Yoshioka, Xiong Xiao, Jian Wu, Long Zhou, Shuo Ren, Yanmin Qian, Yao Qian, Jian Wu, Michael Zeng, Xiangzhan Yu, Furu Wei
Source
Journal: IEEE Journal of Selected Topics in Signal Processing
Publisher: Institute of Electrical and Electronics Engineers
Date: 2022-07-04
Volume/Issue: 16 (6): 1505-1518
Citations: 797
Identifier
DOI: 10.1109/jstsp.2022.3188113
Abstract
Self-supervised learning (SSL) achieves great success in speech recognition, while limited exploration has been attempted for other speech processing tasks. As speech signals contain multi-faceted information including speaker identity, paralinguistics, spoken content, etc., learning universal representations for all speech tasks is challenging. To tackle this problem, we propose a new pre-trained model, WavLM, to solve full-stack downstream speech tasks. WavLM jointly learns masked speech prediction and denoising in pre-training. In this way, WavLM not only keeps the speech content modeling capability through masked speech prediction, but also improves its potential for non-ASR tasks through speech denoising. In addition, WavLM employs a gated relative position bias in the Transformer structure to better capture the sequence ordering of the input speech. We also scale up the training dataset from 60k hours to 94k hours. WavLM Large achieves state-of-the-art performance on the SUPERB benchmark, and brings significant improvements for various speech processing tasks on their representative benchmarks. The code and pre-trained models are available at https://aka.ms/wavlm.
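The pre-trained checkpoints linked above are typically used as frozen feature extractors for downstream speech tasks. Below is a minimal sketch of extracting frame-level WavLM representations, assuming the Hugging Face `transformers` port of the released checkpoints; the model id `microsoft/wavlm-large`, the `Wav2Vec2FeatureExtractor` front end, and the placeholder waveform are assumptions for illustration, not part of this paper.

```python
# Minimal sketch: extract WavLM representations for a downstream task.
# Assumes the Hugging Face `transformers` port of the released checkpoints
# (model id "microsoft/wavlm-large"); not the authors' original training code.
import torch
from transformers import Wav2Vec2FeatureExtractor, WavLMModel

feature_extractor = Wav2Vec2FeatureExtractor.from_pretrained("microsoft/wavlm-large")
model = WavLMModel.from_pretrained("microsoft/wavlm-large").eval()

# WavLM expects 16 kHz mono audio; one second of silence serves as a placeholder here.
waveform = torch.zeros(16000)

inputs = feature_extractor(waveform.numpy(), sampling_rate=16000, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs, output_hidden_states=True)

# Frame-level representations, roughly one frame per 20 ms of audio.
last_layer = outputs.last_hidden_state  # shape: (1, num_frames, 1024)
all_layers = outputs.hidden_states      # tuple: input embeddings + 24 Transformer layers

print(last_layer.shape, len(all_layers))
```

In SUPERB-style evaluation, a learned weighted sum over all hidden layers is often used instead of the last layer alone, since different layers tend to suit different downstream tasks.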