Authors
Egor Lakomkin,Chunyang Wu,Yassir Fathullah,Ozlem Kalinli,Michael L. Seltzer,Christian Fuegen
Identifier
DOI:10.1109/icassp48485.2024.10446898
Abstract
In recent years, Large Language Models (LLMs) have garnered significant attention from the research community due to their exceptional performance and generalization capabilities. In this paper, we introduce a novel method for contextualizing speech recognition models by incorporating LLMs. Our approach casts speech recognition as a mixed-modal language modeling task based on a pretrained LLM. We use audio features, along with optional text tokens for context, to train the system to complete transcriptions in a decoder-only fashion. As a result, the system implicitly learns how to leverage unstructured contextual information during training. Our empirical results demonstrate a significant improvement in performance, with a 6% WER reduction when additional textual context is provided. Moreover, we find that our method performs competitively, improving by 7.5% WER overall and 17% WER on rare words, compared to a baseline contextualized RNN-T system that has been trained on a speech dataset more than twenty-five times larger. Overall, we demonstrate that by adding only a handful of trainable parameters via adapters, we can unlock the contextualized speech recognition capability of the pretrained LLM while maintaining the same text-only input functionality.
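The mixed-modal, decoder-only setup described in the abstract can be sketched as follows: the input sequence to the LLM is the optional text context, followed by the audio-derived embeddings, followed by the transcription to be completed, with the training loss applied only to the transcription positions. This is a minimal illustrative sketch, not the paper's implementation; all names (the function, the token placeholders, the mask convention) are hypothetical.

```python
# Hypothetical sketch of the mixed-modal input layout: optional context
# tokens, then audio-frame embeddings, then the transcript the decoder
# must complete. Identifiers are illustrative, not from the paper.

def build_decoder_input(context_tokens, audio_frames, transcript_tokens):
    """Concatenate the three segments and mark which positions are
    supervised. In decoder-only training, the loss is computed only on
    the transcript positions; context and audio serve as the prompt."""
    sequence = list(context_tokens) + list(audio_frames) + list(transcript_tokens)
    # Loss mask: 0 for prompt positions (context + audio), 1 for targets.
    loss_mask = ([0] * (len(context_tokens) + len(audio_frames))
                 + [1] * len(transcript_tokens))
    return sequence, loss_mask

# Example: 2 context tokens, 3 audio frames, 4 transcript tokens.
seq, mask = build_decoder_input(["<ctx>", "playlist"],
                                ["a0", "a1", "a2"],
                                ["play", "my", "mix", "</s>"])
assert len(seq) == len(mask) == 9
assert sum(mask) == 4  # loss only on the transcription
```

Because context tokens are optional, dropping them recovers the plain text-free ASR input, which is consistent with the abstract's claim that text-only input functionality is preserved.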