Keywords
Machine translation
Computer science
Natural language processing
Artificial intelligence
Language model
Authors
Liangyou Li, Xin Jiang, Qun Liu
Source
Journal: Cornell University - arXiv
Date: 2019-01-01
Citations: 15
Identifiers
DOI: 10.48550/arxiv.1911.03110
Abstract
Previous work on document-level NMT usually focuses on limited contexts because performance degrades on larger contexts. In this paper, we investigate using large contexts, with three main contributions: (1) unlike previous work, which pretrained models on large-scale sentence-level parallel corpora, we use pretrained language models, specifically BERT, which are trained on monolingual documents; (2) we propose context manipulation methods to control the influence of large contexts, which lead to comparable results between systems using small and large contexts; (3) we introduce multi-task training for regularization to keep models from overfitting the training corpora, which, together with a deeper encoder, further improves our systems. Experiments are conducted on the widely used IWSLT data sets with three language pairs, i.e., Chinese--English, French--English, and Spanish--English. Results show that our systems are significantly better than three previously reported document-level systems.
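The abstract does not spell out the architecture, so the following is a minimal, illustrative PyTorch sketch of one way its pieces could fit together: a frozen pretrained BERT encodes the surrounding document context, and a learned sigmoid gate (one plausible form of "context manipulation") controls how much of that context is mixed into the sentence-level encoder states. The class name GatedContextEncoder, the gating formulation, and the choice of bert-base-multilingual-cased are assumptions for illustration, not the paper's published method.

```python
import torch
import torch.nn as nn
from transformers import AutoTokenizer, AutoModel

class GatedContextEncoder(nn.Module):
    """Hypothetical sketch: encode document-level context with a pretrained
    BERT and gate its influence on sentence-level encoder states. The exact
    context-manipulation method in the paper may differ."""

    def __init__(self, bert_name="bert-base-multilingual-cased", d_model=768):
        super().__init__()
        self.bert = AutoModel.from_pretrained(bert_name)
        # Freeze the pretrained LM so only the small fusion layers are trained.
        for p in self.bert.parameters():
            p.requires_grad = False
        self.cross_attn = nn.MultiheadAttention(d_model, num_heads=8,
                                                batch_first=True)
        self.gate = nn.Linear(2 * d_model, d_model)

    def forward(self, sent_states, ctx_ids, ctx_mask):
        # Contextual representations of the surrounding document sentences.
        ctx = self.bert(input_ids=ctx_ids,
                        attention_mask=ctx_mask).last_hidden_state
        # Current-sentence states attend into the large context.
        attended, _ = self.cross_attn(sent_states, ctx, ctx,
                                      key_padding_mask=~ctx_mask.bool())
        # A sigmoid gate scales how much context is injected, so a noisy or
        # overly large context cannot overwhelm the sentence representation.
        g = torch.sigmoid(self.gate(torch.cat([sent_states, attended], dim=-1)))
        return sent_states + g * attended


tok = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
encoder = GatedContextEncoder()
ctx = tok(["First context sentence.", "Second context sentence."],
          return_tensors="pt", padding=True)
sent_states = torch.randn(2, 12, 768)  # placeholder NMT encoder output
fused = encoder(sent_states, ctx["input_ids"], ctx["attention_mask"])
print(fused.shape)  # torch.Size([2, 12, 768])
```

A gate of this kind lets the model fall back to sentence-level behaviour (g near 0) when the context is unhelpful, which matches the abstract's stated goal of keeping small- and large-context systems comparable.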