Keywords
Encoder, Computer science, Machine translation, Artificial intelligence, Representation (politics), Simplicity (philosophy), Dual (grammatical number), Natural language processing, Baseline (sea), Translation (biology), Speech recognition, Linguistics, Epistemology, Operating system, Politics, Geology, Philosophy, Gene, Biochemistry, Chemistry, Law, Oceanography, Political science, Messenger RNA
Authors
Shuming Ma, Dongdong Zhang, Ming Zhou
Identifier
DOI: 10.18653/v1/2020.acl-main.321
Abstract
Most of the existing models for document-level machine translation adopt dual-encoder structures: the representations of the source sentences and the document-level contexts are modeled with two separate encoders. Although these models can make use of the document-level contexts, they do not fully model the interaction between the contexts and the source sentences, and they cannot directly adapt to recent pre-training models (e.g., BERT) that encode multiple sentences with a single encoder. In this work, we propose a simple and effective unified encoder that outperforms the dual-encoder baselines in terms of BLEU and METEOR scores. Moreover, pre-training models can further boost the performance of our proposed model.
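The sketch below is not the authors' code; it is a minimal illustration, under assumed toy dimensions and a hypothetical separator token, of the architectural contrast the abstract describes: a dual-encoder baseline encodes the document-level context and the source sentence separately, while a unified encoder concatenates them and encodes them in one pass, so self-attention can model their interaction directly and the input matches the single-encoder format of pre-trained models such as BERT.

```python
# Minimal sketch (illustrative only): dual-encoder vs. unified encoder
# for document-level MT. Sizes, vocab, and SEP_ID are assumptions.
import torch
import torch.nn as nn

D_MODEL, N_HEAD, N_LAYER = 256, 4, 2
SEP_ID, VOCAB = 1, 1000  # hypothetical separator id and toy vocabulary size


def make_encoder() -> nn.TransformerEncoder:
    layer = nn.TransformerEncoderLayer(D_MODEL, N_HEAD, batch_first=True)
    return nn.TransformerEncoder(layer, N_LAYER)


embed = nn.Embedding(VOCAB, D_MODEL)

# Dual-encoder baseline: two separate encoders; a decoder would have to
# attend to both outputs to relate context and source.
src_encoder, ctx_encoder = make_encoder(), make_encoder()

# Unified encoder: one encoder sees context and source jointly.
unified_encoder = make_encoder()

context = torch.randint(2, VOCAB, (1, 12))  # previous sentences (toy token ids)
source = torch.randint(2, VOCAB, (1, 8))    # current source sentence

# Dual-encoder path: two independent representations.
ctx_repr = ctx_encoder(embed(context))
src_repr = src_encoder(embed(source))

# Unified path: concatenate with a separator and encode once, so every
# source token can attend to every context token inside the encoder.
sep = torch.full((1, 1), SEP_ID)
joint = torch.cat([context, sep, source], dim=1)
joint_repr = unified_encoder(embed(joint))

print(ctx_repr.shape, src_repr.shape, joint_repr.shape)
```

Running this prints the shapes of the two separate representations versus the single joint representation; the joint sequence is also the form a BERT-style pre-trained encoder expects, which is the adaptation advantage the abstract points out.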