Computer science
Natural language generation
Automatic summarization
Question answering
Language model
Natural language processing
Artificial intelligence
Transformer
Natural language understanding
NIST
Natural language
Generative grammar
Context
Benchmark
Authors
Li Dong, Nan Yang, Wenhui Wang, Furu Wei, Xiaodong Liu, Yu Wang, Jianfeng Gao, Ming Zhou, Hsiao-Wuen Hon
Source
Journal: Cornell University - arXiv
Date: 2019-05-08
Volume/Pages: 32: 13042-13054
Citations: 508
Abstract
This paper presents a new Unified pre-trained Language Model (UniLM) that can be fine-tuned for both natural language understanding and generation tasks. The model is pre-trained using three types of language modeling tasks: unidirectional, bidirectional, and sequence-to-sequence prediction. The unified modeling is achieved by employing a shared Transformer network and utilizing specific self-attention masks to control what context the prediction conditions on. UniLM compares favorably with BERT on the GLUE benchmark, and the SQuAD 2.0 and CoQA question answering tasks. Moreover, UniLM achieves new state-of-the-art results on five natural language generation datasets, including improving the CNN/DailyMail abstractive summarization ROUGE-L to 40.51 (2.04 absolute improvement), the Gigaword abstractive summarization ROUGE-L to 35.75 (0.86 absolute improvement), the CoQA generative question answering F1 score to 82.5 (37.1 absolute improvement), the SQuAD question generation BLEU-4 to 22.12 (3.75 absolute improvement), and the DSTC7 document-grounded dialog response generation NIST-4 to 2.67 (human performance is 2.65). The code and pre-trained models are available at https://github.com/microsoft/unilm.
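The abstract states that the three objectives share one Transformer and differ only in the self-attention mask that decides which positions a prediction may condition on. The following is a minimal sketch of such masks, not the authors' released code: it assumes a single packed input of src_len source tokens followed by tgt_len target tokens, and build_mask is a hypothetical helper name introduced here for illustration.

```python
# Minimal sketch (assumption, not the microsoft/unilm implementation) of the
# three self-attention masks described in the abstract.
# mask[i, j] = 1 means position i may attend to position j; 0 blocks it.
import torch

def build_mask(mode: str, src_len: int, tgt_len: int = 0) -> torch.Tensor:
    n = src_len + tgt_len
    if mode == "bidirectional":
        # Every token sees every other token (cloze-style objective).
        return torch.ones(n, n)
    if mode == "unidirectional":
        # Each token sees only itself and tokens to its left (left-to-right LM).
        return torch.tril(torch.ones(n, n))
    if mode == "seq2seq":
        # Source tokens attend bidirectionally within the source segment;
        # target tokens attend to the whole source plus their own left context.
        mask = torch.zeros(n, n)
        mask[:, :src_len] = 1                                  # all rows see the source
        mask[src_len:, src_len:] = torch.tril(torch.ones(tgt_len, tgt_len))  # causal within target
        return mask                                            # source rows never see target columns
    raise ValueError(f"unknown mode: {mode}")

# Example: a 3-token source and 2-token target under the sequence-to-sequence objective.
print(build_mask("seq2seq", src_len=3, tgt_len=2))
```

In practice such a 0/1 matrix would be converted into an additive attention bias (0 where attention is allowed, a large negative value where it is blocked) applied before the softmax in every layer of the shared Transformer.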