Coreference
Computer science
Security token
Task (project management)
Span (engineering)
Benchmark (surveying)
Artificial intelligence
Selection (genetic algorithm)
Relation (database)
Masking (illustration)
Resolution (logic)
Natural language processing
Machine learning
Data mining
Engineering
Art
Civil engineering
Visual arts
Economics
Management
Geography
Computer security
Geodesy
Authors
Mandar Joshi, Danqi Chen, Yinhan Liu, Daniel S. Weld, Luke Zettlemoyer, Omer Levy
Abstract
We present SpanBERT, a pre-training method that is designed to better represent and predict spans of text. Our approach extends BERT by (1) masking contiguous random spans, rather than random tokens, and (2) training the span boundary representations to predict the entire content of the masked span, without relying on the individual token representations within it. SpanBERT consistently outperforms BERT and our better-tuned baselines, with substantial gains on span selection tasks such as question answering and coreference resolution. In particular, with the same training data and model size as BERT-large, our single model obtains 94.6% and 88.7% F1 on SQuAD 1.1 and 2.0 respectively. We also achieve a new state of the art on the OntoNotes coreference resolution task (79.6% F1), strong performance on the TACRED relation extraction benchmark, and even gains on GLUE.
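The two ideas in the abstract lend themselves to a compact illustration. Below is a minimal sketch, not the released implementation: mask_spans samples contiguous spans with geometrically distributed lengths (the paper samples lengths from Geo(0.2) clipped at 10 and masks roughly 15% of tokens; whole-word alignment and BERT's 80/10/10 replacement rule are omitted here), and SpanBoundaryObjective is a hypothetical rendering of the span boundary objective, predicting each token inside a masked span from the hidden states of the two boundary tokens plus a relative position embedding. All function names and hyperparameters are illustrative assumptions.

```python
import numpy as np
import torch
import torch.nn as nn

rng = np.random.default_rng(0)

def mask_spans(tokens, mask_ratio=0.15, p=0.2, max_len=10, mask_tok="[MASK]"):
    """Mask contiguous spans until ~mask_ratio of the tokens are covered.

    Span lengths follow a geometric distribution Geo(p) clipped at max_len,
    so short spans dominate. Simplified sketch: overlapping spans are
    resampled rather than merged.
    """
    budget = max(1, int(len(tokens) * mask_ratio))
    masked, covered = list(tokens), set()
    while len(covered) < budget:
        length = min(int(rng.geometric(p)), max_len, len(tokens))
        start = int(rng.integers(0, len(tokens) - length + 1))
        span = range(start, start + length)
        if covered & set(span):
            continue  # resample instead of merging overlapping spans
        for i in span:
            masked[i] = mask_tok
        covered.update(span)
    return masked, sorted(covered)

class SpanBoundaryObjective(nn.Module):
    """Hypothetical SBO head: predict each token inside a masked span from
    the hidden states of the two tokens just outside the span boundaries,
    concatenated with a relative position embedding, through a two-layer
    feed-forward network with GeLU activations and layer normalization."""
    def __init__(self, hidden, vocab_size, max_span=10):
        super().__init__()
        self.pos = nn.Embedding(max_span, hidden)
        self.mlp = nn.Sequential(
            nn.Linear(3 * hidden, hidden), nn.GELU(), nn.LayerNorm(hidden),
            nn.Linear(hidden, hidden), nn.GELU(), nn.LayerNorm(hidden),
        )
        self.out = nn.Linear(hidden, vocab_size)

    def forward(self, h, start, end):
        # h: (seq_len, hidden) encoder output; the span occupies [start, end].
        left, right = h[start - 1], h[end + 1]
        feats = [torch.cat([left, right, self.pos.weight[i - start]])
                 for i in range(start, end + 1)]
        return self.out(self.mlp(torch.stack(feats)))  # (span_len, vocab_size)
```

In the paper, the cross-entropy from this boundary-based head is summed with the standard masked-language-model loss for each token in a masked span, so the boundary representations are trained to summarize the span's full content.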