Keywords
Coreference
Computer science
Benchmark (evaluation)
Artificial intelligence
Margin (machine learning)
Baseline
Inference
Language model
Resolution (logic)
Machine learning
Transformer (deep learning)
Reinforcement learning
Natural language processing
Authors
Tuan Lai, Trung Bui, Doo Soon Kim
Identifier
DOI: 10.1109/icassp43922.2022.9746254
Abstract
Since the first end-to-end neural coreference resolution model was introduced, many extensions to the model have been proposed, ranging from using higher-order inference to directly optimizing evaluation metrics using reinforcement learning. Despite improving the coreference resolution performance by a large margin, these extensions add substantial extra complexity to the original model. Motivated by this observation and the recent advances in pre-trained Transformer language models, we propose a simple yet effective baseline for coreference resolution. Even though our model is a simplified version of the original neural coreference resolution model, it achieves impressive performance, outperforming all recent extended works on the public English OntoNotes benchmark. Our work provides evidence for the necessity of carefully justifying the complexity of existing or newly proposed models, as introducing a conceptual or practical simplification to an existing model can still yield competitive results.
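To make the abstract's claim concrete, the following is a minimal, hypothetical PyTorch sketch of the general mention-ranking recipe that end-to-end neural coreference models follow: enumerate candidate spans over contextual token states, score each span as a mention, then score (span, antecedent) pairs in a single pass, with a dummy antecedent standing for "not anaphoric." The class name, layer sizes, and span-width limit are illustrative placeholders, not the authors' implementation, and the higher-order inference and reinforcement-learning extensions the abstract mentions are deliberately omitted, in the spirit of the simplified baseline.

```python
import torch
import torch.nn as nn


class CorefBaseline(nn.Module):
    """Minimal mention-ranking coreference sketch (illustrative only):
    span enumeration, a mention scorer, and one antecedent-scoring pass
    with no higher-order inference and no RL-based metric optimization."""

    def __init__(self, hidden=64, ffnn=32, max_span_len=5):
        super().__init__()
        self.max_span_len = max_span_len
        span_dim = 2 * hidden  # span vector = [start state ; end state]
        self.mention_scorer = nn.Sequential(
            nn.Linear(span_dim, ffnn), nn.ReLU(), nn.Linear(ffnn, 1))
        self.pair_scorer = nn.Sequential(
            nn.Linear(2 * span_dim, ffnn), nn.ReLU(), nn.Linear(ffnn, 1))

    def forward(self, token_states):
        # token_states: (n_tokens, hidden) contextual vectors; in a real
        # system these would come from a pretrained Transformer encoder.
        n = token_states.size(0)
        spans = [(i, j) for i in range(n)
                 for j in range(i, min(i + self.max_span_len, n))]
        g = torch.stack([torch.cat([token_states[i], token_states[j]])
                         for i, j in spans])
        s_m = self.mention_scorer(g).squeeze(-1)  # per-span mention scores
        links = []
        for a in range(len(spans)):
            # The dummy antecedent has a fixed score of 0 ("not anaphoric").
            best_score, best_ante = torch.tensor(0.0), None
            for b in range(a):
                s = (s_m[a] + s_m[b]
                     + self.pair_scorer(torch.cat([g[a], g[b]])).squeeze(-1))
                if s > best_score:
                    best_score, best_ante = s, spans[b]
            links.append((spans[a], best_ante))
        return links


# Toy usage: random vectors stand in for pretrained Transformer outputs.
torch.manual_seed(0)
model = CorefBaseline(hidden=64)
print(model(torch.randn(12, 64))[:3])
```

The sketch highlights where the complexity trade-off lives: the extensions the abstract discusses add machinery on top of this single scoring pass, whereas the proposed baseline keeps the pass simple and relies on the pretrained Transformer encoder for representational strength.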