Relation (database)
Relation extraction
Computer science
Context (archaeology)
Inference
Annotation
Comprehension
Domain (mathematical analysis)
Natural language processing
Artificial intelligence
Natural language
Data science
Information retrieval
Programming language
Data mining
Mathematics
Paleontology
Biology
Mathematical analysis
Authors
Junpeng Li, Zixia Jia, Zilong Zheng
Identifier
DOI: 10.18653/v1/2023.emnlp-main.334
Abstract
Document-level Relation Extraction (DocRE), which aims to extract relations from a long context, is a critical challenge in achieving fine-grained structural comprehension and generating interpretable document representations. Inspired by recent advances in the in-context learning capabilities that emerge from large language models (LLMs), such as ChatGPT, we aim to design an automated annotation method for DocRE with minimal human effort. Unfortunately, vanilla in-context learning is infeasible for DocRE due to the large number of predefined fine-grained relation types and the uncontrolled generation of LLMs. To tackle this issue, we propose a method integrating an LLM and a natural language inference (NLI) module to generate relation triples, thereby augmenting document-level relation datasets. We demonstrate the effectiveness of our approach by introducing an enhanced dataset known as DocGNRE, which excels in re-annotating numerous long-tail relation types. We are confident that our method holds the potential for broader applications in domain-specific relation type definitions and offers tangible benefits in advancing generalized language semantic comprehension.
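The abstract describes a pipeline in which an LLM proposes candidate relation triples for a document and an NLI module filters them against the document text. The sketch below illustrates that filtering idea only; it is not the authors' released code, and the model checkpoint, relation templates, the `propose_triples` stub, and the 0.9 threshold are illustrative assumptions.

```python
# Minimal sketch (not the authors' released code) of the LLM + NLI idea in the
# abstract: an LLM proposes candidate relation triples for a document, and an
# NLI model keeps only triples whose verbalized form is entailed by the text.
# Checkpoint, templates, stub, and threshold are illustrative assumptions.
from transformers import pipeline

# Any MNLI-style cross-encoder can serve as the NLI module; this checkpoint is an assumption.
nli = pipeline("text-classification", model="microsoft/deberta-large-mnli")

# Hypothetical templates that verbalize a relation type as a hypothesis sentence.
TEMPLATES = {
    "founded_by": "{head} was founded by {tail}.",
    "country_of_origin": "{head} originates from {tail}.",
}


def propose_triples(document: str) -> list[tuple[str, str, str]]:
    """Placeholder for the LLM step: prompt an LLM (e.g., ChatGPT) to list
    (head, relation, tail) candidates for the document."""
    raise NotImplementedError("plug in an LLM call here")


def verify_triples(document: str, candidates, threshold: float = 0.9):
    """Keep only candidates whose verbalization is entailed by the document."""
    kept = []
    for head, relation, tail in candidates:
        template = TEMPLATES.get(relation)
        if template is None:
            continue  # skip relation types we cannot verbalize
        hypothesis = template.format(head=head, tail=tail)
        result = nli({"text": document, "text_pair": hypothesis})
        if isinstance(result, list):  # some transformers versions wrap the output
            result = result[0]
        if "entail" in result["label"].lower() and result["score"] >= threshold:
            kept.append((head, relation, tail))
    return kept
```

In the setting the abstract describes, triples that survive this entailment check would then be merged into the document-level annotations, yielding the enhanced DocGNRE dataset.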