Computer science
Relation extraction
Inference
Prior and posterior
Relation (database)
Redundancy (engineering)
Judgment
Task (project management)
Information extraction
Artificial intelligence
Belief propagation
Machine learning
Ground truth
Relational database
Natural language processing
Domain (mathematics)
Data mining
Algorithm
Mathematics
Philosophy
Decoding methods
Management
Epistemology
Pure mathematics
Economics
Operating system
Authors
Juan Chen, Jie Hu, Tianrui Li, Fei Teng, Shengdong Du
Identifiers
DOI:10.1016/j.eswa.2023.122007
Abstract
Relational triple extraction is a crucial task in the field of information extraction, which aims to identify all triples in natural language text. Existing methods primarily focus on the issue of overlapping triples. However, most of them must perform the same operation on every predefined relation when solving this problem, which leads to relation redundancy. In addition, most methods suffer from error propagation: during training they use ground-truth labels as prior knowledge for prediction at each stage, whereas during inference they must rely on the labels predicted in the previous stage to make predictions in the following stages. To address these problems, we propose an effective relation-first detection model for relational triple extraction (ERFD-RTE). The proposed model first detects the potential relations in a sentence and then performs entity recognition for each detected relation, which solves the overlapping-triple issue while avoiding additional computation on redundant relations. For the error propagation problem, we design a random label error strategy in the training phase, which narrows the gap between training and inference. Experimental results demonstrate that ERFD-RTE outperforms other baselines, improving the F1 score to 92.7% (+0.7%) on NYT-P and NYT-E, 92.9% (+0.3%) on WebNLG-P, 89.3% (+0.9%) on WebNLG-E, and 83.71% (+1.5%) on ADE. Additional analysis shows that ERFD-RTE can effectively extract overlapping triples.
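The following is a minimal sketch, not the authors' implementation, of the two ideas the abstract describes: a relation-first pipeline (detect relations, then recognize entities only for those relations) and a random label error strategy (corrupting the gold relation set fed to the entity stage during training so it resembles the noisy predictions seen at inference). All function names (detect_relations, extract_entities, corrupt_relations), the keyword heuristic, and the error rate are illustrative assumptions.

```python
import random
from typing import Dict, List, Set

PREDEFINED_RELATIONS = ["founded", "born_in", "works_for", "located_in"]

def detect_relations(sentence: str) -> Set[str]:
    """Stage 1 (placeholder): return the relations judged present in the sentence.
    A real model would score every predefined relation and keep those above a threshold."""
    # Hypothetical keyword heuristic standing in for a learned relation detector.
    keywords = {"founded": "founded", "born_in": "born",
                "works_for": "works", "located_in": "in"}
    return {rel for rel, kw in keywords.items() if kw in sentence}

def extract_entities(sentence: str, relation: str) -> List[tuple]:
    """Stage 2 (placeholder): recognize (subject, object) pairs for one specific relation.
    A real model would tag spans conditioned on the relation."""
    return []

def corrupt_relations(gold: Set[str], error_rate: float = 0.15) -> Set[str]:
    """Random label error strategy (assumed form): with probability `error_rate`,
    drop a gold relation or inject a spurious one, so the entity stage also
    trains on imperfect relation inputs."""
    noisy = set(gold)
    for rel in PREDEFINED_RELATIONS:
        if random.random() < error_rate:
            if rel in noisy:
                noisy.discard(rel)   # simulate a missed relation
            else:
                noisy.add(rel)       # simulate a false-positive relation
    return noisy

def training_step(sentence: str, gold_relations: Set[str]) -> Dict[str, List[tuple]]:
    # During training: feed the *corrupted* gold relations to the entity stage.
    relations = corrupt_relations(gold_relations)
    return {rel: extract_entities(sentence, rel) for rel in relations}

def inference_step(sentence: str) -> Dict[str, List[tuple]]:
    # During inference: only the relations detected in stage 1 are processed,
    # so no computation is spent on the remaining (redundant) relations.
    relations = detect_relations(sentence)
    return {rel: extract_entities(sentence, rel) for rel in relations}

if __name__ == "__main__":
    print(inference_step("Steve Jobs founded Apple in California."))
```

The sketch only illustrates the control flow; in the paper's setting both stages would be learned modules trained jointly, and the corruption rate would be a tuned hyperparameter.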