Relation Extraction
Keywords: computer science, generalization, transfer learning, artificial intelligence, information extraction, labeled data, knowledge graph, natural language processing, machine learning, data mining
Authors
Peng Da, Zhongmin Pei, Delin Mo
Identifier
DOI:10.1109/nnice58320.2023.10105766
Abstract
Joint entity and relation extraction has achieved impressive advances in NLP, with applications such as document understanding and knowledge graph construction. Typical methods break the joint task down into several smaller components or stages for ease of implementation, but this discards the interdependencies encoded in the triple. Hence, we propose to model the triple jointly in a single module. Furthermore, labeling data for joint entity and relation extraction is costly and domain-specific, so it is important to improve performance in low-resource and domain-adaptation settings. To address this, we draw on two information-rich sources: models pre-trained on large data and multi-domain text corpora. Pre-training gives the model the fundamental ability to perform joint entity and relation extraction; meta-learning on multi-domain text then improves the model's generalization, enabling it to perform well even with limited data. In this paper we present MTL-JER, a Meta-Transfer Learning method for Joint Entity and Relation Extraction in low-resource settings. Through exhaustive experiments on five datasets, we show that our model obtains optimal results.
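The abstract does not specify MTL-JER's training procedure, but the general idea it describes, meta-learning across multiple source domains so that a model adapts quickly to a new low-resource domain, can be illustrated with a Reptile-style meta-update loop. The sketch below is a hypothetical toy: a one-parameter regression model stands in for the extraction model, and scalar "slopes" stand in for domains; none of the names or numbers come from the paper.

```python
import random

# Illustrative sketch only: a Reptile-style meta-learning loop on a toy
# 1-D regression problem. The actual MTL-JER architecture and training
# details are not given in the abstract; everything here is a stand-in
# for "meta-learning over multi-domain text".

def inner_sgd(theta, task_slope, steps=32, lr=0.02):
    """Adapt theta to one 'domain' (task): fit y = task_slope * x."""
    for _ in range(steps):
        x = random.uniform(-1.0, 1.0)
        pred = theta * x
        grad = 2.0 * (pred - task_slope * x) * x  # d/dtheta of squared error
        theta -= lr * grad
    return theta

def reptile(domains, meta_steps=200, meta_lr=0.5, seed=0):
    """Outer loop: move theta toward each task-adapted solution."""
    random.seed(seed)
    theta = 0.0
    for _ in range(meta_steps):
        slope = random.choice(domains)        # sample a training domain
        adapted = inner_sgd(theta, slope)     # fast inner-loop adaptation
        theta += meta_lr * (adapted - theta)  # Reptile meta-update
    return theta

domains = [1.0, 2.0, 3.0]   # three hypothetical source domains
theta0 = reptile(domains)
# After meta-training, theta0 adapts to an unseen domain with few steps,
# mirroring the low-resource setting described in the abstract.
few_shot = inner_sgd(theta0, 2.5, steps=8)
```

The design point is the outer update `theta += meta_lr * (adapted - theta)`: rather than optimizing for any single domain, it moves the initialization toward a point from which every domain is reachable in a few gradient steps, which is what makes the few-shot adaptation cheap.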