Authors
Zhengwei Zhai, Rongli Fan, Jie Huang, Naixue Xiong, Lijuan Zhang, Jian Wan, Lei Zhang
Identifier
DOI: 10.1016/j.csl.2024.101643
Abstract
Relational triple extraction is a critical step in knowledge graph construction. Compared to pipeline-based extraction, joint extraction is gaining more attention because it can better exploit entity and relation information without causing error propagation. The challenge in joint extraction, however, lies in handling overlapping triples. Existing approaches rely on sequential steps or multiple modules, which tend to accumulate errors and suffer interference from redundant information. In this study, we propose a novel joint extraction model with a cross-attention mechanism and global pointers with a context shield window. Specifically, our method first feeds the text into a pre-trained RoBERTa model to generate word vector representations. These embeddings are then passed through a modified cross-attention layer together with entity type embeddings to compensate for missing entity type information. Next, we employ the global pointer to recast the extraction problem as quintuple extraction, which resolves the issue of overlapping triples. Notably, we design a context shield window on the global pointer, which restricts entity extraction to a limited range and thereby helps identify correct entities. Finally, the model's robustness to malicious samples is improved by adding adversarial training to the training process. Our approach outperforms mainstream models, achieving strong results on three publicly available datasets.
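The abstract's global pointer with a context shield window can be illustrated with a minimal sketch: a global pointer scores every (start, end) token pair as a candidate entity span, and a shield window masks out spans whose length exceeds a local range. The projection matrices `Wq`/`Wk`, the `window` parameter, and the mask value are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def global_pointer_scores(h, Wq, Wk, window):
    """Score every (start, end) token pair as a candidate entity span.

    h: (seq_len, d) token embeddings (e.g. from a RoBERTa encoder).
    Wq, Wk: (d, d_head) projections producing start/end representations.
    Spans with end < start, or longer than `window`, are masked out --
    a minimal stand-in for the paper's "context shield window", which
    limits entity search to a local range.  All names here are
    illustrative assumptions.
    """
    q = h @ Wq                 # (seq_len, d_head) start representations
    k = h @ Wk                 # (seq_len, d_head) end representations
    scores = q @ k.T           # scores[i, j] scores the span i..j
    i = np.arange(len(h))[:, None]
    j = np.arange(len(h))[None, :]
    valid = (j >= i) & (j - i < window)   # shield-window mask
    return np.where(valid, scores, -1e9)  # suppress invalid spans

rng = np.random.default_rng(0)
h = rng.normal(size=(6, 8))                         # 6 tokens, dim 8
s = global_pointer_scores(h,
                          rng.normal(size=(8, 4)),
                          rng.normal(size=(8, 4)),
                          window=3)
```

In a trained model, spans whose masked score exceeds a threshold would be read off as entities; the same pairwise-scoring idea extends to the quintuple formulation by scoring subject and object spans jointly with a relation.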