Computer science
Generative grammar
Context (archaeology)
Relation (database)
Relation extraction
Extraction (chemistry)
Artificial intelligence
Generative model
Machine learning
Data mining
Chromatography
Paleontology
Chemistry
Biology
Authors
Zhenbin Chen, Zhixin Li, Yufei Zeng, Canlong Zhang, Huifang Ma
Identifier
DOI:10.1016/j.eswa.2024.123478
Abstract
Prompt-tuning was proposed to bridge the gap between pretraining and downstream tasks, and it has achieved promising results in Relation Extraction (RE) tasks in recent years. Although existing prompt-based RE methods have outperformed methods based on the fine-tuning paradigm, they require domain experts to design prompt templates, making them hard to generalize. In this paper, we propose a Generative context-Aware Prompt-tuning method (GAP) to address these limitations. Our method consists of three crucial modules: (1) a pretrained prompt generator module that extracts or generates relation triggers from the context and embeds them into the prompt tokens, (2) an in-domain adaptive pretraining module that further trains the Pretrained Language Models (PLMs) to improve the adaptability of the model, and (3) a joint contrastive loss that prevents PLMs from generating results unrelated to the relation labels while optimizing our model more effectively. We observe that the context-enhanced prompt tokens generated by GAP better guide PLMs to make accurate relation predictions, and that in-domain pretraining effectively injects domain knowledge to enhance the robustness of the model. We conduct experiments on four public RE datasets under supervised and few-shot settings. The experimental results demonstrate the superiority of GAP over existing benchmark methods; GAP shows remarkable improvements in few-shot settings, with average F1 score gains of 3.5%, 2.7%, and 3.4% on the TACRED, TACREV, and Re-TACRED datasets, respectively. Furthermore, GAP also achieves state-of-the-art (SOTA) performance in supervised settings.
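The abstract mentions a joint contrastive loss that keeps PLM predictions aligned with the relation labels. As a rough illustration of this kind of objective (not the paper's actual GAP formulation), the following is a minimal PyTorch sketch assuming `mask_embeddings` are taken at the prompt's [MASK] position and `logits` score candidate relation label words; the function name, shapes, and the `alpha` weighting are illustrative assumptions.

```python
# Hypothetical sketch of a joint objective for prompt-based RE:
# cross-entropy over relation-label logits plus a supervised contrastive
# term over [MASK]-position embeddings. Shapes and weighting are assumptions.
import torch
import torch.nn.functional as F


def joint_contrastive_loss(mask_embeddings, logits, labels,
                           temperature=0.1, alpha=0.5):
    """mask_embeddings: (B, H) embeddings at the prompt [MASK] position
    logits:          (B, num_relations) scores over relation label words
    labels:          (B,) gold relation indices
    """
    # Standard cross-entropy over the relation-label logits.
    ce = F.cross_entropy(logits, labels)

    # Supervised contrastive term: pull together [MASK] embeddings that
    # share a relation label, push apart the rest.
    z = F.normalize(mask_embeddings, dim=-1)          # (B, H)
    sim = z @ z.t() / temperature                      # (B, B)
    batch = labels.size(0)
    eye = torch.eye(batch, dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(eye, float("-inf"))          # drop self-similarity
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)

    # Positive pairs: same relation label, excluding self.
    pos_mask = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~eye
    pos_count = pos_mask.sum(dim=1).clamp(min=1)
    contrastive = -(log_prob.masked_fill(~pos_mask, 0.0)).sum(dim=1) / pos_count
    contrastive = contrastive.mean()

    return alpha * ce + (1 - alpha) * contrastive


if __name__ == "__main__":
    torch.manual_seed(0)
    emb = torch.randn(8, 768)           # toy [MASK]-position embeddings
    logits = torch.randn(8, 40)         # e.g. 40 TACRED-style relation classes
    labels = torch.randint(0, 40, (8,))
    print(joint_contrastive_loss(emb, logits, labels))
```

The `alpha` hyperparameter simply trades off the two terms; how GAP actually balances and formulates its joint loss is described in the paper itself.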