Projectile
Computer science
Relation (database)
Relation extraction
Artificial intelligence
Extraction (chemistry)
One-shot
Natural language processing
Data mining
Chromatography
Engineering
Chemistry
Mechanical engineering
Organic chemistry
Authors
Xiaoyan Zhao,Min Yang,Qiang Qu,Ruifeng Xu
Identifier
DOI:10.1109/tnnls.2024.3365858
Abstract
Relation extraction (RE) tends to struggle when supervised training data is scarce and difficult to collect. In this article, we elicit relational and factual knowledge from large pretrained language models (PLMs) for few-shot RE (FSRE) with prompting techniques. Concretely, we automatically generate a diverse set of natural language templates and modulate the PLM's behavior through these prompts for FSRE. To mitigate the template bias that destabilizes few-shot learning, we propose a simple yet effective template regularization network (TRN) that prevents deep networks from overfitting uncertain templates and thus stabilizes FSRE models. TRN alleviates the template bias through three mechanisms: 1) an attention mechanism over the mini-batch to weight each template; 2) a ranking regularization mechanism to regularize the attention weights and constrain the importance of uncertain templates; and 3) a template calibration module with two calibrating techniques to modify the uncertain templates in the lowest-ranked group. Experimental results on two benchmark datasets (i.e., FewRel and NYT) show that our model has robust superiority over strong competitors. For reproducibility, we will release our code and data upon the publication of this article.
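The first two TRN mechanisms described above — attention weighting of templates within a mini-batch, and a ranking-based regularizer that suppresses uncertain templates — can be sketched in a minimal, framework-free form. This is an illustrative approximation, not the authors' released implementation: the feature shapes, the dot-product scoring, and the hinge-style margin penalty are all assumptions made for clarity.

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a 1-D score vector."""
    e = np.exp(x - x.max())
    return e / e.sum()

def template_attention(template_feats, query_feat):
    """Weight each template in the mini-batch by its relevance to the query.

    template_feats: (T, d) array, one encoded template per row (assumed shape).
    query_feat:     (d,) encoded query instance.
    Returns a (T,) vector of attention weights summing to 1.
    """
    scores = template_feats @ query_feat  # dot-product relevance (assumption)
    return softmax(scores)

def ranking_regularizer(weights, margin=0.1):
    """Hinge-style penalty encouraging a clear gap between consecutive
    templates when sorted by attention weight, so low-ranked (uncertain)
    templates are pushed toward small importance. The margin form is an
    illustrative choice, not taken from the paper."""
    sorted_w = np.sort(weights)[::-1]          # descending by weight
    gaps = sorted_w[:-1] - sorted_w[1:]        # gap between adjacent ranks
    return float(np.maximum(0.0, margin - gaps).sum())

# Toy mini-batch of three 2-D template encodings and one query.
feats = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
query = np.array([1.0, 1.0])
w = template_attention(feats, query)
reg = ranking_regularizer(w)
```

In training, `reg` would be added to the task loss so that gradient descent both fits the relation classifier and keeps the attention distribution from spreading mass onto unreliable templates.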