Keywords
Computer Science; Artificial Intelligence; Named Entity Recognition; Natural Language Processing; Embeddings; Entity Linking; Transfer Learning; Contrastive Learning; Similarity Metric; Pattern Recognition; Knowledge Base; Robustness
Authors
Wei Li, Hui Li, Jingguo Ge, Lei Zhang, Liangxiong Li, Bingzhen Wu
Identifier
DOI: 10.1109/ijcnn54540.2023.10191439
Abstract
Few-shot Named Entity Recognition (NER) aims to recognize unseen named entities from a tiny support set of seen named entities and their labels, which differs fundamentally from traditional supervised NER. Contrastive learning has become a popular solution for few-shot NER: it improves the robustness of NER to unlabeled entities by learning a similarity metric that measures the semantic similarity between test samples and entity labels. However, existing contrastive-learning-based NER methods learn word embeddings in the source and target domains separately, ignoring connections between entities that share a label and limiting the effectiveness of contrastive learning. In this paper, we propose CDANER, a novel few-shot NER framework that jointly models texts from different domains and optimizes a generalized objective of differentiating between words at all stages. The proposed model builds a cross-domain attention layer to enhance the feature representations of words and to transfer entity similarity information from the source domain to the target domain, which significantly reduces the divergence between entities with the same label. Experimental results on the largest few-shot NER dataset show that CDANER significantly outperforms all baseline methods, verifying the effectiveness and robustness of the proposed model.
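The metric-based classification idea described in the abstract, assigning a test word the label whose support-set examples are semantically closest, can be sketched as follows. This is a minimal illustration only: the mean-pooled label prototypes and cosine similarity are assumptions for the sketch, not the exact similarity metric or contrastive objective of the paper.

```python
import math

def label_prototypes(support_embs, support_labels):
    """Mean embedding per label over the support set (illustrative choice)."""
    protos = {}
    for lab in set(support_labels):
        vecs = [e for e, l in zip(support_embs, support_labels) if l == lab]
        protos[lab] = [sum(xs) / len(vecs) for xs in zip(*vecs)]
    return protos

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb + 1e-8)

def classify(query, protos):
    """Label a query word by its most similar label prototype."""
    return max(protos, key=lambda lab: cosine(query, protos[lab]))

# Toy 2-D embeddings: two support examples per entity label.
support = [[1.0, 0.1], [0.9, 0.0], [0.0, 1.0], [0.1, 0.9]]
labels = ["PER", "PER", "LOC", "LOC"]
protos = label_prototypes(support, labels)
print(classify([1.0, 0.2], protos))  # → PER
```

In the actual framework, the embeddings would come from a jointly trained encoder over both source and target domains, with the cross-domain attention layer refining them before the similarity comparison.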