Computer science
Normalization (sociology)
Language model
Natural language processing
Artificial intelligence
Synonym (taxonomy)
Benchmark (surveying)
Task (project management)
Sociology
Genus
Economics
Biology
Botany
Management
Anthropology
Geography
Geodesy
Authors
Zhaohong Lai, Bojie Fu, Shangfei Wei, Xiaodong Shi
Identifier
DOI:10.1007/978-3-031-17189-5_5
Abstract
Biomedical entity normalization (BEN) aims to link entity mentions in a biomedical text to their referent entities in a knowledge base. Recently, the paradigm of large-scale language model pre-training and fine-tuning has achieved superior performance on the BEN task. However, pre-trained language models such as SAPBERT [21] typically contain hundreds of millions of parameters, and fine-tuning all of them is computationally expensive. Recent research, such as prompt-based techniques, has been proposed to reduce the number of parameters tuned during training. We therefore propose Prompt-BEN, a framework that enhances BEN with continuous prompts and needs to fine-tune only a small number of prompt parameters. Our method employs embeddings with a continuous prefix prompt to capture the semantic similarity between mentions and terms. We also design a contrastive loss with a synonym-marginalized strategy for the BEN task. Experimental results on three benchmark datasets demonstrate that our method achieves linking accuracy competitive with, or even exceeding, state-of-the-art fine-tuning-based models while tuning about 600 times fewer parameters.
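The abstract's core idea can be illustrated with a minimal sketch. The code below is an assumption-laden toy, not the paper's implementation: random NumPy vectors stand in for frozen SAPBERT-style embeddings, a small additive `prefix` vector stands in for the continuous prefix prompt (the only "tuned" parameters), candidates are ranked by cosine similarity, and a hinge-style objective approximates the synonym-marginalized contrastive loss.

```python
import numpy as np

rng = np.random.default_rng(0)
DIM = 8

# Stand-ins for frozen encoder outputs: one mention embedding and
# four candidate KB term embeddings (all hypothetical).
mention = rng.normal(size=DIM)
candidates = rng.normal(size=(4, DIM))
synonym_mask = np.array([True, True, False, False])  # first two = gold synonyms

# Continuous prefix prompt: a small trainable vector added to the
# mention representation; in Prompt-BEN only such prompt parameters
# would be fine-tuned, the encoder stays frozen.
prefix = np.zeros(DIM)

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

def scores(prefix):
    """Similarity of the prompted mention to each candidate term."""
    q = mention + prefix
    return np.array([cosine(q, c) for c in candidates])

def contrastive_loss(prefix, margin=0.2):
    """Hinge-style stand-in for a synonym-marginalized contrastive loss:
    every gold synonym should outscore every non-synonym by `margin`."""
    s = scores(prefix)
    pos, neg = s[synonym_mask], s[~synonym_mask]
    return np.maximum(0.0, margin + neg[None, :] - pos[:, None]).mean()

# Toy gradient-free update: pull the prompted mention toward the mean
# gold-synonym embedding (a real system would backprop through the loss).
target = candidates[synonym_mask].mean(axis=0)
for _ in range(50):
    prefix += 0.1 * (target - (mention + prefix))

ranking = np.argsort(-scores(prefix))
print("ranked candidates:", ranking, "loss:", round(contrastive_loss(prefix), 4))
```

After the updates, the prompted mention sits near the gold synonyms, so they rank above the non-synonyms; only the 8-dimensional `prefix` changed, mirroring the parameter-efficiency claim.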