Keywords
word2vec, computer science, named entity recognition, benchmark, artificial intelligence, natural language processing, language model, machine learning, embedding
Authors
Yalong Xie, Aiping Li, Chongfu Zhong
Identifier
DOI:10.1145/3512576.3512603
Abstract
Named entity recognition (NER) is a stepping stone for numerous downstream applications, and medical NER is an important part of NER. Prior studies have applied various pre-trained language models (PLMs) to medical NER, but they have not systematically investigated the pros and cons of these PLMs. In this paper, we investigate the pros and cons of prevalent PLMs in medical NER. Specifically, we first pre-train three PLMs (i.e., word2vec, GloVe, and ELMo) from scratch and fine-tune a Chinese BERT model with 300k entries of real-world Chinese Electronic Medical Records. Then, we combine the above PLMs with BiLSTM-CRF to evaluate their effects. Experimental results on the CCKS2019 dataset show that context-dependent PLMs (ELMo and BERT) significantly outperform context-independent PLMs (word2vec and GloVe) in medical NER, by up to 4.98% absolute F1 gains. Moreover, our best model achieves new state-of-the-art results on this benchmark dataset. Furthermore, we provide additional analyses from the perspectives of time and space complexity.
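The abstract reports model quality as absolute entity-level F1 gains, the standard metric for NER benchmarks such as CCKS2019. As a minimal illustrative sketch (not the authors' evaluation code), entity-level micro-F1 over BIO-tagged sequences can be computed by extracting (start, end, type) spans and comparing exact matches:

```python
def extract_entities(tags):
    """Collect (start, end, type) spans from a BIO tag sequence."""
    entities, start, etype = [], None, None
    for i, tag in enumerate(tags):
        if tag.startswith("B-"):
            if start is not None:           # close any open span
                entities.append((start, i, etype))
            start, etype = i, tag[2:]       # open a new span
        elif tag.startswith("I-") and start is not None and tag[2:] == etype:
            continue                        # span continues
        else:                               # "O" or an inconsistent tag ends the span
            if start is not None:
                entities.append((start, i, etype))
            start, etype = None, None
    if start is not None:                   # span running to end of sentence
        entities.append((start, len(tags), etype))
    return entities

def entity_f1(gold_sents, pred_sents):
    """Micro-averaged entity-level F1: a predicted entity counts only on exact
    (boundary + type) match with a gold entity."""
    tp = fp = fn = 0
    for gold, pred in zip(gold_sents, pred_sents):
        gset, pset = set(extract_entities(gold)), set(extract_entities(pred))
        tp += len(gset & pset)
        fp += len(pset - gset)
        fn += len(gset - pset)
    prec = tp / (tp + fp) if tp + fp else 0.0
    rec = tp / (tp + fn) if tp + fn else 0.0
    return 2 * prec * rec / (prec + rec) if prec + rec else 0.0
```

For example, if the gold sequence tags a disease span and a drug span but the model recovers only the disease, precision is 1.0, recall is 0.5, and F1 is about 0.667. The tag names here (`B-DIS`, `B-DRUG`) are hypothetical placeholders, not the CCKS2019 label set.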