Computer science
Question answering
Hop (networking)
Medical knowledge
Natural language processing
Artificial intelligence
Information retrieval
Data science
Computer networks
Medicine
Medical education
Authors
Jing Chen,Zhihua Wei,Wen Shen,Rui Shang
Identifiers
DOI: 10.1109/JBHI.2025.3547444
Abstract
MedQA-USMLE is a challenging biomedical question answering (BQA) task, as its questions typically involve multi-hop reasoning. To solve this task, BQA systems must possess substantial professional medical knowledge and strong medical reasoning capabilities. While state-of-the-art large language models such as Med-PaLM 2 have overcome this challenge, smaller language models (SLMs) still struggle with it. To bridge this gap, we introduce a multi-hop medical knowledge infusion (MHMKI) procedure that endows SLMs with medical reasoning capabilities. Specifically, we categorize MedQA-USMLE questions into distinct reasoning types, then create pre-training instances tailored to each question type from the semi-structured information and hyperlinks of Wikipedia articles. To enable SLMs to efficiently capture the multi-hop knowledge embedded in these instances, we design a reasoning chain masked language model for further pre-training of BERT models. In addition, we transform these pre-training instances into a combined question answering dataset for intermediate fine-tuning of GPT models. We evaluate MHMKI with six SLMs (three BERT models and three GPT models) on five datasets spanning three BQA tasks. Results show that MHMKI benefits SLMs on nearly all tasks, especially those requiring multi-hop reasoning; for example, accuracy on MedQA-USMLE increases by 5.3% on average.
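The abstract names a "reasoning chain masked language model" for further pre-training of BERT models but gives no implementation detail. The Python sketch below illustrates one plausible reading of that objective, purely as an assumption: rather than masking random tokens, it masks the bridge entities that link consecutive hops of a Wikipedia-derived passage, so that recovering the masks requires the multi-hop fact. The function name mask_reasoning_chain, the string-matching entity heuristic, and the bert-base-uncased checkpoint are illustrative choices, not the authors' released code.

import torch
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

def mask_reasoning_chain(passage: str, bridge_entities: list[str]):
    """Return (input_ids, labels) with every bridge-entity span masked.

    Hypothetical sketch: bridge entities are the terms that link
    consecutive hops of a multi-hop reasoning chain.
    """
    enc = tokenizer(passage, return_offsets_mapping=True, truncation=True)
    input_ids = torch.tensor(enc["input_ids"])
    labels = torch.full_like(input_ids, -100)  # -100 is ignored by the MLM loss

    lowered = passage.lower()
    for entity in bridge_entities:
        start = lowered.find(entity.lower())
        while start != -1:
            end = start + len(entity)
            # Mask every token whose character span overlaps this occurrence;
            # special tokens have (0, 0) offsets and are skipped.
            for i, (tok_s, tok_e) in enumerate(enc["offset_mapping"]):
                if tok_s < end and tok_e > start and tok_e > tok_s:
                    labels[i] = input_ids[i]
                    input_ids[i] = tokenizer.mask_token_id
            start = lowered.find(entity.lower(), end)
    return input_ids, labels

# Example: a two-hop medical chain where "metformin" bridges the hops.
passage = ("Type 2 diabetes is first treated with metformin. "
           "Metformin is contraindicated in severe renal impairment.")
ids, labels = mask_reasoning_chain(passage, ["metformin"])
print(tokenizer.decode(ids))  # both mentions of the bridge entity become [MASK]

The resulting (input_ids, labels) pairs feed a standard masked-language-modeling loss, such as the one a BERT model with an LM head computes when unmasked positions in labels are set to -100.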