Trustworthiness
Computer science
Medical knowledge
Field (mathematical analysis)
Domain knowledge
Natural language processing
Data science
Knowledge management
Medicine
Computer security
Medical education
Mathematical analysis
Mathematics
Authors
Haochun Wang,Sendong Zhao,Zewen Qiang,Zijian Li,Chi Liu,Nuwa Xi,Yanrui Du,Bing Qin,Ting Liu
Abstract
Large Language Models (LLMs) have demonstrated remarkable success on diverse natural language processing (NLP) tasks in general domains. However, due to limited domain knowledge, LLMs sometimes generate responses that hallucinate medical facts, which poses potential risks when LLMs are used in medical contexts. To address this challenge, we propose knowledge-tuning, which leverages structured medical knowledge bases so that LLMs can grasp domain knowledge efficiently and generate trustworthy responses. We also release cMedKnowQA, a Chinese medical knowledge question-answering dataset constructed from medical knowledge bases to assess the medical knowledge proficiency of LLMs. Experimental results show that LLMs knowledge-tuned with cMedKnowQA achieve higher accuracy in response generation than vanilla instruction-tuned models and offer a new, trustworthy path for the domain adaptation of LLMs. We release our code and data at https://github.com/SCIR-HI/Huatuo-Llama-Med-Chinese.
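The abstract describes knowledge-tuning only at a high level. As a rough illustration of how records from a structured medical knowledge base could be turned into knowledge-grounded question-answer pairs for tuning, here is a minimal Python sketch; the triple fields, question template, sample data, and the KnowledgeTriple / build_tuning_examples names are assumptions made for illustration, not the paper's actual cMedKnowQA construction or training pipeline.

# Hypothetical sketch: convert structured knowledge-base triples into
# QA-style tuning examples whose reference answers quote the knowledge base.
from dataclasses import dataclass
from typing import List


@dataclass
class KnowledgeTriple:
    """One (entity, attribute, value) record from a structured knowledge base."""
    entity: str      # e.g. a disease name
    attribute: str   # e.g. "typical symptoms"
    value: str       # the factual answer stored in the knowledge base


def build_tuning_examples(triples: List[KnowledgeTriple]) -> List[dict]:
    """Build (instruction, output) pairs so that the supervised targets are
    grounded in stored facts rather than free-form model text."""
    examples = []
    for t in triples:
        question = f"What are the {t.attribute} of {t.entity}?"
        examples.append({
            "instruction": question,
            # The reference answer cites the knowledge base, which is what
            # makes the supervision knowledge-grounded.
            "output": f"According to the medical knowledge base, the {t.attribute} "
                      f"of {t.entity} are: {t.value}.",
        })
    return examples


if __name__ == "__main__":
    kb = [KnowledgeTriple("influenza", "typical symptoms", "fever, cough, sore throat")]
    for ex in build_tuning_examples(kb):
        print(ex)

The design intent sketched here is simply that every training target can be traced back to a knowledge-base record, which is one plausible way to encourage the more trustworthy response generation the abstract claims; the authors' actual method may differ.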