Computer Science
Benchmarking
Data Science
Natural Language Processing
Artificial Intelligence
Marketing
Business
Authors
Qingyu Chen,Yan Hu,Xueqing Peng,Qianqian Xie,Qiao Jin,Aidan Gilson,Maxwell Singer,X. C. Ai,Po-Ting Lai,Zhizheng Wang,Vipina K. Keloth,Kalpana Raja,Jimin Huang,Huan He,Fongci Lin,Jingcheng Du,Rui Zhang,W. Jim Zheng,Ron A. Adelman,Zhiyong Lu
Identifier
DOI:10.1038/s41467-025-56989-2
Abstract
The rapid growth of biomedical literature poses challenges for manual knowledge curation and synthesis. Biomedical Natural Language Processing (BioNLP) automates this process. While Large Language Models (LLMs) have shown promise in general domains, their effectiveness on BioNLP tasks remains unclear due to limited benchmarks and practical guidelines. We perform a systematic evaluation of four LLMs (GPT and LLaMA representatives) on 12 BioNLP benchmarks across six applications. We compare their zero-shot, few-shot, and fine-tuning performance with traditional fine-tuning of BERT or BART models. We examine inconsistencies, missing information, and hallucinations, and perform a cost analysis. Here we show that traditional fine-tuning outperforms zero- or few-shot LLMs on most tasks. However, closed-source LLMs like GPT-4 excel in reasoning-related tasks such as medical question answering. Open-source LLMs still require fine-tuning to close performance gaps. We find issues such as missing information and hallucinations in LLM outputs. These results offer practical insights for applying LLMs in BioNLP.