Information retrieval
Computer science
Meta-analysis
Anesthesiology
Population
Systematic review
MEDLINE
Medicine
Internal medicine
Pathology
Environmental health
Political science
Law
Authors
Alessandro De Cassai,Burhan Dost,Yunus Emre Karapınar,Müzeyyen Beldağlı,Mirac Selcen Ozkal Yalin,Esra Turunç,Engin İhsan Turan,Nicolò Sella
Identifier
DOI:10.1136/rapm-2024-106231
Abstract
Background: This study evaluated how effectively large language models (LLMs), specifically ChatGPT 4o and a custom-designed model, the Meta-Analysis Librarian, generate accurate search strings for systematic reviews (SRs) in anesthesiology.

Methods: We selected 85 SRs from the top 10 anesthesiology journals according to Web of Science rankings and extracted their reference lists as benchmarks. Using study titles as input, we generated four search strings per SR: three with ChatGPT 4o using general prompts and one with the Meta-Analysis Librarian model, which follows a structured Population, Intervention, Comparator, Outcome (PICO)-based approach aligned with Cochrane Handbook standards. Each search string was used to query PubMed, and the retrieved results were compared against the PubMed-indexed studies retrieved by each SR's original search string to assess retrieval accuracy. Statistical analysis compared the performance of each model.

Results: Original search strings performed best, with a 65% (IQR: 43%–81%) retrieval rate, which was statistically different from both LLM groups (p=0.001). The Meta-Analysis Librarian achieved a higher median retrieval rate than ChatGPT 4o (median (IQR): 24% (13%–38%) vs 6% (0%–14%), respectively).

Conclusion: These findings highlight the significant advantage of original search strings over LLM-generated search strings for PubMed retrieval. The Meta-Analysis Librarian was notably superior to ChatGPT 4o in retrieval performance. Further research is needed to assess the broader applicability of LLM-generated search strings, especially across multiple databases.
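The retrieval-rate metric described in the Methods can be sketched as follows. This is a minimal illustration, not the authors' code: it assumes each SR's benchmark is a set of PubMed IDs (PMIDs) from its reference list, and that a search string's performance is the fraction of those benchmark PMIDs it retrieves; the function names and toy PMIDs are hypothetical.

```python
from statistics import quantiles


def retrieval_rate(retrieved_pmids: set[str], benchmark_pmids: set[str]) -> float:
    """Fraction of an SR's benchmark studies found among the retrieved results."""
    if not benchmark_pmids:  # guard against an empty benchmark list
        return 0.0
    return len(retrieved_pmids & benchmark_pmids) / len(benchmark_pmids)


def summarize(rates: list[float]) -> tuple[float, float, float]:
    """Return (median, Q1, Q3) of per-review retrieval rates, as in the Results."""
    q1, med, q3 = quantiles(rates, n=4)  # the three quartile cut points
    return med, q1, q3


# Toy usage with made-up PMIDs: 2 of 4 benchmark studies were retrieved.
benchmark = {"111", "222", "333", "444"}
retrieved = {"222", "444", "999"}
rate = retrieval_rate(retrieved, benchmark)  # 0.5
```

Summarizing these per-review rates as a median with IQR (rather than a mean) matches the skewed, bounded nature of the metric reported in the abstract.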