Linguistics
Traditional medicine
Computer science
Medicine
Natural language processing
Psychology
Philosophy
Authors
Zhe Chen, Hui Wang, Chengxian Li, Chunxaing Liu, Fengwen Yang, Dong Zhang, Alice Josephine Fauci, Junhua Zhang
Identifier
DOI:10.1097/hm9.0000000000000143
Abstract
Objective: Generative artificial intelligence (AI) technology, represented by large language models (LLMs), has gradually been developed for traditional Chinese medicine (TCM); however, challenges remain in effectively advancing AI applications for TCM. This study is therefore the first systematic review to retrospectively analyze LLMs in TCM, summarizing the evidence on their performance in generative tasks.
Methods: We searched electronic databases for articles published up to June 2024 to identify publicly available studies on LLMs in TCM. Two investigators independently selected the studies and extracted the relevant information and evaluation metrics. Based on the available data, we performed a descriptive analysis for a comprehensive systematic review of LLM technology related to TCM.
Results: Ten studies published between 2023 and 2024 met our eligibility criteria and were included in this review: 40% were LLMs in the TCM vertical domain, 40% contained TCM data, and 20% honored TCM contributions, with foundation models ranging from 1.8 to 33 billion parameters. All included studies used manual or automatic evaluation metrics to assess model performance and fully discussed the challenges and contributions through an overview of LLMs in TCM.
Conclusions: LLMs have achieved significant advantages in TCM applications and can effectively address intelligent TCM tasks. Further in-depth development of LLMs is needed across vertical TCM fields, including clinical and fundamental research. It is essential that generative AI technologies for TCM develop along functionally segmented lines for specific application scenarios, so as to meet the practical, needs-oriented demands of TCM digitalization.
Graphical abstract: http://links.lww.com/AHM/A152.