Computer science
Kalman filter
Artificial intelligence
Context (archaeology)
Machine learning
Big data
Algorithm
Data mining
Paleontology
Biology
Authors
Xingchi Chen, MA Wen-xin, Dazhou Li, Fa Zhu, Sidheswar Routray, Manisha Guduri, Martin Margala
Identifier
DOI:10.1109/jbhi.2025.3527340
Abstract
The progress in Natural Language Processing (NLP) using Large Language Models (LLMs) has greatly improved medical sentiment analysis of patient feedback extracted from health-related questions and answers. However, using LLMs to analyze such data often requires significant training data and computational resources, resulting in considerable increases in training costs and durations, which is one of the primary issues in applying LLMs to real-world healthcare scenarios. To tackle these challenges, a novel optimization algorithm named KAdam-EnGPT4LLM, based on Kalman filters and Adaptive Moment Estimation, is proposed to enhance training efficiency and reduce the training costs of LLMs for analyzing patient feedback sentiment. Furthermore, the KAdam-EnGPT4LLM optimization algorithm is employed to train the LLM GPT4ALL for medical sentiment analysis, resulting in GPT4ALL-MediSentAly-KAdam, which leads to faster convergence and more stable training specifically for medical questions and answers in the healthcare context. The results show that GPT4ALL-MediSentAly-KAdam trained with the KAdam-EnGPT4LLM optimization algorithm achieves the best Accuracy, Recall, F1-score, and Runtime on both datasets, outperforming traditionally fine-tuned LLMs such as the classic GPT4ALL, Ada, Babbage, Curie, and Davinci.
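The abstract describes combining Kalman filters with Adaptive Moment Estimation (Adam), but does not give the update rule. Below is a minimal illustrative sketch of one plausible reading: each gradient is treated as a noisy observation of a latent "true" gradient, denoised by a per-element scalar Kalman filter, and the filtered gradient is then fed into a standard Adam update. The class name KAdamSketch and the hyperparameters process_var and obs_var are assumptions for illustration only, not the paper's KAdam-EnGPT4LLM.

```python
import torch


class KAdamSketch(torch.optim.Optimizer):
    """Hypothetical sketch: per-parameter Kalman filtering of gradients
    followed by a standard Adam update. Not the paper's algorithm."""

    def __init__(self, params, lr=1e-3, betas=(0.9, 0.999), eps=1e-8,
                 process_var=1e-4, obs_var=1e-2):
        defaults = dict(lr=lr, betas=betas, eps=eps,
                        process_var=process_var, obs_var=obs_var)
        super().__init__(params, defaults)

    @torch.no_grad()
    def step(self):
        for group in self.param_groups:
            beta1, beta2 = group["betas"]
            for p in group["params"]:
                if p.grad is None:
                    continue
                g = p.grad
                state = self.state[p]
                if not state:
                    state["step"] = 0
                    state["x"] = torch.zeros_like(p)  # Kalman estimate of the gradient
                    state["P"] = torch.ones_like(p)   # per-element estimate variance
                    state["m"] = torch.zeros_like(p)  # Adam first moment
                    state["v"] = torch.zeros_like(p)  # Adam second moment
                state["step"] += 1
                t = state["step"]

                # Kalman predict: random-walk model, variance grows by process noise.
                P = state["P"] + group["process_var"]
                # Kalman update: blend the prediction with the observed noisy gradient.
                K = P / (P + group["obs_var"])
                state["x"] = state["x"] + K * (g - state["x"])
                state["P"] = (1.0 - K) * P

                # Standard Adam update applied to the filtered gradient.
                g_hat = state["x"]
                state["m"].mul_(beta1).add_(g_hat, alpha=1 - beta1)
                state["v"].mul_(beta2).addcmul_(g_hat, g_hat, value=1 - beta2)
                m_hat = state["m"] / (1 - beta1 ** t)
                v_hat = state["v"] / (1 - beta2 ** t)
                p.addcdiv_(m_hat, v_hat.sqrt().add_(group["eps"]), value=-group["lr"])
```

Under this reading, the filter damps mini-batch gradient noise before Adam's moment estimates are formed, which is one way the reported faster and more stable convergence could arise; the actual KAdam-EnGPT4LLM formulation may differ.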