Computer science
Natural language processing
Psychology
Medical education
Mathematics education
Medicine
Identifier
DOI:10.1080/0142159x.2025.2504106
Abstract
Large language models (LLMs) show promise in medical education. This study examines LLMs' ability to score post-encounter notes (PNs) from Objective Structured Clinical Examinations (OSCEs) using an analytic rubric. The goal was to evaluate and refine methods for accurate, consistent scoring. Seven LLMs scored five PNs representing varying levels of performance, including an intentionally incorrect PN. An iterative experimental design tested different prompting strategies and settings of temperature, a parameter that controls the creativity of LLM responses. Scores were compared to expected rubric-based results. Consistently accurate scoring required multiple rounds of prompt refinement. Simple prompting led to high variability, which improved with structured approaches and low-temperature settings. LLMs occasionally made errors calculating total scores, necessitating external calculation. The final approach yielded consistently accurate scores across all models. LLMs can reliably apply analytic rubrics to PNs with careful prompt engineering and process refinement. This study illustrates their potential as scalable, automated scoring tools in medical education, though further research is needed to explore their use with holistic rubrics. These findings demonstrate the utility of LLMs in assessment practices.
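The abstract's key practical points, structured prompting, low temperature, and computing the total score outside the model, can be illustrated with a minimal sketch. The sketch below assumes the OpenAI Python client purely as an example; the study's prompts, rubric items, models, and model names are not published in the abstract, so every identifier here (RUBRIC, score_note, the item names, "gpt-4o") is hypothetical and not taken from the paper.

```python
import json
from openai import OpenAI

# Hypothetical analytic rubric: item id -> (description, max points).
# The actual rubric items used in the study are not reproduced here.
RUBRIC = {
    "history": ("Documents relevant history findings", 3),
    "physical_exam": ("Documents relevant physical exam findings", 3),
    "differential": ("Lists an appropriate differential diagnosis", 2),
    "workup": ("Orders an appropriate initial workup", 2),
}

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment


def score_note(note_text: str, model: str = "gpt-4o") -> dict:
    """Ask the model for per-item rubric scores as JSON; never ask it for a total."""
    rubric_lines = "\n".join(
        f"- {item}: {desc} (0-{max_pts} points)"
        for item, (desc, max_pts) in RUBRIC.items()
    )
    prompt = (
        "You are scoring a post-encounter note from an OSCE using an analytic rubric.\n"
        f"Rubric items:\n{rubric_lines}\n\n"
        "Return ONLY a JSON object mapping each item name to an integer score.\n\n"
        f"Post-encounter note:\n{note_text}"
    )
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
        temperature=0,  # low temperature to reduce run-to-run score variability
        response_format={"type": "json_object"},  # structured output, no free text
    )
    item_scores = json.loads(response.choices[0].message.content)
    # Sum the total in code: the abstract notes LLMs occasionally mis-add totals.
    total = sum(int(item_scores[item]) for item in RUBRIC)
    return {"items": item_scores, "total": total}
```

The design choice mirrors the abstract: the prompt constrains the model to item-level judgments in a fixed format, while the arithmetic that the models were found to get wrong is handled deterministically by the calling code.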