Tutor
Clarity
Peer feedback
Computer science
Medical education
Curriculum
Quality (philosophy)
Formative assessment
Medicine
Psychology
Mathematics education
Pedagogy
Biochemistry
Epistemology
Philosophy
Chemistry
Authors
Majid Ali,Ihab Harbieh,Khawaja Husnain Haider
Identifier
DOI: 10.1080/0142159X.2025.2519639
Abstract
Timely, high-quality feedback is vital in medical education but increasingly difficult to provide due to rising student numbers and limited faculty. Artificial intelligence (AI) tools offer scalable solutions, yet limited research compares their effectiveness with traditional tutor feedback. This study examined the comparative effectiveness of AI-generated feedback versus human tutor feedback within the medical curriculum. Second-year medical students (n = 108) received two sets of feedback on a written assignment, one from their tutor and one unedited response from ChatGPT. Students assessed each set of feedback using a structured online questionnaire focused on key feedback quality criteria. Eighty-five students (79%) completed the evaluation. Tutor feedback was rated significantly higher in clarity and understandability (p < 0.001), relevance (p < 0.001), actionability (p = 0.009), comprehensiveness (p = 0.001), accuracy and reliability (p = 0.003), and overall usefulness (p < 0.001). However, 62.3% of students indicated that the two pieces of feedback complemented each other. Open-ended responses aligned with these quantitative findings. Human tutors currently provide superior feedback in terms of clarity, relevance, and accuracy. Nonetheless, AI-generated feedback shows promise as a complementary tool. A hybrid feedback model integrating AI and human input could enhance the scalability and richness of feedback in medical education.
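The abstract reports a paired comparison of tutor and AI ratings but does not name the statistical test used. As a minimal sketch of how such paired Likert-scale comparisons can be analysed, the snippet below verifies the reported response rate (85 of 108 ≈ 79%) and runs an exact two-sided sign test on a small set of hypothetical paired ratings; the rating values are illustrative only and are not from the study.

```python
from math import comb

# Response rate reported in the abstract: 85 of 108 students.
response_rate = 85 / 108 * 100  # ≈ 78.7%, reported as 79%

# Hypothetical paired 1-5 Likert ratings for one criterion (e.g. clarity).
# These numbers are invented for illustration; the study's raw data are not given.
tutor = [5, 4, 5, 4, 3, 5, 4, 4, 5, 4]
ai    = [4, 4, 3, 3, 3, 4, 4, 3, 4, 3]

# Exact two-sided sign test on the paired differences (ties dropped):
# under H0, each non-tied pair favours either source with probability 1/2.
diffs = [t - a for t, a in zip(tutor, ai)]
pos = sum(d > 0 for d in diffs)   # pairs favouring the tutor
n = sum(d != 0 for d in diffs)    # non-tied pairs
k = min(pos, n - pos)
p = sum(comb(n, i) for i in range(k + 1)) / 2 ** (n - 1)

print(f"response rate: {response_rate:.1f}%")
print(f"sign test: {pos}/{n} pairs favour tutor, p = {p:.4f}")
```

A sign test is used here only because it needs no external libraries; studies of this kind more commonly report a Wilcoxon signed-rank test (e.g. `scipy.stats.wilcoxon`) for paired ordinal ratings.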