Computer science
Peer feedback
Generative grammar
Process (computing)
Feature (linguistics)
Natural language
Quality (philosophy)
Generative model
Natural (archaeology)
Artificial intelligence
Detector
Human-computer interaction
Mathematics education
Psychology
History
Epistemology
Operating system
Philosophy
Archaeology
Telecommunications
Linguistics
Authors
Stephen Hutt, Allison DePiro, Joann Wang, Sam Rhodes, Ryan S. Baker, Grayson Hieb, Sheela Sethuraman, Jaclyn Ocumpaugh, Caitlin Mills
Identifier
DOI: 10.1145/3636555.3636850
Abstract
Peer feedback can be a powerful tool, as it presents learning opportunities both for the learner receiving feedback and for the learner providing it. Despite its utility, it can be difficult to implement effectively, particularly for younger learners, who are often novices at providing feedback. Students frequently struggle to learn what constitutes "good" feedback, particularly in open-ended problem-solving contexts. To address this gap, we investigate both classical natural language processing techniques and large language models, specifically ChatGPT, as potential approaches to devising an automated detector of feedback quality (covering both student progress toward goals and the next steps needed). Our findings indicate that the classical detectors are highly accurate, and through feature analysis we elucidate the pivotal elements influencing their decisions. We find that ChatGPT is less accurate than the classical NLP approach, but we illustrate its potential for evaluating feedback by generating explanations for ratings along with scores. We discuss how the detector can be used for automated feedback evaluation and to better scaffold peer feedback for younger learners.
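The abstract names two families of approaches without specifying their implementation: a classical NLP detector of feedback quality and ChatGPT-based scoring with explanations. The sketch below is only an illustration of the first family under assumed details (TF-IDF features, logistic regression, and hypothetical example comments and labels); it is not the authors' published model or data.

```python
# Illustrative sketch, not the paper's implementation: a generic "classical NLP"
# baseline for feedback-quality detection using TF-IDF features and logistic
# regression. Comments and labels below are hypothetical placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

# Hypothetical peer-feedback comments with binary quality labels
# (1 = references progress toward goals and/or concrete next steps).
comments = [
    "Great job, I like it.",
    "Your ramp works, but the ball stops early; try steepening the slope.",
    "You met the first goal; next you could add a lever to hit the target.",
    "Nice colors.",
]
labels = [0, 1, 1, 0]

detector = Pipeline([
    ("tfidf", TfidfVectorizer(ngram_range=(1, 2), min_df=1)),
    ("clf", LogisticRegression(max_iter=1000)),
])

# With a real labeled dataset, cross-validation would be used to estimate
# accuracy; here we simply fit and predict on toy data.
detector.fit(comments, labels)
print(detector.predict(["Try adding a fan so the sailboat reaches the goal."]))
```

A parallel LLM-based approach, as the abstract describes, would prompt ChatGPT with the feedback comment and a rubric and ask it to return both a quality score and a short explanation for that rating.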