Formative assessment
Computer science
Test (biology)
Group (periodic table)
Human-computer interaction
Mathematics education
Software engineering
Psychology
Biology
Paleontology
Organic chemistry
Chemistry
Authors
Qiang Hao, Jack Wilson, Camille Ottaway, Naitra Iriumi, Kai Arakawa, David H. Smith
Identifier
DOI:10.1109/vlhcc.2019.8818922
Abstract
This study investigated the essentials of meaningful automated feedback for programming assignments. Three different types of feedback were tested: (a) What's wrong - what the test cases were testing and which failed, (b) Gap - comparisons between expected and actual outputs, and (c) Hint - hints on how to fix problems when test cases failed. 46 students taking a CS2 course participated in this study. They were divided into three groups with different feedback configurations: (1) Group One - What's wrong, (2) Group Two - What's wrong + Gap, (3) Group Three - What's wrong + Gap + Hint. This study found that simply knowing which tests failed did not help students sufficiently, and might stimulate system-gaming behavior. Hints were not found to have an impact on student performance or on students' usage of automated feedback. Based on these findings, this study provides practical guidance on the design of automated feedback.
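The three feedback configurations described in the abstract can be illustrated with a small sketch. This is not the authors' system; the buggy student function, the test suite, and the hint text below are all hypothetical, chosen only to show how the (a) What's wrong, (b) Gap, and (c) Hint tiers build on one another:

```python
def student_abs(x):
    # Hypothetical buggy student submission: negative inputs are not negated.
    return x

# Hypothetical test suite: (name, input, expected output)
test_cases = [
    ("abs of positive", 5, 5),
    ("abs of negative", -3, 3),
]

def run_with_feedback(func, cases, level):
    """Run test cases and format feedback at one of three levels:
    level 1: What's wrong; level 2: + Gap; level 3: + Gap + Hint."""
    messages = []
    for name, arg, expected in cases:
        actual = func(arg)
        if actual == expected:
            messages.append(f"PASS {name}")
            continue
        msg = f"FAIL {name}"                                     # (a) What's wrong
        if level >= 2:
            msg += f": expected {expected}, got {actual}"        # (b) Gap
        if level >= 3:
            msg += " | Hint: check how negative inputs are handled"  # (c) Hint
        messages.append(msg)
    return messages

for level in (1, 2, 3):
    print(f"Group {level}:", run_with_feedback(student_abs, test_cases, level))
```

Under this sketch, Group One sees only which test failed, Group Two additionally sees the expected-versus-actual gap, and Group Three also receives a fix hint, mirroring the study's between-group design.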