Keywords
Formative assessment
Computer science
Scalability
Scale (ratio)
Class (philosophy)
Quality (philosophy)
Mathematics education
Data science
Artificial intelligence
Psychology
Quantum mechanics
Database
Epistemology
Physics
Philosophy
Authors
Rachel Van Campenhout, Nick Brown, Bill Jerome, Jeffrey S. Dittel, Benny G. Johnson
Identifier
DOI: 10.1145/3430895.3460162
Abstract
Courseware is a comprehensive learning environment that engages students in a learning-by-doing approach while giving instructors data-driven insights into their class, providing a scalable solution for many instructional models. However, courseware, and the volume of formative questions required to make it effective, is time-consuming and expensive to create. Using artificial intelligence for automatic question generation can reduce the time and cost of developing formative questions in courseware. It is critical, however, that automatically generated (AG) questions match the quality of human-authored (HA) questions before they can be used confidently at scale. Our research question is therefore: are student interactions with AG questions equivalent to those with HA questions with respect to engagement, difficulty, and persistence metrics? This paper evaluates data for AG and HA questions that students used as formative practice in their university Communication course. Analysis shows that our first generation of AG questions performs as well as HA questions in multiple important respects.
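The research question is framed as equivalence rather than difference. The abstract does not name the statistical method used, but a standard way to test equivalence of a per-question metric (e.g., difficulty measured as error rate) between AG and HA questions is two one-sided t-tests (TOST). The sketch below is illustrative only: the data are randomly generated placeholders, and the 0.05 equivalence margin is an assumed value, not one taken from the paper.

```python
import numpy as np
from scipy import stats

def tost_equivalence(a, b, margin):
    """Two one-sided t-tests (TOST): p-value for the claim that the
    difference in means between samples a and b lies within +/- margin."""
    diff = np.mean(a) - np.mean(b)
    se = np.sqrt(np.var(a, ddof=1) / len(a) + np.var(b, ddof=1) / len(b))
    df = len(a) + len(b) - 2  # simple approximation to the Welch df
    p_lower = 1 - stats.t.cdf((diff + margin) / se, df)  # H0: diff <= -margin
    p_upper = stats.t.cdf((diff - margin) / se, df)      # H0: diff >= +margin
    return max(p_lower, p_upper)  # equivalence supported if this p < alpha

# Hypothetical per-question error rates; placeholder values only,
# not data from the paper.
rng = np.random.default_rng(0)
ag = rng.beta(2, 5, size=80)    # automatically generated questions
ha = rng.beta(2, 5, size=120)   # human-authored questions
print(f"TOST p-value: {tost_equivalence(ag, ha, margin=0.05):.3f}")
```

A small TOST p-value rejects both one-sided null hypotheses, supporting the claim that the two question types perform equivalently within the chosen margin; a conventional t-test, by contrast, can only fail to find a difference.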