Examination (biology)
Rasch model
Curriculum
Mathematics education
Psychological intervention
Literacy
Psychology
Reliability (semiconductor)
Set (abstract data type)
Quality (philosophy)
Item analysis
Item response theory
Medical education
Applied psychology
Computer science
Psychometrics
Developmental psychology
Pedagogy
Medicine
Paleontology
Power (physics)
Physics
Philosophy
Epistemology
Quantum mechanics
Psychiatry
Biology
Programming language
Authors
Thomas K. F. Chiu,Yifan Chen,King Woon Yau,Ching Sing Chai,Helen Meng,Irwin King,Savio W.H. Wong,Yeung Yam
Identifier
DOI:10.1016/j.caeai.2024.100282
Abstract
The majority of AI literacy studies have designed and developed self-reported questionnaires to assess AI learning and understanding. These studies assessed students' perceived AI capability rather than AI literacy, because self-perceptions are seldom an accurate account of actual ability. International assessment programs that use objective measures to assess scientific, mathematical, digital, and computational literacy support this argument. Furthermore, because AI education research is still in its infancy, the current definition of AI literacy in the literature may not meet the needs of young students. This study therefore aims to develop and validate an AI literacy test for school students within the interdisciplinary project known as AI4future. Engineering and education researchers created and selected 25 multiple-choice questions to accomplish this goal, and school teachers validated them while developing an AI curriculum for middle schools. A total of 2,390 students in grades 7 to 9 took the test. We used a Rasch model to investigate the discrimination, reliability, and validity of the items. The results showed that the model met the unidimensionality assumption and demonstrated a set of reliable and valid items, indicating the quality of the test. The test enables AI education researchers and practitioners to appropriately evaluate their AI-related education interventions.
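The Rasch (one-parameter logistic) analysis described in the abstract can be sketched in code. The following is a minimal illustration on simulated data, not the authors' actual analysis pipeline (studies like this typically use dedicated IRT software); the joint maximum-likelihood routine below is a simplified estimator, and the data dimensions (2,390 students, 25 items) merely mirror the study's setup.

```python
import numpy as np

def fit_rasch(X, n_iter=30):
    """Fit a Rasch (1PL) model by joint maximum likelihood.

    X: (n_persons, n_items) binary response matrix (1 = correct).
    Returns (theta, b): person abilities and item difficulties
    under P(correct) = sigmoid(theta - b). Note: JML estimates
    carry a known small-sample bias; production analyses usually
    use conditional or marginal ML instead.
    """
    n, k = X.shape
    theta = np.zeros(n)
    b = np.zeros(k)
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-(theta[:, None] - b[None, :])))
        w = p * (1.0 - p)  # diagonal of the Hessian
        # One Newton step per person ability and per item difficulty.
        theta += (X - p).sum(axis=1) / w.sum(axis=1)
        b -= (X - p).sum(axis=0) / w.sum(axis=0)
        b -= b.mean()  # identification constraint: mean difficulty = 0
    return theta, b

# Simulated data whose sizes mirror the study: 2,390 students, 25 items.
rng = np.random.default_rng(0)
true_theta = rng.normal(0.0, 1.0, 2390)
true_b = np.linspace(-2.0, 2.0, 25)
P = 1.0 / (1.0 + np.exp(-(true_theta[:, None] - true_b[None, :])))
X = (rng.random(P.shape) < P).astype(int)

# Drop perfect and zero scorers, whose abilities are not finite under JML.
scores = X.sum(axis=1)
X = X[(scores > 0) & (scores < X.shape[1])]

theta_hat, b_hat = fit_rasch(X)
r = np.corrcoef(b_hat, true_b)[0, 1]
print(f"correlation between estimated and true item difficulties: {r:.3f}")
```

With a sample of this size, the recovered item difficulties track the generating values closely; in a real validation study, one would additionally inspect item fit statistics and a residual principal-component analysis to check the unidimensionality assumption the abstract mentions.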