Keywords
Comparability
Proportion (ratio)
Literacy
Artificial intelligence
Psychology
Computer science
Mathematics education
Pedagogy
Mathematics
Quantum mechanics
Combinatorics
Physics
Authors
Matthias Carl Laupichler, Alexandra Aster, Jan-Ole Perschewski, Johannes Schleiss
Source
Journal: Education Sciences
Publisher: Multidisciplinary Digital Publishing Institute (MDPI)
Date: 2023-09-26
Volume/Issue: 13 (10): 978
Citations: 20
Identifier
DOI: 10.3390/educsci13100978
Abstract
A growing number of courses seek to increase the basic artificial-intelligence skills (“AI literacy”) of their participants. At present, there is no valid and reliable measurement tool for assessing AI learning gains, although such a tool would be important for quality assurance and comparability. In this study, a validated AI-literacy-assessment instrument, the “scale for the assessment of non-experts’ AI literacy” (SNAIL), was adapted and used to evaluate an undergraduate AI course. We investigated whether the scale can be used to reliably evaluate AI courses and whether mediator variables, such as attitudes toward AI or participation in other AI courses, influenced learning gains. In addition to traditional mean comparisons (i.e., t-tests), the comparative self-assessment (CSA) gain was calculated, which allowed for a more meaningful assessment of the increase in AI literacy. We found preliminary evidence that the adapted SNAIL questionnaire enables a valid evaluation of AI learning gains. In particular, distinctions among different subconstructs, as well as a differentiation from related constructs such as attitudes toward AI, appear to be possible with the help of the SNAIL questionnaire.
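The abstract names two analyses: a traditional mean comparison (t-test) on pre/post self-assessments and the comparative self-assessment (CSA) gain. The sketch below illustrates both under stated assumptions: the ratings are hypothetical, a 1-7 Likert scale (7 = highest) is assumed, and the CSA-gain formula shown (observed improvement relative to the improvement still possible before the course) is one common formulation; the exact variant used in the paper may differ.

```python
# Minimal sketch of the two analyses mentioned in the abstract.
# All data below are hypothetical, for illustration only.
import numpy as np
from scipy import stats

# Hypothetical pre- and post-course self-assessment ratings
# on an assumed 1-7 Likert scale (7 = highest AI literacy).
pre = np.array([2, 3, 2, 4, 3, 2, 3, 4])
post = np.array([4, 5, 4, 6, 5, 4, 5, 6])

# Traditional mean comparison: paired-samples t-test.
t_stat, p_value = stats.ttest_rel(post, pre)
print(f"paired t = {t_stat:.2f}, p = {p_value:.4f}")

# CSA gain: improvement expressed as a percentage of the
# headroom that remained before the course (assumed max = 7).
# This is one common formulation of CSA gain, not necessarily
# the exact variant the authors used.
scale_max = 7
csa_gain = 100 * (post.mean() - pre.mean()) / (scale_max - pre.mean())
print(f"CSA gain = {csa_gain:.1f}%")
```

Normalizing by the remaining headroom is what makes the CSA gain more informative than a raw mean difference: participants who start near the top of the scale cannot show large absolute gains, and the CSA gain corrects for that ceiling.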