Keywords: operationalization; computerized adaptive testing; item response theory; test design; reliability; psychometrics; applied psychology; statistics; test methods
Authors
Okan Bulut,Guher Gorgun,Hacer Karamese
Abstract
The use of multistage adaptive testing (MST) has gradually increased in large‐scale testing programs as MST achieves a balanced compromise between linear test design and item‐level adaptive testing. MST works on the premise that each examinee gives their best effort when attempting the items, and their responses truly reflect what they know or can do. However, research shows that large‐scale assessments may suffer from a lack of test‐taking engagement, especially if they are low stakes. Examinees with low test‐taking engagement are likely to show noneffortful responding (e.g., answering the items very rapidly without reading the item stem or response options). To alleviate the impact of noneffortful responses on the measurement accuracy of MST, test‐taking engagement can be operationalized as a latent trait based on response times and incorporated into the on‐the‐fly module assembly procedure. To demonstrate the proposed approach, a Monte‐Carlo simulation study was conducted based on item parameters from an international large‐scale assessment. The results indicated that the on‐the‐fly module assembly considering both ability and test‐taking engagement could minimize the impact of noneffortful responses, yielding more accurate ability estimates and classifications. Implications for practice and directions for future research were discussed.
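To make the general idea concrete, the sketch below illustrates how a response-time-based engagement proxy could feed into on-the-fly module assembly. This is a minimal illustration, not the authors' procedure: the rapid-guessing threshold, the engagement proxy (share of responses slower than the threshold), and the difficulty penalty for disengaged examinees are all hypothetical choices, and items follow a simple 2PL model.

```python
import math


def item_information(theta, a, b):
    """Fisher information of a 2PL item at ability theta."""
    p = 1.0 / (1.0 + math.exp(-a * (theta - b)))
    return a * a * p * (1.0 - p)


def engagement_estimate(response_times, threshold=5.0):
    """Crude engagement proxy (hypothetical): the proportion of responses
    slower than a rapid-guessing threshold, in seconds."""
    if not response_times:
        return 1.0
    effortful = sum(1 for t in response_times if t >= threshold)
    return effortful / len(response_times)


def select_module(theta, engagement, item_pool, module_size=5):
    """On-the-fly module assembly sketch: rank items by information at the
    current ability estimate, and penalize items far from theta more heavily
    when engagement is low (an illustrative weighting, not the paper's)."""
    def score(item):
        a, b = item
        info = item_information(theta, a, b)
        penalty = (1.0 - engagement) * abs(b - theta)
        return info - penalty

    ranked = sorted(item_pool, key=score, reverse=True)
    return ranked[:module_size]


# Example: a disengaged examinee (two rapid responses out of four).
pool = [(1.2, -1.0), (0.9, 0.0), (1.5, 0.5), (0.8, 2.0), (1.1, -0.5)]
eng = engagement_estimate([1.2, 0.8, 12.0, 9.5])   # -> 0.5
module = select_module(theta=0.0, engagement=eng, item_pool=pool, module_size=3)
```

In a full simulation, theta and the engagement proxy would both be updated after each module, so later modules adapt to ability and effort jointly.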