Keywords
Differential item functioning
Item response theory
Regularization (mathematics)
Statistics
Latent variable
Measurement invariance
Sample size determination
Type I and type II errors
Mathematics
Econometrics
Artificial intelligence
Machine learning
Computer science
Psychometrics
Structural equation modeling
Confirmatory factor analysis
Authors
William C. M. Belzak, Daniel J. Bauer
Source
Journal: Psychological Methods
Publisher: American Psychological Association
Date: 2020-12-01
Volume/Issue: 25(6): 673-690
Cited by: 31
Abstract
A common challenge in the behavioral sciences is evaluating measurement invariance, or whether the measurement properties of a scale are consistent for individuals from different groups. Measurement invariance fails when differential item functioning (DIF) exists, that is, when item responses relate to the latent variable differently across groups. To identify DIF in a scale, many data-driven procedures iteratively test for DIF one item at a time while assuming other items have no DIF. The DIF-free items are used to anchor the scale of the latent variable across groups, identifying the model. A major drawback to these iterative testing procedures is that they can fail to select the correct anchor items and identify true DIF, particularly when DIF is present in many items. We propose an alternative method for selecting anchors and identifying DIF. Namely, we use regularization, a machine learning technique that imposes a penalty function during estimation to remove parameters that have little impact on the fit of the model. We focus specifically here on a lasso penalty for group differences in the item parameters within the two-parameter logistic item response theory model. We compare lasso regularization with the more commonly used likelihood ratio test method in a 2-group DIF analysis. Simulation and empirical results show that when large amounts of DIF are present and sample sizes are large, lasso regularization has far better control of Type I error than the likelihood ratio test method with little decrement in power. This provides strong evidence that lasso regularization is a promising alternative for testing DIF and selecting anchors. (PsycInfo Database Record (c) 2020 APA, all rights reserved).
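A minimal sketch of the penalized model the abstract describes, with notation assumed here rather than taken from the paper itself (the DIF parameters \alpha_j and \gamma_j and the penalty weight \lambda are illustrative). In a two-group two-parameter logistic (2PL) model, person i has latent trait \theta_i and group indicator g_i \in \{0, 1\}, and item j may differ across groups in both slope and intercept:

\[
P(y_{ij} = 1 \mid \theta_i, g_i) = \operatorname{logit}^{-1}\!\bigl[(a_j + \alpha_j g_i)\,\theta_i + (c_j + \gamma_j g_i)\bigr]
\]

Lasso regularization then maximizes a penalized log-likelihood of the form

\[
\ell_\lambda = \ell(\mathbf{y}) - \lambda \sum_{j=1}^{J} \bigl(\lvert \alpha_j \rvert + \lvert \gamma_j \rvert\bigr)
\]

so that items whose group-difference parameters shrink exactly to zero act as anchors, while items retaining nonzero \alpha_j or \gamma_j are flagged as exhibiting DIF; the penalty weight \lambda is typically selected with an information criterion such as BIC.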