Computer science
Estimator
Inference
Sample size determination
Data mining
Focus (optics)
Sample (material)
Type I and type II errors
Bayesian inference
Cluster (spacecraft)
Feature (linguistics)
Bayesian probability
Machine learning
Statistics
Econometrics
Artificial intelligence
Mathematics
Physics
Philosophy
Optics
Chemistry
Chromatography
Programming language
Linguistics
Authors
Daniel McNeish, Laura M. Stapleton
Identifier
DOI: 10.1080/00273171.2016.1167008
Abstract
Small-sample inference with clustered data has received increased attention recently in the methodological literature, with several simulation studies being presented on the small-sample behavior of many methods. However, nearly all previous studies focus on a single class of methods (e.g., only multilevel models, only corrections to sandwich estimators), and the differential performance of various methods that can be implemented to accommodate clustered data with very few clusters is largely unknown, potentially due to the rigid disciplinary preferences. Furthermore, a majority of these studies focus on scenarios with 15 or more clusters and feature unrealistically simple data-generation models with very few predictors. This article, motivated by an applied educational psychology cluster randomized trial, presents a simulation study that simultaneously addresses the extreme small sample and differential performance (estimation bias, Type I error rates, and relative power) of 12 methods to account for clustered data with a model that features a more realistic number of predictors. The motivating data are then modeled with each method, and results are compared. Results show that generalized estimating equations perform poorly; the choice of Bayesian prior distributions affects performance; and fixed effect models perform quite well. Limitations and implications for applications are also discussed.
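As a point of reference for the methods compared in the abstract, the sketch below fits three of them to a toy clustered dataset in Python with statsmodels: a multilevel (random-intercept) model, a GEE with an exchangeable working correlation, and a fixed-effects model with cluster indicators. This is not the authors' simulation code; the data-generating values, variable names, and the small number of clusters are illustrative assumptions only.

```python
# Minimal sketch (assumed setup, not the study's simulation design):
# compare slope estimates from three ways of accommodating clustered data.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n_clusters, cluster_size = 6, 20            # deliberately very few clusters
cluster = np.repeat(np.arange(n_clusters), cluster_size)
u = rng.normal(0, 1, n_clusters)[cluster]   # cluster-level random effect
x = rng.normal(size=cluster.size)           # a single level-1 predictor
y = 0.5 + 0.3 * x + u + rng.normal(size=cluster.size)
df = pd.DataFrame({"y": y, "x": x, "cluster": cluster})

# 1) Multilevel model: random intercept per cluster
mlm = smf.mixedlm("y ~ x", df, groups=df["cluster"]).fit()

# 2) GEE: population-averaged estimates with an exchangeable working correlation
gee = smf.gee("y ~ x", groups="cluster", data=df,
              cov_struct=sm.cov_struct.Exchangeable(),
              family=sm.families.Gaussian()).fit()

# 3) Fixed-effects model: cluster dummies absorb between-cluster variation
fe = smf.ols("y ~ x + C(cluster)", df).fit()

for name, res in [("MLM", mlm), ("GEE", gee), ("FE", fe)]:
    print(f"{name}: slope on x = {res.params['x']:.3f}, SE = {res.bse['x']:.3f}")
```

With only a handful of clusters, the point estimates will typically be similar, but the standard errors (and hence Type I error rates) can differ noticeably across the three approaches, which is the kind of differential small-sample behavior the study examines.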