Inter-rater reliability
Content (measurement theory)
Content validity
Psychology
Computer science
Clinical psychology
Psychometrics
Developmental psychology
Rating scale
Mathematics
Mathematical analysis
Authors
Jean-Charles Pillet, Kai R. Larsen, David G. Dobolyi, Magno Queiroz, Abram Handler, Jan Ketil Arnulf, Rajeev Sharma
Identifier
DOI: 10.25300/misq/2025/18946
Abstract
Content validation is an essential aspect of the scale development process that ensures that measurement instruments capture their intended constructs. However, researchers rarely undertake this core step in behavioral research because it requires costly data collection and specialized expertise. We present RATER (Replicable Approach to Expert Ratings), a free web-based system (www.contval.org) that can help the broader research community (scientists, reviewers, students) gain quick and reliable insights into the content validity of measurement instruments. Guided by psychometric measurement theory, RATER evaluates whether a scale’s items correspond to their intended construct, remain distinct from other constructs, and adequately represent all aspects of the construct’s content domain. The system employs two unique artificial intelligence models, RATER-C and RATER-D, which leverage psychometric scales from 2,443 journal articles spanning eight disciplines and two state-of-the-art large language model architectures (i.e., BERT and GPT). A set of six complementary studies confirms the RATER system’s accuracy, reliability, and usefulness. We find RATER can augment the scale development and validation process, increasing the validity of findings in behavioral research.
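The abstract does not specify how RATER's models score item-construct correspondence. As a purely illustrative sketch (not the authors' RATER-C or RATER-D models), one common way to approximate such a check is to embed item text and construct definitions with a pretrained BERT-style sentence encoder and compare them by cosine similarity; the encoder name, example items, and construct definitions below are hypothetical.

```python
# Illustrative sketch only: NOT the RATER-C / RATER-D models described in the paper.
# Assumes the sentence-transformers library; the model name, items, and construct
# definitions are hypothetical examples for demonstration.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # generic BERT-style sentence encoder

# Hypothetical scale items and candidate construct definitions.
items = [
    "I find this system easy to use.",
    "Learning to operate this system is simple for me.",
]
constructs = {
    "Perceived ease of use": "The degree to which a person believes that using the system is free of effort.",
    "Perceived usefulness": "The degree to which a person believes that using the system improves job performance.",
}

item_emb = model.encode(items, convert_to_tensor=True)
construct_emb = model.encode(list(constructs.values()), convert_to_tensor=True)

# Cosine similarity between each item and each construct definition: high similarity
# to the intended construct and low similarity to others would suggest correspondence
# and distinctiveness, two of the properties the abstract says RATER evaluates.
scores = util.cos_sim(item_emb, construct_emb)
for i, item in enumerate(items):
    for j, name in enumerate(constructs):
        print(f"{item!r} vs {name}: {scores[i][j]:.3f}")
```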