Keywords: psychology; psychometrics; personnel selection; test validity; convergent validity; discriminant validity; reliability; internal consistency; applied psychology; statistics
Authors
Joshua P. Liff, Nathan J. Mondragon, Cari Gardner, Christopher J. Hartwell, Adam L. Bradshaw
Abstract
Interviews are one of the most widely used selection methods, but their reliability and validity can vary substantially. Further, using human evaluators to rate an interview can be expensive and time-consuming. Interview scoring models have been proposed as a mechanism for reliably, accurately, and efficiently scoring video-based interviews. Yet, there is a lack of clarity and consensus around their psychometric characteristics, primarily driven by a dearth of published empirical research. The goal of this study was to examine the psychometric properties of automated video interview competency assessments (AVI-CAs), which were designed to be highly generalizable (i.e., to apply across job roles and organizations). The AVI-CAs developed demonstrated high levels of convergent validity (average r of .66), moderate discriminant relationships (average r of .58), good test-retest reliability (average r of .72), and minimal subgroup differences (Cohen's ds ≥ -.14). Further, criterion-related validity (uncorrected sample-weighted r̄ = .24) was demonstrated by applying these AVI-CAs to five organizational samples. Strengths, weaknesses, and future directions for building interview scoring models are also discussed. (PsycInfo Database Record (c) 2024 APA, all rights reserved).
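The abstract's uncorrected sample-weighted r̄ is simply the mean of the per-sample validity coefficients weighted by each sample's size. A minimal sketch of that computation, using entirely hypothetical correlations and sample sizes for the five organizational samples (the paper does not report these individual values):

```python
def weighted_mean_r(rs, ns):
    """Sample-size-weighted mean correlation (uncorrected).

    rs: per-sample criterion-related validity coefficients
    ns: corresponding sample sizes
    """
    total_n = sum(ns)
    return sum(r * n for r, n in zip(rs, ns)) / total_n

# Hypothetical values for illustration only (not from the study)
rs = [0.20, 0.28, 0.22, 0.25, 0.26]
ns = [150, 200, 120, 180, 160]

print(round(weighted_mean_r(rs, ns), 2))
```

Larger samples pull the weighted mean toward their observed coefficient, which is why meta-analytic summaries of validity report the sample-weighted rather than the simple average.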