Content validity
Inter-rater reliability
Kappa
Cohen's kappa
Psychology
Content analysis
Statistics
Content (measure theory)
Clinical psychology
Psychometrics
Mathematics
Rating scale
Developmental psychology
Social science
Mathematical analysis
Sociology
Geometry
Authors
Christine A. Wynd,Bruce Schmidt,Michelle Atkins Schaefer
Identifiers
DOI:10.1177/0193945903252998
Abstract
Instrument content validity is often established through qualitative expert reviews, yet quantitative analysis of reviewer agreement is also advocated in the literature. Two quantitative approaches to content validity estimation were compared and contrasted using a newly developed instrument called the Osteoporosis Risk Assessment Tool (ORAT). Data obtained from a panel of eight expert judges were analyzed. A Content Validity Index (CVI) initially determined that only one item lacked interrater proportion agreement about its relevance to the instrument as a whole (CVI = 0.57). Concern that higher proportion agreement ratings might be due to random chance stimulated further analysis using a multirater kappa coefficient of agreement. An additional seven items had low kappas, ranging from 0.29 to 0.48 and indicating poor agreement among the experts. The findings supported the elimination or revision of eight items. Pros and cons of using both proportion agreement and kappa coefficient analysis are examined.
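The two statistics contrasted in the abstract can be sketched as follows. This is a minimal illustration, not the authors' code: item-level CVI is the proportion of judges rating an item 3 or 4 on the common 4-point relevance scale, while Fleiss' multirater kappa corrects observed agreement for the agreement expected by chance. The rating data below are invented for illustration and are not the ORAT panel's ratings.

```python
# Sketch (illustrative data, not the ORAT panel's): item-level
# Content Validity Index (CVI) and Fleiss' multirater kappa.
from collections import Counter

def item_cvi(ratings):
    """Proportion of judges rating the item 3 or 4 (relevant) on a 4-point scale."""
    return sum(r >= 3 for r in ratings) / len(ratings)

def fleiss_kappa(items, categories=(1, 2, 3, 4)):
    """Fleiss' kappa over per-item rating lists (equal rater count per item)."""
    n = len(items[0])                  # raters per item
    N = len(items)                     # number of items
    counts = [Counter(item) for item in items]   # n_ij per item
    # per-item observed agreement P_i = (sum_j n_ij^2 - n) / (n(n-1))
    P = [(sum(c[j] ** 2 for j in categories) - n) / (n * (n - 1)) for c in counts]
    P_bar = sum(P) / N
    # chance agreement from marginal category proportions p_j
    p = [sum(c[j] for c in counts) / (N * n) for j in categories]
    P_e = sum(x ** 2 for x in p)
    return (P_bar - P_e) / (1 - P_e)

ratings = [
    [4, 4, 4, 3, 4, 4, 3, 4],   # eight judges, strong agreement on relevance
    [4, 1, 3, 2, 4, 1, 3, 2],   # split panel
]
print([round(item_cvi(r), 2) for r in ratings])   # [1.0, 0.5]
print(round(fleiss_kappa(ratings), 2))            # 0.02
```

The second item shows the abstract's central point: a middling CVI can coexist with a near-zero kappa once chance agreement is subtracted, which is why the authors recommend examining both.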