Authors
John R. Hayes, Jill A. Hatch
Identifier
DOI:10.1177/0741088399016003004
Abstract
In many literacy studies, it is important to establish the reliability of independent observers' judgments. Reliability most commonly is measured either by the percentage of agreement or the correlation between the observers' judgments. This article argues that the percentage of agreement measure is more difficult to interpret than are correlation measures because of the following: (a) the effects of chance agreement are not accounted for automatically by the percentage of agreement measure; and (b) rates of chance agreement are strongly influenced by the variability of the data, by “ceiling” and “floor” effects, and by the scoring of near agreement as perfect agreement. For these reasons, the authors recommend that the field of literacy research adopt correlation as the standard method for estimating the reliability of observers' judgments.
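The abstract's core claim can be illustrated with a small numerical sketch. The data below are hypothetical (not from the article): two raters score ten essays on a 1-5 scale with a strong ceiling effect, so most scores are 5. Percentage agreement comes out high purely because both raters pile up at the ceiling, while correlation shows their judgments on the items that actually vary are unrelated.

```python
# Hypothetical example of the abstract's argument: under a ceiling
# effect, percentage agreement is inflated by chance agreement,
# whereas correlation is not.

def percent_agreement(a, b):
    """Fraction of items on which the two raters gave the same score."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

def pearson_r(a, b):
    """Pearson correlation between two equal-length score lists."""
    n = len(a)
    mean_a, mean_b = sum(a) / n, sum(b) / n
    cov = sum((x - mean_a) * (y - mean_b) for x, y in zip(a, b))
    var_a = sum((x - mean_a) ** 2 for x in a)
    var_b = sum((y - mean_b) ** 2 for y in b)
    return cov / (var_a * var_b) ** 0.5

# Invented 1-5 ratings with a ceiling effect: nearly every essay gets
# a 5, and the raters disagree exactly on the items scored below ceiling.
rater_a = [5, 5, 5, 5, 5, 5, 5, 5, 3, 5]
rater_b = [5, 5, 5, 5, 5, 5, 5, 5, 5, 3]

print(percent_agreement(rater_a, rater_b))      # 0.8 -- looks reliable
print(round(pearson_r(rater_a, rater_b), 3))    # -0.111 -- judgments unrelated
```

The 80% agreement here is almost entirely chance: with 90% of each rater's scores at the ceiling, two independent raters would agree on roughly 81% of items even if they judged at random, which is the interpretability problem the authors raise.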