Authors
Jeffrey C. Valentine, Harris Cooper
Source
Journal: Psychological Methods [American Psychological Association]
Date: 2008-06-01
Volume/Issue: 13 (2): 130-149
Cited by: 172
Identifier
DOI: 10.1037/1082-989X.13.2.130
Abstract
Assessments of studies meant to evaluate the effectiveness of interventions, programs, and policies can serve an important role in the interpretation of research results. However, evidence suggests that available quality assessment tools have poor measurement characteristics and can lead to opposing conclusions when applied to the same body of studies. These tools tend to (a) be insufficiently operational, (b) rely on arbitrary post-hoc decision rules, and (c) result in a single number to represent a multidimensional construct. In response to these limitations, a multilevel and hierarchical instrument was developed in consultation with a wide range of methodological and statistical experts. The instrument focuses on the operational details of studies and results in a profile of scores instead of a single score to represent study quality. A pilot test suggested that satisfactory between-judge agreement can be obtained using well-trained raters working in naturalistic conditions. Limitations of the instrument are discussed, but these are inherent in making decisions about study quality given incomplete reporting and in the absence of strong, contextually based information about the effects of design flaws on study outcomes.