Medicine
BI-RADS
Kappa
Cohen's kappa
Breast imaging
Concordance
Breast density
Reproducibility
Categorization
Mammography
Nuclear medicine
Scale (ratio)
Radiology
Medical physics
Breast cancer
Statistics
Artificial intelligence
Mathematics
Cancer
Internal medicine
Computer science
Physics
Quantum mechanics
Geometry
Authors
Stefano Ciatto, Nehmat Houssami, A. Apruzzese, E. Bassetti, Beniamino Brancato, Francesca Carozzi, S. Catarzi, M.P. Lamberini, G. Marcelli, R. Pellizzoni, Bárbara Pesce, Gabriella Risso, Filippo Russo, A. Scorsolini
Source
Journal: The Breast [Elsevier BV]
Date: 2005-08-01
Volume/Issue: 14(4): 269-275
Citations: 211
Identifiers
DOI: 10.1016/j.breast.2004.12.004
Abstract
The inter- and intraobserver agreement (kappa statistic) in reporting according to Breast Imaging Reporting and Data System (BI-RADS®) breast density categories was tested in 12 dedicated breast radiologists reading a digitized set of 100 two-view mammograms. Average intraobserver agreement was substantial (kappa=0.71, range 0.32-0.88) on the four-grade scale (D1/D2/D3/D4) and almost perfect (kappa=0.81, range 0.62-1.00) on a two-grade scale (D1-2/D3-4). Average interobserver agreement was moderate (kappa=0.54, range 0.02-0.77) on the four-grade scale and substantial (kappa=0.71, range 0.31-0.88) on the two-grade scale. The greatest disagreement was found for the intermediate categories (kappa: D2=0.25, D3=0.28). Categorization of breast density according to BI-RADS is feasible, and consistency is good within readers and reasonable between readers. Interobserver inconsistency does occur, and checking that proper criteria are applied, through a proficiency test and appropriate training, might be useful. As inconsistency is probably due to erroneous perception of the classification criteria, standard sets of reference images should be made available for training.
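The agreement figures above are Cohen's kappa values, which correct raw percent agreement for the agreement expected by chance alone: kappa = (p_o - p_e) / (1 - p_e), where p_o is the observed proportion of agreement and p_e is the chance agreement implied by each reading's marginal category frequencies. The verbal grades used in the abstract follow the conventional Landis-Koch benchmarks (0.41-0.60 moderate, 0.61-0.80 substantial, 0.81-1.00 almost perfect). The sketch below illustrates the computation; the density labels and the two hypothetical readers are made up for illustration, not the study's data.

from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Cohen's kappa for two label sequences: (p_o - p_e) / (1 - p_e)."""
    assert len(labels_a) == len(labels_b) and labels_a
    n = len(labels_a)
    # p_o: observed proportion of cases on which the two readings agree.
    p_o = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # p_e: chance agreement from each reading's marginal category frequencies.
    count_a, count_b = Counter(labels_a), Counter(labels_b)
    p_e = sum(count_a[c] * count_b[c] for c in count_a.keys() & count_b.keys()) / n ** 2
    return (p_o - p_e) / (1 - p_e)

# Hypothetical BI-RADS density readings (D1-D4) for ten mammograms.
reader_1 = ["D1", "D2", "D2", "D3", "D3", "D3", "D4", "D2", "D1", "D4"]
reader_2 = ["D1", "D2", "D3", "D3", "D2", "D3", "D4", "D2", "D2", "D4"]
print(f"four-grade kappa: {cohens_kappa(reader_1, reader_2):.2f}")

# Collapsing D1/D2 and D3/D4 into the two-grade scale tends to raise kappa,
# mirroring the pattern reported in the abstract.
collapse = lambda d: "D1-2" if d in ("D1", "D2") else "D3-4"
print(f"two-grade kappa: {cohens_kappa([collapse(d) for d in reader_1], [collapse(d) for d in reader_2]):.2f}")

The same function applies to both study designs: interobserver agreement compares two different readers on the same films, while intraobserver agreement compares one reader's two readings of the same films at different times.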