Trustworthiness
Medicine
Uncertainty quantification
Deep learning
Medical imaging
Artificial intelligence
Image (mathematics)
Expert elicitation
Confidence interval
Machine learning
Data science
Radiology
Statistics
Computer science
Mathematics
Internal medicine
Computer security
Authors
Shahriar Faghani,Mana Moassefi,Pouria Rouzrokh,Bardia Khosravi,Francis I. Baffour,Michael D. Ringler,Bradley J. Erickson
Source
Journal: Radiology
[Radiological Society of North America]
Date: 2023-08-01
Volume/Issue: 308 (2)
Citations: 33
Identifier
DOI:10.1148/radiol.222217
Abstract
In recent years, deep learning (DL) has shown impressive performance in radiologic image analysis. However, for a DL model to be useful in a real-world setting, its confidence in a prediction must also be known. Each DL model's output has an estimated probability, and these estimated probabilities are not always reliable. Uncertainty represents the trustworthiness (validity) of estimated probabilities. The higher the uncertainty, the lower the validity. Uncertainty quantification (UQ) methods determine the uncertainty level of each prediction. Predictions made without UQ methods are generally not trustworthy. By implementing UQ in medical DL models, users can be alerted when a model does not have enough information to make a confident decision. Consequently, a medical expert could reevaluate the uncertain cases, which would eventually lead to gaining more trust when using a model. This review focuses on recent trends using UQ methods in DL radiologic image analysis within a conceptual framework. Also discussed in this review are potential applications, challenges, and future directions of UQ in DL radiologic image analysis. © RSNA, 2023
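The abstract describes the core idea of UQ: score how trustworthy each prediction's estimated probabilities are, and flag high-uncertainty cases for expert reevaluation. A minimal sketch of one common approach (not the paper's specific method) is predictive entropy over multiple stochastic forward passes, as in Monte Carlo dropout; the threshold value below is a hypothetical illustration, not from the source.

```python
import math

def predictive_entropy(prob_samples):
    """Average the softmax outputs of several stochastic forward passes
    (e.g. Monte Carlo dropout) and return the entropy of the mean
    distribution -- a widely used uncertainty score."""
    n_classes = len(prob_samples[0])
    mean = [sum(s[c] for s in prob_samples) / len(prob_samples)
            for c in range(n_classes)]
    return -sum(p * math.log(p) for p in mean if p > 0)

def flag_for_review(prob_samples, threshold=0.5):
    """Flag a case for expert reevaluation when its uncertainty
    exceeds a (hypothetical) entropy threshold."""
    return predictive_entropy(prob_samples) > threshold

# Confident case: all passes agree on class 0 -> low entropy, not flagged.
confident = [[0.95, 0.05], [0.97, 0.03], [0.96, 0.04]]
# Uncertain case: passes disagree -> high entropy, flagged for review.
uncertain = [[0.90, 0.10], [0.20, 0.80], [0.55, 0.45]]

print(flag_for_review(confident))  # False
print(flag_for_review(uncertain))  # True
```

Higher entropy of the averaged distribution means the model's passes disagree or are individually unsure, which is exactly the "not enough information to make a confident decision" situation the abstract says should be routed back to a medical expert.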