Computer science
Artificial intelligence
Convolutional neural network
Pixel
Data set
Image quality
Pattern recognition (psychology)
Image segmentation
Segmentation
Image processing
Computer vision
Image (mathematics)
Authors
Maitham D Naeemi, Johnny Ren, Nathan Hollcroft, Adam Alessio, Sohini Roychowdhury
Identifier
DOI: 10.1109/bigdata.2016.7841003
Abstract
With the increasing applications of Big Data analytics in medical image processing systems, there has been a growing need for quantitative medical image quality assessment techniques. Specifically for computed tomography (CT) images, quantitative image assessment can allow for benchmarking image processing methods and optimization of image acquisition parameters. In this work, large volumes of CT images from phantoms and patients are analyzed using 3 data models that vary in their implementation time complexities. The goal here is to identify the optimal method that scales across data set variabilities for predictive modeling of CT image quality (CTIQ). The first two models rely on spatial segmentation of regions-of-interest (ROIs) and estimate CTIQ in terms of segmented pixel variabilities. The third model, a convolutional neural network (CNN), relies on error back-propagation from the training set of images to learn the regions indicative of CTIQ. We observe that for a 70/30 data split, the average multi-class classification accuracies for CTIQ prediction using the 3 data models range from 73.6-100% and 50-100% for the phantom and patient CT images, respectively. Using the variance of pixels within the segmented ROIs as a CTIQ classification parameter, the spatial segmentation data models are found to be more generalizable than the CNN model. However, the CNN model is found to be more suitable for CT image texture classification in the absence of structural variabilities. Our analysis demonstrates that spatial ROI segmentation data models are consistent CTIQ estimators, while the CNN models are consistent identifiers of structural similarities for CT image data sets.
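To make the ROI-variance idea in the abstract concrete, the following Python sketch shows one way such a pipeline could look: segment a crude region of interest, use the variance of its pixels as a CTIQ feature, and train a multi-class classifier on a 70/30 split. This is not the authors' implementation; the threshold-based ROI segmentation, the random-forest classifier, and the synthetic stand-in data are all illustrative assumptions.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

def roi_variance_feature(image, threshold=0.5):
    """Variance of pixels inside a crude intensity-threshold ROI.
    The threshold rule is a stand-in for the paper's ROI segmentation."""
    roi_mask = image > threshold * image.max()
    roi_pixels = image[roi_mask]
    return np.var(roi_pixels) if roi_pixels.size else 0.0

# Synthetic stand-in data: 200 "CT slices" whose noise level encodes
# one of 3 hypothetical quality classes (assumed, not from the paper).
rng = np.random.default_rng(0)
labels = rng.integers(0, 3, size=200)
images = [rng.normal(loc=100, scale=5 * (lbl + 1), size=(64, 64)) for lbl in labels]

features = np.array([[roi_variance_feature(img)] for img in images])

# 70/30 train/test split, matching the split reported in the abstract.
X_train, X_test, y_train, y_test = train_test_split(
    features, labels, test_size=0.3, random_state=0, stratify=labels)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
print("multi-class accuracy:", accuracy_score(y_test, clf.predict(X_test)))

In this toy setup the single ROI-variance feature separates the classes well because noise level is the only difference between them; the paper's point is that such segmentation-based variance features generalize across structural variability better than the CNN model, while the CNN is better at texture classification when structure is fixed.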