Convolutional neural network
Feature (linguistics)
Artificial intelligence
Pattern recognition (psychology)
Dual radar
Entropy (arrow of time)
Cluster analysis
Computer science
Breast ultrasound
Artificial neural network
Breast imaging
Cross entropy
Contextual image classification
Mammography
Medicine
Breast cancer
Image (mathematics)
Internal medicine
Cancer
Philosophy
Linguistics
Physics
Quantum mechanics
Authors
Pengfei Xu, Jing Zhao, Mingxi Wan, Qing Song, Qiang Su, Diya Wang
Abstract
Background: Breast tumors are a serious threat to women's health. Ultrasound (US) is a common and economical method for diagnosing breast cancer. Breast Imaging Reporting and Data System (BI-RADS) category 4 has the highest false-positive rate, about 30%, among the five categories, and the classification task within category 4 is challenging and has not been fully studied.

Purpose: This work aimed to use convolutional neural networks (CNNs) to classify breast tumors from category 4 B-mode images, overcoming the dependence on operator skill and the influence of artifacts. Additionally, this work intends to take full advantage of the morphological and textural features in breast tumor US images to improve classification accuracy.

Methods: First, original US images obtained directly from the hospital were cropped and resized. Of the 1385 B-mode US BI-RADS category 4 images, biopsy identified 503 benign and 882 malignant tumor samples. Then, a K-means clustering algorithm and sliding-window entropy were applied to the US images. Because the original B-mode images, K-means clustering images, and entropy images represent different characteristic information about malignancy and benignity, they were fused as three channels to form a multi-feature fusion image dataset. The training, validation, and test sets contain 969, 277, and 139 images, respectively. With transfer learning, 11 CNN models, including DenseNet and ResNet, were investigated. Finally, models with better performance were selected by comparing accuracy, precision, recall, F1-score, and area under the curve (AUC). Normality of the data was assessed by the Shapiro-Wilk test. The DeLong test and independent t-test were used to evaluate the significance of differences in AUC and the other metrics, and the false discovery rate was used to ultimately evaluate the advantages of the CNN with the highest evaluation metrics.
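The three-channel fusion described above can be sketched with plain NumPy: one channel is the original B-mode image, one is a K-means quantization of pixel intensities, and one is the local Shannon entropy of a sliding window. The cluster count (`k=3`), window size (`win=7`), and histogram bin count are illustrative assumptions; the abstract does not specify the paper's actual parameters.

```python
import numpy as np

def kmeans_channel(img, k=3, iters=10, seed=0):
    """Minimal 1-D K-means on pixel intensities.

    Each pixel is replaced by its cluster centroid, approximating the
    K-means clustering channel described in the paper (k is an assumption).
    """
    rng = np.random.default_rng(seed)
    px = img.reshape(-1).astype(float)
    centers = rng.choice(px, size=k, replace=False)
    for _ in range(iters):
        labels = np.argmin(np.abs(px[:, None] - centers[None, :]), axis=1)
        for j in range(k):
            if np.any(labels == j):          # guard against empty clusters
                centers[j] = px[labels == j].mean()
    return centers[labels].reshape(img.shape)

def entropy_channel(img, win=7, bins=16):
    """Shannon entropy of a sliding window around each pixel (edge-padded)."""
    pad = win // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.zeros(img.shape, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            patch = padded[i:i + win, j:j + win]
            hist, _ = np.histogram(patch, bins=bins, range=(0, 256))
            p = hist / hist.sum()
            p = p[p > 0]
            out[i, j] = -(p * np.log2(p)).sum()
    return out

def fuse(img):
    """Stack B-mode, K-means, and entropy images as one 3-channel array."""
    def norm(x):
        x = x.astype(float)
        return (x - x.min()) / (x.max() - x.min() + 1e-8)
    return np.stack(
        [norm(img), norm(kmeans_channel(img)), norm(entropy_channel(img))],
        axis=-1,
    )
```

A fused array of shape `(H, W, 3)` can then be fed to a standard three-channel CNN exactly like an RGB image, which is what makes ImageNet-pretrained backbones directly applicable.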
In addition, anti-log compression was studied but showed no improvement in the CNNs' classification results.

Results: With multi-feature fusion images, DenseNet121 achieved the highest accuracy among the CNNs, 80.22 ± 1.45%, with precision of 77.97 ± 2.89% and AUC of 0.82 ± 0.01. Multi-feature fusion improved the accuracy of DenseNet121 by 1.87% over classification of the original B-mode images (p < 0.05).

Conclusion: CNNs with multi-feature fusion show good potential for reducing the false-positive rate within category 4. This work illustrates that CNNs and fusion images can reduce the false-positive rate for breast tumors within US BI-RADS category 4 and make the diagnosis of category 4 breast tumors more accurate and precise.
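The evaluation metrics named above (accuracy, precision, recall, F1-score, AUC) can be computed from binary labels and classifier scores with NumPy alone; AUC here uses the rank-based Mann-Whitney formulation, which equals the area under the ROC curve. The 0.5 decision threshold is an assumption for illustration, not a value from the paper.

```python
import numpy as np

def binary_metrics(y_true, scores, thresh=0.5):
    """Accuracy, precision, recall, F1, and AUC for a binary classifier.

    y_true: 0/1 labels; scores: predicted probabilities of the positive
    (e.g., malignant) class. Threshold of 0.5 is an illustrative choice.
    """
    y_true = np.asarray(y_true)
    scores = np.asarray(scores, dtype=float)
    pred = (scores >= thresh).astype(int)
    tp = np.sum((pred == 1) & (y_true == 1))
    fp = np.sum((pred == 1) & (y_true == 0))
    fn = np.sum((pred == 0) & (y_true == 1))
    tn = np.sum((pred == 0) & (y_true == 0))
    acc = (tp + tn) / len(y_true)
    prec = tp / (tp + fp) if tp + fp else 0.0
    rec = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
    # AUC = P(score of a positive > score of a negative), ties count half.
    pos = scores[y_true == 1]
    neg = scores[y_true == 0]
    auc = (np.sum(pos[:, None] > neg[None, :])
           + 0.5 * np.sum(pos[:, None] == neg[None, :])) / (len(pos) * len(neg))
    return {"accuracy": acc, "precision": prec, "recall": rec,
            "f1": f1, "auc": auc}
```

Significance testing between models (the DeLong test for AUC, the independent t-test for the other metrics, and false-discovery-rate correction) would be layered on top of per-fold outputs of a function like this; those tests are available in standard statistics libraries and are not re-implemented here.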