Keywords
Pattern recognition, Deep learning, Feature, Scale, Computer vision, Artificial neural network, Context, Pooling, Pyramid, Image, Joint
Authors
Xin Yuan, Lingxiao Zhou, Shuyang Yu, Miao Li, Xiang Wang, Xiujuan Zheng
Identifier
DOI:10.1016/j.artmed.2021.102035
Abstract
Glaucoma is the leading cause of irreversible blindness. For glaucoma screening, the cup-to-disc ratio (CDR) is a significant indicator; its calculation relies on the segmentation of the optic disc (OD) and optic cup (OC) in color fundus images. This study proposes a residual multi-scale convolutional neural network with a context semantic extraction module to jointly segment the OD and OC. The proposed method uses a W-shaped backbone network with image-pyramid multi-scale inputs and side output layers that act as early classifiers to generate local prediction outputs. It also includes a context extraction module that extracts contextual semantic information at multiple receptive-field sizes and adaptively recalibrates channel-wise feature responses, effectively capturing global information and reducing the semantic gaps in the fusion of deep and shallow semantic features. We validated the proposed method on four datasets: DRISHTI-GS1, REFUGE, RIM-ONE r3, and a private dataset. The overlap errors are 0.0540, 0.0684, 0.0492, and 0.0511 in OC segmentation and 0.2332, 0.1777, 0.2372, and 0.2547 in OD segmentation, respectively. Experimental results indicate that the proposed method can estimate the CDR for large-scale glaucoma screening.
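The abstract states that CDR calculation relies on the OD and OC segmentations. A minimal sketch of how a CDR can be derived from two binary segmentation masks, assuming the common vertical-diameter convention (function names and this convention are illustrative assumptions, not taken from the paper):

```python
import numpy as np

def vertical_diameter(mask: np.ndarray) -> int:
    """Vertical extent (in pixels) of a binary mask."""
    rows = np.any(mask, axis=1)          # which image rows contain the structure
    if not rows.any():
        return 0
    idx = np.where(rows)[0]
    return int(idx[-1] - idx[0] + 1)

def vertical_cdr(cup_mask: np.ndarray, disc_mask: np.ndarray) -> float:
    """Vertical cup-to-disc ratio from binary OC and OD masks."""
    disc = vertical_diameter(disc_mask)
    if disc == 0:
        raise ValueError("empty optic disc mask")
    return vertical_diameter(cup_mask) / disc
```

In practice the masks would be the network's thresholded OC and OD predictions for one fundus image; the resulting ratio is then compared against a screening threshold.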