Pooling
Pyramid (geometry)
Artificial intelligence
Computer science
Segmentation
Ratio
Computer vision
Cartography
Mathematics
Geometry
Geography
Authors
Chenglu Zong, W. Gao, Yu Fang, Fengjuan Gao, Zuxiang Wang
Abstract
The Cup to Disc Ratio (CDR) is a valuable metric for assessing the relative size of the Optic Cup (OC) and Optic Disc (OD), playing a crucial role in glaucoma diagnosis. Accurate segmentation of the OC and OD is therefore the first step toward reliable glaucoma detection. However, precise segmentation is challenging due to the presence of blood vessels that traverse the OC and OD regions, as well as the blurred boundaries and relatively small proportions of the OC and OD. To address these challenges, Atrous Spatial Pyramid CrossFormer-U-Net (ACC-U-Net) is proposed to achieve accurate OC and OD segmentation. CrossFormer is integrated into the encoder to enhance the integrity of the OC and OD segmentation boundaries by constructing global attention mechanisms in both the horizontal and vertical directions. Additionally, an Atrous Spatial Pyramid Pooling (ASPP) head is incorporated at the end of the decoder, allowing the model to capture multi-level feature information of the OC and OD through multiple parallel dilated convolutions, which improves the segmentation accuracy of the OC, the OD, and their irregular boundaries. Finally, Cross Entropy and Dice (CD) Loss is introduced to enhance the model's focus on the OC, solving the problem of the OC being easily overlooked by the model due to its small proportion. Ablation studies and comparative experiments are performed on three publicly available datasets. Compared to U-Net, the proposed ACC-U-Net shows significant improvements in segmentation accuracy, with mean Intersection over Union (mIoU), mean Dice, and mean Accuracy (mACC) increasing by 9.96%/2.75%/4.54%, 2.65%/2.94%/5.31%, and 5.89%/5.57%/4.21%, respectively. Moreover, the proposed model outperforms nine other models in segmentation accuracy on the three datasets. ACC-U-Net thus accurately segments the OC and OD, providing precise CDR values that could assist in the diagnosis of glaucoma.
Source code and pretrained models are available at: https://github.com/zong1019/segmentation-OCOD.git
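The abstract does not give the exact formulation of the CD Loss; a minimal NumPy sketch of one common way to combine cross entropy with a soft Dice term follows. The function name `cd_loss`, the weighting parameter `alpha`, and the three-class setup (background/OD/OC) are illustrative assumptions, not the paper's definitive implementation:

```python
import numpy as np

def softmax(logits, axis=1):
    # Numerically stable softmax over the class axis.
    e = np.exp(logits - logits.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cd_loss(logits, labels, num_classes=3, alpha=0.5, eps=1e-6):
    """Illustrative combined Cross Entropy + soft Dice loss.

    logits: (N, C, H, W) raw class scores.
    labels: (N, H, W) integer class indices in [0, num_classes).
    alpha:  assumed mixing weight between the two terms.
    """
    probs = softmax(logits, axis=1)
    # One-hot encode labels to (N, C, H, W).
    onehot = np.eye(num_classes)[labels].transpose(0, 3, 1, 2)
    # Pixel-wise cross entropy, averaged over all pixels.
    ce = -np.mean(np.sum(onehot * np.log(probs + eps), axis=1))
    # Soft Dice per class, averaged over classes.
    inter = np.sum(probs * onehot, axis=(0, 2, 3))
    denom = np.sum(probs + onehot, axis=(0, 2, 3))
    dice = np.mean((2.0 * inter + eps) / (denom + eps))
    # Low loss when predictions are both confident (CE) and overlap well (Dice).
    return alpha * ce + (1.0 - alpha) * (1.0 - dice)
```

Because the Dice term normalizes by class size, it keeps small structures such as the OC from being dominated by the background, which matches the motivation stated in the abstract.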