Modality
Segmentation
Computer science
Artificial intelligence
Fusion
Computer vision
Dual (grammatical number)
Image segmentation
Magnetic resonance imaging
Mechanism (biology)
Pattern recognition (psychology)
Medicine
Physics
Radiology
Materials science
Literature
Philosophy
Art
Quantum mechanics
Polymer chemistry
Linguistics
Authors
Hongyan Tang, Zhenxing Huang, Wenbo Li, Yaping Wu, Jianmin Yuan, Yang Yang, Yan Zhang, Jing Qin, Hairong Zheng, Dong Liang, Meiyun Wang, Zixiang Chen
Identifier
DOI:10.1109/jbhi.2024.3516012
Abstract
The precise segmentation of different brain regions and tissues is usually a prerequisite for the detection and diagnosis of various neurological disorders in neuroscience. Given the abundance of complementary functional and structural dual-modality information in positron emission tomography/magnetic resonance (PET/MR) images, we propose a novel 3D whole-brain segmentation network that introduces a cross-fusion mechanism to delineate 45 brain regions. Specifically, the network processes PET and MR images simultaneously, employing UX-Net and a cross-fusion block for feature extraction and fusion in the encoder. We evaluate our method against other deep learning-based methods, including 3D UX-Net, SwinUNETR, UNETR, nnFormer, UNet3D, NestedUNet, ResUNet, and VNet. The experimental results demonstrate that the proposed method achieves better segmentation performance in terms of both visual and quantitative evaluation metrics, producing more precise segmentations in all three views while preserving fine details. In particular, the proposed method achieves superior quantitative results, with a Dice coefficient of 85.73% ± 0.01%, a Jaccard index of 76.68% ± 0.02%, a sensitivity of 85.00% ± 0.01%, a precision of 83.26% ± 0.03%, and a Hausdorff distance (HD) of 4.4885 ± 14.86%. Moreover, the distribution and correlation of the standardized uptake value (SUV) within the volumes of interest (VOIs) are also evaluated (PCC > 0.9), indicating consistency with the ground truth and the superiority of the proposed method. In future work, we will apply our whole-brain segmentation method in clinical practice to assist doctors in accurately diagnosing and treating brain diseases.
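As a point of reference for the overlap metrics reported above (Dice, Jaccard, sensitivity, precision), the following is a minimal NumPy sketch of one common way such scores are computed for a multi-region label map: each of the 45 regions is treated as a binary mask and the per-region scores are averaged. The function names, the region-averaging scheme, and the toy random inputs are assumptions for illustration only; they are not the authors' evaluation code, and the Hausdorff distance is omitted.

```python
import numpy as np

def binary_scores(pred, gt):
    """Dice, Jaccard, sensitivity and precision for one binary 3D mask pair."""
    pred = pred.astype(bool)
    gt = gt.astype(bool)
    tp = np.logical_and(pred, gt).sum()
    fp = np.logical_and(pred, ~gt).sum()
    fn = np.logical_and(~pred, gt).sum()
    eps = 1e-8  # guards against empty regions
    dice = 2 * tp / (2 * tp + fp + fn + eps)
    jaccard = tp / (tp + fp + fn + eps)
    sensitivity = tp / (tp + fn + eps)  # recall over the ground-truth region
    precision = tp / (tp + fp + eps)
    return dice, jaccard, sensitivity, precision

def region_averaged_scores(pred_labels, gt_labels, num_regions=45):
    """Average the four scores over all labelled regions (label 0 = background)."""
    scores = [binary_scores(pred_labels == r, gt_labels == r)
              for r in range(1, num_regions + 1)]
    return np.mean(scores, axis=0)  # (dice, jaccard, sensitivity, precision)

# Toy usage with random label volumes; real inputs would be the predicted and
# reference whole-brain segmentations with labels 0..45.
rng = np.random.default_rng(0)
pred = rng.integers(0, 46, size=(32, 32, 32))
gt = rng.integers(0, 46, size=(32, 32, 32))
print(region_averaged_scores(pred, gt))
```

Whether the paper averages over regions, over subjects, or both, and how it handles empty regions, is not stated in the abstract; this sketch only fixes the definitions of the four overlap metrics themselves.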