Keywords
Computer science, Segmentation, Artificial intelligence, Fuse (electrical), Upsampling, Pattern recognition (psychology), Feature (linguistics), Concatenation (mathematics), Convolutional neural network, Computer vision, Image (mathematics), Combinatorics, Philosophy, Engineering, Electrical engineering, Linguistics, Mathematics
Authors
Haoyang Zheng,Wei Zou,Nan Hu,Jiajun Wang
Identifier
DOI:10.1088/1361-6560/ad7f1b
Abstract
Objective. Joint segmentation of tumors in positron emission tomography-computed tomography (PET-CT) images is crucial for precise treatment planning. However, current segmentation methods often use addition or concatenation to fuse PET and CT images, which potentially overlooks the nuanced interplay between these modalities. These methods also often neglect multi-view information that helps locate and segment the target structure more accurately. This study aims to address these disadvantages and develop a deep learning-based algorithm for joint segmentation of tumors in PET-CT images. Approach. To address these limitations, we propose the Multi-view Information Enhancement and Multi-modal Feature Fusion Network (MIEMFF-Net) for joint tumor segmentation in three-dimensional PET-CT images. Our model incorporates a dynamic multi-modal fusion strategy to effectively exploit the metabolic and anatomical information from PET and CT images, and a multi-view information enhancement strategy to effectively recover the information lost during upsampling. A Multi-scale Spatial Perception Block is proposed to effectively extract information from different views and reduce redundant interference in the multi-view feature extraction process. Main results. The proposed MIEMFF-Net achieved a Dice score of 83.93%, a Precision of 81.49%, a Sensitivity of 87.89% and an IoU of 69.27% on the Soft Tissue Sarcomas dataset, and a Dice score of 76.83%, a Precision of 86.21%, a Sensitivity of 80.73% and an IoU of 65.15% on the AutoPET dataset. Significance. Experimental results demonstrate that MIEMFF-Net outperforms existing state-of-the-art models, suggesting potential applications of the proposed method in clinical practice.
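The abstract contrasts the common additive and concatenation fusion of PET and CT feature maps with a dynamic fusion strategy. The sketch below illustrates the shape behavior of these fusion styles on toy 3D feature maps using NumPy; the per-channel gate `alpha` in `gated_fusion` is a hypothetical stand-in for a learned dynamic weighting, not the actual mechanism of MIEMFF-Net.

```python
import numpy as np

def additive_fusion(pet_feat, ct_feat):
    # Element-wise addition: channel count unchanged.
    return pet_feat + ct_feat

def concat_fusion(pet_feat, ct_feat):
    # Channel-wise concatenation: channel count doubles.
    return np.concatenate([pet_feat, ct_feat], axis=0)

def gated_fusion(pet_feat, ct_feat, alpha):
    # Hypothetical dynamic fusion: a per-channel gate blends the two
    # modalities instead of weighting them uniformly.
    return alpha * pet_feat + (1.0 - alpha) * ct_feat

# Toy 3D feature maps: (channels, depth, height, width)
pet = np.random.rand(8, 4, 16, 16)
ct = np.random.rand(8, 4, 16, 16)
alpha = np.random.rand(8, 1, 1, 1)  # illustrative per-channel gate

print(additive_fusion(pet, ct).shape)  # (8, 4, 16, 16)
print(concat_fusion(pet, ct).shape)    # (16, 4, 16, 16)
print(gated_fusion(pet, ct, alpha).shape)
```

Concatenation preserves both modalities' features at the cost of doubled channel width, while addition and gated blending keep the channel count fixed; the gated form lets the relative contribution of PET and CT vary per channel.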