Keywords
Computer science, artificial intelligence, computer vision, image segmentation, feature fusion, breast tumor, breast cancer, ultrasound, pattern recognition, radiology, medicine
Authors
Guangzhe Zhao,Xingguo Zhu,Xueping Wang,Feihu Yan,Maozu Guo
Identifier
DOI:10.1109/jbhi.2024.3514134
Abstract
Accurate breast tumor segmentation in ultrasound images is a crucial step in medical diagnosis and in locating the tumor region. However, segmentation faces numerous challenges due to the complexity of ultrasound images, similar intensity distributions, variable tumor morphology, and speckle noise. To address these challenges and achieve precise segmentation of breast tumors in complex ultrasound images, we propose a Synchronous Frequency-perception Fusion Network (Syn-Net). First, we design a synchronous dual-branch encoder that extracts local and global feature information simultaneously from complex ultrasound images. Second, we introduce a novel Frequency-perception Cross-Feature Fusion (FrCFusion) Block, which uses the Discrete Cosine Transform (DCT) to learn all-frequency features and effectively fuse local and global features while mitigating issues arising from similar intensity distributions. In addition, we develop a Full-Scale Deep Supervision method that not only corrects the influence of speckle noise on segmentation but also effectively guides decoder features toward the ground truth. We conduct extensive experiments on three publicly available ultrasound breast tumor datasets. Comparison with 14 state-of-the-art deep learning segmentation methods shows that our approach is more robust to differences across ultrasound images, variations in tumor size and shape, speckle noise, and similar intensity distributions between tumors and surrounding tissue. On the BUSI and Dataset B datasets, our method achieves better Dice scores than state-of-the-art methods, indicating superior performance in ultrasound breast tumor segmentation.
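The abstract does not specify the internals of the FrCFusion block, but the core idea it states — transform local and global feature maps with the DCT, combine their all-frequency coefficients, and invert the transform — can be illustrated with a minimal pure-Python sketch. Everything here is an assumption for illustration: the function names (`dct2d`, `fuse`) and the fixed blending weight `alpha` are placeholders (in the paper the fusion weights would be learned), and real implementations would operate on multi-channel tensors with a library DCT rather than on small Python lists.

```python
import math

def dct1d(x):
    # DCT-II: X[k] = sum_n x[n] * cos(pi*k*(2n+1) / (2N))
    N = len(x)
    return [sum(x[n] * math.cos(math.pi * k * (2 * n + 1) / (2 * N))
                for n in range(N)) for k in range(N)]

def idct1d(X):
    # Inverse of the DCT-II above (unnormalized DCT-III with scaling)
    N = len(X)
    return [X[0] / N + (2 / N) * sum(X[k] * math.cos(math.pi * k * (2 * n + 1) / (2 * N))
                                     for k in range(1, N)) for n in range(N)]

def dct2d(mat):
    # Separable 2D DCT: transform rows, then columns
    rows = [dct1d(r) for r in mat]
    cols = list(map(list, zip(*rows)))
    return list(map(list, zip(*[dct1d(c) for c in cols])))

def idct2d(mat):
    # Inverse 2D DCT: invert columns, then rows
    cols = list(map(list, zip(*mat)))
    rows = list(map(list, zip(*[idct1d(c) for c in cols])))
    return [idct1d(r) for r in rows]

def fuse(local_feat, global_feat, alpha=0.5):
    """Blend two feature maps in the DCT domain across all frequency
    coefficients, then return to the spatial domain. `alpha` stands in
    for the learned per-frequency weighting a real FrCFusion block
    would use (hypothetical simplification)."""
    L, G = dct2d(local_feat), dct2d(global_feat)
    fused = [[alpha * l + (1 - alpha) * g for l, g in zip(lr, gr)]
             for lr, gr in zip(L, G)]
    return idct2d(fused)
```

Because the DCT is linear, a fixed `alpha` makes this reduce to spatial averaging; the point of a learned frequency-domain weighting is that low- and high-frequency content (coarse intensity vs. edges and texture) can be blended with different weights, which is what makes such a block useful when tumor and tissue intensities are similar.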