Image Segmentation
Computer Science
Artificial Intelligence
Computer Vision
Segmentation
Image Processing
Medical Imaging
Scale (Ratio)
Image (Mathematics)
Pattern Recognition (Psychology)
Quantum Mechanics
Physics
Authors
Dandan Shan, Zihan Li, Yunxiang Li, Qingde Li, Jie Tian, Qingqi Hong
Identifier
DOI:10.1109/tip.2025.3571672
Abstract
Accurate segmentation of lesions plays a critical role in medical image analysis and diagnosis. Traditional segmentation approaches that rely solely on visual features often struggle with the inherent uncertainty in lesion distribution and size. To address these issues, we propose STPNet, a Scale-aware Text Prompt Network that leverages vision-language modeling to enhance medical image segmentation. Our approach utilizes multi-scale textual descriptions to guide lesion localization and employs retrieval-segmentation joint learning to bridge the semantic gap between visual and linguistic modalities. Crucially, STPNet retrieves relevant textual information from a specialized medical text repository during training, eliminating the need for text input during inference while retaining the benefits of cross-modal learning. We evaluate STPNet on three datasets: COVID-Xray, COVID-CT, and Kvasir-SEG. Experimental results show that our vision-language approach outperforms state-of-the-art segmentation methods, demonstrating the effectiveness of incorporating textual semantic knowledge into medical image analysis. The code is publicly available at https://github.com/HUANGLIZI/STPNet.
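The abstract's core idea of retrieving textual guidance from a repository (rather than requiring text at inference) can be illustrated with a minimal sketch. The snippet below is NOT the authors' implementation: it is a toy NumPy illustration, under the assumption that both images and repository texts have been embedded into a shared feature space, of how the most similar text embeddings could be retrieved by cosine similarity and fused into an image feature. All names (`retrieve_text_prompts`, `fuse`, the random "text bank") are hypothetical.

```python
import numpy as np

def l2_normalize(x, axis=-1, eps=1e-8):
    """Unit-normalize vectors so dot products become cosine similarities."""
    return x / (np.linalg.norm(x, axis=axis, keepdims=True) + eps)

def retrieve_text_prompts(image_feat, text_bank, k=2):
    """Retrieve the top-k text embeddings most similar to the image feature.

    Sketches the retrieval step: no text input is needed at test time
    because prompts come from a pre-built repository (here, a toy matrix).
    """
    sims = l2_normalize(text_bank) @ l2_normalize(image_feat)  # cosine scores
    top_k = np.argsort(sims)[::-1][:k]                         # best k indices
    return text_bank[top_k], sims[top_k]

def fuse(image_feat, retrieved, sims):
    """Similarity-weighted (softmax) fusion of retrieved text prompts
    into the image feature, residual-style (a hypothetical fusion rule)."""
    w = np.exp(sims) / np.exp(sims).sum()
    text_ctx = (w[:, None] * retrieved).sum(axis=0)
    return image_feat + text_ctx

rng = np.random.default_rng(0)
text_bank = rng.normal(size=(16, 64))  # toy repository: 16 text embeddings
image_feat = rng.normal(size=64)       # toy image feature from an encoder
prompts, sims = retrieve_text_prompts(image_feat, text_bank, k=2)
fused = fuse(image_feat, prompts, sims)
```

In the actual STPNet, retrieval is learned jointly with segmentation and operates at multiple scales; this sketch only conveys the retrieval-then-fuse pattern that lets the text branch be dropped at inference.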