Image segmentation
Computer vision
Computer science
Artificial intelligence
Medical imaging
Segmentation
Image (mathematics)
Visualization
Authors
Qingjie Zeng,Huan Luo,Zilin Lu,Yutong Xie,Zhiyong Wang,Yanning Zhang,Yong Xia
Identifier
DOI: 10.1109/tmi.2025.3601359
Abstract
Pre-trained vision-language models (VLMs) and language models (LMs) have recently garnered significant attention for their remarkable ability to represent textual concepts, opening up new avenues in vision tasks. In medical image segmentation, efforts are being made to integrate text and image data using VLMs and LMs. However, current text-enhanced approaches face several challenges. First, using separately pre-trained vision and text models to encode image and text data can result in semantic shifts. Second, while VLMs pre-trained on paired image-text data can establish correspondence between visual and textual features, this alignment often deteriorates during segmentation tasks because the text and vision components drift apart in ongoing learning. In this paper, we propose TeViA, a novel approach that integrates seamlessly with various vision and text models, irrespective of their pre-training relationships. This integration is achieved through a segmentation-specific text-to-vision alignment design that ensures both information gain and semantic consistency. Specifically, for each training sample, a foreground visual representation is extracted from the segmentation head and used to supervise projection layers, adjusting the textual features so that they better serve the segmentation task. Additionally, a historic visual prototype is created by aggregating target semantics across all training data and is updated in a momentum-based manner. This prototype enhances the visual representation of each data instance by establishing feature-level connections, which in turn refines the textual features. The superiority of TeViA is validated on five public datasets, where it exhibits Dice improvements of over 6% compared with vision-only methods. Code is available at: https://github.com/jgfiuuuu/TeViA.
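To make the abstract's two mechanisms concrete, below is a minimal PyTorch sketch of a momentum-updated visual prototype and a projection layer supervised by foreground features from the segmentation head. The class and method names, the MLP projection shape, the residual prototype mixing, and the cosine alignment loss are all illustrative assumptions, not the authors' released implementation; consult the GitHub repository linked above for the actual code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TextToVisionAlignment(nn.Module):
    """Hypothetical sketch of the alignment described in the abstract:
    projection layers map text features toward foreground visual
    features, while a historic visual prototype is maintained with a
    momentum update. Names and loss choices are assumptions."""

    def __init__(self, text_dim: int, vis_dim: int, momentum: float = 0.99):
        super().__init__()
        # Projection layers that adjust the textual features (assumed MLP shape).
        self.proj = nn.Sequential(
            nn.Linear(text_dim, vis_dim),
            nn.ReLU(inplace=True),
            nn.Linear(vis_dim, vis_dim),
        )
        self.momentum = momentum
        # Historic visual prototype aggregating target semantics across
        # training data; a buffer, not a learnable parameter.
        self.register_buffer("prototype", torch.zeros(vis_dim))

    @torch.no_grad()
    def update_prototype(self, fg_feat: torch.Tensor) -> None:
        # Momentum update with the batch-averaged foreground representation.
        batch_proto = fg_feat.mean(dim=0)
        self.prototype.mul_(self.momentum).add_(batch_proto, alpha=1 - self.momentum)

    def forward(self, text_feat: torch.Tensor, fg_feat: torch.Tensor):
        # fg_feat: (B, vis_dim) foreground representation taken from the
        # segmentation head; text_feat: (B, text_dim) from a text encoder.
        self.update_prototype(fg_feat)
        # Enhance each sample's visual feature with the historic prototype
        # (simple residual mixing here; the paper's exact rule may differ).
        enhanced = F.normalize(fg_feat + self.prototype, dim=-1)
        projected = F.normalize(self.proj(text_feat), dim=-1)
        # Cosine alignment loss supervising the projection layers so the
        # textual features better serve the segmentation task.
        align_loss = (1.0 - (projected * enhanced.detach()).sum(dim=-1)).mean()
        return projected, align_loss
```

In this sketch the visual target is detached so only the projection layers receive the alignment gradient, matching the abstract's description of the foreground representation *supervising* the textual branch; whether the authors also back-propagate into the vision path is not stated in the abstract.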