Image segmentation
Artificial intelligence
Image-to-image translation
Computer vision
Computer science
Segmentation
Scale-space segmentation
Pattern recognition
Medical imaging
Authors
Tianyang Zhang,Shaoming Zheng,Jun Cheng,Xi Jia,Joseph W. Bartlett,Xinxing Cheng,Zhaowen Qiu,Huazhu Fu,Jiang Liu,Aleš Leonardis,Jinming Duan
Identifier
DOI:10.1109/tpami.2024.3434435
Abstract
Data distribution gaps often pose significant challenges to the use of deep segmentation models. However, retraining models for each distribution is expensive and time-consuming. In clinical contexts, device-embedded algorithms and networks, which typically cannot be retrained and are inaccessible after manufacture, exacerbate this issue. Generative translation methods offer a way to mitigate the gap by transferring data across domains. However, existing methods focus mainly on intensity distributions while ignoring the gaps caused by structural disparities. In this paper, we formulate a new image-to-image translation task to reduce structural gaps. We propose a simple yet powerful Structure-Unbiased Adversarial (SUA) network that accounts for both intensity and structural differences between the training and test sets for segmentation. It consists of a spatial transformation block followed by an intensity distribution rendering module. The spatial transformation block reduces the structural gaps between the two images; the intensity distribution rendering module then renders the deformed structure into an image with the target intensity distribution. Experimental results show that the proposed SUA method can transfer both intensity distribution and structural content between multiple pairs of datasets and is superior to prior art in closing the gaps to improve segmentation.
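The two-stage pipeline the abstract describes (first reduce structural gaps via a spatial transformation, then render the warped structure with the target intensity distribution) can be illustrated with classical stand-ins. The sketch below is NOT the paper's SUA network: it substitutes a fixed displacement-field warp for the learned spatial transformation block and histogram matching for the learned intensity rendering module, purely to show how the two stages compose.

```python
import numpy as np

def spatial_transform(image, flow):
    """Warp `image` by a dense displacement field `flow` of shape (H, W, 2)
    using nearest-neighbour sampling. A non-learned stand-in for the
    spatial transformation block that reduces structural gaps."""
    h, w = image.shape
    ys, xs = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    src_y = np.clip(np.round(ys + flow[..., 0]).astype(int), 0, h - 1)
    src_x = np.clip(np.round(xs + flow[..., 1]).astype(int), 0, w - 1)
    return image[src_y, src_x]

def render_intensity(source, target):
    """Map the intensity distribution of `source` onto that of `target`
    via histogram matching, a classical stand-in for the intensity
    distribution rendering module."""
    src_vals, src_idx, src_cnt = np.unique(
        source.ravel(), return_inverse=True, return_counts=True)
    tgt_vals, tgt_cnt = np.unique(target.ravel(), return_counts=True)
    src_cdf = np.cumsum(src_cnt) / source.size   # CDF of source intensities
    tgt_cdf = np.cumsum(tgt_cnt) / target.size   # CDF of target intensities
    # For each source intensity, find the target intensity at the same quantile.
    matched = np.interp(src_cdf, tgt_cdf, tgt_vals)
    return matched[src_idx].reshape(source.shape)

# Toy example: deform a structure, then re-render its intensities.
rng = np.random.default_rng(0)
img = rng.integers(0, 256, (32, 32)).astype(float)   # "source domain" image
tgt = rng.normal(128.0, 20.0, (32, 32))              # "target domain" image
flow = np.full((32, 32, 2), 2.0)                     # uniform 2-pixel shift
out = render_intensity(spatial_transform(img, flow), tgt)
```

After both stages, `out` carries the (shifted) structure of the source image while its intensity values are drawn from the target image's range, mirroring how SUA first aligns structure and only then matches intensity.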