Modal verb
Computer science
Segmentation
Artificial intelligence
Magnetic resonance imaging
Radiology
Medicine
Computer vision
Materials science
Polymer chemistry
Authors
Cheng Chen, Min Deng, Yuan Zhong, Jinyue Cai, Karen Kar Wun Chan, Qi Dou, Kelvin Kam Lung Chong, Pheng-Ann Heng, Chiu-Wing Winnie Chu
Identifiers
DOI: 10.1109/jbhi.2025.3545138
Abstract
Thyroid-associated orbitopathy (TAO) is a prevalent inflammatory autoimmune disorder that leads to orbital disfigurement and visual disability. Automatic comprehensive segmentation tailored for quantitative multi-modal MRI assessment of TAO holds enormous promise but is still lacking. In this paper, we propose a novel method, named cross-modal attentive self-training (CMAST), for multi-organ segmentation in TAO using partially labeled and unaligned multi-modal MRI data. Our method first introduces a dedicated cross-modal pseudo-label self-training scheme, which leverages self-training to refine the initial pseudo labels generated by cross-modal registration, so as to complete the label sets for comprehensive segmentation. With the obtained pseudo labels, we further devise a learnable attentive fusion module that aggregates multi-modal knowledge based on learned cross-modal feature attention, relaxing the requirement of pixel-wise alignment across modalities. A prototypical contrastive learning loss is further incorporated to facilitate cross-modal feature alignment. We evaluate our method on a large clinical TAO cohort with 100 cases of multi-modal orbital MRI. The experimental results demonstrate the promising performance of our method in achieving comprehensive segmentation of TAO-affected organs on both T1 and T1c modalities, outperforming previous methods by a large margin. Code will be released upon acceptance.
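To make the two core ingredients of the abstract concrete, the following is a minimal numpy sketch of (a) attention-gated fusion of features from two MRI modalities and (b) a prototypical contrastive loss that pulls each feature toward its class prototype. This is an illustrative simplification, not the authors' CMAST implementation: the learned attention module is replaced by a similarity-based sigmoid gate, and the function names (`attentive_fusion`, `prototypical_contrastive_loss`) and the temperature `tau` are assumptions for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def attentive_fusion(feat_t1, feat_t1c):
    # feat_*: (N, D) per-pixel features from the T1 and T1c modalities.
    # A learned cross-modal attention is approximated here by a
    # similarity-driven sigmoid gate; unaligned pixels (low similarity)
    # lean toward one modality instead of averaging blindly.
    sim = np.sum(feat_t1 * feat_t1c, axis=1, keepdims=True)   # (N, 1)
    alpha = 1.0 / (1.0 + np.exp(-sim))                        # gate in (0, 1)
    return alpha * feat_t1 + (1.0 - alpha) * feat_t1c

def prototypical_contrastive_loss(features, labels, tau=0.1):
    # Class prototypes are mean feature vectors; each feature is pulled
    # toward its own prototype and pushed from the others via a
    # cross-entropy over cosine similarities.
    classes = np.unique(labels)
    protos = np.stack([features[labels == c].mean(axis=0) for c in classes])
    f = features / np.linalg.norm(features, axis=1, keepdims=True)
    p = protos / np.linalg.norm(protos, axis=1, keepdims=True)
    logits = f @ p.T / tau                                    # (N, C)
    logits -= logits.max(axis=1, keepdims=True)               # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    idx = np.searchsorted(classes, labels)                    # label -> column
    return -log_prob[np.arange(len(labels)), idx].mean()

# Toy example: 8 pixels, 16-dim features, 4 organ classes.
feat_t1 = rng.normal(size=(8, 16))
feat_t1c = rng.normal(size=(8, 16))
fused = attentive_fusion(feat_t1, feat_t1c)
labels = np.array([0, 0, 1, 1, 2, 2, 3, 3])
loss = prototypical_contrastive_loss(fused, labels)
print(fused.shape, float(loss))
```

In a training loop, this loss would be added to the segmentation objective so that features of the same organ cluster together across modalities even without pixel-wise alignment.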