Computer science
Mode
Modality (human-computer interaction)
Artificial intelligence
Leverage (statistics)
Brain tumor
Deep learning
Grading (engineering)
Magnetic resonance imaging
Machine learning
Radiology
Medicine
Engineering
Social science
Civil engineering
Pathology
Sociology
Authors
Dunyuan Xu,Xi Wang,Jinyue Cai,Pheng‐Ann Heng
Source
Journal: Cornell University - arXiv
Date: 2024-01-01
Citations: 3
Identifier
DOI:10.48550/arxiv.2401.09029
Abstract
Brain tumors are among the most fatal cancers worldwide and are especially common in children and the elderly. Accurate identification of the type and grade of a tumor at an early stage plays an important role in choosing a precise treatment plan. Magnetic Resonance Imaging (MRI) protocols of different sequences provide clinicians with complementary information for identifying tumor regions. However, manual assessment is time-consuming and error-prone due to the large amount of data and the diversity of brain tumor types. Hence, there is an unmet need for automated MRI brain tumor diagnosis. We observe that the predictive capability of uni-modality models is limited, that their performance varies widely across modalities, and that commonly used modality-fusion methods can introduce noise, resulting in significant performance degradation. To overcome these challenges, we propose a novel cross-modality guidance-aided multi-modal learning framework with dual attention for MRI brain tumor grading. To balance the trade-off between model efficiency and efficacy, we employ ResNet Mix Convolution as the backbone network for feature extraction. In addition, dual attention is applied to capture the semantic interdependencies in the spatial and slice dimensions, respectively. To facilitate information interaction among modalities, we design a cross-modality guidance-aided module in which the primary modality guides the secondary modalities during training; this effectively leverages the complementary information of the different MRI modalities while alleviating the impact of possible noise.
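The two core ideas in the abstract, dual attention over the spatial and slice dimensions and a primary-modality guidance signal, can be illustrated with a minimal NumPy sketch. This is not the authors' implementation: `spatial_attention` and `slice_attention` are simplified stand-ins for the paper's dual-attention blocks (which sit on a ResNet Mix Convolution backbone), and `guidance_loss` is a hypothetical feature-matching penalty standing in for the cross-modality guidance-aided module.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax along the given axis."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def spatial_attention(feat):
    """Self-attention over spatial positions within each slice.

    feat: (S, N, C) array -- S slices, N spatial positions, C channels.
    Each position attends to every other position in the same slice,
    capturing spatial semantic interdependencies.
    """
    attn = softmax(feat @ feat.transpose(0, 2, 1), axis=-1)  # (S, N, N)
    return feat + attn @ feat  # residual connection

def slice_attention(feat):
    """Self-attention over the slice dimension.

    feat: (S, N, C). Each slice attends to all other slices, so context
    from neighboring slices can refine the current one.
    """
    flat = feat.reshape(feat.shape[0], -1)            # (S, N*C)
    attn = softmax(flat @ flat.T, axis=-1)            # (S, S)
    return feat + (attn @ flat).reshape(feat.shape)   # residual

def guidance_loss(primary, secondaries):
    """Hypothetical guidance term: pull each secondary modality's
    features toward the primary modality's features."""
    return sum(np.mean((primary - f) ** 2) for f in secondaries)

# Toy example: 4 slices, 16 spatial positions, 8 channels per modality.
rng = np.random.default_rng(0)
t1, t2, flair = (rng.standard_normal((4, 16, 8)) for _ in range(3))
refined = slice_attention(spatial_attention(t1))
print(refined.shape)                      # (4, 16, 8)
print(guidance_loss(t1, [t2, flair]) >= 0)  # True
```

Each attention map is a row-stochastic affinity matrix, so the outputs are convex combinations of the inputs plus a residual; in the actual model these operations would act on learned backbone features rather than raw arrays.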