Modal verb
Segmentation
Adaptation (eye)
Computer science
Artificial intelligence
Computer vision
Pattern recognition (psychology)
Materials science
Physics
Optics
Composite material
Authors
Xiaoyu Shi,Rahul Kumar Jain,Yinhao Li,Shurong Chai,Jingliang Cheng,Jie Bai,Guohua Zhao,Lanfen Lin,Yen‐Wei Chen
Source
Journal: ACM Transactions on Computing for Healthcare
[Association for Computing Machinery]
Date: 2025-03-20
Abstract
The segmentation of glioma is crucial for early diagnosis, according to a World Health Organization (WHO) 2021 report. For glioma diagnosis, 3D multi-modal brain MRI/CT imaging has become an essential tool, offering detailed information. Deep learning frameworks have been applied to various medical imaging problems, including brain glioma segmentation. Recently, foundation models like Segment Anything (SAM) have emerged as pivotal tools in computer vision tasks. These models are trained on large (real-world) datasets, offering a generalized understanding of visual data and semantic key features. The effective utilization of foundation models in medical imaging is therefore a significant area of current research. However, the differences in data distribution between multi-modal medical images and real-world images present challenges in directly applying foundation models to medical imaging. Additionally, extracting crucial information from multi-modal images and fusing it poses further challenges. To address these issues, we propose a framework using a foundation model and novel strategies for multi-modal fusion. Our fusion adapters effectively integrate the information from different modalities to enhance glioma segmentation in multi-modal MRI scans. Our method outperforms current state-of-the-art methods for accurate glioma segmentation on private and publicly available brain MRI datasets, demonstrating the effectiveness of our approach across different datasets and imaging modalities.
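The abstract describes fusion adapters that merge features from several MRI modalities before they reach a frozen foundation-model backbone. The paper does not give implementation details here, so the following is only a minimal illustrative sketch, assuming a bottleneck-style adapter (concatenate per-modality features, project down through a nonlinearity, project back up, and add the result residually); the function name, weight shapes, and the choice of residual branch are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def fusion_adapter(modality_feats, w_down, w_up):
    """Hypothetical bottleneck fusion adapter (not the paper's exact design):
    concatenate per-modality feature vectors, compress through a small ReLU
    bottleneck, expand back, and add residually to the first modality."""
    fused = np.concatenate(modality_feats, axis=-1)   # (N, M*D)
    hidden = np.maximum(fused @ w_down, 0.0)          # (N, H) bottleneck + ReLU
    delta = hidden @ w_up                             # (N, D) fused update
    return modality_feats[0] + delta                  # residual fusion

# Toy example: M=4 MRI modalities (e.g. T1, T1ce, T2, FLAIR),
# N=5 spatial positions, D=8 features per position, bottleneck H=4.
D, M, N, H = 8, 4, 5, 4
feats = [rng.standard_normal((N, D)) for _ in range(M)]
w_down = rng.standard_normal((M * D, H)) * 0.1
w_up = rng.standard_normal((H, D)) * 0.1

out = fusion_adapter(feats, w_down, w_up)
print(out.shape)  # (5, 8): same shape as a single modality's features
```

Because the adapter's output matches the shape of a single modality's feature map, it can in principle be dropped between encoder stages of a frozen backbone without changing downstream layers, which is the usual motivation for adapter-based tuning.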