Authors
Yajun An,Rushi Lan,Huan Lin,Yumeng Wang,Henghua Deng,Zhenbing Liu,Lin Wang,Zaiyi Liu,Cheng Lu,Huihua Yang,Xipeng Pan
Identifier
DOI:10.1109/tcbbio.2025.3578334
Abstract
To improve the overall survival rate of cancer patients, we propose an innovative approach named Multimodal Fusion Framework based on Low-rank Interaction (MF2LI), which aims to overcome two current limitations: reliance on single-modal data for prediction, and the excessive complexity of existing fusion methods. By harnessing low-rank multimodal fusion (LMF) and optimal weight integration (OWI), MF2LI effectively integrates pathological images and genomic data. The model incorporates a parallel decomposition strategy that reduces complexity and fuses modalities according to the contribution of each component. We validate our method on the GBMLGG and KIRC datasets from The Cancer Genome Atlas (TCGA). The C-index of the proposed model reaches $0.895 \pm 0.007$ and $0.728 \pm 0.030$ on the two datasets, respectively, outperforming existing methods. Furthermore, we generate visualizations of the risk ratios, which show strong alignment with the actual grade classifications. Extensive experiments demonstrate that our model improves prognosis prediction for tumor patients and has considerable clinical value.
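To make the low-rank fusion idea concrete, below is a minimal NumPy sketch of the standard low-rank multimodal fusion (LMF) operation the abstract builds on. This is an illustration of the general LMF technique, not the authors' MF2LI implementation; all dimensions (`d_img`, `d_gen`, `d_out`, `rank`) and the random factor matrices are hypothetical. Each modality vector is appended with a constant 1 (so unimodal terms survive the elementwise product), projected by rank-R factor matrices, and the fused vector is the sum over ranks of the elementwise products, avoiding the full outer-product tensor.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions (not from the paper): pathology-image features,
# genomic features, fused output size, and decomposition rank R.
d_img, d_gen, d_out, rank = 32, 16, 8, 4

# One set of rank-R low-rank factors per modality.
W_img = rng.standard_normal((rank, d_img + 1, d_out))
W_gen = rng.standard_normal((rank, d_gen + 1, d_out))

def lmf(z_img, z_gen):
    """Low-rank fusion of two modality feature vectors.

    Appends a constant 1 to each vector, projects it through the
    rank-R factors, and sums the elementwise products over ranks.
    This equals the full bilinear (outer-product) fusion when the
    weight tensor admits a rank-R decomposition, at far lower cost.
    """
    z_img = np.append(z_img, 1.0)              # (d_img + 1,)
    z_gen = np.append(z_gen, 1.0)              # (d_gen + 1,)
    proj_img = z_img @ W_img                   # (rank, d_out)
    proj_gen = z_gen @ W_gen                   # (rank, d_out)
    return (proj_img * proj_gen).sum(axis=0)   # (d_out,)

fused = lmf(rng.standard_normal(d_img), rng.standard_normal(d_gen))
print(fused.shape)  # (8,)
```

The cost is linear in the number of modalities and in R, rather than exponential in the number of modalities as with a full outer-product tensor, which is the complexity reduction the parallel decomposition strategy exploits.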