Computer science
Pattern
Modality (human-computer interaction)
Artificial intelligence
Sensor fusion
Machine learning
Feature (linguistics)
Complementarity (molecular biology)
Trustworthiness
Data mining
Pattern recognition (psychology)
Philosophy
Sociology
Biology
Genetics
Linguistics
Computer security
Social science
Authors
Zongbo Han, Fan Yang, Junzhou Huang, Changqing Zhang, Jianhua Yao
Identifier
DOI: 10.1109/cvpr52688.2022.02005
Abstract
Integration of heterogeneous and high-dimensional data (e.g., multiomics) is becoming increasingly important. Existing multimodal classification algorithms mainly focus on improving performance by exploiting the complementarity of different modalities. However, conventional approaches are weak at providing trustworthy multimodal fusion, especially for safety-critical applications (e.g., medical diagnosis). To address this issue, we propose a novel trustworthy multimodal classification algorithm termed Multimodal Dynamics, which dynamically evaluates both the feature-level and modality-level informativeness for different samples and thus integrates multiple modalities in a trustworthy manner. Specifically, a sparse gating mechanism is introduced to capture the information variation of each within-modality feature, and the true class probability is employed to assess the classification confidence of each modality. A transparent fusion algorithm based on this dynamic informativeness estimation strategy is then derived. To the best of our knowledge, this is the first work to jointly model both feature and modality variation for different samples to provide trustworthy fusion in multimodal classification. Extensive experiments are conducted on multimodal medical classification datasets. In these experiments, the superior performance and trustworthiness of our algorithm are clearly validated in comparison with state-of-the-art methods.
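The abstract describes two dynamic informativeness estimates: a per-sample sparse gate over within-modality features and a per-modality confidence derived from the true class probability (TCP). The following is a minimal, illustrative sketch of how these two ideas could be combined in PyTorch; it is not the authors' implementation, and all module names, dimensions, and loss coefficients are hypothetical assumptions.

```python
# Hypothetical sketch: sparse feature gating + TCP-based modality confidence,
# fused by confidence-weighted averaging of per-modality logits.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ModalityBranch(nn.Module):
    def __init__(self, in_dim, hidden_dim, num_classes):
        super().__init__()
        self.gate = nn.Linear(in_dim, in_dim)         # per-feature informativeness scores
        self.encoder = nn.Sequential(nn.Linear(in_dim, hidden_dim), nn.ReLU())
        self.classifier = nn.Linear(hidden_dim, num_classes)
        self.conf_head = nn.Linear(hidden_dim, 1)      # estimates the true class probability

    def forward(self, x):
        g = torch.sigmoid(self.gate(x))                # per-sample gate values in [0, 1]
        h = self.encoder(g * x)                        # encode only the gated features
        logits = self.classifier(h)
        conf = torch.sigmoid(self.conf_head(h))        # predicted confidence (TCP estimate)
        return logits, conf, g

class DynamicFusionClassifier(nn.Module):
    def __init__(self, in_dims, hidden_dim, num_classes):
        super().__init__()
        self.branches = nn.ModuleList(
            ModalityBranch(d, hidden_dim, num_classes) for d in in_dims)

    def forward(self, xs):
        outs = [branch(x) for branch, x in zip(self.branches, xs)]
        logits = torch.stack([o[0] for o in outs], dim=0)   # (M, B, C)
        confs = torch.stack([o[1] for o in outs], dim=0)    # (M, B, 1)
        weights = confs / confs.sum(dim=0, keepdim=True)    # normalize over modalities
        fused = (weights * logits).sum(dim=0)               # confidence-weighted fusion
        return fused, outs

def training_loss(fused_logits, outs, y, l1_weight=1e-4):
    """Fused and per-modality cross-entropy, a TCP regression target for each
    confidence head, and an L1 penalty that encourages sparse gates.
    All weighting coefficients here are placeholders."""
    loss = F.cross_entropy(fused_logits, y)
    for logits, conf, gate in outs:
        loss = loss + F.cross_entropy(logits, y)
        tcp = F.softmax(logits, dim=1).gather(1, y.unsqueeze(1)).detach()
        loss = loss + F.mse_loss(conf, tcp)                 # confidence ≈ true class probability
        loss = loss + l1_weight * gate.abs().mean()         # sparsity on feature gates
    return loss
```

In this sketch, each modality keeps its own branch, and fusion happens only at the logit level with weights renormalized from the predicted confidences; whether fusion operates on logits or intermediate features, and how the sparsity constraint is enforced, are design choices not specified by the abstract.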