Computer science
Image fusion
Modality (human–computer interaction)
Artificial intelligence
Fusion
Fuse (electrical)
Pattern
Spatial frequency
Modal verb
Merge (version control)
Computer vision
Pattern recognition (psychology)
Image (mathematics)
Information retrieval
Engineering
Physics
Philosophy
Linguistics
Polymer chemistry
Sociology
Electrical engineering
Optics
Social science
Chemistry
Authors
Xian‐Ming Gu,Lihui Wang,Zeyu Deng,Ying Cao,Xingyu Huang,Yuemin Zhu
Source
Journal: Cornell University - arXiv
Date: 2023-10-09
Citations: 3
Identifier
DOI: 10.48550/arxiv.2310.05462
Abstract
Multi-modal medical image fusion is essential for precise clinical diagnosis and surgical navigation, since it merges the complementary information of multiple modalities into a single image. The quality of the fused image depends on the extracted single-modality features as well as on the fusion rules for multi-modal information. Although existing deep learning-based fusion methods can fully exploit the semantic features of each modality, they cannot distinguish the effective low- and high-frequency information of each modality and fuse it adaptively. To address this issue, we propose AdaFuse, in which multi-modal image information is fused adaptively through a frequency-guided attention mechanism based on the Fourier transform. Specifically, we propose the cross-attention fusion (CAF) block, which adaptively fuses the features of two modalities in the spatial and frequency domains by exchanging key and query values, and then calculates the cross-attention scores between the spatial and frequency features to further guide the spatial-frequency information fusion. The CAF block enhances the high-frequency features of the different modalities so that the details in the fused images are retained. Moreover, we design a novel loss function composed of a structure loss and a content loss to preserve both low- and high-frequency information. Extensive comparison experiments on several datasets demonstrate that the proposed method outperforms state-of-the-art methods in terms of both visual quality and quantitative metrics. Ablation experiments further validate the effectiveness of the proposed loss and fusion strategy.
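To make the frequency-guided cross-attention idea in the abstract concrete, the PyTorch sketch below shows one plausible reading of the CAF block: queries and keys are exchanged between the two modalities in the spatial domain, the same exchange is repeated on amplitude spectra obtained with a 2-D Fourier transform, and the two branches are merged. This is a minimal sketch only; the class name `CrossAttentionFusion`, the shared attention weights, the use of amplitude spectra, and the 1x1-convolution merge are illustrative assumptions, not the authors' implementation, which additionally computes cross-attention scores between the spatial and frequency features to guide the fusion and is trained with the structure and content losses described above.

```python
import torch
import torch.nn as nn


class CrossAttentionFusion(nn.Module):
    """Minimal sketch of a frequency-guided cross-attention fusion block (assumed layout)."""

    def __init__(self, channels: int, heads: int = 4):
        super().__init__()
        self.spatial_attn = nn.MultiheadAttention(channels, heads, batch_first=True)
        self.freq_attn = nn.MultiheadAttention(channels, heads, batch_first=True)
        self.merge = nn.Conv2d(2 * channels, channels, kernel_size=1)

    @staticmethod
    def _tokens(x: torch.Tensor) -> torch.Tensor:
        # (B, C, H, W) -> (B, H*W, C) token sequence for attention.
        return x.flatten(2).transpose(1, 2)

    def forward(self, feat_a: torch.Tensor, feat_b: torch.Tensor) -> torch.Tensor:
        b, c, h, w = feat_a.shape

        # Spatial branch: exchange queries/keys between the two modalities.
        ta, tb = self._tokens(feat_a), self._tokens(feat_b)
        sp_a, _ = self.spatial_attn(query=tb, key=ta, value=ta)  # modality B attends to A
        sp_b, _ = self.spatial_attn(query=ta, key=tb, value=tb)  # modality A attends to B
        spatial = (sp_a + sp_b).transpose(1, 2).reshape(b, c, h, w)

        # Frequency branch: same exchange over amplitude spectra from a 2-D FFT.
        amp_a = torch.abs(torch.fft.fft2(feat_a, norm="ortho"))
        amp_b = torch.abs(torch.fft.fft2(feat_b, norm="ortho"))
        fa, fb = self._tokens(amp_a), self._tokens(amp_b)
        fr_a, _ = self.freq_attn(query=fb, key=fa, value=fa)
        fr_b, _ = self.freq_attn(query=fa, key=fb, value=fb)
        freq = (fr_a + fr_b).transpose(1, 2).reshape(b, c, h, w)

        # Merge the spatial and frequency information into one feature map.
        return self.merge(torch.cat([spatial, freq], dim=1))


if __name__ == "__main__":
    block = CrossAttentionFusion(channels=32)
    mri = torch.randn(1, 32, 24, 24)   # toy feature maps standing in for two modalities
    pet = torch.randn(1, 32, 24, 24)
    fused = block(mri, pet)
    print(fused.shape)  # torch.Size([1, 32, 24, 24])
```

In this sketch the high-frequency emphasis comes only from attending over FFT amplitude spectra; how the paper weights spatial versus frequency features, and how the structure and content losses are defined, should be taken from the original article.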