Artificial intelligence
Autoencoder
Image fusion
Computer science
Pooling
Concatenation (mathematics)
Image (mathematics)
Fusion rule
Fusion
Deep learning
Medical imaging
Pattern recognition (psychology)
Computation
Enhanced Data Rates for GSM Evolution (EDGE)
Computer vision
Algorithm
Mathematics
Linguistics
Combinatorics
Philosophy
Authors
Payal P. Wankhede,Manisha Das,Deep Gupta,Petia Radeva,Ashwini M. Bakde
Source
Journal: Cornell University - arXiv
Date: 2023-01-01
Identifier
DOI: 10.48550/arxiv.2310.11896
Abstract
Medical image fusion combines the complementary information of multimodal medical images to assist medical professionals in the clinical diagnosis of patients' disorders and to provide guidance during preoperative and intra-operative procedures. Deep learning (DL) models have achieved end-to-end image fusion with highly robust and accurate fusion performance. However, most DL-based fusion models perform down-sampling on the input images to minimize the number of learnable parameters and computations. During this process, salient features of the source images become irretrievable, leading to the loss of crucial diagnostic edge details and contrast of various brain tissues. In this paper, we propose a new multimodal medical image fusion model based on integrated Laplacian-Gaussian concatenation with attention pooling (LGCA). We show that our model effectively preserves complementary information and important tissue structures.
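The abstract does not spell out how the LGCA module is built, so the following PyTorch snippet is only a minimal sketch of the general idea it describes: decompose features into a Gaussian (smooth) and a Laplacian (detail) component, concatenate them, and replace plain strided down-sampling with an attention-weighted pooling so edge detail is not discarded. The class name `LGCABlock`, the kernel sizes, and the softmax-over-window attention formulation are illustrative assumptions, not the authors' exact architecture.

```python
# Hypothetical sketch of Laplacian-Gaussian concatenation with attention pooling.
# Not the paper's implementation; names and hyperparameters are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


def gaussian_kernel(size: int = 5, sigma: float = 1.0) -> torch.Tensor:
    """Build a normalized 2-D Gaussian kernel for depthwise filtering."""
    coords = torch.arange(size, dtype=torch.float32) - (size - 1) / 2
    g1d = torch.exp(-(coords ** 2) / (2 * sigma ** 2))
    g2d = torch.outer(g1d, g1d)
    return g2d / g2d.sum()


class LGCABlock(nn.Module):
    """Concatenate Gaussian (smooth) and Laplacian (edge) responses of a feature
    map, then pool 2x2 windows with learned spatial attention weights instead of
    plain strided down-sampling, so salient detail still influences the output."""

    def __init__(self, channels: int):
        super().__init__()
        k = gaussian_kernel()
        # Fixed (non-learned) depthwise Gaussian filter, one copy per channel.
        self.register_buffer("gauss", k.expand(channels, 1, *k.shape).clone())
        self.attn = nn.Conv2d(2 * channels, 1, kernel_size=1)   # per-pixel attention score
        self.proj = nn.Conv2d(2 * channels, channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape                                     # h, w assumed even
        smooth = F.conv2d(x, self.gauss, padding=2, groups=c)    # Gaussian component
        lap = x - smooth                                         # Laplacian (detail) component
        feats = torch.cat([smooth, lap], dim=1)                  # Laplacian-Gaussian concatenation
        # Attention pooling: softmax-weighted average over each 2x2 window.
        scores = F.unfold(self.attn(feats), kernel_size=2, stride=2)   # (B, 4, L)
        weights = torch.softmax(scores, dim=1)
        values = F.unfold(self.proj(feats), kernel_size=2, stride=2)   # (B, C*4, L)
        values = values.view(b, c, 4, -1)
        pooled = (values * weights.unsqueeze(1)).sum(dim=2)             # (B, C, L)
        return pooled.view(b, c, h // 2, w // 2)


# Usage example: halve the spatial resolution of a 1-channel MRI feature map.
if __name__ == "__main__":
    block = LGCABlock(channels=1)
    mri = torch.rand(1, 1, 64, 64)
    print(block(mri).shape)  # torch.Size([1, 1, 32, 32])
```

The design choice worth noting is that the pooling weights are predicted from the concatenated smooth-plus-detail features, so regions with strong Laplacian response can dominate the pooled value rather than being averaged away.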