Keywords: artificial intelligence; computer science; image fusion; deep learning; robustness (evolution); computer vision; encoder; differentiable function; unsupervised learning; pattern recognition (psychology); machine learning; image (mathematics); mathematics; gene; operating system; biochemistry; mathematical analysis; chemistry
Authors
Shaozhuang Ye, Tuo Wang, Mingyue Ding, Xuming Zhang
Source
Journal: IEEE Transactions on Medical Imaging [Institute of Electrical and Electronics Engineers]
Date: 2023-11-01
Volume/Issue: 42 (11): 3348-3361
Citations: 2
Identifier
DOI: 10.1109/tmi.2023.3283517
Abstract
Multimodal medical image fusion (MMIF) is highly significant in fields such as disease diagnosis and treatment. Traditional MMIF methods struggle to deliver satisfactory fusion accuracy and robustness because they rely on hand-crafted components such as image transforms and fusion strategies. Existing deep learning based fusion methods also find it difficult to guarantee fusion quality, as they adopt human-designed network structures and relatively simple loss functions and ignore human visual characteristics during weight learning. To address these issues, we present an unsupervised MMIF method based on foveated differentiable architecture search (F-DARTS). In this method, a foveation operator is introduced into the weight learning process to fully exploit human visual characteristics for effective image fusion. Meanwhile, a distinctive unsupervised loss function is designed for network training by integrating mutual information, the sum of the correlations of differences, structural similarity, and an edge preservation value. Based on the presented foveation operator and loss function, an end-to-end encoder-decoder network architecture is searched using F-DARTS to produce the fused image. Experimental results on three multimodal medical image datasets demonstrate that F-DARTS outperforms several traditional and deep learning based fusion methods, providing visually superior fused results and better objective evaluation metrics.
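The abstract describes an unsupervised loss that combines mutual information, the sum of the correlations of differences (SCD), and structural similarity between the fused image and each source image. The sketch below illustrates how such a composite fusion objective could be assembled; it is a minimal NumPy illustration, not the paper's implementation. The edge preservation term and the foveation operator are omitted, the SSIM here is a simplified single-window (global) variant rather than the usual local-window SSIM, and the weights `w` are hypothetical.

```python
import numpy as np

def mutual_information(x, y, bins=32):
    # Histogram-based mutual information estimate between two images.
    hist, _, _ = np.histogram2d(x.ravel(), y.ravel(), bins=bins)
    p = hist / hist.sum()                    # joint distribution
    px = p.sum(axis=1, keepdims=True)        # marginal of x
    py = p.sum(axis=0, keepdims=True)        # marginal of y
    nz = p > 0                               # avoid log(0)
    return float((p[nz] * np.log(p[nz] / (px @ py)[nz])).sum())

def scd(f, a, b):
    # Sum of the correlations of differences:
    # corr(f - b, a) + corr(f - a, b).
    def corr(u, v):
        u = u - u.mean()
        v = v - v.mean()
        return float((u * v).sum() /
                     (np.sqrt((u * u).sum() * (v * v).sum()) + 1e-12))
    return corr(f - b, a) + corr(f - a, b)

def global_ssim(f, a):
    # Simplified global SSIM over the whole image (single window).
    c1, c2 = 0.01 ** 2, 0.03 ** 2
    mu_f, mu_a = f.mean(), a.mean()
    cov = ((f - mu_f) * (a - mu_a)).mean()
    return float(((2 * mu_f * mu_a + c1) * (2 * cov + c2)) /
                 ((mu_f ** 2 + mu_a ** 2 + c1) * (f.var() + a.var() + c2)))

def fusion_loss(f, a, b, w=(1.0, 1.0, 1.0)):
    # All three terms reward similarity to the sources, so training
    # would minimize their negated weighted sum (weights are illustrative).
    mi = mutual_information(f, a) + mutual_information(f, b)
    s = scd(f, a, b)
    ss = global_ssim(f, a) + global_ssim(f, b)
    return -(w[0] * mi + w[1] * s + w[2] * ss)
```

In an actual F-DARTS-style pipeline these terms would be computed with differentiable operations (e.g. kernel-based MI and windowed SSIM in an autodiff framework) so the loss can drive both the network weights and the architecture parameters; the NumPy version above only shows how the terms fit together.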