Computer science
Shearlet
Artificial intelligence
Image (mathematics)
Pulse (signal processing)
Artificial neural network
Fusion
Image fusion
Pattern recognition (psychology)
Computer vision
Telecommunications
Linguistics
Detector
Philosophy
Authors
Vella Satyanarayana, P. Mohanaiah
Identifier
DOI: 10.1038/s41598-025-88701-1
Abstract
Image fusion combines details from two or more imaging modalities, such as MRI and PET, to provide a better image for diagnosis and treatment. Although standard spatial-domain methods are used successfully, ranging from simple min/max late fusion to more complex content-aware pixel-wise mapping, key features are sometimes not well preserved. Transform-domain approaches, especially wavelet-transform (WT) based fusion, have brought significant improvements in the literature, primarily because of their computational efficiency and their independence from the image content domain. However, the wavelet transform partially loses the directionality of singularities, so its representation of distributed singularities is inherently limited. To overcome this limitation, the present work uses the non-subsampled shearlet transform (NSST) for medical image fusion, as it provides an effective multi-directional and multiscale representation. The proposed method first applies the NSST to the source images to obtain their low-pass and high-pass subbands. A pulse-coupled neural network (PCNN) is then applied to these subbands to decide the best fusion rule, preserving most of the important structural and textural information. Finally, an inverse shearlet transform reconstructs the fused image from the processed subbands. Entropy, standard deviation, and the structural similarity index (SSIM) are used to quantitatively assess the performance of the proposed fusion scheme. Experimental analysis on brain MRI/PET image databases shows that the proposed fusion method outperforms existing image fusion techniques, yielding higher image quality and improved feature detail.
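The abstract describes a PCNN-driven rule for fusing the NSST subbands of the two source images, evaluated with entropy, standard deviation, and SSIM. The sketch below is only an illustration of that kind of rule under stated assumptions: it uses a simplified firing-count PCNN with an assumed 3x3 linking kernel and generic parameter values, it omits the NSST decomposition and reconstruction entirely, and the helper names are hypothetical rather than the authors' implementation.

```python
import numpy as np
from scipy.ndimage import convolve
from skimage.metrics import structural_similarity as ssim


def pcnn_firing_counts(stim, iterations=50, alpha_l=1.0, alpha_t=0.2,
                       beta=0.1, v_l=1.0, v_t=20.0):
    """Simplified pulse-coupled neural network: per-pixel firing counts for a
    stimulus map. Parameters and kernel are illustrative, not the paper's values."""
    s = (stim - stim.min()) / (np.ptp(stim) + 1e-12)    # normalise stimulus to [0, 1]
    kernel = np.array([[0.5, 1.0, 0.5],
                       [1.0, 0.0, 1.0],
                       [0.5, 1.0, 0.5]])                # assumed 3x3 linking weights
    link = np.zeros_like(s)
    pulse = np.zeros_like(s)
    theta = np.ones_like(s)
    counts = np.zeros_like(s)
    for _ in range(iterations):
        link = np.exp(-alpha_l) * link + v_l * convolve(pulse, kernel, mode='nearest')
        activity = s * (1.0 + beta * link)              # internal activity U
        pulse = (activity > theta).astype(float)        # binary pulse output Y
        theta = np.exp(-alpha_t) * theta + v_t * pulse  # dynamic threshold decay/reset
        counts += pulse
    return counts


def fuse_subbands(coeff_a, coeff_b):
    """Keep, per pixel, the coefficient whose magnitude excites the PCNN more often."""
    fires_a = pcnn_firing_counts(np.abs(coeff_a))
    fires_b = pcnn_firing_counts(np.abs(coeff_b))
    return np.where(fires_a >= fires_b, coeff_a, coeff_b)


def evaluate(fused, reference):
    """Entropy, standard deviation, and SSIM (images assumed scaled to [0, 1])."""
    hist, _ = np.histogram(fused, bins=256, range=(0.0, 1.0))
    p = hist[hist > 0] / hist.sum()
    entropy = -np.sum(p * np.log2(p))
    return entropy, fused.std(), ssim(reference, fused, data_range=1.0)
```

In a full pipeline of the kind described, `fuse_subbands` would be applied to each pair of corresponding subbands produced by the NSST of the MRI and PET images before the inverse transform reconstructs the fused result; the actual decomposition, fusion rules, and parameter choices in the paper may differ.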