Keywords
Artificial intelligence, Computer science, Image fusion, Computer vision, Synthetic aperture radar, Feature (linguistics), Fusion, Decomposition, Pattern recognition (psychology), Feature extraction, Image (mathematics), Remote sensing, Geology, Biology, Ecology, Philosophy, Linguistics
Authors
Yuanxin Ye, Jiacheng Zhang, Liang Zhou, Jinjin Li, Xiaoyue Ren, Jianwei Fan
Identifier
DOI:10.1109/tgrs.2024.3366519
Abstract
With the expanding application scenarios of optical and SAR image fusion, it has become necessary to integrate the information of both modalities for land classification, feature recognition, and target tracking. Current methods focus excessively on integrating multimodal feature information to enrich the fused images, while neglecting that modal differences and SAR speckle noise severely degrade the visual quality of the fused results. To address this, this paper proposes a novel optical and SAR image fusion framework named Visual Saliency Features Fusion (VSFF), which is based on extracting and balancing the significant complementary features of optical and SAR images. First, we propose a complementary-feature decomposition algorithm that divides an image into main structure features and detail texture features. Then, for the fusion of main structure features, we reconstruct pixel-level and structure-level visual saliency feature maps that capture the significant information of the optical and SAR images, and feed them into a total variation constrained model that computes the fusion result with optimal information transfer. Meanwhile, we construct a new feature descriptor based on the Gabor wavelet, which separates meaningful detail texture features from residual noise and selectively preserves the features that improve the interpretability of the fusion result. In a comparative analysis with seven state-of-the-art fusion algorithms, VSFF achieves better qualitative and quantitative results, and its fused images offer clear and appropriate visual perception. The source code is publicly available at https://github.com/yeyuanxin110/VSFF.
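For orientation, below is a minimal Python/OpenCV sketch of two generic building blocks the abstract names: a two-scale decomposition into main-structure and detail-texture layers, and a Gabor filter bank used to score detail texture. The bilateral filter, the choose-max detail rule, and the file names optical.png/sar.png are assumptions for illustration only; they are not the paper's actual decomposition algorithm, total variation fusion model, or descriptor.

```python
import numpy as np
import cv2

def two_scale_decompose(img, d=9, sigma_color=0.1, sigma_space=7):
    """Split a grayscale image into a main-structure (base) layer and a
    detail-texture (residual) layer. Bilateral filtering is a stand-in
    here; the paper's complementary-feature decomposition differs."""
    img = img.astype(np.float32) / 255.0
    base = cv2.bilateralFilter(img, d, sigma_color, sigma_space)
    detail = img - base  # high-frequency texture plus residual noise
    return base, detail

def gabor_texture_map(img, n_orientations=4, ksize=21, sigma=4.0,
                      lambd=10.0, gamma=0.5):
    """Max magnitude response over a small Gabor filter bank: one crude
    way to score detail texture so that weak responses (mostly speckle)
    can be discarded before fusing the detail layers."""
    img = img.astype(np.float32) / 255.0
    responses = []
    for k in range(n_orientations):
        theta = k * np.pi / n_orientations
        kernel = cv2.getGaborKernel((ksize, ksize), sigma, theta, lambd, gamma)
        responses.append(np.abs(cv2.filter2D(img, cv2.CV_32F, kernel)))
    return np.max(np.stack(responses), axis=0)

if __name__ == "__main__":
    # "optical.png" and "sar.png" are hypothetical input file names.
    opt = cv2.imread("optical.png", cv2.IMREAD_GRAYSCALE)
    sar = cv2.imread("sar.png", cv2.IMREAD_GRAYSCALE)
    opt_base, opt_detail = two_scale_decompose(opt)
    sar_base, sar_detail = two_scale_decompose(sar)
    # Keep each detail pixel from whichever modality has the stronger
    # Gabor texture response (a simple choose-max rule, not VSFF's).
    mask = gabor_texture_map(opt) >= gabor_texture_map(sar)
    fused_detail = np.where(mask, opt_detail, sar_detail)
```

Note that the paper's main-structure fusion additionally solves a total variation constrained optimization over the reconstructed saliency maps, which this sketch does not attempt.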