Keywords
multispectral image; modality; RGB color model; computer vision; pattern recognition; object detection; feature selection; image fusion; artificial intelligence; computer science
Authors
Qingyun Fang, Zhaokui Wang
Identifier
DOI: 10.1016/j.patcog.2022.108786
Abstract
Cross-modality fusion of the complementary information in multispectral remote sensing image pairs can improve the perception ability of detection algorithms, making them more robust and reliable for a wider range of applications, such as nighttime detection. Unlike prior methods, we argue that different features should be processed specifically: modality-specific features should be retained and enhanced, while modality-shared features should be cherry-picked from the RGB and thermal IR modalities. Following this idea, a novel and lightweight multispectral feature fusion approach with joint common-modality and differential-modality attentions is proposed, named Cross-Modality Attentive Feature Fusion (CMAFF). Given the intermediate feature maps of RGB and thermal images, our module infers attention maps in parallel from two separate modalities, common-modality and differential-modality, and then multiplies the attention maps with the respective input feature maps for adaptive feature enhancement or selection. Extensive experiments demonstrate that our proposed approach achieves state-of-the-art performance at a low computation cost.
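The fusion idea in the abstract — form a common-modality branch and a differential-modality branch from the RGB and thermal feature maps, derive an attention map from each, and multiply the attentions back onto the input features — can be sketched as follows. This is only an illustrative numpy sketch, not the paper's actual CMAFF module: the function name `cmaff_fuse`, the use of simple channel attention (global average pooling followed by a sigmoid), and the final merge by summation are all assumptions made here for clarity.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cmaff_fuse(f_rgb, f_ir):
    """Hypothetical sketch of cross-modality attentive feature fusion.

    f_rgb, f_ir: intermediate feature maps of shape (C, H, W) from the
    RGB and thermal IR branches. The paper's actual attention design
    differs; simple channel attention stands in for both branches here.
    """
    # Common-modality branch: information shared by both inputs
    f_common = 0.5 * (f_rgb + f_ir)
    # Differential-modality branch: modality-specific information
    f_diff = f_rgb - f_ir

    # Channel attention from the common branch (selects shared features)
    att_common = sigmoid(f_common.mean(axis=(1, 2)))[:, None, None]
    # Channel attention from the differential branch (enhances
    # modality-specific features)
    att_diff = sigmoid(f_diff.mean(axis=(1, 2)))[:, None, None]

    # Attention maps are multiplied with the respective input features
    f_rgb_out = f_rgb * att_common + f_rgb * att_diff
    f_ir_out = f_ir * att_common + f_ir * att_diff
    # Merge the two enhanced modalities into one fused feature map
    return f_rgb_out + f_ir_out

rgb = np.random.rand(8, 4, 4)   # toy RGB feature map, C=8, H=W=4
ir = np.random.rand(8, 4, 4)    # toy thermal feature map
fused = cmaff_fuse(rgb, ir)
print(fused.shape)  # (8, 4, 4)
```

The point of the two branches is the asymmetry the abstract describes: the differential branch can only amplify features where the modalities disagree (modality-specific cues), while the common branch re-weights features both modalities agree on.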