Keywords: Pyramid (geometry), Dual (grammatical number), Computer science, Modal verb, Artificial intelligence, Feature (linguistics), Computer vision, Fusion, Object (grammar), Pattern recognition (psychology), Mathematics, Geometry, Materials science, Literature, Philosophy, Linguistics, Polymer chemistry, Art
Authors
Jinpeng Wang, Nan Su, Chunhui Zhao, Yiming Yan, Shou Feng
Source
Journal: Remote Sensing (Multidisciplinary Digital Publishing Institute)
Date: 2024-10-21
Volume/Issue: 16(20): 3904
Citations: 6
Abstract
With simultaneous acquisition of infrared and optical remote sensing images of the same target becoming increasingly easy, using multi-modal data for high-performance object detection has become a research focus. In remote sensing multi-modal data, infrared images lack color information, which makes low-contrast targets hard to detect, while optical images are easily affected by illumination. One of the most effective ways to address this problem is to integrate multi-modal images for high-performance object detection. The challenge of fusion-based object detection lies in fully integrating multi-modal image features with significant modal differences while exploiting their complementary strengths and avoiding the introduction of interference information. To solve these problems, a new multi-modal fusion object detection method is proposed. The method improves on prior work in two respects: first, a new dual-branch asymmetric attention backbone network (DAAB) is designed, which uses a semantic information supplement module (SISM) and a detail information supplement module (DISM) to supplement and enhance infrared and RGB image information, respectively. Second, a feature fusion pyramid network (FFPN) is proposed, which uses a Transformer-like strategy to perform multi-modal feature fusion and suppress features that are not conducive to fusion. The method achieves state-of-the-art performance on both the FLIR-aligned and DroneVehicle datasets, and experiments show that it is highly competitive and generalizes well.
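The abstract names two ingredients: a dual-branch backbone whose modality-specific branches exchange complementary information, and a Transformer-like fusion step (FFPN) that combines the two modalities. The paper's actual architecture is not reproduced here; the PyTorch sketch below is only a minimal illustration of that general pattern. The tiny conv stems, the symmetric cross-attention fusion, and all layer sizes are assumptions, and names such as `CrossModalFusion` and `DualBranchDetectorStub` are hypothetical stand-ins rather than the paper's modules.

```python
# Minimal sketch of dual-branch RGB/IR feature extraction followed by a
# Transformer-style cross-attention fusion. Illustrative only: module
# internals and dimensions are assumptions, not the paper's DAAB/FFPN.
import torch
import torch.nn as nn

class Branch(nn.Module):
    """One backbone branch (RGB or infrared); a small conv stack stands in
    for the real backbone."""
    def __init__(self, in_ch):
        super().__init__()
        self.stem = nn.Sequential(
            nn.Conv2d(in_ch, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
        )

    def forward(self, x):
        return self.stem(x)

class CrossModalFusion(nn.Module):
    """Transformer-like fusion: each modality attends to the other, then the
    enhanced features are summed. A hypothetical stand-in for the paper's
    FFPN fusion step."""
    def __init__(self, dim=128, heads=4):
        super().__init__()
        self.attn_rgb = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.attn_ir = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, f_rgb, f_ir):
        b, c, h, w = f_rgb.shape
        # Flatten spatial dims into token sequences of shape (B, H*W, C)
        t_rgb = f_rgb.flatten(2).transpose(1, 2)
        t_ir = f_ir.flatten(2).transpose(1, 2)
        # RGB queries attend to IR keys/values, and vice versa
        rgb_enh, _ = self.attn_rgb(t_rgb, t_ir, t_ir)
        ir_enh, _ = self.attn_ir(t_ir, t_rgb, t_rgb)
        # Fold the fused tokens back into a feature map
        fused = (rgb_enh + ir_enh).transpose(1, 2).reshape(b, c, h, w)
        return fused

class DualBranchDetectorStub(nn.Module):
    """Toy dual-branch network: a 3-channel RGB branch, a 1-channel IR
    branch, and cross-modal fusion; a real detection head would follow."""
    def __init__(self):
        super().__init__()
        self.rgb_branch = Branch(3)
        self.ir_branch = Branch(1)
        self.fusion = CrossModalFusion(dim=128)

    def forward(self, rgb, ir):
        return self.fusion(self.rgb_branch(rgb), self.ir_branch(ir))

if __name__ == "__main__":
    model = DualBranchDetectorStub()
    rgb = torch.randn(2, 3, 64, 64)
    ir = torch.randn(2, 1, 64, 64)
    print(model(rgb, ir).shape)  # torch.Size([2, 128, 16, 16])
```

The point of the symmetric cross-attention is that each modality serves as the query against the other's keys and values, so each branch can pull in exactly the complementary information it lacks, matching the motivation stated in the abstract (color-blind infrared versus illumination-sensitive RGB).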