Computer science
Artificial intelligence
Computer vision
Object detection
Hyperspectral imaging
Remote sensing
Synthetic aperture radar
Pixel
Convolutional neural network
Multispectral image
Ranging
Perspective (graphics)
Deep learning
Pattern recognition (psychology)
Telecommunications
Geography
Authors
Manish Sharma, Mayur Dhanaraj, Srivallabha Karnam, Dimitris G. Chachlakis, Raymond Ptucha, Panos P. Markopoulos, Eli Saber
Identifier
DOI: 10.1109/jstars.2020.3041316
Abstract
Deep-learning object detection methods that are designed for computer vision applications tend to underperform when applied to remote sensing data. This is because contrary to computer vision, in remote sensing, training data are harder to collect and targets can be very small, occupying only a few pixels in the entire image, and exhibit arbitrary perspective transformations. Detection performance can improve by fusing data from multiple remote sensing modalities, including red, green, blue, infrared, hyperspectral, multispectral, synthetic aperture radar, and light detection and ranging, to name a few. In this article, we propose YOLOrs: a new convolutional neural network, specifically designed for real-time object detection in multimodal remote sensing imagery. YOLOrs can detect objects at multiple scales, with smaller receptive fields to account for small targets, as well as predict target orientations. In addition, YOLOrs introduces a novel mid-level fusion architecture that renders it applicable to multimodal aerial imagery. Our experimental studies compare YOLOrs with contemporary alternatives and corroborate its merits.
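The abstract does not include implementation details, so the following is a minimal PyTorch-style sketch of what a mid-level fusion detector for two co-registered modalities (e.g., RGB and infrared) could look like: each modality gets its own convolutional stem, the feature maps are concatenated partway through the network, and a shared head predicts boxes, objectness, class scores, and an orientation term per anchor. All layer names, channel widths, the fusion point, and the output layout are illustrative assumptions, not the YOLOrs architecture as published.

```python
# Hypothetical mid-level fusion sketch; layer names, channel widths, and the
# fusion point are assumptions for illustration, not the authors' design.
import torch
import torch.nn as nn


def conv_block(in_ch, out_ch, stride=1):
    """3x3 conv -> batch norm -> LeakyReLU, a common YOLO-style building block."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, stride=stride, padding=1, bias=False),
        nn.BatchNorm2d(out_ch),
        nn.LeakyReLU(0.1, inplace=True),
    )


class MidLevelFusionDetector(nn.Module):
    """Two modality-specific stems, concatenated mid-network, then a shared
    detection head that also regresses an orientation value per anchor."""

    def __init__(self, num_classes=5, num_anchors=3):
        super().__init__()
        # Independent low-level feature extractors per modality (e.g., RGB and IR).
        self.rgb_stem = nn.Sequential(conv_block(3, 32), conv_block(32, 64, stride=2))
        self.ir_stem = nn.Sequential(conv_block(1, 32), conv_block(32, 64, stride=2))
        # Shared trunk applied after mid-level concatenation of the two feature maps.
        self.trunk = nn.Sequential(
            conv_block(128, 128, stride=2),
            conv_block(128, 256, stride=2),
        )
        # Per anchor: 4 box offsets + 1 objectness + 1 orientation + class scores.
        out_ch = num_anchors * (4 + 1 + 1 + num_classes)
        self.head = nn.Conv2d(256, out_ch, kernel_size=1)

    def forward(self, rgb, ir):
        f_rgb = self.rgb_stem(rgb)           # (B, 64, H/2, W/2)
        f_ir = self.ir_stem(ir)              # (B, 64, H/2, W/2)
        fused = torch.cat([f_rgb, f_ir], 1)  # mid-level fusion by channel concat
        return self.head(self.trunk(fused))  # raw grid of detection predictions


if __name__ == "__main__":
    model = MidLevelFusionDetector()
    rgb = torch.randn(2, 3, 256, 256)  # RGB frames
    ir = torch.randn(2, 1, 256, 256)   # co-registered single-band IR frames
    print(model(rgb, ir).shape)        # torch.Size([2, 33, 32, 32])
```

The concatenation point is the design choice the term "mid-level fusion" refers to: fusing after some modality-specific feature extraction rather than stacking raw bands at the input (early fusion) or merging per-modality detections at the end (late fusion). The exact depth at which YOLOrs fuses, and how it handles more than two modalities, is specified in the paper itself.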