Computer science
Object detection
Artificial intelligence
Focus (optics)
Concatenation (mathematics)
Block (permutation group theory)
Computer vision
Feature (linguistics)
Aerial imagery
Deep learning
Image (mathematics)
Pattern recognition (psychology)
Combinatorics
Optics
Physics
Philosophy
Linguistics
Mathematics
Geometry
Authors
Zhongyu Zhang, Yunpeng Liu, Tianci Liu, Zhiyuan Lin, Sikui Wang
Source
Journal: IEEE Geoscience and Remote Sensing Letters
[Institute of Electrical and Electronics Engineers]
Date: 2020-11-01
Volume/Issue: 17 (11): 1884-1888
Citations: 30
Identifiers
DOI: 10.1109/lgrs.2019.2956513
Abstract
Real-time small object detection in remote sensing images taken by unmanned aerial vehicles (UAVs) is a challenging but fundamental problem for many UAV applications because of the complex scales, densities, and shapes of objects that result from the UAV's shooting angle. In this letter, we focus on real-time small vehicle detection in UAV remote sensing images and propose a depthwise-separable attention-guided network (DAGN) based on YOLOv3. First, we combine feature concatenation with an attention block to give the model a strong ability to distinguish important features from inconsequential ones. Then, we improve the loss function and the candidate merging algorithm of YOLOv3. These strategies improve vehicle detection performance at the cost of some detection speed. To accelerate the model, we replace some standard convolutions with depthwise-separable convolutions. Compared with YOLOv3 and state-of-the-art two-stage models on the Vehicle Detection in Aerial Imagery (VEDAI) data sets, DAGN achieves a detection accuracy of 0.671, which is 5.5% better than that of YOLOv3 and on par with the two-stage methods. In addition, DAGN achieves real-time detection on a GeForce GTX 1080 Ti.
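The speedup claimed in the abstract comes from swapping standard convolutions for depthwise-separable ones. The letter does not publish code, but a minimal sketch of the parameter arithmetic (the layer sizes below are hypothetical, not taken from the paper) shows why the replacement is cheaper:

```python
# Parameter-count comparison: standard vs depthwise-separable convolution.
# A depthwise-separable layer factors a k x k convolution into a per-channel
# spatial filter plus a 1 x 1 pointwise channel mixer, cutting weights
# by roughly a factor of k*k when c_out is large.

def standard_conv_params(k: int, c_in: int, c_out: int) -> int:
    """Weights in a standard k x k convolution (biases ignored)."""
    return k * k * c_in * c_out

def depthwise_separable_params(k: int, c_in: int, c_out: int) -> int:
    """Depthwise k x k conv (one filter per input channel)
    followed by a 1 x 1 pointwise conv."""
    depthwise = k * k * c_in   # spatial filtering, one filter per channel
    pointwise = c_in * c_out   # 1x1 conv that mixes channels
    return depthwise + pointwise

if __name__ == "__main__":
    k, c_in, c_out = 3, 256, 256  # hypothetical mid-network layer
    std = standard_conv_params(k, c_in, c_out)
    sep = depthwise_separable_params(k, c_in, c_out)
    print(f"standard: {std}, separable: {sep}, reduction: {std / sep:.1f}x")
    # For a 3x3 layer with 256 channels in and out, the separable form
    # uses about 8.7x fewer weights.
```

The same factorization applies to multiply-accumulate operations per output pixel, which is why the substitution recovers real-time speed on a single GPU.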