Authors
Huaxiang Song, Junping Xie, Yunyang Wang, Lihua Fu, Yang Zhou, Xing Zhou
Abstract
Existing Vision Transformer (ViT)-based object detection methods for remote sensing images (RSIs) face significant challenges due to the scarcity of RSI samples and an over-reliance on enhancement strategies originally developed for natural images. This often leads to inconsistent data distributions between the training and testing subsets, degrading model performance. In this study, we introduce an optimized data distribution learning (ODDL) strategy and develop an object detection framework based on the Faster R-CNN architecture, named ODDL-Net. The ODDL strategy begins with an optimized augmentation (OA) technique that overcomes the limitations of conventional data augmentation methods. Next, we propose an optimized mosaic algorithm (OMA) that improves upon the shortcomings of traditional Mosaic augmentation. Additionally, we introduce a feature fusion regularization (FFR) method that addresses the inherent limitations of classic feature pyramid networks. These innovations are packaged as three modular, plug-and-play components (the OA, OMA, and FFR modules), so the ODDL strategy can be incorporated into existing detection frameworks without significant modification. To evaluate the effectiveness of the proposed ODDL-Net, we develop two variants with different ViT backbones: the Next ViT (NViT) small model and the Swin Transformer (SwinT) tiny model. Experimental results on the NWPU10, DIOR20, MAR20, and GLH-Bridge datasets demonstrate that both variants achieve high accuracy, surpassing 23 state-of-the-art methods introduced since 2023. Specifically, ODDL-Net-NViT achieves 78.3% accuracy on the challenging DIOR20 dataset and 61.4% on the GLH-Bridge dataset; the former represents an improvement of approximately 23% over the Faster R-CNN-ResNet50 baseline. In conclusion, this study demonstrates that ViTs are well suited for high-accuracy object detection in RSIs. Furthermore, it offers a practical recipe for building ViT-based detectors that requires little model modification.
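For readers unfamiliar with the classic four-image Mosaic augmentation that the proposed OMA refines, a minimal sketch follows. This is a generic illustration under our own assumptions, not the paper's optimized algorithm; the function name `mosaic`, the output size, and the `[x1, y1, x2, y2]` box format are all hypothetical choices made for this example.

```python
"""A minimal sketch of classic four-image Mosaic augmentation.

This is NOT the paper's OMA; it only illustrates the baseline technique
that the OMA improves on. All names and defaults here are assumptions.
"""
import numpy as np


def mosaic(images, boxes_list, out_size=640, rng=None):
    """Stitch four images into one out_size x out_size canvas.

    images: list of four HxWx3 uint8 arrays.
    boxes_list: list of four (N_i, 4) arrays of [x1, y1, x2, y2] pixel boxes.
    Returns the mosaic canvas and the remapped boxes.
    """
    rng = rng or np.random.default_rng()
    # Gray-filled canvas, as commonly used for Mosaic in YOLO-style pipelines.
    canvas = np.full((out_size, out_size, 3), 114, dtype=np.uint8)
    # Random mosaic center, kept away from the borders so every quadrant
    # keeps a nonzero area.
    cx = int(rng.uniform(0.25, 0.75) * out_size)
    cy = int(rng.uniform(0.25, 0.75) * out_size)
    # Destination (x1, y1, x2, y2) region for each of the four quadrants.
    regions = [(0, 0, cx, cy), (cx, 0, out_size, cy),
               (0, cy, cx, out_size), (cx, cy, out_size, out_size)]
    all_boxes = []
    for (x1, y1, x2, y2), img, boxes in zip(regions, images, boxes_list):
        h, w = img.shape[:2]
        rw, rh = x2 - x1, y2 - y1
        # Nearest-neighbour resize via integer index maps (avoids a cv2
        # dependency; any interpolation would do in practice).
        ys = np.arange(rh) * h // rh
        xs = np.arange(rw) * w // rw
        canvas[y1:y2, x1:x2] = img[ys][:, xs]
        if len(boxes):
            b = boxes.astype(np.float64).copy()
            b[:, [0, 2]] = b[:, [0, 2]] * (rw / w) + x1  # scale + shift x
            b[:, [1, 3]] = b[:, [1, 3]] * (rh / h) + y1  # scale + shift y
            all_boxes.append(b)
    boxes_out = np.concatenate(all_boxes) if all_boxes else np.zeros((0, 4))
    return canvas, boxes_out
```

In this baseline form, the four source distributions are mixed with uniform random geometry; the abstract indicates that the OMA modifies this scheme to keep the augmented training distribution consistent with the test distribution, with details given in the full text.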