Computer science
Fusion
Artificial intelligence
Object detection
Computer vision
Sensor fusion
Algorithm
Object (grammar)
Pattern recognition (psychology)
Linguistics
Philosophy
Authors
Wenhao Cai,Yajun Chen,Xiao‐Yang Qiu,Meiqi Niu,J.-Y. Li
Source
Journal: IEEE Access (Institute of Electrical and Electronics Engineers)
Date: 2025-01-01
Volume 13, pp. 69967-69979
Cited by: 2
Identifier
DOI: 10.1109/access.2025.3558574
Abstract
Object detection in low-light scenarios has a wide range of applications, but existing algorithms often struggle to preserve the scarce low-level features in dark environments and exhibit limitations in localization accuracy for blurred edges and occluded objects, leading to suboptimal performance. To address these challenges, we propose an improved neck structure, SRB-FPN, to achieve fine-grained cross-level semantic alignment and feature fusion, while also optimizing the regression loss function to develop LLD-YOLO, a detector specifically designed for low-light conditions. To enhance the representation of key feature units and dynamically optimize the fusion weights between shallow and deep features, we introduce the SDFBF module. To improve the diversity of receptive fields and strengthen the network’s multi-scale feature capture capability, we incorporate the DBB-C2f module. Furthermore, we integrate the hard-sample focusing property of Focaler IoU with the geometric perception advantages of MPDIoU, proposing Focal MPDIoU Loss to refine the localization of difficult samples and precisely capture bounding box variations. Ultimately, LLD-YOLO achieves an mAP50 of 70.0% on the ExDark dataset, outperforming the baseline by 2.7 percentage points. Extensive experiments on three public datasets, ExDark, NOD, and RTTS, further validate the superior performance of the proposed method in low-light conditions and its strong adaptability to foggy environments.
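The abstract names the two ingredients of the regression loss (MPDIoU's corner-distance geometry and Focaler IoU's hard-sample re-weighting) but does not give the exact formula. Below is a minimal PyTorch sketch assuming the combination follows the common Focaler-IoU recipe of L = L_MPDIoU + IoU - IoU_focaler; the function name `focal_mpdiou_loss` and the interval parameters `d` and `u` are illustrative, not taken from the paper.

```python
# Hedged sketch of a Focal MPDIoU-style regression loss (not the paper's exact code).
import torch

def focal_mpdiou_loss(pred, target, img_w, img_h, d=0.0, u=0.95, eps=1e-7):
    """pred, target: (N, 4) boxes as (x1, y1, x2, y2); img_w, img_h: input image size.
    d, u: assumed Focaler-IoU interval that controls which samples the loss emphasizes."""
    # Plain IoU between predicted and ground-truth boxes.
    inter_x1 = torch.max(pred[:, 0], target[:, 0])
    inter_y1 = torch.max(pred[:, 1], target[:, 1])
    inter_x2 = torch.min(pred[:, 2], target[:, 2])
    inter_y2 = torch.min(pred[:, 3], target[:, 3])
    inter = (inter_x2 - inter_x1).clamp(0) * (inter_y2 - inter_y1).clamp(0)
    area_p = (pred[:, 2] - pred[:, 0]).clamp(0) * (pred[:, 3] - pred[:, 1]).clamp(0)
    area_t = (target[:, 2] - target[:, 0]).clamp(0) * (target[:, 3] - target[:, 1]).clamp(0)
    iou = inter / (area_p + area_t - inter + eps)

    # MPDIoU: penalize squared distances between matching top-left and bottom-right
    # corners, normalized by the squared image diagonal, as in the MPDIoU paper.
    d1 = (pred[:, 0] - target[:, 0]) ** 2 + (pred[:, 1] - target[:, 1]) ** 2
    d2 = (pred[:, 2] - target[:, 2]) ** 2 + (pred[:, 3] - target[:, 3]) ** 2
    norm = img_w ** 2 + img_h ** 2
    mpdiou = iou - d1 / norm - d2 / norm

    # Focaler-IoU: linearly re-map IoU onto the interval [d, u] so that hard
    # (low-IoU) samples contribute more; values outside the interval saturate.
    iou_focaler = ((iou - d) / (u - d)).clamp(0, 1)

    # Assumed combination in the style of L_Focaler-XIoU = L_XIoU + IoU - IoU_focaler.
    return (1 - mpdiou) + (iou - iou_focaler)
```

The corner-distance term gives the loss direct sensitivity to box position and size even when IoU gradients are weak, while the Focaler re-mapping shifts gradient mass toward poorly localized (hard) samples, which matches the abstract's stated motivation for difficult, blurred, or occluded objects.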