Authors
Yue Liao, Lerong Li, Han Xiao, Fangzhou Xu, Bochen Shan, Hua Yin
Abstract
Accurate quantification of the number of dropped citrus fruits plays a vital role in evaluating the disaster resistance of citrus varieties and selecting superior cultivars, yet research in this critical area remains notably insufficient. To bridge this gap, we conducted in-depth experiments on a custom dataset of 1200 citrus images and propose YOLO-MECD, a lightweight model built upon the YOLOv11s architecture. First, the EMA attention mechanism replaces the original C2PSA attention mechanism; this modification not only enhances feature extraction and detection accuracy for citrus fruits but also significantly reduces the number of model parameters. Second, a CSPPC module based on partial convolution replaces the original C3K2 module, effectively reducing both parameter count and computational complexity while maintaining mAP. Finally, the MPDIoU loss function is employed, improving bounding box regression accuracy and accelerating model convergence. Notably, our research reveals that reducing convolution operations in the backbone substantially enhances small-object detection and markedly decreases model parameters, proving more effective than adding dedicated small-object detection heads. Experimental results and comparisons with similar network models indicate that YOLO-MECD achieves significant improvements in both detection performance and computational efficiency. The model demonstrates excellent overall performance in citrus detection, with a precision (P) of 84.4%, a recall (R) of 73.3%, and a mean average precision (mAP) of 81.6%. Compared with the baseline, YOLO-MECD improves precision, recall, and mAP by 0.2, 4.1, and 3.9 percentage points, respectively. Furthermore, the number of parameters is reduced from 9,413,574 in YOLOv11s to 2,297,334 (a decrease of 75.6%), and the model size is compressed from 18.2 MB to 4.66 MB (a reduction of 74.4%). YOLO-MECD also outperforms contemporary models, with mAP improvements of 3.8%, 3.2%, and 5.5% over YOLOv8s, YOLOv9s, and YOLOv10s, respectively. The model's versatility is evidenced by its strong detection performance across various citrus fruits, including pomelos and kumquats. These results establish YOLO-MECD as a robust technical foundation for citrus fruit drop detection and the development of smart orchards.
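
To make the partial-convolution idea behind the CSPPC module concrete, the following is a minimal PyTorch sketch of a PConv-style block in which only a fraction of the channels are convolved and the rest pass through unchanged. The class name, channel-split ratio, and kernel size are illustrative assumptions for this sketch, not the paper's exact CSPPC implementation.

import torch
import torch.nn as nn

class PartialConv(nn.Module):
    """Sketch of a partial convolution (PConv) block: a regular 3x3 conv is
    applied to only a fraction of the channels, while the remaining channels
    are passed through untouched, which keeps parameters and FLOPs low."""
    def __init__(self, channels: int, ratio: float = 0.25):
        super().__init__()
        # Number of channels that actually go through the convolution
        # (the 1/4 ratio is an assumption, not the paper's setting).
        self.conv_ch = max(1, int(channels * ratio))
        self.conv = nn.Conv2d(self.conv_ch, self.conv_ch, kernel_size=3, padding=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Split into the convolved part and the identity (untouched) part.
        x1, x2 = torch.split(x, [self.conv_ch, x.size(1) - self.conv_ch], dim=1)
        return torch.cat([self.conv(x1), x2], dim=1)

# Usage example: a 64-channel feature map passes through unchanged in shape.
# feat = torch.randn(1, 64, 80, 80); out = PartialConv(64)(feat)  # -> (1, 64, 80, 80)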
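For reference, the MPDIoU loss mentioned above is commonly written as follows (a sketch assuming the standard formulation of Siliang and Yong, 2023; the paper's methods section may differ in detail):

$$
\mathcal{L}_{\mathrm{MPDIoU}} = 1 - \mathrm{IoU} + \frac{d_1^{2}}{w^{2}+h^{2}} + \frac{d_2^{2}}{w^{2}+h^{2}},
$$

where $d_1$ and $d_2$ are the distances between the top-left and bottom-right corners of the predicted and ground-truth boxes, respectively, and $w$ and $h$ are the width and height of the input image. Penalizing the two corner distances directly is what tightens box localization and speeds up convergence relative to plain IoU-based losses.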