Keywords
computer science; scalability; computational overhead; context; features; artificial intelligence; upsampling; feature extraction; object detection; convolution; image stitching; task alignment; model components; feature representation; machine learning; data mining; pattern recognition; deep learning; feature learning; feature models; computer engineering; resource utilization; real-time computing; distributed computing; spatial analysis; image noise; image processing; computer vision; reinforcement learning
Authors
Yaping Zhang, Rui-qiang Guo, Min Li
Identifier
DOI:10.1088/1361-6501/ae1a07
Abstract
With the advancement of smart agriculture, accurate and rapid detection of rice diseases has become essential for ensuring food security. Deep learning has made significant progress in object detection in recent years, but most existing methods struggle to balance model size, detection accuracy, and processing speed, limiting their practical application in resource-constrained environments. To address this challenge, we propose a lightweight and efficient network, LCE-Net (Lightweight Convolution-Efficient Network), designed specifically for rice disease detection. The backbone of LCE-Net incorporates a scalable module called Scaling RepGhost-CSPELAN (SRG-CSPELAN), which enhances gradient flow and strengthens feature extraction while keeping the model compact. To further improve performance, we introduce an Attention-based Internal Feature Interaction (AIFI) structure, which leverages attention mechanisms to reduce computational overhead while enhancing the model's ability to identify critical features. Additionally, we adopt an improved adaptive downsampling convolution that efficiently reduces feature-map dimensions without losing essential spatial information, and we integrate a context anchor attention mechanism to boost feature representation in central regions and improve resource utilization. Finally, we design a Dynamic Task-Aligned Detection Head that combines task collaboration with adaptive computation, striking a practical balance between accuracy and efficiency. We evaluated LCE-Net on both a public rice disease dataset and a self-constructed dataset. Experimental results demonstrate that LCE-Net outperforms several state-of-the-art methods in both accuracy and detection speed, achieving 95.0% accuracy at 0.1901 s per image on the public dataset and 98.6% accuracy at 0.0106 s per image on the self-built dataset.
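The abstract does not specify the internals of the AIFI structure, but attention-based feature interaction of this kind is typically built on scaled dot-product attention. The following is a minimal pure-Python sketch of that core operation for illustration only; the function names and the use of plain lists (rather than tensors) are my own assumptions, not the authors' implementation.

```python
import math


def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]


def scaled_dot_product_attention(Q, K, V):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d)) V.

    Q, K, V are lists of vectors (lists of floats); each query attends
    over all keys, and the output is the attention-weighted sum of values.
    """
    d = len(Q[0])  # key/query dimension, used for score scaling
    out = []
    for q in Q:
        # Similarity of this query to every key, scaled by sqrt(d)
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in K]
        weights = softmax(scores)
        # Weighted combination of the value vectors
        out.append([sum(w * v[j] for w, v in zip(weights, V))
                    for j in range(len(V[0]))])
    return out
```

In an AIFI-style block, such attention would be applied within a single high-level feature map (queries, keys, and values all derived from the same features), which is what limits the computational overhead relative to attending across every backbone scale.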