RGB color model
Shoot (botany)
Computer vision
Artificial intelligence
Mathematics
Computer graphics (images)
Field (mathematics)
Computer science
Remote sensing
Horticulture
Geography
Biology
Pure mathematics
Authors
Yatao Li, Leiying He, Jiangming Jia, Jun Lv, Jianneng Chen, Xin Qiao, Chuanyu Wu
Identifier
DOI: 10.1016/j.compag.2021.106149
Abstract
Tea shoot detection and localization are highly challenging tasks because of varying illumination, inevitable occlusion, tiny targets, and dense growth. To achieve automatic plucking of tea shoots in a tea garden, a reliable algorithm based on red-green-blue-depth (RGB-D) camera images was developed to detect and locate tea shoots in the field for tea harvesting robots. In this study, labeling criteria were first established for images collected across multiple periods and varieties in the tea garden. A "you only look once" (YOLO) network was then used to detect tea shoot (one bud with one leaf) regions on RGB images collected by an RGB-D camera; the detection precision for tea shoots was 93.1% and the recall rate was 89.3%. To achieve three-dimensional (3D) localization of the plucking position, 3D point clouds of the detected target regions were acquired by fusing the depth and RGB images captured by the RGB-D camera. Noise was then removed by point cloud pre-processing, and the point cloud of the tea shoots was obtained using Euclidean clustering and a target point cloud extraction algorithm. Finally, the 3D plucking position of the tea shoots was determined by combining tea growth characteristics, point cloud features, and the sleeve plucking scheme, which solves the problem that the plucking point may be invisible in the field. To verify the effectiveness of the proposed algorithm, tea shoot localization and plucking experiments were conducted in the tea garden. The plucking success rate for tea shoots was 83.18%, and the average localization time for each target was about 24 ms. These results demonstrate that the proposed method can be used for robotic tea plucking.
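The localization step the abstract describes (Euclidean clustering of the fused point cloud, then choosing a plucking position from the shoot's geometry) can be sketched roughly as below. This is a minimal illustrative sketch, not the authors' implementation: it uses a brute-force radius search instead of a k-d tree, treats the largest cluster as the shoot, and takes the lowest point (minimum z) as a stand-in for the plucking position; the paper's actual method also uses growth characteristics and the sleeve plucking scheme, which are omitted here.

```python
import numpy as np

def euclidean_cluster(points, tol=0.01, min_size=5):
    """Group 3D points (N x 3 array) into clusters: two points belong to
    the same cluster if they are within `tol` metres of each other,
    directly or through a chain of neighbors."""
    n = len(points)
    visited = np.zeros(n, dtype=bool)
    clusters = []
    for seed in range(n):
        if visited[seed]:
            continue
        queue = [seed]
        visited[seed] = True
        members = []
        while queue:
            i = queue.pop()
            members.append(i)
            # Brute-force radius search; a k-d tree would replace this
            # for realistic point cloud sizes.
            dists = np.linalg.norm(points - points[i], axis=1)
            neighbors = np.where((dists < tol) & ~visited)[0]
            visited[neighbors] = True
            queue.extend(neighbors.tolist())
        if len(members) >= min_size:  # drop small noise clusters
            clusters.append(np.array(members))
    return clusters

def plucking_point(points, clusters):
    """Hypothetical plucking-position rule: take the largest cluster as
    the tea shoot and return its lowest point (minimum z)."""
    shoot = max(clusters, key=len)
    pts = points[shoot]
    return pts[np.argmin(pts[:, 2])]
```

In practice the `tol` and `min_size` parameters would be tuned to the camera's depth resolution and the expected shoot size; the clustering separates the shoot from leftover background points that survive pre-processing.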