Bearing (navigation)
Point (geometry)
Artificial intelligence
Segmentation
Set (abstract data type)
Robot
Computer vision
Computer science
Pattern recognition (psychology)
Mathematics
Geometry
Programming language
Authors
Zhuo Zhong, Juntao Xiong, Bolin Liu, Shisheng Liao, Zhaowei Huo, Zhengang Yang
Identifiers
DOI:10.1016/j.compag.2021.106398
Abstract
Accurate identification of picking points is key to the intelligent operation of a litchi-picking robot: before picking a fruit, the robot must first detect the location of the picking point. To locate picking points more accurately, this paper proposes a method based on detecting the litchi's main fruit bearing branch (MFBB). In the natural environment, litchi MFBBs resemble non-MFBB branches, so visual detection easily returns incorrect MFBBs and causes picking failures. To identify litchi MFBBs in the natural environment quickly and accurately, this paper proposes an MFBB detection method based on YOLACT. First, litchi fruit and MFBB were annotated together as a litchi-cluster label, and a data set of litchi clusters and MFBBs was established, so that the YOLACT model could learn the connection between fruit and MFBB from the data set. Then, based on the litchi-cluster and MFBB segmentation masks output by the model, the pixel-width difference between fruit and MFBB was used to segment the part of the litchi-cluster mask belonging to the MFBB, yielding a more complete MFBB and improving the MFBB recall rate. Finally, the middle point of the MFBB mask was taken as the picking point, and the angle of the MFBB was determined by skeleton extraction and least-squares fitting to provide a reference for the robot's picking posture. Experimental results showed that the precision of the picking points computed by this method was 89.7%, the F1 score was 83.8%, and the average running time per image was 0.154 s, indicating that the proposed method detects litchi picking points well and can provide technical support for the visual recognition system of a litchi-picking robot.
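The final step of the pipeline (skeleton extraction, least-squares fitting of the branch direction, and choosing the mask midpoint as the picking point) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the helper name `picking_point_and_angle` and the use of the median-x skeleton point as the "middle point" are assumptions, and a real branch skeleton would come from a skeletonized MFBB mask rather than a hand-made point list.

```python
import math
from statistics import median

def picking_point_and_angle(skeleton_xy):
    """Estimate a picking point and branch angle from MFBB skeleton pixels.

    skeleton_xy: sequence of (x, y) pixel coordinates along the skeleton
    of the main fruit bearing branch. Assumes the branch is not vertical
    (a vertical branch would make the least-squares slope undefined).
    """
    xs = [p[0] for p in skeleton_xy]
    ys = [p[1] for p in skeleton_xy]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    # Ordinary least-squares fit of the line y = k*x + b to the skeleton.
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    k = sxy / sxx
    b = my - k * mx
    # Branch inclination relative to the image x-axis, in degrees;
    # this is the reference angle for the robot's picking posture.
    angle = math.degrees(math.atan(k))
    # Take the skeleton's middle point (median x, projected onto the
    # fitted line) as the candidate picking point.
    xm = median(xs)
    return (xm, k * xm + b), angle
```

For a skeleton running diagonally through the image, e.g. `[(0, 0), (1, 1), (2, 2), (3, 3), (4, 4)]`, the fit recovers a 45° branch with the picking point at the central pixel `(2, 2)`.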