Robot
Computer science
Artificial intelligence
Search and rescue
Object (grammar)
Perspective (graphical)
Rescue robot
Face (sociological concept)
Robot learning
Human–computer interaction
Computer vision
Machine learning
Mobile robot
Social science
Sociology
Authors
M. I. R. Shuvo, Bailey Wimer, Saifuddin Mahmud, Jonghoon Kim
Identifier
DOI: 10.1109/IECON51785.2023.10312013
Abstract
In a search and rescue (SAR) operation in a disaster scenario, a robot may fail to detect all the required objects with its current vision, either because it sees only a partial view of some objects or because its trained model is not capable of recognizing every object present. This research establishes a method by which SAR robots can learn without human interaction by combining their own knowledge with that of other robots. It assumes that every robot carries a basic machine learning model that can detect objects visible from its own perspective; in this study, YOLOv8 trained on a custom dataset serves as that basic model. As a proof of concept, two robots and one object are used, with one robot seeing the object fully and the other only partially. The partial-view images were auto-labeled, and the partial-view robot's model was retrained on the auto-labeled data until a predetermined condition was met. The final retrained model, incorporating the partial-view information, was then shared with the other robot in the environment. The proposed system was tested in numerous scenarios and generally performed well.
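The auto-label-and-retrain loop described in the abstract can be sketched as follows. This is a minimal toy illustration under assumptions, not the authors' implementation: the real system fine-tunes YOLOv8 on auto-labeled images, whereas here a "model" is reduced to a single detection-confidence score, and `auto_label`, `retrain`, and the stopping threshold are hypothetical stand-ins for the paper's components.

```python
# Toy sketch (assumed, not from the paper): a full-view "teacher" robot
# auto-labels images for a partial-view "student" robot, which retrains
# until a predetermined condition (confidence >= threshold) is met.

def auto_label(teacher_conf, images):
    """Teacher robot labels the partial-view images with its detections.
    Returns (image, label, confidence) tuples."""
    return [(img, "target_object", teacher_conf) for img in images]

def retrain(student_conf, labeled_data):
    """One retraining round on auto-labeled data; as a toy stand-in for a
    YOLOv8 fine-tune, the student's confidence moves toward the teacher's."""
    teacher_conf = labeled_data[0][2]
    return student_conf + 0.5 * (teacher_conf - student_conf)

def knowledge_transfer(student_conf, teacher_conf, images,
                       threshold=0.85, max_rounds=20):
    """Retrain the partial-view robot until the stopping condition holds,
    then return the model (here, its confidence) to share with other robots."""
    rounds = 0
    while student_conf < threshold and rounds < max_rounds:
        data = auto_label(teacher_conf, images)
        student_conf = retrain(student_conf, data)
        rounds += 1
    return student_conf, rounds

final_conf, rounds = knowledge_transfer(student_conf=0.40, teacher_conf=0.95,
                                        images=["partial_view.jpg"])
print(round(final_conf, 3), rounds)  # → 0.881 3
```

Here the loop terminates after three rounds (0.40 → 0.675 → 0.8125 → 0.88125), at which point the retrained model would be passed to the other robot, mirroring the sharing step the abstract describes.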