Computer science
Context (archaeology)
Workflow
Robot
Segmentation
Artificial intelligence
Feature (linguistics)
Real-time computing
Computer vision
Machine learning
Database
Linguistics
Biology
Philosophy
Paleontology
Authors
Mikhail Volkov,Daniel A. Hashimoto,Guy Rosman,Ozanan R. Meireles,Daniela Rus
Identifier
DOI:10.1109/icra.2017.7989093
Abstract
Context-aware segmentation of laparoscopic and robot-assisted surgical video has been shown to improve performance and perioperative workflow efficiency, and can be used for education and time-critical consultation. Modern pressures on productivity preclude manual video analysis, and hospital policies and legacy infrastructure often prohibit recording and storing large amounts of data. In this paper we present a system that automatically generates a video segmentation of laparoscopic and robot-assisted procedures according to their underlying surgical phases, using minimal computational resources and small amounts of training data. Our system uses an SVM and HMM in combination with an augmented feature space that captures the variability of these video streams without requiring analysis of the nonrigid and variable environment. By using the data reduction capabilities of online k-segment coreset algorithms we can efficiently produce results of approximately equal quality, in real time. We evaluate our system in cross-validation experiments and propose a blueprint for piloting such a system in a real operating-room environment with minimal risk factors.
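The SVM-plus-HMM combination described in the abstract can be illustrated with a minimal sketch: an SVM scores each frame independently against the candidate surgical phases, and an HMM with a strong self-transition bias then smooths these noisy per-frame decisions into contiguous phase segments via Viterbi decoding. The code below is an assumption-laden illustration, not the authors' implementation: the `frame_scores` input stands in for SVM confidence values (treated as log-likelihoods), and the uniform transition model with a single `self_prob` parameter is a simplification chosen for clarity.

```python
import math

def viterbi_smooth(frame_scores, num_phases, self_prob=0.95):
    """Smooth noisy per-frame phase scores into a coherent segmentation.

    frame_scores: list of length-T rows; frame_scores[t][k] is a
    stand-in SVM confidence (interpreted as a log-likelihood) that
    frame t belongs to phase k.
    """
    # Transition model: a strong self-transition bias keeps segments
    # contiguous, suppressing single-frame misclassifications.
    switch_prob = (1.0 - self_prob) / (num_phases - 1)
    log_self = math.log(self_prob)
    log_switch = math.log(switch_prob)

    T = len(frame_scores)
    # dp[k] = best log-score of any path ending in phase k at frame t
    dp = list(frame_scores[0])
    back = []  # backpointers, one row per frame after the first
    for t in range(1, T):
        new_dp, ptrs = [], []
        for k in range(num_phases):
            best_j = 0
            best_v = dp[0] + (log_self if k == 0 else log_switch)
            for j in range(1, num_phases):
                v = dp[j] + (log_self if j == k else log_switch)
                if v > best_v:
                    best_j, best_v = j, v
            new_dp.append(best_v + frame_scores[t][k])
            ptrs.append(best_j)
        dp = new_dp
        back.append(ptrs)

    # Backtrack the highest-scoring phase sequence.
    k = max(range(num_phases), key=lambda i: dp[i])
    path = [k]
    for ptrs in reversed(back):
        k = ptrs[k]
        path.append(k)
    path.reverse()
    return path

# Five frames, two phases: a lone frame (t=2) weakly favours phase 1.
scores = [[0.0, -1.0], [0.0, -1.0], [-0.5, 0.0], [0.0, -1.0], [0.0, -1.0]]
print(viterbi_smooth(scores, num_phases=2))  # the blip is smoothed away
```

A raw per-frame argmax over these scores would yield `[0, 0, 1, 0, 0]`; the self-transition bias makes switching phases for a single frame more expensive than the 0.5 score gap, so the decoder returns an all-phase-0 segmentation. In the paper's setting, the coreset machinery would additionally reduce the frame stream before this kind of decoding, which this sketch does not attempt to show.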