Authors
Jialun Pei, Jiaan Zhang, Guanyi Qin, Kai Wang, Yueming Jin, Pheng-Ann Heng
Identifier
DOI: 10.1109/tmi.2025.3590457
Abstract
Surgical action triplet detection offers intuitive intraoperative scene analysis for dynamically perceiving laparoscopic surgical workflows and analyzing the interactions between instruments and tissues. The central challenge of this task is to localize surgical instruments while simultaneously recognizing surgical triplets more accurately, so as to build a comprehensive understanding of intraoperative surgical scenes. To fully leverage the spatial localization of surgical instruments for triplet detection, we propose an Instrument-Tissue-Guided Triplet detector, termed ITG-Trip, which uses instrument and tissue pseudo-localization labels to guide the aggregation of surgical action cues and optimize action triplet detection. To exploit textual and temporal cues, our framework incorporates a Visual-Linguistic Association (VLA) module that employs a pre-trained text encoder to distill textual prior knowledge, enriching the semantic information in global visual features and compensating for the weak perception of rare interaction classes. In addition, we introduce a Mamba-enhanced Spatial-temporal Perception (MSP) decoder, which interleaves Mamba and Transformer blocks to capture subject- and object-aware spatial and temporal information, improving the accuracy of action triplet detection in long surgical video sequences. Experimental results on the CholecT50 benchmark show that our method significantly outperforms existing state-of-the-art methods in both instrument localization and action triplet detection. The code is available at: github.com/PJLallen/ITG-Trip.
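Since the abstract only sketches the architecture, the following PyTorch sketch is a hypothetical illustration of the two core ideas as described above: fusing pre-trained text-encoder priors into visual features via cross-attention (the VLA idea) and interleaving Mamba and Transformer blocks over a frame sequence (the MSP idea). All class names, shapes, and hyperparameters, as well as the use of the mamba_ssm package, are assumptions made for illustration; the authors' actual implementation is in the linked repository.

```python
# Hypothetical sketch, not the authors' code (see github.com/PJLallen/ITG-Trip).
import torch
import torch.nn as nn
from mamba_ssm import Mamba  # assumes the mamba_ssm package is installed


class VisualLinguisticAssociation(nn.Module):
    """Fuse text-encoder priors into visual features via cross-attention."""

    def __init__(self, d_model: int = 256, n_heads: int = 8):
        super().__init__()
        self.cross_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.norm = nn.LayerNorm(d_model)

    def forward(self, visual: torch.Tensor, text_prior: torch.Tensor) -> torch.Tensor:
        # visual: (B, T, d); text_prior: (B, n_classes, d)
        attended, _ = self.cross_attn(query=visual, key=text_prior, value=text_prior)
        return self.norm(visual + attended)  # residual fusion of textual priors


class MambaTransformerLayer(nn.Module):
    """One interleaved block: Mamba for long-range temporal mixing,
    then a Transformer encoder layer for attention-based refinement."""

    def __init__(self, d_model: int = 256, n_heads: int = 8):
        super().__init__()
        self.mamba = Mamba(d_model=d_model, d_state=16, d_conv=4, expand=2)
        self.norm = nn.LayerNorm(d_model)
        self.attn = nn.TransformerEncoderLayer(
            d_model, n_heads, dim_feedforward=4 * d_model, batch_first=True
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, T, d) -- a temporal sequence of frame-level features
        x = self.norm(x + self.mamba(x))  # selective-state-space temporal pass
        return self.attn(x)


class MSPDecoderSketch(nn.Module):
    """Stack of interleaved Mamba/Transformer layers producing per-frame
    triplet-class logits; the depth and head are illustrative choices."""

    def __init__(self, d_model: int = 256, depth: int = 3, n_triplets: int = 100):
        super().__init__()
        self.vla = VisualLinguisticAssociation(d_model)
        self.layers = nn.ModuleList(MambaTransformerLayer(d_model) for _ in range(depth))
        self.head = nn.Linear(d_model, n_triplets)

    def forward(self, frames: torch.Tensor, text_prior: torch.Tensor) -> torch.Tensor:
        x = self.vla(frames, text_prior)
        for layer in self.layers:
            x = layer(x)
        return self.head(x)  # (B, T, n_triplets)


if __name__ == "__main__":
    device = "cuda"  # mamba_ssm's fused kernels require a CUDA device
    model = MSPDecoderSketch().to(device)
    frames = torch.randn(2, 16, 256, device=device)       # 2 clips x 16 frames
    text_prior = torch.randn(2, 100, 256, device=device)  # 100 class embeddings
    print(model(frames, text_prior).shape)                # torch.Size([2, 16, 100])
```

Here the text priors are passed in as a plain tensor to keep the sketch self-contained; per the abstract, they would instead be produced by a frozen pre-trained text encoder applied to triplet class descriptions.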