Keywords
Artificial intelligence
Computer vision
Computer science
Segmentation
Image segmentation
Event (particle physics)
Image resolution
Object (grammar)
Object detection
Pattern recognition (psychology)
Physics
Quantum mechanics
Authors
Lin Zhu, Xianzhang Chen, Lizhi Wang, Xiao Wang, Yonghong Tian, Hua Huang
Identifier
DOI: 10.1109/tpami.2024.3477591
Abstract
Event cameras are novel bio-inspired sensors whose pixels operate independently and asynchronously, reporting per-pixel intensity changes as events. Leveraging the microsecond temporal resolution (no motion blur) and high dynamic range (robust under extreme lighting) of events, there is considerable promise in directly segmenting objects from sparse, asynchronous event streams in various applications. However, unlike video object segmentation, which can exploit rich appearance cues, segmenting complete objects from a sparse event stream is challenging. In this paper, we present the first framework for continuous-time object segmentation from event streams. Given the object mask at an initial time, the task is to segment the complete object at any subsequent time in the event stream. Specifically, our framework consists of a Recurrent Temporal Embedding Extraction (RTEE) module based on a novel ResLSTM, a Cross-time Spatiotemporal Feature Modeling (CSFM) module, a transformer architecture with long-term and short-term matching modules, and a segmentation head. The historical events and masks (reference sets) are recurrently fed into our framework along with current-time events. The temporal embedding is updated as new events arrive, enabling our framework to continuously process the event stream. To train and test our model, we construct both real-world and simulated event-based object segmentation datasets, each comprising event streams, APS images, and object annotations. Extensive experiments on our datasets demonstrate the effectiveness of the proposed recurrent architecture. Our code and dataset are available at https://sites.google.com/view/ecos-net/.
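The abstract only outlines the control flow: event chunks arrive over time, a recurrent module updates a temporal embedding per chunk, and a transformer fuses current features with that state before a segmentation head predicts a mask. The sketch below illustrates that recurrent loop and nothing more; the class name ECOSNet, the per-event tokenization, and the LSTM/transformer stand-ins for the paper's ResLSTM and long/short-term matching modules are all hypothetical assumptions, not the released implementation (see the project page above for the actual code).

```python
# Minimal sketch of a continuous-time recurrent segmentation loop.
# All module choices here are illustrative stand-ins, not the authors' code.
import torch
import torch.nn as nn

class ECOSNet(nn.Module):  # hypothetical name
    def __init__(self, dim=64):
        super().__init__()
        # Stand-in for RTEE: a plain LSTM cell in place of the paper's ResLSTM.
        self.rtee = nn.LSTMCell(dim, dim)
        # Stand-in for CSFM: a generic transformer encoder in place of the
        # long-term / short-term matching modules.
        self.csfm = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True),
            num_layers=2,
        )
        self.embed = nn.Linear(4, dim)  # per-event (x, y, t, polarity) token (assumed)
        self.head = nn.Linear(dim, 1)   # segmentation head -> per-token mask logit

    def forward(self, event_chunks):
        """event_chunks: list of (N_i, 4) event tensors, one chunk per query time."""
        h = c = None
        masks = []
        for events in event_chunks:
            tokens = self.embed(events)        # (N, dim) event tokens
            feat = tokens.mean(dim=0)          # pooled feature for this chunk
            if h is None:
                h = torch.zeros_like(feat).unsqueeze(0)
                c = torch.zeros_like(feat).unsqueeze(0)
            # Recurrent step: the temporal embedding is updated as events arrive.
            h, c = self.rtee(feat.unsqueeze(0), (h, c))
            # Fuse current tokens with the recurrent state, then predict a mask.
            fused = self.csfm(torch.cat([h.unsqueeze(0), tokens.unsqueeze(0)], dim=1))
            masks.append(torch.sigmoid(self.head(fused[:, 1:])))  # drop state token
        return masks
```

A "chunk" here is simply the events accumulated between two query times. The real framework additionally conditions on the initial-time object mask and recurrently feeds back historical events and masks as reference sets, which this sketch omits for brevity.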