Event (particle physics)
Computer science
Computer vision
Artificial intelligence
Physics
Quantum mechanics
Authors
Yueyi Zhang, Jin Wang, Wenming Weng, Xiaoyan Sun, Zhiwei Xiong
Identifiers
DOI: 10.1109/TNNLS.2025.3543381
Abstract
Recent research has explored leveraging event cameras, known for their prowess in capturing scenes with nonuniform motion, for video deraining, leading to performance improvements. However, existing event-based methods still face a key challenge: the complex spatiotemporal distribution of rain disrupts temporal information fusion and complicates feature separation. This article proposes a novel end-to-end learning framework for video deraining that effectively extracts the rich dynamic information provided by the event stream. Our framework incorporates two key modules: an event-aware motion detection (EAMD) module that adaptively aggregates multiframe motion information using event-driven masks, and a pyramidal adaptive selection module that separates background and rain layers by leveraging contextual priors from both event and conventional camera data. To facilitate efficient training, we introduce a real-world dataset of synchronized rainy videos and event streams. Extensive evaluations on both synthetic and real-world datasets demonstrate the superiority of our proposed method compared to state-of-the-art approaches. The code is available at https://github.com/booker-max/EGVD.