Computer vision
Artificial intelligence
Frame (networking)
Computer science
Object (grammar)
Camera auto-calibration
Event (particle physics)
Fusion
Object detection
Image fusion
Sensor fusion
Computer graphics (images)
Camera resectioning
Pattern recognition (psychology)
Physics
Image (mathematics)
Telecommunications
Linguistics
Philosophy
Quantum mechanics
Authors
Haixin Sun,Songming Chen,Minh-Quan Dao,Vincent Frémont
Identifier
DOI:10.1109/soli60636.2023.10425107
Abstract
Moving object detection is a crucial task for autonomous vehicles. Indeed, dynamic objects represent a higher collision risk than static ones, so vehicle trajectories must be planned according to the motion forecasts of the moving participants in the scene. Traditional frame-based cameras provide images of accumulated pixel brightness with no temporal information between frames, so optical flow computation is used to recover inter-frame motion. Interestingly, event-based cameras preserve motion information by delivering a precise timestamp for each asynchronous event, which makes them more suitable for motion analysis. In addition, their high temporal resolution and high dynamic range allow them to work in fast-motion and extreme-lighting scenarios. In this work, we propose a new Deep Neural Network, called EV-FuseMODNet, for Moving Object Detection (MOD) that captures motion and appearance information from both event-based and frame-based cameras. The proposed method has been evaluated on the extended KittiMoSeg dataset and the generated dark KITTI sequence, achieving an overall 27.5% relative improvement on the extended KittiMoSeg dataset over state-of-the-art approaches. The code is released at https://github.com/adosum/EV-FuseMODNet.
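Before an asynchronous event stream can be fed to a convolutional network alongside frame-based images, it is typically rasterized into a dense tensor. The sketch below shows one common such encoding (a per-polarity event-count image); the function name and layout are illustrative assumptions, not the paper's actual input representation.

```python
import numpy as np

def events_to_count_image(events, height, width):
    """Rasterize asynchronous events into a 2-channel count image.

    events: iterable of (x, y, t, polarity) tuples, where polarity is
            +1 (brightness increase) or -1 (brightness decrease).
    Returns a float32 array of shape (2, height, width): channel 0
    counts positive events, channel 1 counts negative events.
    """
    frame = np.zeros((2, height, width), dtype=np.float32)
    for x, y, t, p in events:
        channel = 0 if p > 0 else 1
        frame[channel, y, x] += 1.0
    return frame

# Hypothetical toy stream: two positive events at (x=1, y=2),
# one negative event at (x=0, y=0).
events = [(1, 2, 0.10, 1), (1, 2, 0.25, 1), (0, 0, 0.30, -1)]
img = events_to_count_image(events, height=4, width=4)
```

In practice the timestamps `t` can also be used to build richer encodings (e.g. time surfaces or temporal voxel grids) that keep more of the motion information the abstract highlights; the count image above discards them for simplicity.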