Computer science
Artificial intelligence
Robustness (evolution)
Feature extraction
Discriminative model
Event (particle physics)
Computer vision
Machine learning
Pattern recognition (psychology)
Optical flow
Margin (machine learning)
Feature (linguistics)
Image (mathematics)
Quantum mechanics
Biochemistry
Gene
Physics
Philosophy
Linguistics
Chemistry
Authors
Yongjian Deng, Hao Chen, Huiying Chen, Youfu Li
Identifier
DOI: 10.1109/tip.2021.3077136
Abstract
Event cameras have recently drawn massive attention in the computer vision community because of their low power consumption and high response speed. These cameras produce sparse and non-uniform spatiotemporal representations of a scene. These characteristics of representations make it difficult for event-based models to extract discriminative cues (such as textures and geometric relationships). Consequently, event-based methods usually perform poorly compared to their conventional image counterparts. Considering that traditional images and event signals share considerable visual information, this paper aims to improve the feature extraction ability of event-based models by using knowledge distilled from the image domain to additionally provide explicit feature-level supervision for the learning of event data. Specifically, we propose a simple yet effective distillation learning framework, including multi-level customized knowledge distillation constraints. Our framework can significantly boost the feature extraction process for event data and is applicable to various downstream tasks. We evaluate our framework on high-level and low-level tasks, i.e., object classification and optical flow prediction. Experimental results show that our framework can effectively improve the performance of event-based models on both tasks by a large margin. Furthermore, we present a 10K dataset (CEP-DVS) for event-based object classification. This dataset consists of samples recorded under random motion trajectories that can better evaluate the motion robustness of the event-based model and is compatible with multi-modality vision tasks.
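The feature-level supervision described in the abstract can be pictured as a weighted sum of per-level discrepancies between a frozen image-branch teacher and an event-branch student. The sketch below is illustrative only, assuming MSE as the per-level discrepancy and hand-picked level weights; it is not the paper's actual loss formulation or code.

```python
# Hypothetical sketch of multi-level feature distillation (not the authors' code).
# Teacher features come from an image model; student features from an event model.

def mse(a, b):
    """Mean squared error between two equal-length feature vectors."""
    assert len(a) == len(b)
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

def distillation_loss(student_feats, teacher_feats, weights):
    """Weighted sum of per-level feature discrepancies.

    student_feats / teacher_feats: lists of feature vectors, one per network level.
    weights: illustrative per-level weighting coefficients.
    """
    return sum(w * mse(s, t)
               for w, s, t in zip(weights, student_feats, teacher_feats))

# Toy two-level example: the teacher supervises the student at each level.
teacher = [[1.0, 2.0], [0.5, 0.5]]
student = [[0.0, 2.0], [0.5, 1.5]]
loss = distillation_loss(student, teacher, weights=[1.0, 0.5])  # -> 0.75
```

In practice the total training objective would combine this distillation term with the task loss (e.g., classification or optical-flow loss) on the event branch, so the image-domain knowledge acts as auxiliary supervision rather than replacing the task signal.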