Topics
Computer science, Artificial intelligence, Robustness, Benchmark, Change detection, Convolutional neural network, Machine learning, Deep learning, Feature extraction, Encoder, Computer vision, Pattern recognition
Authors
Murari Mandal, Santosh Kumar Vipparthi
Identifier
DOI:10.1109/tits.2020.3030801
Abstract
Visual change detection in video is one of the essential tasks in computer vision. Recently, a number of supervised deep learning methods have achieved top performance on benchmark change detection datasets. However, the inconsistent training-testing data division schemes adopted by these methods have led to reported results that are not directly comparable. We address this issue by proposing a standardized protocol for benchmark comparative analysis. Existing works evaluate their models in a scene-dependent setup, which makes it difficult to assess how well a model generalizes to completely unseen videos and also inflates the reported results. Therefore, in this paper, we present a completely scene-independent evaluation strategy for a comprehensive analysis of model design for change detection. We propose well-defined scene-independent and scene-dependent experimental frameworks for training and evaluation on the benchmark CDnet 2014, LASIESTA, and SBMI2015 datasets. A cross-dataset evaluation on the PTIS dataset further measures the robustness of the models. We design a fast and lightweight online end-to-end convolutional network called ChangeDet (58.8 fps, 1.59 MB model size) to achieve robust performance on completely unseen videos. ChangeDet estimates the background from the past temporal history through a sequence of maximum multi-spatial receptive feature (MMSR) blocks. Contrasting features are produced by assimilating the temporal median with contemporary features from the current frame. These features are then processed by an encoder-decoder to detect pixel-wise changes. The proposed ChangeDet outperforms the existing state-of-the-art methods on all four benchmark datasets.
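As a rough illustration of the pipeline described in the abstract, the following is a minimal PyTorch sketch. The internal structure of the MMSR block (parallel dilated convolutions fused by an element-wise maximum), the number of past frames T, the channel widths, and all names such as MMSRBlock and ChangeDetSketch are assumptions for illustration only, not the authors' published configuration.

```python
# Hypothetical sketch of a ChangeDet-style pipeline; hyperparameters and the
# exact MMSR block design are assumed, not taken from the paper.
import torch
import torch.nn as nn


class MMSRBlock(nn.Module):
    """Assumed form of a maximum multi-spatial receptive feature (MMSR) block:
    parallel 3x3 convolutions with increasing dilation, fused by an
    element-wise maximum across branches."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Conv2d(in_ch, out_ch, 3, padding=d, dilation=d)
            for d in (1, 2, 4)  # three receptive-field scales (assumed)
        ])
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        feats = torch.stack([branch(x) for branch in self.branches], dim=0)
        return self.act(feats.max(dim=0).values)


class ChangeDetSketch(nn.Module):
    """Background estimation from past frames, contrasting features from the
    temporal median and the current frame, then an encoder-decoder that
    predicts a pixel-wise change mask."""
    def __init__(self, T=10, base=16):
        super().__init__()
        # Background features estimated from a stack of T past (grayscale) frames.
        self.bg_net = nn.Sequential(MMSRBlock(T, base), MMSRBlock(base, base))
        # Contemporary features from the current frame.
        self.cur_net = nn.Sequential(nn.Conv2d(1, base, 3, padding=1),
                                     nn.ReLU(inplace=True))
        # Encoder-decoder over the concatenated contrasting features.
        self.encoder = nn.Sequential(
            nn.Conv2d(2 * base + 1, 2 * base, 3, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(2 * base, 4 * base, 3, stride=2, padding=1),
            nn.ReLU(inplace=True),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(4 * base, 2 * base, 4, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(2 * base, base, 4, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(base, 1, 1),  # pixel-wise change logits
        )

    def forward(self, past_frames, current_frame):
        # past_frames: (B, T, H, W); current_frame: (B, 1, H, W)
        bg = self.bg_net(past_frames)                            # background features
        median = past_frames.median(dim=1, keepdim=True).values  # temporal median frame
        cur = self.cur_net(current_frame)                        # current-frame features
        contrast = torch.cat([bg, cur, median], dim=1)           # "contrasting" features
        return torch.sigmoid(self.decoder(self.encoder(contrast)))


# Usage: ten past grayscale frames plus the current frame (H and W divisible by 4).
model = ChangeDetSketch(T=10)
mask = model(torch.rand(2, 10, 240, 320), torch.rand(2, 1, 240, 320))
print(mask.shape)  # torch.Size([2, 1, 240, 320])
```

The element-wise maximum in the assumed MMSR block is one lightweight way to combine multiple spatial receptive fields without growing the channel count, which is consistent with the paper's emphasis on a small model size and real-time speed.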