Concepts
Computer science
Artificial intelligence
Process (computing)
Feature (linguistics)
Fuse (electrical)
Computer vision
Eye movement
Deep learning
Frame (networking)
Cognition
Pattern recognition (psychology)
Telecommunications
Biology
Operating system
Electrical engineering
Engineering
Philosophy
Linguistics
Neuroscience
Authors
Shuai Liu, Shichen Huang, Shuai Wang, Khan Muhammad, Paolo Bellavista, Javier Del Ser
Identifier
DOI:10.1016/j.inffus.2023.02.005
Abstract
In recent years, deep learning has revolutionized computer vision and has been widely used for monitoring in diverse visual scenes. However, in terms of aspects such as complexity and explainability, deep learning is not always preferable to traditional machine-learning methods. Traditional visual tracking approaches have shown certain advantages in terms of data collection efficiency, computing requirements, and power consumption, and are generally easier to understand and explain than deep neural networks. At present, traditional feature-based techniques relying on correlation filtering (CF) have become common for understanding complex visual scenes. However, current CF algorithms use a single feature to describe the target and locate it accordingly. They cannot fully express changeable target appearances in a complex scene, which can easily lead to inaccurate target locations in time-varying visual scenes. Moreover, owing to the complexity of surveillance scenes, monitoring algorithms can lose their target. The original template update strategy uses each frame at a fixed interval as a new template, which may lead to unreliable feature extraction and low tracking accuracy. To overcome these issues, in this work, we introduce an original location fusion mechanism based on multiple visual cognition processing streams to achieve real-time and efficient visual monitoring in complex scenes. First, we propose a process for extracting multiple forms of visual cognitive information, which is applied periodically to extract multiple feature information flows for a target of interest. Subsequently, a cognitive information fusion process is employed to fuse the positioning results of the different visual cognitive information flows to achieve high-quality visual monitoring and positioning. Finally, a novel feature template memory storage and retrieval strategy is adopted: when the location result is unreliable, the target is retrieved from memory to ensure robust and accurate tracking. In addition, we provide an extensive set of performance results showing that our proposed approach is more robust at a lower computational cost than 36 state-of-the-art algorithms for visual tracking in complex scenes.
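The abstract describes three algorithmic ingredients: per-feature localization via correlation-filter response maps, confidence-weighted fusion of those maps, and a memory storage/retrieval fallback when localization is unreliable. The sketch below is a minimal illustration of those ideas, not the authors' implementation: the peak-to-sidelobe ratio (PSR) confidence measure, the PSR-weighted fusion, the `psr_threshold` value, and the `memory` list are all assumptions chosen for clarity.

```python
import numpy as np

def psr(response):
    """Peak-to-sidelobe ratio: a common confidence measure for
    correlation-filter responses (higher = sharper, more reliable peak)."""
    peak = response.max()
    py, px = np.unravel_index(response.argmax(), response.shape)
    mask = np.ones_like(response, dtype=bool)
    mask[max(0, py - 5):py + 6, max(0, px - 5):px + 6] = False  # exclude peak window
    sidelobe = response[mask]
    return (peak - sidelobe.mean()) / (sidelobe.std() + 1e-8)

def fuse_responses(responses, weights):
    """Confidence-weighted average of per-feature response maps."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    return sum(wi * r for wi, r in zip(w, responses))

def locate(responses, memory, psr_threshold=8.0):
    """Fuse per-feature responses; if the fused peak is unreliable,
    retrieve the most confident stored response from memory instead."""
    confidences = [psr(r) for r in responses]
    fused = fuse_responses(responses, confidences)
    if psr(fused) < psr_threshold and memory:
        fused = max(memory, key=psr)       # retrieval step (assumed policy)
    else:
        memory.append(fused)               # store reliable results for later
    py, px = np.unravel_index(fused.argmax(), fused.shape)
    return (int(py), int(px)), fused

# Hypothetical usage: one sharp response stream dominates a diffuse one.
rng = np.random.default_rng(0)
sharp = rng.random((64, 64)) * 0.1
sharp[30, 40] = 1.0                        # e.g., a HOG response with a clear peak
diffuse = rng.random((64, 64)) * 0.5       # e.g., an ambiguous color response
memory = []
pos, fused = locate([sharp, diffuse], memory)
print(pos)                                 # (30, 40): the confident stream dominates
```

Because the weights come from per-stream confidence, an ambiguous cue (here the diffuse map) contributes little to the fused location, which mirrors the abstract's claim that fusing multiple cognitive information flows avoids the failure mode of single-feature CF trackers.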