Authors
Sameh Megrhi,Marwa Jmal,Wided Souidène,Azeddine Beghdadi
Identifier
DOI:10.1016/j.jvcir.2016.10.016
Abstract
Human action recognition continues to attract the computer vision research community due to its wide range of applications. However, despite the variety of methods proposed to solve this problem, several issues remain open. In this paper, we present a human action detection and recognition process for large datasets based on interest-point trajectories. To detect moving humans in moving fields of view, a spatio-temporal action detection is performed based on optical flow and dense speeded-up robust features (SURF). Then, a video description based on a fusion process that combines motion, trajectory and visual descriptors is proposed. Features within each bounding box are extracted using the bag-of-words approach. Finally, a support vector machine (SVM) is employed to classify the detected actions. Experimental results on the challenging UCF101, KTH and HMDB51 benchmark datasets show that the proposed technique outperforms several existing state-of-the-art action recognition approaches.
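The back end of the pipeline described in the abstract, local descriptors pooled into a bag-of-words histogram per clip and classified with an SVM, can be sketched as follows. This is a minimal illustration, not the authors' implementation: the descriptor extractor is a hypothetical stand-in (synthetic vectors in place of the paper's SURF/trajectory descriptors), and the vocabulary size and SVM kernel are arbitrary choices.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Hypothetical stand-in for the local descriptors (dense SURF, motion,
# trajectory features) extracted inside each detected bounding box:
# one (n_descriptors, 64) array per video clip.
def extract_descriptors(n_desc=50, dim=64, offset=0.0):
    return rng.normal(loc=offset, scale=1.0, size=(n_desc, dim))

# Two toy action classes with slightly shifted descriptor statistics.
train_clips = [extract_descriptors(offset=0.0) for _ in range(10)] + \
              [extract_descriptors(offset=1.0) for _ in range(10)]
train_labels = [0] * 10 + [1] * 10

# Bag-of-words: learn a visual vocabulary by clustering all descriptors.
vocab_size = 16
kmeans = KMeans(n_clusters=vocab_size, n_init=5, random_state=0)
kmeans.fit(np.vstack(train_clips))

def bow_histogram(desc):
    # Quantize each descriptor to its nearest visual word, then build a
    # normalized word-frequency histogram for the clip.
    words = kmeans.predict(desc)
    hist = np.bincount(words, minlength=vocab_size).astype(float)
    return hist / hist.sum()

X_train = np.array([bow_histogram(c) for c in train_clips])

# SVM classifier over the bag-of-words histograms.
clf = SVC(kernel="rbf")
clf.fit(X_train, train_labels)

test_clip = extract_descriptors(offset=1.0)
pred = clf.predict([bow_histogram(test_clip)])[0]
print("predicted action class:", pred)
```

In the actual system, `extract_descriptors` would be replaced by the fused motion/trajectory/visual descriptors computed within each detected bounding box, with one BoW histogram per detected action region.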