Keywords
Action recognition, Deep learning, Convolutional neural network, Feature learning, Trajectory pooling, Normalization, Robustness, Pattern recognition, Computer science, Artificial intelligence
Authors
Limin Wang, Yu Qiao, Xiaoou Tang
Identifier
DOI: 10.1109/cvpr.2015.7299059
Abstract
Visual features are of vital importance for human action understanding in videos. This paper presents a new video representation, called the trajectory-pooled deep-convolutional descriptor (TDD), which shares the merits of both hand-crafted and deep-learned features. Specifically, we use deep architectures to learn discriminative convolutional feature maps, and apply trajectory-constrained pooling to aggregate these convolutional features into effective descriptors. To enhance the robustness of TDDs, we design two normalization methods for the convolutional feature maps: spatiotemporal normalization and channel normalization. Our features have two advantages: (i) TDDs are learned automatically and are more discriminative than hand-crafted features; (ii) TDDs account for the intrinsic characteristics of the temporal dimension by introducing trajectory-constrained sampling and pooling strategies for aggregating deep-learned features. We conduct experiments on two challenging datasets, HMDB51 and UCF101. Experimental results show that TDDs outperform previous hand-crafted and deep-learned features, and our method achieves performance superior to the state of the art on both datasets (HMDB51: 65.9%, UCF101: 91.5%).
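The two normalization schemes and the trajectory-constrained pooling named in the abstract can be sketched in a few lines of NumPy. This is a minimal illustration under stated assumptions, not the paper's implementation: the function names, the tensor layout `(T, H, W, C)`, the small epsilon guard, and pooling at single trajectory points (rather than the paper's exact multi-scale sampling) are all assumptions for the sketch.

```python
import numpy as np

EPS = 1e-8  # guard against division by zero (illustrative choice)

def spatiotemporal_normalize(fmap):
    """Divide each channel by its maximum over the whole spatiotemporal
    extent of the video, so all channels lie in a comparable range.
    fmap: array of shape (T, H, W, C)."""
    vmax = fmap.max(axis=(0, 1, 2), keepdims=True)  # one max per channel
    return fmap / (vmax + EPS)

def channel_normalize(fmap):
    """Divide the feature vector at each spatiotemporal position by its
    maximum across channels, emphasizing relative channel responses."""
    vmax = fmap.max(axis=3, keepdims=True)  # one max per (t, y, x) position
    return fmap / (vmax + EPS)

def trajectory_pool(fmap, trajectory):
    """Trajectory-constrained sum-pooling: aggregate the normalized
    features sampled at each (t, y, x) point of a tracked trajectory
    into a single descriptor of length C."""
    return np.sum([fmap[t, y, x] for t, y, x in trajectory], axis=0)
```

A TDD for one trajectory would then be `trajectory_pool(spatiotemporal_normalize(fmap), traj)` or the channel-normalized analogue; the paper computes both variants per convolutional layer.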