Topics
Computer science
Artificial intelligence
Computer vision
Feature learning
Pattern recognition
RGB color model
Pixel
Filter (signal processing)
Frame
Data mining
Telecommunications
Authors
Limin Wang, Wei Li, Wen Li, Luc Van Gool
Identifier
DOI:10.1109/cvpr.2018.00155
Abstract
Spatiotemporal feature learning in videos is a fundamental problem in computer vision. This paper presents a new architecture, termed the Appearance-and-Relation Network (ARTNet), to learn video representations in an end-to-end manner. ARTNets are constructed by stacking multiple generic building blocks, called SMART blocks, whose goal is to simultaneously model appearance and relation from RGB input in a separate and explicit manner. Specifically, SMART blocks decouple the spatiotemporal learning module into an appearance branch for spatial modeling and a relation branch for temporal modeling. The appearance branch is implemented based on the linear combination of pixels or filter responses in each frame, while the relation branch is designed based on the multiplicative interactions between pixels or filter responses across multiple frames. We perform experiments on three action recognition benchmarks: Kinetics, UCF101, and HMDB51, demonstrating that SMART blocks obtain an evident improvement over 3D convolutions for spatiotemporal feature learning. Under the same training setting, ARTNets achieve performance superior to the existing state-of-the-art methods on these three datasets.
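The two branches described in the abstract can be illustrated with a minimal NumPy sketch. This is not the paper's implementation: the actual SMART block uses learned 3D convolutional filters, square pooling, and cross-channel pooling, whereas here the appearance branch is a plain per-frame 2D convolution (a linear combination of pixels within each frame) and the relation branch is a simplified elementwise product between adjacent frames (a stand-in for the cross-frame multiplicative interactions). Function names and shapes are illustrative assumptions.

```python
import numpy as np

def appearance_branch(video, kernel):
    """Per-frame spatial modeling: linear combination of pixels in each
    frame via a 'valid' 2D convolution with a shared kernel.
    video: (T, H, W) grayscale clip; kernel: (k, k) filter."""
    T, H, W = video.shape
    k = kernel.shape[0]
    out = np.zeros((T, H - k + 1, W - k + 1))
    for t in range(T):
        for i in range(H - k + 1):
            for j in range(W - k + 1):
                out[t, i, j] = np.sum(video[t, i:i + k, j:j + k] * kernel)
    return out

def relation_branch(video):
    """Temporal modeling via multiplicative interactions: elementwise
    product of co-located pixels in adjacent frames (simplified)."""
    return video[1:] * video[:-1]  # shape (T-1, H, W)
```

The key design point the abstract emphasizes is that the appearance branch is purely linear in the pixels of a single frame, while the relation branch is multiplicative across frames, so temporal structure is modeled explicitly rather than folded into one 3D convolution.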