Keywords
Computer science
Point cloud
Artificial intelligence
Grid
Computer vision
Object detection
Transformer (machine learning)
Graphics
Focus (optics)
Classification
Pattern recognition (psychology)
Theoretical computer science
Optics
Physics
Quantum mechanics
Voltage
Mathematics
Geometry
Authors
Junbo Yin,Jianbing Shen,Xin Gao,David J. Crandall,Ruigang Yang
Identifier
DOI:10.1109/tpami.2021.3125981
Abstract
Previous works for LiDAR-based 3D object detection mainly focus on the single-frame paradigm. In this paper, we propose to detect 3D objects by exploiting temporal information in multiple frames, i.e., point cloud videos. We empirically categorize the temporal information into short-term and long-term patterns. To encode the short-term data, we present a Grid Message Passing Network (GMPNet), which considers each grid (i.e., the grouped points) as a node and constructs a $k$-NN graph with the neighbor grids. To update features for a grid, GMPNet iteratively collects information from its neighbors, thus mining the motion cues in grids from nearby frames. To further aggregate long-term frames, we propose an Attentive Spatiotemporal Transformer GRU (AST-GRU), which contains a Spatial Transformer Attention (STA) module and a Temporal Transformer Attention (TTA) module. STA and TTA enhance the vanilla GRU to focus on small objects and better align moving objects. Our overall framework supports both online and offline video object detection in point clouds. We implement our algorithm based on prevalent anchor-based and anchor-free detectors. Evaluation results on the challenging nuScenes benchmark show superior performance of our method, achieving first place on the leaderboard (at the time of paper submission) without any “bells and whistles.” Our source code is available at https://github.com/shenjianbing/GMP3D.
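To make the GMPNet idea concrete, the following is a minimal, hypothetical sketch of $k$-NN graph message passing over grid nodes: each grid keeps a centroid and a feature, and features are iteratively mixed with the mean of their neighbors' features. All names (`knn`, `message_passing`, the residual-style update weight) are illustrative assumptions, not the paper's actual implementation, which operates on learned high-dimensional grid features.

```python
# Hypothetical sketch of GMPNet-style k-NN message passing over grids.
# Scalar features stand in for the paper's learned feature vectors.
import math

def knn(nodes, k):
    """Indices of the k nearest neighbors of each node, by centroid distance."""
    neigh = []
    for i, node in enumerate(nodes):
        order = sorted(range(len(nodes)),
                       key=lambda j: math.dist(node["centroid"],
                                               nodes[j]["centroid"]))
        neigh.append([j for j in order if j != i][:k])
    return neigh

def message_passing(nodes, k=2, iters=2):
    """Iteratively blend each grid feature with its neighbors' mean feature."""
    neigh = knn(nodes, k)
    feats = [n["feat"] for n in nodes]
    for _ in range(iters):
        updated = []
        for i, f in enumerate(feats):
            msg = sum(feats[j] for j in neigh[i]) / len(neigh[i])
            updated.append(0.5 * f + 0.5 * msg)  # assumed residual-style mix
        feats = updated
    return feats

# Toy usage: three grids on a line; features drift toward their local mean.
grids = [{"centroid": (float(i),), "feat": float(i)} for i in range(3)]
print(message_passing(grids, k=2, iters=2))
```

Real message-passing networks replace the fixed averaging here with learned message and update functions; this sketch only illustrates the neighborhood-aggregation structure the abstract describes.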