Keywords
Computer science, Artificial intelligence, Computer vision, Point cloud, Tracking, Metadata, Trajectory, Structure from motion, Motion (physics), Match moving, LiDAR, Geography, Remote sensing, Psychology, Pedagogy, Physics, Astronomy, Operating systems
Authors
Ming-Fang Chang, John Lambert, Patsorn Sangkloy, Jagjeet Singh, Sławomir Bąk, Andrew T. Hartnett, De Wang, Peter Carr, Simon Lucey, Deva Ramanan, James Hays
Source
Journal: arXiv (Cornell University)
Date: 2019-01-01
Citations: 170
Identifiers
DOI: 10.48550/arxiv.1911.02620
Abstract
We present Argoverse -- two datasets designed to support autonomous vehicle machine learning tasks such as 3D tracking and motion forecasting. Argoverse was collected by a fleet of autonomous vehicles in Pittsburgh and Miami. The Argoverse 3D Tracking dataset includes 360 degree images from 7 cameras with overlapping fields of view, 3D point clouds from long range LiDAR, 6-DOF pose, and 3D track annotations. Notably, it is the only modern AV dataset that provides forward-facing stereo imagery. The Argoverse Motion Forecasting dataset includes more than 300,000 5-second tracked scenarios with a particular vehicle identified for trajectory forecasting. Argoverse is the first autonomous vehicle dataset to include "HD maps" with 290 km of mapped lanes with geometric and semantic metadata. All data is released under a Creative Commons license at www.argoverse.org. In our baseline experiments, we illustrate how detailed map information such as lane direction, driveable area, and ground height improves the accuracy of 3D object tracking and motion forecasting. Our tracking and forecasting experiments represent only an initial exploration of the use of rich maps in robotic perception. We hope that Argoverse will enable the research community to explore these problems in greater depth.
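The forecasting task summarized above lends itself to a small illustration. The sketch below is not the official Argoverse API; it assumes the paper's 2 s observed / 3 s forecast split at 10 Hz, represents a scenario as NumPy arrays of (x, y) centroids, and uses a hypothetical constant-velocity baseline to show how average and final displacement errors (ADE/FDE) would be computed against the ground-truth future. Field names and array layout are illustrative assumptions.

```python
import numpy as np

# Minimal sketch (not the official Argoverse API): a 5-second forecasting
# scenario sampled at 10 Hz, split into observed history and future horizon,
# with a constant-velocity baseline and ADE/FDE metrics. The 2 s / 3 s split
# follows the paper's forecasting setup; the array layout is assumed here.

OBS_LEN, PRED_LEN = 20, 30  # 2 s observed, 3 s to forecast, at 10 Hz


def constant_velocity_forecast(history: np.ndarray, horizon: int) -> np.ndarray:
    """Extrapolate the last observed per-frame velocity; history is (OBS_LEN, 2) in meters."""
    velocity = history[-1] - history[-2]               # displacement over the last frame
    steps = np.arange(1, horizon + 1).reshape(-1, 1)   # (horizon, 1) step indices
    return history[-1] + steps * velocity              # (horizon, 2) predicted positions


def ade_fde(pred: np.ndarray, gt: np.ndarray) -> tuple[float, float]:
    """Average and final displacement error between predicted and true futures."""
    dists = np.linalg.norm(pred - gt, axis=-1)
    return float(dists.mean()), float(dists[-1])


# Example: a straight-line trajectory moving 1 m per frame along x.
track = np.stack(
    [np.arange(OBS_LEN + PRED_LEN), np.zeros(OBS_LEN + PRED_LEN)], axis=1
).astype(float)
history, future = track[:OBS_LEN], track[OBS_LEN:]
pred = constant_velocity_forecast(history, PRED_LEN)
print(ade_fde(pred, future))  # -> (0.0, 0.0) for this perfectly linear track
```

In the paper's baselines, the HD-map cues mentioned in the abstract (lane direction, driveable area, ground height) would enter as additional inputs or priors for the forecaster; the constant-velocity model above is only a map-free reference point.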