Computer science
Benchmark (surveying)
Semantics (computer science)
Semantic mapping
Scalability
Baseline (sea)
Artificial intelligence
Lidar
Component (thermodynamics)
Point cloud
Visualization
Machine learning
Database
Remote sensing
Cartography
Physics
Geology
Oceanography
Thermodynamics
Programming language
Geography
Authors
Qi Li,Yue Wang,Yilun Wang,Hang Zhao
Identifier
DOI:10.1109/icra46639.2022.9812383
Abstract
Constructing HD semantic maps is a central component of autonomous driving. However, traditional pipelines require a vast amount of human effort and resources to annotate and maintain the semantics in the map, which limits scalability. In this paper, we introduce the problem of HD semantic map learning, which dynamically constructs the local semantics based on onboard sensor observations. We also introduce a semantic map learning method, dubbed HDMapNet. HDMapNet encodes image features from surrounding cameras and/or point clouds from LiDAR, and predicts vectorized map elements in the bird's-eye view. We benchmark HDMapNet on the nuScenes dataset and show that it performs better than baseline methods in all settings. Notably, our camera-LiDAR fusion-based HDMapNet outperforms existing methods by more than 50% on all metrics. In addition, we develop semantic-level and instance-level metrics to evaluate map learning performance. Finally, we show that our method is capable of predicting a locally consistent map. By introducing the method and metrics, we invite the community to study this novel map learning problem.
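The abstract describes a pipeline that encodes surround-view camera images and/or LiDAR point clouds into a bird's-eye-view (BEV) representation and decodes map elements from it. The sketch below illustrates that overall structure in PyTorch; the layer choices, feature sizes, MLP-style view transform, and the three output heads (semantic segmentation, instance embedding, lane direction) are assumptions made for illustration, not the authors' exact HDMapNet implementation.

```python
# Minimal sketch of a camera-LiDAR fusion map-learning model in the spirit of
# HDMapNet. Layer names, feature sizes, and the simple MLP view transform are
# illustrative assumptions, not the published implementation.
import torch
import torch.nn as nn


class CameraToBEV(nn.Module):
    """Encode surround-view images and lift them to a BEV feature grid
    (here a plain linear map over flattened image features; an assumption)."""

    def __init__(self, bev_h=200, bev_w=400, c=64):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, c, 7, stride=4, padding=3), nn.ReLU(),
            nn.Conv2d(c, c, 3, stride=4, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d((8, 16)),
        )
        self.view_transform = nn.Linear(8 * 16, (bev_h // 8) * (bev_w // 8))
        self.bev_h, self.bev_w, self.c = bev_h, bev_w, c

    def forward(self, imgs):                               # imgs: (B, N_cam, 3, H, W)
        b, n = imgs.shape[:2]
        feat = self.backbone(imgs.flatten(0, 1))           # (B*N, C, 8, 16)
        feat = self.view_transform(feat.flatten(2))        # (B*N, C, BEV cells)
        feat = feat.view(b, n, self.c, self.bev_h // 8, self.bev_w // 8)
        bev = feat.sum(dim=1)                              # fuse the camera views
        return nn.functional.interpolate(bev, (self.bev_h, self.bev_w))


class HDMapNetSketch(nn.Module):
    """Fuse camera and rasterized-LiDAR BEV features, then decode semantic
    segmentation, instance embeddings, and discretized lane direction."""

    def __init__(self, bev_h=200, bev_w=400, c=64, n_classes=4, embed_dim=16):
        super().__init__()
        self.cam_branch = CameraToBEV(bev_h, bev_w, c)
        self.lidar_branch = nn.Conv2d(1, c, 3, padding=1)  # height-rasterized points (assumption)
        self.decoder = nn.Sequential(nn.Conv2d(2 * c, c, 3, padding=1), nn.ReLU())
        self.seg_head = nn.Conv2d(c, n_classes, 1)
        self.embed_head = nn.Conv2d(c, embed_dim, 1)
        self.dir_head = nn.Conv2d(c, 36, 1)                # 36 direction bins (assumption)

    def forward(self, imgs, lidar_bev):
        bev = torch.cat([self.cam_branch(imgs), self.lidar_branch(lidar_bev)], dim=1)
        feat = self.decoder(bev)
        return self.seg_head(feat), self.embed_head(feat), self.dir_head(feat)


if __name__ == "__main__":
    model = HDMapNetSketch()
    imgs = torch.randn(1, 6, 3, 128, 352)       # 6 surround cameras, as on nuScenes
    lidar_bev = torch.randn(1, 1, 200, 400)     # rasterized LiDAR BEV grid
    seg, embed, direction = model(imgs, lidar_bev)
    print(seg.shape, embed.shape, direction.shape)
```

In this sketch the per-pixel instance embeddings would be clustered into individual map elements (e.g. lane lines) at post-processing time, which is one common way to obtain vectorized, instance-level outputs from a BEV segmentation backbone.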