Computer vision
Leverage (statistics)
Artificial intelligence
Computer science
Object detection
Pixel
Perception
Focus (optics)
Margin (machine learning)
Pattern recognition (psychology)
Machine learning
Biology
Optics
Physics
Neuroscience
Authors
Lei Yang, Kaicheng Yu, Tao Tang, Jun Li, Kun Yuan, Li Wang, Xinyu Zhang, Peng Chen
Identifiers
DOI:10.1109/cvpr52729.2023.02070
Abstract
While most recent autonomous driving systems focus on developing perception methods for ego-vehicle sensors, an alternative approach is often overlooked: leveraging intelligent roadside cameras to extend perception beyond the ego vehicle's visual range. We discover that state-of-the-art vision-centric bird's eye view detection methods perform poorly on roadside cameras. This is because these methods mainly focus on recovering depth with respect to the camera center, and the depth difference between a car and the ground quickly shrinks as distance increases. In this paper, we propose a simple yet effective approach, dubbed BEVHeight, to address this issue. In essence, instead of predicting pixel-wise depth, we regress the height to the ground, yielding a distance-agnostic formulation that eases the optimization of camera-only perception methods. On popular 3D detection benchmarks for roadside cameras, our method surpasses all previous vision-centric methods by a significant margin. The code is available at https://github.com/ADLab-AutoDrive/BEVHeight.
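The abstract's core idea can be illustrated geometrically: once a network regresses a pixel's height above the ground, the metric depth follows from intersecting the pixel's ray with the horizontal plane at that height. The sketch below is a hypothetical simplification (level camera, made-up intrinsics and mounting height), not the authors' implementation:

```python
import numpy as np

# Illustrative pinhole intrinsics and camera mounting height (assumed values,
# not from the paper). Camera frame convention: x right, y down, z forward.
K = np.array([[1000.0,    0.0, 960.0],
              [   0.0, 1000.0, 540.0],
              [   0.0,    0.0,   1.0]])
CAM_HEIGHT = 5.0  # roadside camera assumed mounted 5 m above the ground


def depth_from_height(u, v, h, K=K, cam_height=CAM_HEIGHT):
    """Recover depth z for pixel (u, v) whose 3D point lies h metres above
    the ground, assuming a level (zero-pitch) camera.

    The ground plane in the camera frame is y = cam_height, so a point at
    height h satisfies y = cam_height - h along the pixel's back-projected ray.
    """
    ray = np.linalg.inv(K) @ np.array([u, v, 1.0])  # ray direction, z = 1
    t = (cam_height - h) / ray[1]                   # scale hitting that plane
    return t * ray[2]                                # z component = depth


# For the same pixel, a point on the ground (h = 0) lies farther away than a
# point 1.5 m above the ground (e.g. a car roof) -- height, unlike depth,
# does not depend on how far the object is from the camera.
ground_depth = depth_from_height(960.0, 600.0, 0.0)
roof_depth = depth_from_height(960.0, 600.0, 1.5)
print(ground_depth, roof_depth)
```

This is why a height target is "distance-agnostic" in the abstract's sense: a car's height above the ground is roughly constant wherever it appears in the image, while its depth (and the car-versus-ground depth gap at a given pixel) varies strongly with range.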