Keywords: Image stitching, Computer vision, Artificial intelligence, Computer science, Object detection, Modular design, Catadioptric system, Image sensor, Field of view, Segmentation, Engineering, Petroleum engineering, Lens (geology), Operating system
Authors
Christian Kinzig, Irene Cortés, Carlos Fernández, Martin Lauer
Identifier
DOI:10.23919/fusion49751.2022.9841307
Abstract
Autonomous vehicles depend on an accurate perception of their surroundings. For this purpose, different approaches are used to detect traffic participants such as cars, cyclists, and pedestrians, as well as static objects. A commonly used method is object detection and classification in camera images. However, due to the limited field of view of a single camera, detecting objects in the entire environment of the ego-vehicle remains a challenge. Some solutions use catadioptric cameras or clustered surround-view camera systems, which require a large installation height. In multi-camera setups, an additional step is required to merge objects from overlapping areas between cameras. As an alternative to these systems, we present a real-time capable image stitching method that improves the horizontal field of view for object detection in autonomous driving. To do this, we use a spherical camera model and determine the overlapping area of neighboring images based on the calibration. Furthermore, lidar measurements are used to improve image alignment. Finally, seam carving is applied to optimize the transition between the images. We tested our approach on a modular redundant sensor platform and on the publicly available nuScenes dataset. In addition to qualitative results, we evaluated the stitched images using an object detection network. Moreover, the real-time capability of our image stitching method is shown in a runtime analysis.
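The seam-carving step mentioned in the abstract searches the overlap region for a low-cost cut along which the two images are joined. The paper does not give the implementation; as a minimal sketch, the classic approach is a dynamic program over an energy map (e.g., the per-pixel difference between the two overlapping images), finding the cheapest 8-connected vertical path. The function name and NumPy-based interface below are illustrative assumptions, not the authors' code.

```python
import numpy as np

def find_vertical_seam(energy: np.ndarray) -> list[int]:
    """Return the column index of the minimal-cost vertical seam in each row.

    Illustrative seam-carving sketch: `energy` would typically be the
    per-pixel dissimilarity over the overlap of two neighboring images.
    """
    h, w = energy.shape
    # Accumulate minimal path cost top-down (8-connected neighbors).
    cum = energy.astype(float).copy()
    for r in range(1, h):
        for c in range(w):
            lo, hi = max(c - 1, 0), min(c + 2, w)
            cum[r, c] += cum[r - 1, lo:hi].min()
    # Backtrack from the cheapest cell in the bottom row.
    seam = [int(np.argmin(cum[-1]))]
    for r in range(h - 2, -1, -1):
        c = seam[-1]
        lo, hi = max(c - 1, 0), min(c + 2, w)
        seam.append(lo + int(np.argmin(cum[r, lo:hi])))
    seam.reverse()
    return seam
```

In a stitching pipeline, pixels left of the seam would be taken from one image and pixels right of it from the other, hiding small residual misalignments in low-contrast regions.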