Collision
Computer science
Artificial intelligence
Monocular
Robustness (evolution)
Computer vision
Ground truth
Optical flow
Cluster analysis
Pixel
Feature (linguistics)
Pattern recognition (psychology)
Machine learning
Image (mathematics)
Computer security
Biochemistry
Chemistry
Linguistics
Philosophy
Gene
Authors
Changlin Li,Yeqiang Qian,Cong Sun,Weihao Yan,Chunxiang Wang,Ming Yang
Identifier
DOI:10.1109/iros55552.2023.10341966
Abstract
Vision-based collision prediction for autonomous driving is a challenging task due to the dynamic movement of vehicles and diverse types of obstacles. Most existing methods rely on object detection algorithms, which only predict predefined collision targets, such as vehicles and pedestrians, and cannot anticipate emergencies caused by unknown obstacles. To address this limitation, we propose a novel approach using pixel-wise time-to-collision (TTC) estimation for monocular collision prediction (TTC4MCP). Our approach predicts TTC and optical flow from monocular images and identifies potential collision areas using feature clustering and motion analysis. To overcome the challenge of training TTC estimation models without ground truth data in new scenes, we propose a self-supervised TTC training method, enabling collision prediction in a wider range of scenarios. TTC4MCP is evaluated on multiple road conditions and demonstrates promising results in terms of accuracy and robustness.
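To make the pixel-wise TTC idea concrete, the sketch below shows a classical, non-learned approximation: for an approaching surface, the divergence of the optical flow field is roughly 2/TTC, so a dense TTC map can be derived from frame-to-frame flow and thresholded into candidate collision regions. This is an illustrative baseline only, assuming OpenCV's Farneback flow; it is not the paper's self-supervised TTC network or its feature-clustering step, and the threshold value is a hypothetical choice.

```python
import numpy as np
import cv2

def pixelwise_ttc_from_flow(prev_gray, curr_gray, fps=30.0, eps=1e-6):
    """Approximate per-pixel TTC from dense optical flow.

    Classical relation: for a surface approached head-on, the flow field
    expands radially and its divergence satisfies div(flow) ~ 2 / TTC
    (in frames). This is a hand-crafted stand-in for the learned TTC
    estimator described in the abstract.
    """
    # Dense flow in pixels per frame (Farneback, default-ish parameters).
    flow = cv2.calcOpticalFlowFarneback(
        prev_gray, curr_gray, None, 0.5, 3, 15, 3, 5, 1.2, 0)
    u, v = flow[..., 0], flow[..., 1]

    # Divergence of the flow field via finite differences.
    du_dx = np.gradient(u, axis=1)
    dv_dy = np.gradient(v, axis=0)
    div = du_dx + dv_dy

    # Positive divergence = local expansion = object getting closer.
    ttc_frames = np.full(div.shape, np.inf, dtype=np.float32)
    expanding = div > eps
    ttc_frames[expanding] = 2.0 / div[expanding]

    return ttc_frames / fps  # seconds

def collision_mask(ttc_seconds, threshold_s=2.0):
    """Flag pixels whose estimated TTC falls below a safety threshold."""
    return ttc_seconds < threshold_s
```

In the actual method, this flow-divergence heuristic is replaced by a network trained with the proposed self-supervised objective, and the per-pixel TTC map is refined by feature clustering and motion analysis before collision areas are reported.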