Modal verb
Computer science
Sensor fusion
Information fusion
Focus (optics)
Acceleration
Task (project management)
Rotation (mathematics)
Artificial intelligence
Kinematics
Computer vision
Human–computer interaction
Systems engineering
Engineering
Chemistry
Physics
Classical mechanics
Polymer chemistry
Optics
Authors
Xinyu Zhang,Yan Gong,Jin‐Jian Lu,Jian Wu,Zhiwei Li,Dafeng Jin,Jun Li
Source
Journal: IEEE Transactions on Intelligent Vehicles
[Institute of Electrical and Electronics Engineers]
Date: 2023-06-01
Volume/Issue: 8 (6): 3605-3619
Citations: 1
Identifiers
DOI: 10.1109/tiv.2023.3268051
Abstract
Multi-modal fusion is a fundamental task in autonomous driving perception and has attracted considerable attention in recent years. Current multi-modal fusion methods focus mainly on camera and LiDAR data, paying little attention to the kinematic information provided by the vehicle's own sensors, such as acceleration, vehicle speed, and rotation angle. This information is unaffected by complex external scenes and is therefore more robust and reliable. In this article, we review the existing application fields of vehicle information and the research progress of related methods, as well as multi-modal fusion methods based on this information. We also describe the relevant vehicle-information datasets in detail to facilitate future research. In addition, we propose new directions for multi-modal fusion technology in autonomous driving tasks to promote the further utilization of vehicle information.
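The fusion of camera/LiDAR features with ego-vehicle kinematic signals described in the abstract can be sketched as a simple late-fusion (concatenation) step. This is a minimal illustrative sketch, not the surveyed paper's method; the function and field names (`fuse_features`, `speed`, `acceleration`, `yaw_angle`) are hypothetical.

```python
def fuse_features(camera_feat, lidar_feat, kinematics):
    """Concatenate per-sensor feature vectors into one fused vector.

    kinematics: dict with hypothetical keys 'speed', 'acceleration',
    'yaw_angle'. These ego-vehicle signals are not affected by complex
    external scenes, which is the robustness argument in the abstract.
    """
    kin_vec = [kinematics["speed"],
               kinematics["acceleration"],
               kinematics["yaw_angle"]]
    # Late fusion by simple concatenation; a real system would feed
    # the fused vector into a downstream prediction head.
    return list(camera_feat) + list(lidar_feat) + kin_vec

fused = fuse_features(
    camera_feat=[0.2, 0.8],        # e.g. a pooled image embedding
    lidar_feat=[1.5, 0.1, 0.7],    # e.g. a pooled point-cloud embedding
    kinematics={"speed": 12.0, "acceleration": 0.3, "yaw_angle": 0.05},
)
print(len(fused))  # 8: 2 camera + 3 LiDAR + 3 kinematic values
```

In practice the kinematic channel is low-dimensional and cheap to read from the CAN bus, which is why concatenation-style fusion is a common starting point before learned fusion schemes.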