Computer science
Encoder
Segmentation
Artificial intelligence
Transformer
Trajectory
Pedestrian
Machine learning
Surveillance
Context model
Object (grammar)
Engineering
Physics
Electrical engineering
Voltage
Operating system
Transportation engineering
Astronomy
Authors
Ze Sui,Yue Zhou,Xu Zhao,Ao Chen,Yiyang Ni
Identifier
DOI: 10.1109/iros51168.2021.9636241
Abstract
Although autonomous driving technology has made tremendous progress in recent years, predicting the intentions and trajectories of pedestrians remains challenging. State-of-the-art methods suffer from two problems: (1) existing works treat the two tasks separately, ignoring the connection between them; (2) the selection and integration of inputs for these tasks are not well designed. In this paper, the two tasks are handled by a unified model, so that the supervision provided by each task's labels is shared with the other, improving the performance of both. In addition to bounding boxes and speeds, orientation and road semantic segmentation features are taken into account to capture the pedestrian's potential intention and road context, and all inputs are weighted by an attention module before integration. A Transformer encoder then extracts temporal information from the fused feature sequence. Our method outperforms all previous models on both trajectory prediction and intention prediction on the JAAD and PIE datasets.
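A minimal PyTorch sketch of the pipeline the abstract describes: per-frame features from several input streams (bounding box, speed, orientation, and a road-segmentation descriptor) are weighted by an attention module, fused, passed through a Transformer encoder over time, and decoded by two heads for future trajectory and crossing intention. All module names, feature dimensions, and the fusion scheme below are assumptions for illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn


class PedestrianPredictor(nn.Module):
    """Joint trajectory + intention prediction (illustrative sketch only)."""

    def __init__(self, in_dims=(4, 2, 1, 16), d_model=128, horizon=45):
        super().__init__()
        self.horizon = horizon
        # One linear embedding per input stream: box, speed, orientation, segmentation.
        self.embeds = nn.ModuleList([nn.Linear(d, d_model) for d in in_dims])
        # Attention module: scores each stream per frame, softmax-normalised.
        self.stream_attn = nn.Linear(d_model, 1)
        enc_layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=4, dim_feedforward=256, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, num_layers=3)
        # Two task heads share the temporal representation, so joint training
        # lets each task benefit from the other's labels.
        self.traj_head = nn.Linear(d_model, horizon * 4)   # future boxes
        self.intent_head = nn.Linear(d_model, 1)           # crossing logit

    def forward(self, streams):
        # streams: list of tensors, each of shape (batch, T_obs, dim_i)
        feats = torch.stack(
            [emb(x) for emb, x in zip(self.embeds, streams)], dim=2)  # (B, T, S, d)
        weights = torch.softmax(self.stream_attn(feats), dim=2)       # per-stream weights
        fused = (weights * feats).sum(dim=2)                          # (B, T, d)
        h = self.encoder(fused)                                       # temporal encoding
        last = h[:, -1]                                               # sequence summary
        traj = self.traj_head(last).view(-1, self.horizon, 4)
        intent = torch.sigmoid(self.intent_head(last))
        return traj, intent


# Toy usage: 8 samples, 15 observed frames per stream.
streams = [torch.randn(8, 15, d) for d in (4, 2, 1, 16)]
traj, intent = PedestrianPredictor()(streams)
print(traj.shape, intent.shape)  # torch.Size([8, 45, 4]) torch.Size([8, 1])
```

The two heads reading the same encoded sequence is one simple way to realise the "unified model" idea; the paper's actual head design, prediction horizon, and segmentation feature extraction may differ.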