Computer science
Trajectory
Autoregressive model
Inference
Transformer
Artificial intelligence
Machine learning
Data modeling
Engineering
Mathematics
Physics
Astronomy
Voltage
Database
Electrical engineering
Econometrics
Authors
Xiaobo Chen, Huanjia Zhang, Feng Zhao, Yingfeng Cai, Hai Wang, Qiaolin Ye
Source
Journal: IEEE Transactions on Instrumentation and Measurement
[Institute of Electrical and Electronics Engineers]
Date: 2022-01-01
Volume/pages: 71: 1-12
Cited by: 17
Identifier
DOI:10.1109/tim.2022.3192056
Abstract
As a core function of autonomous driving and the Internet of Vehicles, accurately predicting vehicle trajectories can significantly improve traffic safety and reduce crash injuries. In this paper, we propose an intention-aware non-autoregressive Transformer model with multi-attention learning for multi-modal vehicle trajectory prediction. We first present social attention learning, in which graph attention is integrated with the Transformer encoder to model the social interaction between vehicles. Then, the social and temporal dependency across consecutive frames is captured by temporal attention learning. These social and temporal attention modules can be interleaved and stacked to achieve coupled modeling and thus extract rich features from trajectory data. To achieve precise prediction as well as efficient inference, we further put forward an intention-aware decoder query generation approach that produces multiple possible trajectories concurrently. Finally, cross-attention learning is devised to make full use of the encoded features, thereby yielding future predictions. The proposed model is evaluated on two large-scale vehicle trajectory datasets, and the experimental results verify that our algorithm outperforms several state-of-the-art models. The root-mean-square error (RMSE) of the predicted trajectory over a 5 s time horizon is 3.43 m on the NGSIM dataset and 1.10 m on the HighD dataset.
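The abstract describes interleaving two attention passes over the same trajectory features: a social pass, where agents attend to each other within a timestep, and a temporal pass, where each agent attends across its own timesteps; it also reports RMSE as the evaluation metric. The following is a minimal NumPy sketch of that interleaving and of the RMSE computation, assuming toy shapes and plain (single-head, unmasked) scaled dot-product attention; it is an illustration of the idea, not the authors' implementation.

```python
import numpy as np

def attention(q, k, v):
    """Plain scaled dot-product attention over the last two axes."""
    d = q.shape[-1]
    scores = q @ k.swapaxes(-1, -2) / np.sqrt(d)
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w = w / w.sum(axis=-1, keepdims=True)          # softmax over keys
    return w @ v

# Toy trajectory features: (agents N, timesteps T, feature dim D) -- illustrative shapes
N, T, D = 4, 6, 8
rng = np.random.default_rng(0)
x = rng.standard_normal((N, T, D))

# Social pass: at each timestep, every agent attends to all agents.
xt = x.transpose(1, 0, 2)                          # (T, N, D)
social = attention(xt, xt, xt).transpose(1, 0, 2)  # back to (N, T, D)

# Temporal pass: each agent attends across its own timesteps.
temporal = attention(social, social, social)       # (N, T, D)
print(temporal.shape)  # (4, 6, 8)

# RMSE between predicted and ground-truth positions (the reported metric).
gt = rng.standard_normal((T, 2))                   # toy ground-truth (x, y) track
pred = gt + 0.5                                    # toy prediction, constant 0.5 m offset
rmse = np.sqrt(np.mean((pred - gt) ** 2))
print(rmse)  # 0.5
```

In the paper these two passes are stacked repeatedly inside the Transformer encoder (with graph attention for the social step); the sketch above only shows why the two passes are complementary: one mixes information across agents, the other across time.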