Intelligent transportation system
Artificial neural network
Enhanced Data Rates for GSM Evolution (EDGE)
Computer science
Tree (set theory)
Tensor (intrinsic definition)
Artificial intelligence
Engineering
Mathematics
Transportation engineering
Geometry
Combinatorics
Authors
Debin Liu, Laurence T. Yang, Ruonan Zhao, Xianjun Deng, Chenlu Zhu, Yiheng Ruan
Identifiers
DOI:10.1109/tits.2024.3364250
Abstract
Recurrent neural networks (RNNs) and their variants can efficiently capture the features of time-series data and are widely used for intelligent transportation tasks. Internet of Vehicles (IoV) edge devices that deploy RNN models are an important impetus for the development of intelligent transportation systems (ITS) and provide convenient services for users and managers. However, the input data of some transportation tasks are high-dimensional, so the number of training parameters and the computational complexity of RNN models become too large, making it difficult to deploy high-performance RNN models on resource-constrained IoV edge devices. To overcome this problem, we compress the training parameters of the RNN model using the proposed multi-tree compact hierarchical tensor representation, Dtensor Block Decomposition (DBD), which reduces the computational complexity of the model and speeds up its training, thus making the network model lightweight. We evaluate the performance of the Dtensor Block-Long Short-Term Memory (DB-LSTM) and Improved Dtensor Block-LSTM (IDB-LSTM) models on multiple real datasets and compare them with current state-of-the-art LSTM compression models. Experimental results demonstrate that our proposed method can substantially compress the number of training parameters on different datasets and shorten training time without degrading testing accuracy. In addition, the proposed DB-LSTM and IDB-LSTM models achieve better overall performance than the other models and are more suitable for deployment on resource-constrained IoV edge devices.
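To make the compression idea concrete, the sketch below shows how a dense LSTM gate weight matrix can be replaced by a chain of small tensor cores, so that the stored parameter count shrinks by orders of magnitude. This is a generic tensor-train-style factorization used purely for illustration; the mode shapes, ranks, and the factorization format itself are assumptions and are not the paper's DBD/multi-tree hierarchical representation.

```python
# Minimal sketch: compress one 1024 x 1024 LSTM gate weight matrix into small
# tensor cores. Tensor-train-style format, mode shapes, and ranks are
# illustrative assumptions, not the authors' DBD method.
import numpy as np

in_modes  = (4, 8, 8, 4)     # factorization of the 1024-dim input  (4*8*8*4 = 1024)
out_modes = (4, 8, 8, 4)     # factorization of the 1024-dim output (4*8*8*4 = 1024)
ranks     = (1, 8, 8, 8, 1)  # internal ranks; boundary ranks fixed to 1

# Each core G_k has shape (r_{k-1}, out_k, in_k, r_k); together the cores
# replace the dense gate weight matrix.
rng = np.random.default_rng(0)
cores = [
    rng.standard_normal((ranks[k], out_modes[k], in_modes[k], ranks[k + 1])) * 0.01
    for k in range(len(in_modes))
]

def cores_to_dense(cores):
    """Contract the cores back into the dense weight matrix (verification only;
    in practice the input is contracted against the cores directly)."""
    W = cores[0]                                   # shape (1, o1, i1, r1)
    for core in cores[1:]:
        # Contract the trailing rank axis of W with the leading rank axis of core.
        W = np.tensordot(W, core, axes=([-1], [0]))
    W = W.squeeze(axis=0).squeeze(axis=-1)         # drop the boundary ranks
    n = len(cores)
    # Axes now alternate (o1, i1, o2, i2, ...); group outputs, then inputs.
    perm = list(range(0, 2 * n, 2)) + list(range(1, 2 * n, 2))
    return W.transpose(perm).reshape(int(np.prod(out_modes)), int(np.prod(in_modes)))

W = cores_to_dense(cores)
dense_params = W.size                              # 1024 * 1024 = 1,048,576
core_params = sum(c.size for c in cores)           # 128 + 4096 + 4096 + 128 = 8,448
print(f"dense parameters:      {dense_params}")
print(f"factorized parameters: {core_params}")
print(f"compression ratio:     {dense_params / core_params:.1f}x")
```

Under these assumed shapes the factorized form stores roughly 8.4K parameters instead of about 1M, a compression of over 100x. Because forward and backward passes can contract the input against the small cores directly instead of reconstructing the dense matrix, both memory footprint and arithmetic cost drop, which is the property that matters for resource-constrained IoV edge devices.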