Reinforcement learning
Computer science
Task (project management)
Artificial intelligence
Vehicular ad hoc network
Multi-task learning
Machine learning
Wireless
Wireless ad hoc network
Telecommunications
Economics
Management
Authors
Mina Khoshbazm Farimani, Soroush Karimian-Aliabadi, Reza Entezari-Maleki, Bernhard Egger, Leonel Sousa
Identifier
DOI:10.1016/j.eswa.2024.123622
Abstract
Smart vehicles have a rising demand for computation resources, and vehicular edge computing has recently been recognized as an effective solution. Edge servers deployed in roadside units can accomplish tasks beyond the capacity of the resources embedded in the vehicles. The main challenge, however, is to carefully select which tasks to offload so that deadlines are met, energy consumption is reduced, and good performance is delivered. In this paper, we consider a vehicular edge computing network in which multiple cars move at non-constant speeds and generate tasks at each time slot. We then propose a direction-aware task offloading algorithm based on Rainbow, a deep Q-learning method that combines several independent improvements to the deep Q-network algorithm. The goal is to overcome the limitations of conventional approaches and reach an optimal offloading policy that effectively exploits the computation resources of edge servers to jointly minimize average delay and energy consumption. Real-world traffic data is used to evaluate the performance of the proposed approach against other algorithms, in particular deep Q-network, double deep Q-network, and deep recurrent Q-network. Experimental results show average reductions of 18% in energy consumption and 15% in delay when using the proposed Rainbow deep Q-network based algorithm, compared to the state of the art. Moreover, the stability and convergence of the learning process are significantly improved by adopting the Rainbow algorithm.
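The abstract frames offloading as a per-slot decision optimized by a Rainbow-based deep Q-learning agent to jointly minimize average delay and energy consumption. As a rough illustration of two ingredients it mentions, the offload-or-not decision with a weighted delay-plus-energy objective and the double Q-learning component that Rainbow incorporates, the minimal tabular sketch below may help; it is not the paper's algorithm, and every name, constant, and the toy delay/energy model is a hypothetical placeholder.

```python
# Minimal illustrative sketch (not the authors' implementation): a tabular
# double Q-learning agent that decides, per time slot, whether a task runs
# locally or is offloaded to an edge server, with a reward that jointly
# penalizes delay and energy. All names, constants, and the simplified
# environment are hypothetical placeholders.
import numpy as np

rng = np.random.default_rng(0)

N_STATES = 10      # discretized vehicle/channel states (hypothetical)
ACTIONS = [0, 1]   # 0 = execute locally, 1 = offload to edge server
ALPHA, GAMMA, EPS = 0.1, 0.95, 0.1
W_DELAY, W_ENERGY = 0.5, 0.5          # weights of the joint objective

QA = np.zeros((N_STATES, len(ACTIONS)))  # two tables for double Q-learning
QB = np.zeros((N_STATES, len(ACTIONS)))

def cost(state, action):
    """Toy delay/energy model: offloading trades local energy for a
    transmission delay that grows with the (discretized) channel state."""
    if action == 0:                      # local execution
        delay, energy = 0.8, 1.0
    else:                                # offloading to the edge server
        delay, energy = 0.3 + 0.05 * state, 0.4
    return W_DELAY * delay + W_ENERGY * energy

def step(state, action):
    reward = -cost(state, action)        # minimize weighted delay + energy
    next_state = rng.integers(N_STATES)  # toy state transition
    return next_state, reward

state = rng.integers(N_STATES)
for t in range(20000):
    q_sum = QA[state] + QB[state]
    action = rng.choice(ACTIONS) if rng.random() < EPS else int(np.argmax(q_sum))
    next_state, reward = step(state, action)
    # Double Q-learning: one table selects the next action, the other evaluates it.
    if rng.random() < 0.5:
        best = int(np.argmax(QA[next_state]))
        QA[state, action] += ALPHA * (reward + GAMMA * QB[next_state, best] - QA[state, action])
    else:
        best = int(np.argmax(QB[next_state]))
        QB[state, action] += ALPHA * (reward + GAMMA * QA[next_state, best] - QB[state, action])
    state = next_state

print("Learned greedy action per state:", np.argmax(QA + QB, axis=1))
```

The full Rainbow agent described in the abstract additionally uses a deep network with components such as prioritized replay, dueling heads, multi-step targets, distributional values, and noisy exploration; the sketch only isolates the double-estimator idea and the joint delay-energy reward.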