Keywords
Computer science
Computation offloading
Edge computing
Reinforcement learning
Distributed computing
Server
Mobile edge computing
Scheduling
Quality of service
Computer networks
Energy consumption
Computation
Latency
Artificial intelligence
Mathematical optimization
Telecommunications
Mathematics
Algorithms
Authors
Liwei Geng, Hongbo Zhao, Jiayue Wang, Aryan Kaushik, S. W. K. Yuan, Wenquan Feng
Identifier
DOI:10.1109/jiot.2023.3247013
Abstract
Vehicular edge computing has emerged as a promising paradigm by offloading computation-intensive, latency-sensitive tasks to mobile-edge computing (MEC) servers. However, it is difficult to provide users with excellent Quality-of-Service (QoS) by relying only on these server resources. Therefore, in this article, we propose to formulate the computation offloading policy based on deep reinforcement learning (DRL) in a vehicle-assisted vehicular edge computing network (VAEN), where the idle resources of vehicles are treated as edge resources. Specifically, each task is represented by a directed acyclic graph (DAG) and offloaded to edge nodes according to our proposed subtask scheduling priority algorithm. Further, we formalize the computation offloading problem under the constraints of candidate service vehicle models, with the aim of minimizing the long-term system cost, including delay and energy consumption. To this end, we propose a distributed computation offloading algorithm based on multiagent DRL (DCOM), in which an improved actor–critic network (IACN) is devised to extract features, and a joint mechanism of prioritized experience replay and adaptive $n$-step learning (JMPA) is proposed to enhance learning efficiency. Numerical simulations demonstrate that, in the VAEN scenario, DCOM achieves significant reductions in latency and energy consumption compared with other advanced benchmark algorithms.
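The abstract does not detail the subtask scheduling priority algorithm beyond saying that each task's DAG of subtasks is ordered before offloading. A common choice for this step in DAG offloading work is a HEFT-style "upward rank", sketched below purely as an illustration; the function names, the cost model (a single averaged execution cost per subtask), and the example DAG are all assumptions, not the paper's actual algorithm.

```python
def upward_rank(tasks, succ, cost):
    """Compute a HEFT-style upward rank for each subtask in a DAG.

    tasks: iterable of subtask ids.
    succ:  dict mapping subtask -> list of successor subtasks (DAG edges).
    cost:  dict mapping subtask -> average execution cost (e.g. local and
           edge execution times averaged). Higher rank = schedule earlier.
    """
    rank = {}

    def rank_of(t):
        if t not in rank:
            # rank(t) = cost(t) + max over successors of their rank
            # (exit subtasks have no successors, so the max defaults to 0)
            rank[t] = cost[t] + max(
                (rank_of(s) for s in succ.get(t, [])), default=0.0
            )
        return rank[t]

    for t in tasks:
        rank_of(t)
    return rank

# Usage: a 4-subtask diamond DAG a -> {b, c} -> d; scheduling order is
# descending rank, which respects the precedence constraints.
succ = {"a": ["b", "c"], "b": ["d"], "c": ["d"]}
cost = {"a": 2.0, "b": 3.0, "c": 1.0, "d": 2.0}
ranks = upward_rank(cost, succ, cost)
order = sorted(cost, key=lambda t: ranks[t], reverse=True)
print(order)  # → ['a', 'b', 'c', 'd']
```

Because a subtask's rank always exceeds every successor's rank, sorting by descending rank yields a valid topological scheduling order.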