Computer science
Reinforcement learning
Cloud computing
Energy consumption
Markov decision process
Scheduling (production processes)
Latency (audio)
Mobile edge computing
Distributed computing
Server
Edge computing
Efficient energy use
Computer network
Markov process
Artificial intelligence
Ecology
Telecommunications
Statistics
Operations management
Mathematics
Electrical engineering
Economics
Biology
Engineering
Operating system
Authors
Shanchen Pang,Lili Hou,Haiyuan Gui,Xiao He,Teng Wang,Yawu Zhao
Identifiers
DOI:10.1016/j.comcom.2023.11.013
Abstract
The wide application of edge cloud computing in the Internet of Vehicles (IoV) provides lower latency, more efficient computing power, and more reliable data transmission services for vehicle applications. Realistic vehicle applications frequently consist of multiple tasks with dependencies, and efficiently and quickly scheduling the individual tasks of multiple vehicle applications to reduce latency and energy consumption is challenging. Our proposed approach leverages Deep Reinforcement Learning (DRL) to develop a task scheduling strategy that ensures real-time and efficient operation. We maximize the utilization of available resources by harnessing the computational capabilities of vehicles, multiple MEC servers, and a cloud server. Specifically, we model task dependencies with a Directed Acyclic Graph (DAG) and design dynamically adjustable weights for delay and energy consumption. We transform the dependency-aware task offloading problem in the vehicle-edge-cloud environment into a Markov Decision Process (MDP), which enables us to tackle it effectively. To obtain optimized offloading decisions quickly, we employ a Double Deep Q-Network (DDQN) together with specially designed mobility management strategies. A penalty mechanism is introduced into the DDQN to impose penalties when a vehicle application is delayed beyond its deadline. Simulation results show that the proposed scheme significantly decreases the latency and energy consumption of multiple applications compared with three baseline schemes and ensures the successful execution of tasks.
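The abstract does not give implementation details, so the following is only a minimal Python sketch, under assumed names and values, of how the pieces it mentions could fit together: a per-task cost with adjustable delay/energy weights, a reward penalized when an application misses its deadline, and a Double DQN bootstrap target. The weight-adjustment rule, penalty magnitude, and network architecture are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def step_cost(delay, energy, w_delay, w_energy):
    # Weighted combination of per-task delay and energy consumption.
    # w_delay / w_energy stand in for the "dynamically adjustable weights"
    # from the abstract; their update rule is not specified here.
    return w_delay * delay + w_energy * energy

def reward_with_deadline_penalty(delay, energy, finish_time, deadline,
                                 w_delay=0.5, w_energy=0.5, penalty=10.0):
    # Reward = negative weighted cost, with an extra (hypothetical) penalty
    # applied when the application finishes after its deadline.
    r = -step_cost(delay, energy, w_delay, w_energy)
    if finish_time > deadline:
        r -= penalty
    return r

def ddqn_target(reward, next_q_online, next_q_target, done, gamma=0.99):
    # Double DQN target: the online network selects the next action,
    # the target network evaluates it, which reduces overestimation bias.
    best_action = int(np.argmax(next_q_online))
    bootstrap = 0.0 if done else gamma * next_q_target[best_action]
    return reward + bootstrap

# Example: a task finishing 0.2 s late with assumed delay/energy values.
r = reward_with_deadline_penalty(delay=0.8, energy=1.2,
                                 finish_time=2.2, deadline=2.0)
y = ddqn_target(r, next_q_online=np.array([0.3, 0.9, 0.1]),
                next_q_target=np.array([0.2, 0.7, 0.4]), done=False)
print(r, y)
```

In this sketch the offloading action (local vehicle, one of the MEC servers, or the cloud) would index the Q-value vectors; how the state encodes the DAG structure and vehicle mobility is left open.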