Reinforcement learning
Computer science
Artificial neural network
Scheduling (production processes)
Graph
Artificial intelligence
Distributed computing
Theoretical computer science
Operations management
Economics
Authors
Zhang Liu, Lianfen Huang, Zhibin Gao, Manman Luo, Seyyedali Hosseinalipour, Huaiyu Dai
Identifier
DOI:10.1109/tnsm.2024.3387707
Abstract
Vehicular Clouds (VCs) are modern platforms for processing computation-intensive tasks over vehicles. Such tasks are often represented as Directed Acyclic Graphs (DAGs) consisting of interdependent vertices/subtasks and directed edges. However, efficient scheduling of DAG tasks over VCs presents significant challenges, mainly due to the dynamic service provisioning of vehicles within VCs and the non-Euclidean representation of DAG task topologies. In this paper, we propose a Graph neural network-Augmented Deep Reinforcement Learning scheme (GA-DRL) for the timely scheduling of DAG tasks over dynamic VCs. To this end, we first model VC-assisted DAG task scheduling as a Markov decision process. We then adopt a multi-head Graph ATtention network (GAT) to extract the features of DAG subtasks. Our GAT enables a two-way aggregation of the topological information in a DAG task by simultaneously considering the predecessors and successors of each subtask. We further introduce non-uniform DAG neighborhood sampling by codifying the scheduling priority of different subtasks, which makes our GAT generalizable to completely unseen DAG task topologies. Finally, we augment the GAT with a double deep Q-network learning module to conduct subtask-to-vehicle assignment according to the extracted subtask features, while accounting for the dynamics and heterogeneity of the vehicles in VCs. Through simulating various DAG tasks under real-world vehicle movement traces, we demonstrate that GA-DRL outperforms existing benchmarks in terms of DAG task completion time.
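The abstract's central architectural idea is a multi-head graph-attention layer that aggregates each subtask's features from both its predecessors and its successors in the DAG. A minimal NumPy sketch of such a two-way attention layer is given below; the function name, weight shapes, and the choice of LeakyReLU/tanh nonlinearities are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def softmax(x):
    # numerically stable softmax over the last axis
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def two_way_gat_layer(H, A, W_heads, a_heads):
    """One multi-head graph-attention layer aggregating each subtask's
    features from both predecessors and successors (illustrative sketch).

    H: (n, d) subtask features; A: (n, n) DAG adjacency with A[i, j] = 1
    for an edge i -> j; W_heads: list of (d, d') projection matrices;
    a_heads: list of (2*d',) attention vectors. All names are assumed.
    """
    n = A.shape[0]
    # two-way neighborhood: predecessors + successors + self-loops
    N = ((A + A.T) > 0).astype(float) + np.eye(n)
    outputs = []
    for W, a in zip(W_heads, a_heads):
        Z = H @ W                                # (n, d') projected features
        dp = Z.shape[1]
        # attention logits e_ij = LeakyReLU(a^T [z_i || z_j])
        E = (Z @ a[:dp])[:, None] + (Z @ a[dp:])[None, :]
        E = np.where(E > 0, E, 0.2 * E)          # LeakyReLU
        E = np.where(N > 0, E, -1e9)             # mask out non-neighbors
        alpha = softmax(E)                       # normalized attention weights
        outputs.append(np.tanh(alpha @ Z))       # aggregate neighbor features
    return np.concatenate(outputs, axis=1)       # concatenate the heads
```

Because the neighborhood mask is built from `A + A.T`, topological information flows along edges in both directions, which is the "two-way aggregation" the abstract describes; the extracted features would then feed a double deep Q-network for subtask-to-vehicle assignment.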