Authors
Wenhao Fan, Yu Yang, Chenhui Bao, Yuanan Liu
Identifier
DOI: 10.1109/tmc.2025.3572296
Abstract
Vehicular edge intelligence, distinct from traditional edge intelligence, exhibits unique characteristics, including the mobility of vehicles, the uneven spatial and temporal distribution of vehicles, and the variability of the AI models deployed on vehicles, Roadside Units (RSUs), and edge servers (ESs). In this paper, we propose a Deep Reinforcement Learning (DRL)-based resource orchestration scheme for task inference in vehicle-RSU-edge collaborative networks. In our approach, a vehicle's inference tasks can be processed on the vehicle itself, on RSUs, or on ESs, yielding a total of 9 possible scenarios depending on the vehicle's cross-RSU mobility. The scheme jointly optimizes task processing decision-making, transmission power allocation, computational resource allocation, and transmission rate allocation. The objective is to minimize the total cost, which captures a trade-off among task processing latency, energy consumption, and inference error rate across all vehicle tasks. We design a DRL algorithm that decomposes the original optimization problem into sub-problems and solves them efficiently by combining the Softmax Deep Double Deterministic Policy Gradients (SD3) algorithm with multiple numerical methods. We analyze the algorithm's complexity and convergence, demonstrating its low complexity and fast, stable convergence, which confirm its effectiveness in solving the problem. Finally, we demonstrate the superiority of our scheme by comparing it with 5 benchmark schemes across 6 different scenarios.
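The abstract's objective is a total cost trading off latency, energy consumption, and inference error rate across all vehicle tasks. A minimal sketch of such a weighted-sum objective is shown below; the linear cost form, the weight names (`w_t`, `w_e`, `w_err`), and the sample task values are illustrative assumptions, not the paper's exact model.

```python
def task_cost(latency_s, energy_j, error_rate, w_t=1.0, w_e=1.0, w_err=1.0):
    """Weighted cost of one inference task (illustrative linear trade-off)."""
    return w_t * latency_s + w_e * energy_j + w_err * error_rate

def total_cost(tasks, **weights):
    """Sum of per-task costs over all vehicle tasks."""
    return sum(
        task_cost(t["latency"], t["energy"], t["error"], **weights)
        for t in tasks
    )

# Two hypothetical tasks: one processed locally on the vehicle,
# one offloaded to an RSU (values are made up for illustration).
tasks = [
    {"latency": 0.12, "energy": 0.8, "error": 0.05},
    {"latency": 0.05, "energy": 0.3, "error": 0.02},
]
print(total_cost(tasks, w_t=1.0, w_e=0.5, w_err=10.0))
```

An optimizer (here, the DRL agent) would then choose the processing location and resource allocations that minimize this total across all tasks, with the weights encoding how the operator values latency versus energy versus accuracy.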