Computer science
Reinforcement learning
Resource allocation
Distributed computing
Computation offloading
Task (project management)
Quality of service
Resource management (computing)
Computation
Computer network
Edge computing
Artificial intelligence
Algorithm
Embedded system
Internet of things
Engineering
Systems engineering
Authors
Bishmita Hazarika,Keshav Singh,Sudip Biswas,Chih-Peng Li
Identifiers
DOI:10.1109/tii.2022.3168292
Abstract
Due to the dynamic nature of a vehicular fog computing environment, efficient real-time resource allocation in an Internet of Vehicles (IoV) network without affecting the quality of service of any of the onboard vehicles can be challenging. This article proposes a priority-sensitive task offloading and resource allocation scheme for an IoV network, in which vehicles periodically exchange beacon messages to inquire about available services and other information necessary for making offloading decisions. In the proposed methodology, vehicles are incentivized to share their idle computation resources with task vehicles, and a deep reinforcement learning algorithm based on soft actor–critic is designed to classify tasks by priority and computation size in order to allocate power optimally. Furthermore, deep deterministic policy gradient (DDPG) and twin delayed DDPG (TD3) algorithms are designed for the considered framework. In particular, the algorithms work toward the optimal task-offloading policy by maximizing the mean utility of the considered network. Extensive numerical results under different network conditions, along with a comparison of the three algorithms, are presented to validate the feasibility of distributed reinforcement learning for task offloading in future IoV networks.
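The abstract describes classifying tasks by priority and computation size before offloading. The exact classification rule is not given in the abstract; the following is a minimal illustrative sketch, assuming a hypothetical priority score in which a task's urgency grows as its local execution time approaches its deadline. The `Task` fields, the `local_cps` CPU rate, and the scoring formula are all assumptions for illustration, not the paper's method.

```python
from dataclasses import dataclass

@dataclass
class Task:
    task_id: int
    cycles: float    # computation size, in CPU cycles
    deadline: float  # seconds remaining until the task must finish

def priority_score(task: Task, local_cps: float = 1e9) -> float:
    # Hypothetical priority: ratio of estimated local execution time
    # to the deadline. A score near or above 1.0 means the vehicle
    # cannot finish the task locally in time, so offloading is urgent.
    local_time = task.cycles / local_cps
    return local_time / task.deadline

def rank_for_offloading(tasks):
    # Most deadline-pressed tasks first, so the scheduler can assign
    # them to neighboring vehicles with idle resources before others.
    return sorted(tasks, key=priority_score, reverse=True)

tasks = [Task(1, 2e9, 5.0), Task(2, 1e9, 0.5), Task(3, 5e8, 2.0)]
print([t.task_id for t in rank_for_offloading(tasks)])  # → [2, 1, 3]
```

In the paper's framework this ordering would feed a learned policy (SAC, DDPG, or TD3) that chooses where to offload and how to allocate power, rather than a fixed heuristic as shown here.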