Computer science
Reinforcement learning
Workload
Server
Markov decision process
Quality of service
Cloud computing
Edge computing
Internet
Enhanced Data Rates for GSM Evolution
Edge device
Computer network
Distributed computing
Artificial intelligence
Markov process
Operating system
Statistics
Mathematics
Authors
Jiawei Lu, Jielin Jiang, Venki Balasubramanian, Mohammad R. Khosravi, Xiaolong Xu
Identifier
DOI:10.1016/j.comcom.2022.02.011
Abstract
In the typical Internet of Vehicles (IoV) scenario, edge servers (ESs) are deployed near roadside units (RSUs) to process collected data for a variety of IoV services in real time. Because ESs are lightweight compared with cloud servers, an inappropriate distribution of ESs leads to unbalanced workloads across them. Thus, developing an ES placement plan that avoids the risk of overload and improves the quality of service (QoS) remains a challenge. To tackle this, a deep reinforcement learning-based multi-objective edge server placement strategy, named DESP, is proposed to improve the coverage rate and workload balance and to reduce the average task-completion delay in the IoV. In particular, the ES placement problem is formulated as a Markov Decision Process (MDP), and deep reinforcement learning, i.e., a Deep Q-Network (DQN), is applied to obtain the optimal placement scheme achieving the multiple objectives above. Finally, a real vehicular dataset is used to assess the validity of DESP.
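The abstract formulates ES placement as an MDP solved with a DQN. As a simplified illustration of that formulation only, the sketch below uses tabular Q-learning (not the paper's DQN) on a hypothetical coverage-only instance: the state is the set of sites where servers have already been placed, an action places one more server at an unused candidate site, and the reward is the marginal number of vehicles newly covered. The site set, coverage data, and all parameter values are invented for illustration.

```python
import random

# Hypothetical instance: each candidate RSU site covers a set of vehicle IDs.
COVERAGE = {0: {1, 2}, 1: {2, 3, 4}, 2: {5}, 3: {4, 5, 6}}
N_SITES, K = 4, 2  # place K edge servers among N_SITES candidate sites

def covered(placed):
    """Union of vehicles covered by the placed servers."""
    return set().union(*(COVERAGE[s] for s in placed)) if placed else set()

def train(episodes=2000, alpha=0.5, gamma=1.0, eps=0.2, seed=0):
    """Tabular Q-learning over bitmask states (which sites hold a server)."""
    rng = random.Random(seed)
    Q = {}  # (state bitmask, action site) -> estimated value
    for _ in range(episodes):
        state, placed = 0, []
        for step in range(K):
            actions = [a for a in range(N_SITES) if not state & (1 << a)]
            # Epsilon-greedy action selection over unused sites.
            if rng.random() < eps:
                a = rng.choice(actions)
            else:
                a = max(actions, key=lambda x: Q.get((state, x), 0.0))
            # Reward: marginal coverage gained by placing a server at site a.
            reward = len(covered(placed + [a])) - len(covered(placed))
            nxt = state | (1 << a)
            nxt_actions = [b for b in range(N_SITES) if not nxt & (1 << b)]
            # Episode ends after K placements; bootstrap only if more remain.
            future = (max(Q.get((nxt, b), 0.0) for b in nxt_actions)
                      if step + 1 < K else 0.0)
            target = reward + gamma * future
            q = Q.get((state, a), 0.0)
            Q[(state, a)] = q + alpha * (target - q)
            state, placed = nxt, placed + [a]
    return Q

def greedy_placement(Q):
    """Roll out the learned policy greedily to produce a placement scheme."""
    state, placed = 0, []
    for _ in range(K):
        actions = [a for a in range(N_SITES) if not state & (1 << a)]
        a = max(actions, key=lambda x: Q.get((state, x), 0.0))
        state, placed = state | (1 << a), placed + [a]
    return placed
```

The paper's DQN replaces the table `Q` with a neural network so the method scales to realistic numbers of candidate sites, and its reward additionally folds in workload balance and task delay rather than coverage alone.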