Keywords
Computer science, Mobile edge computing, Computation offloading, Reinforcement learning, User equipment, Scheduling, Energy consumption, Edge computing, Wireless, Mobile devices, Real-time computing, Distributed computing, Mathematical optimization, Algorithms, Artificial intelligence, Computer networks, Telecommunications, Base station
Authors
Yunpeng Wang, Weiwei Fang, Yi Ding, Naixue Xiong
Identifier
DOI: 10.1007/s11276-021-02632-z
Abstract
Unmanned Aerial Vehicles (UAVs) can play an important role in wireless systems, as they can be deployed flexibly to improve communication coverage and quality. In this paper, we consider a UAV-assisted Mobile Edge Computing (MEC) system, in which a UAV equipped with computing resources provides offloading services to nearby user equipment (UEs). Each UE offloads a portion of its computing tasks to the UAV, while the remaining tasks are executed locally at the UE. Subject to constraints on discrete variables and energy consumption, we aim to minimize the maximum processing delay by jointly optimizing user scheduling, the task offloading ratio, and the UAV flight angle and flight speed. Considering the non-convexity of this problem, its high-dimensional state space, and its continuous action space, we propose a computation offloading algorithm based on Deep Deterministic Policy Gradient (DDPG) in Reinforcement Learning (RL). With this algorithm, we can obtain the optimal computation offloading policy in an uncontrollable dynamic environment. Extensive experiments have been conducted, and the results show that the proposed DDPG-based algorithm converges quickly to the optimum. Meanwhile, our algorithm achieves a significant improvement in processing delay compared with baseline algorithms, e.g., Deep Q-Network (DQN).
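To illustrate the partial-offloading trade-off that the abstract describes, the sketch below uses a simplified, hypothetical delay model (not the paper's exact formulation): a task of `D` bits is split so that a fraction `rho` is offloaded to the UAV while the rest runs locally, and the two paths proceed in parallel, so the UE's processing delay is the maximum of the local computing time and the transmit-plus-UAV computing time. All parameter values and the `cycles_per_bit` constant are illustrative assumptions.

```python
import numpy as np

def ue_delay(D, rho, f_local, f_uav, rate, cycles_per_bit=1000):
    """Hypothetical per-UE delay for partial offloading.

    D: task size in bits; rho: offloading ratio in [0, 1];
    f_local, f_uav: CPU frequencies (cycles/s); rate: uplink rate (bits/s).
    """
    t_local = (1 - rho) * D * cycles_per_bit / f_local  # local execution time
    t_tx = rho * D / rate                               # uplink transmission time
    t_uav = rho * D * cycles_per_bit / f_uav            # UAV execution time
    # Local and offloaded portions run in parallel, so delay = the slower path.
    return max(t_local, t_tx + t_uav)

def best_ratio(D, f_local, f_uav, rate, grid=1001):
    """Grid search for the offloading ratio minimizing the UE's delay."""
    rhos = np.linspace(0.0, 1.0, grid)
    delays = [ue_delay(D, r, f_local, f_uav, rate) for r in rhos]
    i = int(np.argmin(delays))
    return rhos[i], delays[i]

rho_star, d_star = best_ratio(D=1e6, f_local=1e9, f_uav=5e9, rate=1e7)
# The balanced split beats both pure local execution and full offloading.
assert d_star <= ue_delay(1e6, 0.0, 1e9, 5e9, 1e7)
assert d_star <= ue_delay(1e6, 1.0, 1e9, 5e9, 1e7)
```

In this toy instance the optimum is interior (neither `rho = 0` nor `rho = 1`), which is why the paper must optimize the offloading ratio jointly with the continuous flight variables; the DDPG agent handles exactly such continuous action spaces, whereas a DQN baseline would have to discretize them.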