Reinforcement learning
Computer science
Leverage (statistics)
Server
Mobile edge computing
Task (project management)
Convergence (economics)
Nash equilibrium
Artificial intelligence
Distributed computing
Mathematical optimization
Computer network
Mathematics
Economic growth
Economics
Management
Authors
Dian Shi, Hao Gao, Li Wang, Miao Pan, Zhu Han, H. Vincent Poor
Source
Journal: IEEE Internet of Things Journal
[Institute of Electrical and Electronics Engineers]
Date: 2020-10-01
Volume/Issue: 7 (10): 9330-9340
Citations: 31
Identifier
DOI: 10.1109/jiot.2020.2983741
Abstract
Cooperative multiaccess edge computing (MEC) is a promising paradigm for next-generation mobile networks. However, when the number of users explodes, the computational complexity of existing optimization- or learning-based task placement approaches in cooperative MEC can increase significantly, leading to intolerable MEC decision-making delay. In this article, we propose a mean field game (MFG) guided deep reinforcement learning (DRL) approach for task placement in cooperative MEC, which helps servers make timely task placement decisions and significantly reduces average service delay. Instead of applying MFG or DRL separately, we jointly leverage MFG and DRL for task placement and let the equilibrium of the MFG guide the learning direction of the DRL. We also ensure that the MFG and DRL approaches are consistent with the same goal. Specifically, we introduce a mean field guided Q-value (MFG-Q), an estimate of the Q-value based on the Nash equilibrium obtained from the MFG. We evaluate the proposed method's performance using real-world user distributions. Through extensive simulations, we show that the proposed scheme is effective in making timely decisions and reducing average service delay. Moreover, the convergence rate of our proposed method outperforms that of pure DRL-based approaches.
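The core idea of the MFG-Q described above can be illustrated with a minimal sketch: a tabular Q-learning update whose bootstrap target is blended with a value estimate derived from the mean field equilibrium. All names and the blending weight `beta` here are illustrative assumptions for exposition, not the paper's actual implementation.

```python
import numpy as np

def mfg_guided_q_update(Q, state, action, reward, next_state,
                        mfg_value, alpha=0.1, gamma=0.9, beta=0.5):
    """One TD update toward a target mixed with an MFG equilibrium value.

    `mfg_value` is assumed to be an estimate of the next state's value at
    the Nash equilibrium of the mean field game; `beta` weights it against
    the standard max-Q bootstrap used in plain Q-learning.
    """
    bootstrap = np.max(Q[next_state])                    # standard Q-learning target
    guided = beta * mfg_value + (1 - beta) * bootstrap   # mean-field guidance
    target = reward + gamma * guided
    Q[state, action] += alpha * (target - Q[state, action])
    return Q

# Toy usage: 3 states, 2 actions, a single guided update.
Q = np.zeros((3, 2))
Q = mfg_guided_q_update(Q, state=0, action=1, reward=1.0,
                        next_state=2, mfg_value=0.8)
```

With `beta = 0` this reduces to ordinary Q-learning; with `beta = 1` the agent bootstraps entirely from the equilibrium value, so the parameter controls how strongly the MFG solution steers the learning direction.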