Computer science
Markov decision process
Wireless sensor network
Smart grid
Distributed computing
Mobile edge computing
Efficient energy use
Reinforcement learning
Energy consumption
Edge computing
Lyapunov optimization
Scheduling (production processes)
Real-time computing
Computer network
Markov process
Server
Embedded system
Internet of things
Mathematical optimization
Artificial intelligence
Engineering
Lyapunov redesign
Lyapunov exponent
Statistics
Mathematics
Chaotic
Electrical engineering
Authors
Ti Guan, Yushun Yao, Chao Yuan, Fuhao Liu, Yirong Liu, Rentao Gu
Identifier
DOI: 10.1109/iccsi58851.2023.10303951
Abstract
As massive renewable and controllable terminals are deeply integrated into the next-generation smart power grid, the volume of data and the variety of applications with differing quality requirements will grow greatly, posing tough challenges for the power sensor network in providing real-time data collection and processing, global monitoring, and quick response. Meanwhile, limited energy supply and computing capacity raise a second issue: how the sensor network can operate in an energy-efficient way. In this paper, we study a reinforcement learning (RL) based computation offloading policy in a mobile edge computing (MEC) assisted power sensor network. By jointly optimizing task scheduling and resource allocation, we propose a new scheme that improves both data latency and energy efficiency. The optimization problem is modeled as a Markov decision process (MDP), and the Dueling Double Deep Q-Network (D3QN) algorithm is employed to solve it. Simulation results demonstrate the effectiveness of the proposed algorithm and its superiority over other benchmarks in reducing data latency and energy consumption, as well as in convergence and stability.
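For readers unfamiliar with D3QN, the sketch below illustrates the two ideas it combines: a dueling network head that splits the Q-function into a state value and per-action advantages, and a double-DQN target in which the online network selects the next action while the target network evaluates it. This is a minimal PyTorch sketch; the state dimension, discrete action space (e.g., offloading decisions), layer sizes, and hyperparameters are illustrative assumptions, not details taken from the paper.

```python
# Minimal D3QN sketch (illustrative; dimensions and hyperparameters
# are assumptions, not values from the paper).
import torch
import torch.nn as nn

class DuelingQNet(nn.Module):
    """Dueling architecture: shared trunk, then separate value and
    advantage streams combined as Q = V + (A - mean(A))."""
    def __init__(self, state_dim: int, n_actions: int, hidden: int = 128):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.value = nn.Linear(hidden, 1)              # state value V(s)
        self.advantage = nn.Linear(hidden, n_actions)  # advantages A(s, a)

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        h = self.trunk(state)
        v = self.value(h)
        a = self.advantage(h)
        # Subtracting the advantage mean keeps the V/A split identifiable.
        return v + a - a.mean(dim=1, keepdim=True)

def d3qn_loss(online: DuelingQNet, target: DuelingQNet,
              batch, gamma: float = 0.99) -> torch.Tensor:
    """Double-DQN target: the online net selects the next action,
    the target net evaluates it; regress Q(s, a) toward that target."""
    s, a, r, s_next, done = batch  # a: long tensor, done: float tensor
    q = online(s).gather(1, a.unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        next_a = online(s_next).argmax(dim=1, keepdim=True)
        next_q = target(s_next).gather(1, next_a).squeeze(1)
        y = r + gamma * (1.0 - done) * next_q
    return nn.functional.mse_loss(q, y)

# Example forward pass with a dummy batch of 4 states of dimension 10:
net = DuelingQNet(state_dim=10, n_actions=5)
q_values = net(torch.randn(4, 10))  # shape [4, 5]
```

The mean-subtraction in the dueling head is the standard identifiability fix: without it, any constant could be shifted between the value and advantage streams without changing the resulting Q-values.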