Computer science
Reinforcement learning
Deep learning
Edge computing
Server
Cloud computing
Edge device
Latency (audio)
Distributed computing
Artificial intelligence
Scheduling (production processes)
Enhanced Data Rates for GSM Evolution (EDGE)
Computer network
Operating system
Telecommunications
Operations management
Economics
Authors
K. Kumaran,E. Sasikala
Identifier
DOI:10.1109/aisp57993.2023.10134928
Abstract
Nowadays, owing to rapid technological development in smart devices, greater computational capability and better performance are needed. Maximizing the use of cloud computing resources in mobile networks and the Internet of Things yields good results, and edge computing technology can readily handle data processing and communication delay in the network. Edge servers are brought as close as possible to the end devices to avoid frequent cloud access. On edge servers, however, processing computational tasks is time consuming, which causes latency. Deep learning inference combined with reinforcement learning models enables better resource allocation and task scheduling for complex systems. In this work, deep reinforcement learning models, namely Q-learning, Double Q-learning, Deep Q-Network, and Double Deep Q-learning, are compared. The developed deep reinforcement learning model is deployed in an edge-based system. The results indicate that Double Deep Q-learning makes better resource-allocation decisions by maximizing gains and rewards. Simulation results show that Double Deep Q-learning performs best, decreasing latency and improving the performance of edge systems.
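The abstract compares tabular and deep Q-learning variants. As a minimal sketch of the core idea behind Double Q-learning (which Double DQN also builds on), the update below decouples action selection from action evaluation across two value tables; the toy states, actions, and hyperparameters are assumed for illustration and are not taken from the paper:

```python
from collections import defaultdict
import random

# Minimal tabular Double Q-learning update. The state/action names and the
# hyperparameters are hypothetical; the paper's actual edge-scheduling
# environment is not described in the abstract.

ALPHA, GAMMA = 0.5, 0.9   # learning rate and discount factor (assumed values)
ACTIONS = [0, 1]          # e.g. "offload to edge server" vs. "run locally"

def double_q_update(QA, QB, s, a, r, s_next, rng):
    """One Double Q-learning step: one table selects the greedy next action,
    the other table evaluates it, which reduces overestimation bias."""
    if rng.random() < 0.5:
        best = max(ACTIONS, key=lambda x: QA[(s_next, x)])
        QA[(s, a)] += ALPHA * (r + GAMMA * QB[(s_next, best)] - QA[(s, a)])
    else:
        best = max(ACTIONS, key=lambda x: QB[(s_next, x)])
        QB[(s, a)] += ALPHA * (r + GAMMA * QA[(s_next, best)] - QB[(s, a)])

QA, QB = defaultdict(float), defaultdict(float)
rng = random.Random(0)
double_q_update(QA, QB, "s0", 0, 1.0, "s1", rng)  # one reward-1 transition
```

Randomly alternating which table is updated is what distinguishes this from plain Q-learning, where a single table both selects and evaluates the next action and therefore tends to overestimate values.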