Computer science
Reinforcement learning
Two-level scheduling
Fixed-priority preemptive scheduling
Distributed computing
Dynamic priority scheduling
Fair-share scheduling
Rate-monotonic scheduling
Scheduling (production processes)
Round-robin scheduling
Server
Artificial intelligence
Computer network
Mathematical optimization
Quality of service
Mathematics
Authors
Yuhao Chen, Yunzhe Wang, Zhe Zhang, Qiming Fu, Huixue Wang, You Lu
Identifiers
DOI: 10.1109/cbd58033.2022.00062
Abstract
With the development of information technology, intelligent devices and applications in intelligent-building environments place increasingly high demands on computing power and latency, which the traditional cloud-centric computing framework cannot meet. Cloud-edge collaborative scheduling can address this problem, and task scheduling is its core challenge. Many scheduling methods have been proposed in recent years; among them, deep learning and reinforcement learning have attracted researchers' attention because they require no human intervention and learn on their own. However, they still suffer from slow convergence and low scheduling success rates. This paper therefore proposes an improved task scheduling algorithm based on deep reinforcement learning. We first establish a task scheduling framework based on deep reinforcement learning, and then improve it by combining the Double Deep Q-Network (DDQN) method with experience replay to raise the scheduling success rate and accelerate convergence. We validate the algorithm on a standard dataset, and the experimental results show that, compared with traditional task scheduling algorithms, it improves the scheduling success rate, converges faster, and reduces the overhead of edge servers.
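The abstract's key technique, DDQN with experience replay, can be illustrated with a minimal sketch. This is not the paper's implementation: the toy scheduling MDP (4 states, 2 actions, a parity-based reward) and all hyperparameters below are hypothetical placeholders. A tabular version is used so the double-Q idea stays visible: the online table selects the best next action, while a periodically synced target table evaluates it, which is what reduces the overestimation bias of plain DQN.

```python
import random
from collections import deque

random.seed(0)

N_STATES, N_ACTIONS = 4, 2   # hypothetical toy task-scheduling MDP
GAMMA, ALPHA, EPS = 0.9, 0.1, 0.1

# online table (updated every step) and target table (synced periodically)
q_online = [[0.0] * N_ACTIONS for _ in range(N_STATES)]
q_target = [[0.0] * N_ACTIONS for _ in range(N_STATES)]

replay = deque(maxlen=1000)  # experience replay buffer

def env_step(state, action):
    # placeholder environment: reward 1 when the action matches state parity
    reward = 1.0 if action == state % 2 else 0.0
    return random.randrange(N_STATES), reward

def train(steps=500, batch=16, sync_every=20):
    state = 0
    for t in range(steps):
        # epsilon-greedy action selection from the online table
        if random.random() < EPS:
            action = random.randrange(N_ACTIONS)
        else:
            action = max(range(N_ACTIONS), key=lambda a: q_online[state][a])
        next_state, reward = env_step(state, action)
        replay.append((state, action, reward, next_state))
        state = next_state

        if len(replay) >= batch:
            # replay a random minibatch of past transitions
            for s, a, r, s2 in random.sample(replay, batch):
                # double-Q target: online table picks a*, target table evaluates it
                a_star = max(range(N_ACTIONS), key=lambda x: q_online[s2][x])
                td_target = r + GAMMA * q_target[s2][a_star]
                q_online[s][a] += ALPHA * (td_target - q_online[s][a])

        if t % sync_every == 0:
            # periodically copy the online table into the target table
            for s in range(N_STATES):
                q_target[s] = list(q_online[s])

train()
# greedy policy recovered from the learned online table
policy = [max(range(N_ACTIONS), key=lambda a: q_online[s][a]) for s in range(N_STATES)]
print(policy)
```

In a function-approximation setting as in the paper, the two tables become two networks with the same architecture, and the minibatch update becomes a gradient step on the squared TD error; the select-with-online, evaluate-with-target split is unchanged.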