Computer science
Blocking (statistics)
Bandwidth (computing)
Network performance
Process (computing)
Algorithm
Computer network
Distributed computing
Operating system
Authors
Wen Yan,Xiaohui Li,Yuemin Ding,Jie He,Bin Cai
Identifier
DOI:10.1016/j.yofte.2023.103625
Abstract
The continuous growth of network communication demand places higher requirements on network infrastructure. The elastic optical network (EON) has great potential to support the continued growth in demand for communication bandwidth. Efficient use of the bandwidth resources of an EON is particularly important for alleviating network blocking, and depends on the routing, modulation, and spectrum allocation (RMSA) process. However, the time-varying state of an EON, caused by the uncertainty of future demands, makes it much harder to perform online RMSA in real time. To solve this problem, this paper proposes a Deep Q-Network (DQN) algorithm with a prioritized experience replay mechanism that performs the RMSA process in real time. The proposed algorithm consists of two parts. One is a Markov Decision Process (MDP) based state transfer for online RMSA using a trained Q-network. The other is an offline DQN-based algorithm for obtaining the trained Q-network that guides the decision-making of the RMSA state transfer, in which prioritized experience replay and a SumTree are introduced to speed up DQN training. Simulation results show that, compared with the traditional Deep Q-Network algorithm, the proposed algorithm nearly doubles the Q-network training speed, and that, compared with the traditional SP+FF (shortest path + first fit) algorithm, the trained Q-network reduces the blocking rate by nearly 35%.
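The abstract's key training ingredient is prioritized experience replay backed by a SumTree: a binary tree whose leaves hold transition priorities and whose internal nodes hold the sum of their children, so that sampling a transition with probability proportional to its priority (and updating a priority after a TD-error recomputation) both cost O(log n). The sketch below illustrates that data structure only; the class and method names are illustrative assumptions, not the paper's actual code.

```python
import random

class SumTree:
    """Binary sum tree for prioritized experience replay (illustrative sketch).
    Leaves store per-transition priorities; each internal node stores the sum
    of its children, so the root holds the total priority mass."""

    def __init__(self, capacity):
        self.capacity = capacity                 # max number of stored transitions
        self.tree = [0.0] * (2 * capacity - 1)   # internal nodes followed by leaves
        self.data = [None] * capacity            # the transitions themselves
        self.write = 0                           # next leaf slot to overwrite

    def _propagate(self, idx, change):
        # Push a priority change up to the root so all sums stay consistent.
        parent = (idx - 1) // 2
        self.tree[parent] += change
        if parent != 0:
            self._propagate(parent, change)

    def update(self, idx, priority):
        change = priority - self.tree[idx]
        self.tree[idx] = priority
        self._propagate(idx, change)

    def add(self, priority, transition):
        idx = self.write + self.capacity - 1     # leaf position in the flat array
        self.data[self.write] = transition
        self.update(idx, priority)
        self.write = (self.write + 1) % self.capacity

    def total(self):
        return self.tree[0]                      # total priority mass

    def sample(self, s):
        """Descend from the root to the leaf whose cumulative-priority
        interval contains s (0 <= s < total()). Returns (tree index,
        priority, transition)."""
        idx = 0
        while True:
            left = 2 * idx + 1
            if left >= len(self.tree):           # reached a leaf
                break
            if s <= self.tree[left]:
                idx = left
            else:
                s -= self.tree[left]
                idx = left + 1
        return idx, self.tree[idx], self.data[idx - self.capacity + 1]

# Minimal usage: high-priority transitions are drawn more often.
tree = SumTree(4)
tree.add(1.0, "low-priority transition")
tree.add(3.0, "high-priority transition")
s = random.uniform(0.0, tree.total())            # stratified draws in practice
idx, priority, transition = tree.sample(s)
```

In a full prioritized-replay loop, each sampled transition's priority would be refreshed via `update(idx, new_priority)` after its TD error is recomputed, which is what lets the replay buffer keep steering training toward the most informative experiences.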