Computer science
Reinforcement learning
Dynamic priority scheduling
Markov decision process
Rate-monotonic scheduling
Scheduling (production processes)
Fair-share scheduling
Job-shop scheduling
Least slack time scheduling
Flow-shop scheduling
Mathematical optimization
Earliest deadline first scheduling
Two-level scheduling
Artificial intelligence
Markov process
Metro train timetable
Mathematics
Statistics
Operating system
Authors
Xinquan Wu, Xuefeng Yan, Donghai Guan, Mingqiang Wei
Identifiers
DOI:10.1016/j.engappai.2023.107790
Abstract
The dynamic job-shop scheduling problem (DJSP) is a class of scheduling tasks in which rescheduling is performed when uncertainties, such as uncertain operation processing times, are encountered. However, current deep reinforcement learning (DRL) scheduling approaches struggle to train convergent scheduling policies as the problem scale increases, which is essential for rescheduling under uncertainty. In this paper, we propose a DRL scheduling method for the DJSP based on proximal policy optimization (PPO) with hybrid prioritized experience replay. The job-shop scheduling problem is formulated as a sequential decision-making problem based on a Markov Decision Process (MDP): a novel state representation is designed from the feasible solution matrix, which depicts the scheduling order of a scheduling task; a set of paired priority dispatching rules (PDRs) is used as the action space; and a new, intuitive reward function is established based on machine idle time. Moreover, a new hybrid prioritized experience replay method for PPO is proposed to reduce training time, in which samples with positive temporal-difference (TD) error are replayed. Static experiments on classic benchmark instances show that the makespan obtained by our scheduling agent is reduced by 1.59% on average compared with the best-known DRL methods. In addition, dynamic experiments demonstrate that reusing the trained scheduling policy reduces training time by 27% compared with retraining from scratch when uncertainties such as uncertain operation processing times are encountered.
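The abstract describes replaying only samples with positive temporal-difference (TD) error during PPO training. The Python sketch below illustrates just that filtering idea under stated assumptions (a one-step TD error computed from a critic's value estimates); the class name PositiveTDReplayBuffer, the transition fields, and the capacity are hypothetical and are not taken from the paper.

```python
# Illustrative sketch of positive-TD-error replay; not the authors' implementation.
import random
from dataclasses import dataclass, field


@dataclass
class Transition:
    state: list          # e.g. a flattened feasible-solution-matrix state (assumption)
    action: int          # index of a paired priority dispatching rule (PDR)
    reward: float        # reward derived from machine idle time
    value: float         # critic's estimate V(s)
    next_value: float    # critic's estimate V(s')
    done: bool


@dataclass
class PositiveTDReplayBuffer:
    """Keeps only transitions whose one-step TD error is positive for later replay."""
    gamma: float = 0.99
    capacity: int = 10_000
    buffer: list = field(default_factory=list)

    def td_error(self, t: Transition) -> float:
        bootstrap = 0.0 if t.done else self.gamma * t.next_value
        return t.reward + bootstrap - t.value

    def add(self, t: Transition) -> None:
        # Retain only positive-TD-error samples, evicting the oldest when full.
        if self.td_error(t) > 0:
            self.buffer.append(t)
            if len(self.buffer) > self.capacity:
                self.buffer.pop(0)

    def sample(self, batch_size: int) -> list:
        k = min(batch_size, len(self.buffer))
        return random.sample(self.buffer, k) if k else []


if __name__ == "__main__":
    buf = PositiveTDReplayBuffer()
    buf.add(Transition(state=[0, 1], action=3, reward=1.0,
                       value=0.2, next_value=0.5, done=False))   # kept
    buf.add(Transition(state=[1, 0], action=1, reward=-1.0,
                       value=0.8, next_value=0.1, done=True))    # filtered out
    print(len(buf.buffer))  # -> 1
```

In an actual PPO loop, such replayed samples would be mixed into the update batches alongside freshly collected rollouts; how the paper weights or schedules that mixing is not specified in the abstract.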