Keywords
Reinforcement learning
Computer science
Dynamic priority scheduling
Scheduling (production processes)
Two-level scheduling
Fair-share scheduling
Flow shop scheduling
Rate-monotonic scheduling
Distributed computing
Job shop scheduling
Fixed-priority preemptive scheduling
Artificial intelligence
Industrial engineering
Mathematical optimization
Computer network
Engineering
Mathematics
Routing (electronic design automation)
Quality of service
Authors
Renke Liu, Rajesh Piplani, Carlos Toro
Identifier
DOI: 10.1016/j.cor.2023.106294
Abstract
The manufacturing industry is experiencing a revolution in the creation and utilization of data, and this abundance of industrial data creates a need for data-driven techniques to implement real-time production scheduling. However, existing dynamic scheduling techniques have mainly been developed to solve problems of invariable size and are incapable of addressing the increasing volatility and complexity of practical production scheduling problems. To facilitate near real-time decision-making on the shop floor, we propose a deep multi-agent reinforcement learning-based approach to solve the dynamic job shop scheduling problem. A double deep Q-network algorithm, attached to decentralized scheduling agents, is used to learn the relationships between production information and scheduling objectives and to make near real-time scheduling decisions. The proposed framework utilizes a centralized-training, decentralized-execution scheme and a parameter-sharing technique to tackle the non-stationarity problem in the multi-agent reinforcement learning task. Several enhancements are also developed, including a novel state and action representation that can handle size-agnostic dynamic scheduling problems, a chronological joint-action framework that alleviates the credit-assignment difficulty, and knowledge-based reward-shaping techniques that encourage cooperation. A simulation study shows that the proposed architecture significantly improves learning effectiveness and delivers superior performance compared to existing scheduling strategies and state-of-the-art deep reinforcement learning-based dynamic scheduling approaches.
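As a rough illustration of how a double deep Q-network update can be shared across decentralized scheduling agents under centralized training, the minimal sketch below pools transitions from all agents and applies the double-DQN target with a single shared network; the network architecture, state dimensionality, action count, and hyperparameters are illustrative assumptions, not the authors' implementation.

# Minimal sketch: double DQN update with parameter sharing across agents.
# All sizes and hyperparameters below are assumed for illustration only.
import torch
import torch.nn as nn

STATE_DIM, N_ACTIONS, GAMMA = 16, 8, 0.95   # assumed state size, action count, discount

def make_qnet():
    # One Q-network shared by every scheduling agent (parameter sharing).
    return nn.Sequential(nn.Linear(STATE_DIM, 64), nn.ReLU(),
                         nn.Linear(64, N_ACTIONS))

online_net, target_net = make_qnet(), make_qnet()
target_net.load_state_dict(online_net.state_dict())
optimizer = torch.optim.Adam(online_net.parameters(), lr=1e-3)

def double_dqn_update(states, actions, rewards, next_states, dones):
    # Transitions are pooled from all agents (centralized training).
    # The online network selects the greedy next action ...
    next_actions = online_net(next_states).argmax(dim=1, keepdim=True)
    # ... and the target network evaluates it (the "double" estimator).
    with torch.no_grad():
        next_q = target_net(next_states).gather(1, next_actions).squeeze(1)
        targets = rewards + GAMMA * (1.0 - dones) * next_q
    q = online_net(states).gather(1, actions.unsqueeze(1)).squeeze(1)
    loss = nn.functional.smooth_l1_loss(q, targets)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Example call with a random batch of 32 pooled transitions:
loss = double_dqn_update(torch.randn(32, STATE_DIM),
                         torch.randint(N_ACTIONS, (32,)),
                         torch.randn(32),
                         torch.randn(32, STATE_DIM),
                         torch.zeros(32))

At execution time, each scheduling agent would act greedily (or epsilon-greedily) with the shared online network on its own local state, which is what the decentralized-execution half of the scheme refers to.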