Computer science
Reinforcement learning
Robustness (evolution)
Heuristic
Scheduling (production processes)
Flow-shop scheduling
Mathematical optimization
Job-shop scheduling
Artificial intelligence
Machine learning
Metro train timetable
Mathematics
Biochemistry
Gene
Operating system
Chemistry
Authors
Felix Grumbach, Anna Müller, Pascal Reusch, Sebastian Trojahn
Identifiers
DOI:10.1007/s10845-022-02069-x
Abstract
This proof-of-concept study provides a novel method for robust-stable scheduling in dynamic flow shops based on deep reinforcement learning (DRL) implemented with OpenAI frameworks. In realistic manufacturing environments, dynamic events endanger baseline schedules, which can require cost-intensive re-scheduling. Extensive research has been done on methods for generating proactive baseline schedules that absorb uncertainties in advance, and on balancing the competing metrics of robustness and stability. Recent studies presented exact methods and heuristics based on Monte Carlo experiments (MCE), both of which are very computationally intensive. Furthermore, approaches based on surrogate measures were proposed, which do not explicitly consider uncertainties and robustness metrics. Surprisingly, DRL has not yet been scientifically investigated for generating robust-stable schedules in the proactive stage of production planning. The contribution of this article is a proposal on how DRL can be applied to manipulate operation slack times by stretching or compressing planned durations. The method is demonstrated on different flow shop instances with uncertain processing times, stochastic machine failures, and uncertain repair times. Through a computational study, we found that DRL agents achieve about 98% of the result quality of traditional metaheuristics while requiring only about 2% of the computation time. This is a promising advantage for use in real-time environments and supports the idea of improving proactive scheduling methods with machine-learning-based techniques.
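The core mechanism described in the abstract is that an agent stretches or compresses planned operation durations, which changes the slack available to absorb uncertain processing times. The following is a minimal toy sketch of that idea, not the authors' implementation: the function names, the list-based schedule representation, and the per-operation stretch factors are all illustrative assumptions.

```python
# Toy illustration (hypothetical, not the paper's code): an agent action is a
# vector of stretch/compress factors applied to planned operation durations.
# The resulting slack is the buffer of each planned duration over the
# expected (uncertain) processing time.

def apply_stretch(planned_durations, factors):
    """Return adjusted planned durations after stretching/compressing."""
    return [d * f for d, f in zip(planned_durations, factors)]

def operation_slack(adjusted, expected):
    """Per-operation slack: planned buffer beyond expected processing time."""
    return [a - e for a, e in zip(adjusted, expected)]

planned = [4.0, 3.0, 5.0]    # baseline planned durations
expected = [3.5, 3.0, 4.5]   # expected processing times under uncertainty
action = [1.2, 1.0, 0.9]     # agent's stretch (>1) / compress (<1) factors

adjusted = apply_stretch(planned, action)
slack = operation_slack(adjusted, expected)
```

Stretching an operation (factor > 1) buys robustness against delays at the cost of a longer plan, while compressing (factor < 1) tightens the schedule; the learning problem is choosing these factors to balance robustness and stability, which in the paper is trained against instances with stochastic machine failures and repair times.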