Reinforcement learning
Computer science
Scheduling (production processes)
Job-shop scheduling
Job shop
Benchmark (surveying)
Minimization
Artificial intelligence
Mathematical optimization
Industrial engineering
Flow-shop scheduling
Engineering
Metro train timetable
Mathematics
Operating system
Programming language
Geography
Geodesy
Authors
Pierre Tassel, Martin Gebser, Konstantin Schekotihin
Source
Journal: Cornell University - arXiv
Date: 2021-04-08
Citations: 44
Identifier
DOI: 10.48550/arxiv.2104.03760
Abstract
Scheduling is a fundamental task occurring in various automated systems applications, e.g., optimal schedules for machines on a job shop allow for a reduction of production costs and waste. Nevertheless, finding such schedules is often intractable and cannot be achieved by Combinatorial Optimization Problem (COP) methods within a given time limit. Recent advances of Deep Reinforcement Learning (DRL) in learning complex behavior enable new COP application possibilities. This paper presents an efficient DRL environment for Job-Shop Scheduling -- an important problem in the field. Furthermore, we design a meaningful and compact state representation as well as a novel, simple dense reward function, closely related to the sparse make-span minimization criteria used by COP methods. We demonstrate that our approach significantly outperforms existing DRL methods on classic benchmark instances, coming close to state-of-the-art COP approaches.
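The abstract describes a dispatching-style DRL environment with a compact state and a dense reward closely tied to make-span minimization. The sketch below is a minimal, hypothetical illustration of that general idea, not the paper's actual environment: the class name JobShopEnv, its methods, the state fields, and the particular reward (the negative increase of the partial schedule's makespan, so the episode return equals the negative final makespan) are all assumptions made for illustration.

```python
# Hypothetical minimal sketch of a dispatching-style job-shop scheduling
# environment with a dense reward; NOT the authors' implementation.
# An instance is a list of jobs; each job is an ordered list of
# (machine, duration) operations that must run in sequence.

import random
from typing import List, Tuple

Instance = List[List[Tuple[int, int]]]  # jobs -> ordered (machine, duration) ops


class JobShopEnv:
    """Dispatch one job per step; the reward is the negative increase of the
    current partial-schedule makespan, so the episode return equals
    -makespan (one dense surrogate for the sparse makespan objective)."""

    def __init__(self, instance: Instance):
        self.instance = instance
        self.num_machines = 1 + max(m for job in instance for m, _ in job)
        self.reset()

    def reset(self):
        self.next_op = [0] * len(self.instance)        # next operation index per job
        self.job_ready = [0] * len(self.instance)      # earliest start time per job
        self.machine_free = [0] * self.num_machines    # earliest free time per machine
        self.makespan = 0
        return self._observation()

    def _observation(self):
        # Illustrative compact state: per-job progress and ready times,
        # plus per-machine free times.
        return {
            "next_op": list(self.next_op),
            "job_ready": list(self.job_ready),
            "machine_free": list(self.machine_free),
        }

    def legal_actions(self):
        return [j for j, job in enumerate(self.instance) if self.next_op[j] < len(job)]

    def step(self, job: int):
        machine, duration = self.instance[job][self.next_op[job]]
        start = max(self.job_ready[job], self.machine_free[machine])
        end = start + duration
        self.next_op[job] += 1
        self.job_ready[job] = end
        self.machine_free[machine] = end
        # Dense reward: how much this dispatch pushed out the makespan.
        new_makespan = max(self.makespan, end)
        reward = -(new_makespan - self.makespan)
        self.makespan = new_makespan
        done = not self.legal_actions()
        return self._observation(), reward, done, {"makespan": self.makespan}


if __name__ == "__main__":
    # Tiny 3-job / 3-machine instance driven by a random dispatching policy.
    instance = [
        [(0, 3), (1, 2), (2, 2)],
        [(0, 2), (2, 1), (1, 4)],
        [(1, 4), (2, 3), (0, 1)],
    ]
    env = JobShopEnv(instance)
    obs, done, total = env.reset(), False, 0
    while not done:
        obs, r, done, info = env.step(random.choice(env.legal_actions()))
        total += r
    print("return:", total, "makespan:", info["makespan"])  # return == -makespan
```

Under this decomposition, maximizing the cumulative reward is equivalent to minimizing the final makespan, which is one simple way to turn the sparse objective into a dense per-step signal; the paper's own state representation and reward function differ in their details.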