[1] Michael Pinedo. Scheduling. Vol. 29. Springer, 2012.
[2] Jeffrey D. Ullman. “NP-complete scheduling problems”. In: Journal of Computer and System Sciences 10.3 (1975), pp. 384–393.
[3] Kenneth R. Baker. Introduction to sequencing and scheduling. John Wiley & Sons, 1974.
[4] Eugene L. Lawler et al. “Sequencing and scheduling: Algorithms and complexity”. In: Handbooks in Operations Research and Management Science 4 (1993), pp. 445–522.
[5] David Silver et al. “Mastering the game of Go with deep neural networks and tree search”. In: Nature 529.7587 (2016), p. 484.
[6] Wei Zhang and Thomas G. Dietterich. “A reinforcement learning approach to job-shop scheduling”. In: IJCAI. Vol. 95. Citeseer, 1995, pp. 1114–1120.
[7] M. Emin Aydin and Ercan Öztemel. “Dynamic job-shop scheduling using reinforcement learning agents”. In: Robotics and Autonomous Systems 33.2-3 (2000), pp. 169–178.
[8] Zhiping Peng et al. “Random task scheduling scheme based on reinforcement learning in cloud computing”. In: Cluster Computing 18.4 (2015), pp. 1595–1607.
[9] Matthew E. Taylor and Peter Stone. “Transfer learning for reinforcement learning domains: A survey”. In: Journal of Machine Learning Research 10.Jul (2009), pp. 1633–1685.
[10] Volodymyr Mnih et al. “Playing Atari with deep reinforcement learning”. In: arXiv preprint arXiv:1312.5602 (2013).
[11] Manuel J. Pereira Lopes and J. M. Valério de Carvalho. “A branch-and-price algorithm for scheduling parallel machines with sequence dependent setup times”. In: European Journal of Operational Research 176.3 (2007), pp. 1508–1527.
[12] Yves Robert and Frédéric Vivien. Introduction to scheduling. CRC Press, 2009.
[13] Ethel Mokotoff. “Parallel machine scheduling problems: A survey”. In: Asia-Pacific Journal of Operational Research 18.2 (2001), p. 193.
[14] John H. Blackstone, Don T. Phillips, and Gary L. Hogg. “A state-of-the-art survey of dispatching rules for manufacturing job shop operations”. In: The International Journal of Production Research 20.1 (1982), pp. 27–45.
[15] T. C. E. Cheng and C. C. S. Sin. “A state-of-the-art review of parallel-machine scheduling research”. In: European Journal of Operational Research 47.3 (1990), pp. 271–292.
[16] Michael H. Rothkopf. “Scheduling independent tasks on parallel processors”. In: Management Science 12.5 (1966), pp. 437–447.
[17] Min Ji et al. “Multitasking parallel-machine scheduling with machine-dependent slack due-window assignment”. In: International Journal of Production Research 57.6 (2019), pp. 1667–1684.
[18] Zhi-Long Chen and Warren B. Powell. “Solving parallel machine scheduling problems by column generation”. In: INFORMS Journal on Computing 11.1 (1999), pp. 78–94.
[19] Johannes Bartholomeus Gerardus Frenk and A. H. G. Rinnooy Kan. “The rate of convergence to optimality of the LPT rule”. In: Discrete Applied Mathematics 14.2 (1986), pp. 187–197.
[20] J. B. G. Frenk and A. H. G. Rinnooy Kan. “The asymptotic optimality of the LPT rule”. In: Mathematics of Operations Research 12.2 (1987), pp. 241–254.
[21] Kai Li et al. “Parallel machine scheduling problems in green manufacturing industry”. In: Journal of Manufacturing Systems 38 (2016), pp. 98–106.
[22] Daniel Kowalczyk and Roel Leus. “A branch-and-price algorithm for parallel machine scheduling using ZDDs and generic branching”. In: INFORMS Journal on Computing 30.4 (2018), pp. 768–782.
[23] Javad Behnamian, Mostafa Zandieh, and S. M. T. Fatemi Ghomi. “Parallel-machine scheduling problems with sequence-dependent setup times using an ACO, SA and VNS hybrid algorithm”. In: Expert Systems with Applications 36.6 (2009), pp. 9637–9644.
[24] Jun Pei et al. “Single-machine and parallel-machine serial-batching scheduling problems with position-based learning effect and linear setup time”. In: Annals of Operations Research 272.1-2 (2019), pp. 217–241.
[25] Burak Gökgür, Brahim Hnich, and Selin Özpeynirci. “Parallel machine scheduling with tool loading: a constraint programming approach”. In: International Journal of Production Research 56.16 (2018), pp. 5541–5557.
[26] Jamal Shahrabi, Mohammad Amin Adibi, and Masoud Mahootchi. “A reinforcement learning approach to parameter estimation in dynamic job shop scheduling”. In: Computers & Industrial Engineering 110 (2017), pp. 75–82.
[27] Wei Han, Fang Guo, and Xichao Su. “A Reinforcement Learning Method for a Hybrid Flow-Shop Scheduling Problem”. In: Algorithms 12.11 (2019), p. 222.
[28] Linus Schrage. “Letter to the editor—a proof of the optimality of the shortest remaining processing time discipline”. In: Operations Research 16.3 (1968), pp. 687–690.
[29] Peng Si Ow and Thomas E. Morton. “The single machine early/tardy problem”. In: Management Science 35.2 (1989), pp. 177–191.
[30] A. O. Putnam et al. “Updating critical ratio and slack time priority scheduling rules”. In: Production and Inventory Management 12.4 (1971), pp. 51–72.
[31] R. W. Conway. “Priority dispatching and work in progress inventory in a job workshop”. In: Journal of Industrial Engineering 16 (1965), pp. 123–130.
[32] “Critical ratio (CR) rule”. In: Encyclopedia of Production and Manufacturing Management. Ed. by P. M. Swamidass. Boston, MA: Springer US, 2000, p. 137. ISBN: 978-1-4020-0612-8. DOI: 10.1007/1-4020-0612-8_197. URL: https://doi.org/10.1007/1-4020-0612-8_197.
[33] Martin L. Puterman. Markov decision processes: discrete stochastic dynamic programming. John Wiley & Sons, 2014.
[34] Donald E. Kirk. Optimal control theory: an introduction. Courier Corporation, 2004.
[35] Christopher J. C. H. Watkins and Peter Dayan. “Q-learning”. In: Machine Learning 8.3-4 (1992), pp. 279–292.
[36] Richard S. Sutton, Andrew G. Barto, et al. Introduction to reinforcement learning. Vol. 135. MIT Press Cambridge, 1998.
[37] Yann LeCun et al. “Gradient-based learning applied to document recognition”. In: Proceedings of the IEEE 86.11 (1998), pp. 2278–2324.
[38] David Silver et al. “Mastering the game of Go without human knowledge”. In: Nature 550.7676 (2017), pp. 354–359.
[39] Timothy P. Lillicrap et al. “Continuous control with deep reinforcement learning”. In: arXiv preprint arXiv:1509.02971 (2015).
[40] Carlos Florensa et al. “Reverse curriculum generation for reinforcement learning”. In: arXiv preprint arXiv:1707.05300 (2017).