In cycle-dependent production systems, such as flow line systems, machine breakdowns are a major risk that can lead to complete production standstills. To ensure the necessary level of availability of production entities, multiple maintenance scheduling strategies exist. On a higher level of abstraction, maintenance approaches can be grouped into strategies for corrective maintenance (CM) and preventive maintenance (PM). In CM, activities are initiated after a fault or machine breakdown is detected, whereas in PM, activities are scheduled at regular time intervals with the aim of replacing components before a fault occurs. In order to reduce the number of unnecessarily executed actions, condition-based maintenance (CBM) leverages information about the current condition of the production environment: actions are only executed if the physical condition of an asset justifies the maintenance activity. Due to the implications of machine breakdowns, it is essential for business operators that maintenance scheduling strategies are easy to interpret, reliable, and trustworthy. Simple scheduling rules and heuristics are easy to understand and are therefore often used to schedule maintenance tasks. While these strategies are interpretable for humans, they lack flexibility and the capability to adapt to complex production environments. Since maintenance scheduling can be seen as a long-term optimization over a series of short-term decisions, it fits the reinforcement learning (RL) setting, where an intelligent agent learns by interacting with an environment. Compared to simple maintenance scheduling rules, RL is highly flexible and able to leverage information about the production environment. At the same time, it is not as simple and intuitive as scheduling rules, and a comprehensive consideration of the behaviour of RL-based maintenance strategies is needed in order to use them in real-world production systems.
Therefore, this work uses RL to learn condition-based maintenance scheduling strategies and evaluates their behaviour in different production settings. Our main contributions are:
Implementation of a Double Deep Q-Network for CBM scheduling.
Formal problem description and evaluation of the influence of maintenance and production downtime costs in reward modeling for CBM.
Comprehensive analysis and evaluation of the learned policies for maintenance scheduling.
This paper is structured as follows: In Section II the fundamentals of deep RL as well as related work regarding RL for maintenance scheduling are presented. Section III describes the RL-based maintenance scheduling concept of this work and Section IV illustrates the numerical results. Section V provides a conclusion and an outlook on future research.
In this section we first introduce the fundamentals of RL. Then, we review related work regarding the use of RL methods for maintenance scheduling.
II-A Reinforcement Learning
RL is a branch of machine learning where an agent learns a sequential decision problem by interacting with an environment in order to maximize a numeric reward. The decision problem can be formalized as a Markov Decision Process (MDP) with tuple $(\mathcal{S}, \mathcal{A}, \mathcal{P}, \mathcal{R}, \gamma)$, where $\mathcal{S}$ is the set of states of the environment, $\mathcal{A}$ denotes the set of actions and $\mathcal{P}(s_{t+1} \mid s_t, a_t)$ the transition probability distribution, which specifies for every state $s_t \in \mathcal{S}$ and action $a_t \in \mathcal{A}$ the probability of the next state $s_{t+1}$, where $t$ refers to a discrete time index. For every $(s_t, a_t)$ the agent receives a reward $r_t$, given by $\mathcal{R}(s_t, a_t)$. Through the sequential interaction of the agent in the MDP, a series of subsequent tuples $(s_t, a_t, r_t, s_{t+1})$ forms a trajectory $\tau$. The return $G(\tau) = \sum_t \gamma^t r_t$ of a trajectory is the discounted sum of rewards received by the agent during the interaction, where $\gamma \in [0, 1]$ is a discount factor that scales the importance between intermediate and future rewards. The goal of RL is to learn a policy $\pi$, which maximizes the expected return over all trajectories, $\mathbb{E}_{\tau}[G(\tau)]$. If the transition probability function $\mathcal{P}$ is known, the optimal policy can be obtained by dynamic programming. Since the prerequisite of a known $\mathcal{P}$ is often not satisfied, RL allows learning a policy without knowing $\mathcal{P}$. Often used RL methods can be categorized into two groups: value-based and policy-based methods. Value-based methods aim to learn an approximation of the Q-function $Q(s, a)$, which maps every state-action pair $(s, a)$ to the expected future reward. The policy is then derived by acting greedily with respect to the estimated Q-values, i.e., $\pi(s) = \arg\max_{a \in \mathcal{A}} Q(s, a)$.
On the other hand, policy-based methods try to directly learn the policy without formulating a Q-function. Therefore, the policy is described by a parametric function $\pi_{\theta}$ and optimized using optimization methods such as gradient descent. Modelling of $Q$ and $\pi$ is an essential task in RL. If the state-action space is sufficiently small, tabular methods can be used to store a value for every state-action pair $(s, a)$. As the required size of the table grows rapidly with an increase of $|\mathcal{S}|$ and $|\mathcal{A}|$, their use is restricted to problems with limited complexity. In more complex settings it is common to use parametric models with parameters $\theta$ for representing $Q$ and $\pi$. Over the last years a lot of research has gone into the usage of deep neural networks (DNNs) as function approximators for $Q$ and $\pi$ [5, 6]. DNNs enable RL to scale to problems with high-dimensional state and action spaces and have produced remarkable success stories. Mnih et al. demonstrated that a variant of Q-Learning, the Deep Q-Network (DQN), could learn to play a collection of Atari games by using the screen images as input. In continuous environments, policy-based algorithms with DNNs, such as Proximal Policy Optimization (PPO), have shown remarkable results in domains like robotics and navigation.
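As a minimal illustration of the value-based idea above, acting greedily with respect to estimated Q-values can be sketched as follows (the Q-values are made-up numbers, not outputs of the paper's networks):

```python
import numpy as np

# Hypothetical Q-value estimates of one state for three actions.
q_values = np.array([1.2, 3.5, 0.7])

def greedy_action(q):
    """Derive the policy by acting greedily w.r.t. the Q-values."""
    return int(np.argmax(q))

best = greedy_action(q_values)  # index of the action with the highest Q-value
```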
II-B RL in Maintenance Scheduling
The interest of the manufacturing research community in the application of RL to solve complex decision problems in the area of production has increased rapidly in recent years. In the area of maintenance planning, [9, 10, 11, 12, 13] use tabular Q-learning to learn maintenance policies for single production machines or two-machine systems with one intermediate buffer. Whereas some of these works consider only two actions (maintenance, no maintenance), others differentiate between corrective and preventive maintenance. Further work uses a tabular Q-learning algorithm to learn a preventive maintenance strategy for a small flow line system consisting of three machines with two intermediate buffers.
Since tabular methods suffer from the curse of dimensionality, they cannot be applied to the more complex production systems found in real-world applications. This work therefore addresses the use of function approximators to learn condition-oriented maintenance policies for flow line systems, which scale to problems with high-dimensional state and action spaces. Since another major challenge in real-world RL applications is the explainability of policies, this work focuses on the analysis of the learned policies and their applicability in different production settings. It thereby makes a contribution towards developing effective, robust and trustworthy policies in complex production settings with high-dimensional state and action spaces.
RL with function approximation in the context of a job shop has also been considered, using PPO for maintenance planning of parallel single machines. Other work considers inter-machine dependencies in a flow line system using the value-based Double Deep Q-Network (DDQN) algorithm, where the production system is modeled as a dynamic system in a state-space representation. Instead of a state-space representation, our work uses a discrete event simulation for modeling the production environment, which is more flexible for modeling complex systems and taking stochastic elements into account.
III RL-Agent Formulation
In this section the methodology for using RL for condition-oriented maintenance scheduling in flow line systems is described.
III-A Production System
This work considers a flow line system consisting of $n$ different machines with intermediate buffers. Each machine $i$ is described by a unique process time $p_i$, degradation rate $d_i$ and a buffer size $b_i$ of the upstream buffer. The production system is modelled as a discrete event simulation with simulation time $t$. When a machine is operational, it takes a part from the upstream buffer and starts processing it. While processing, the machines can degrade. The degradation process is formulated as a discrete Markov process, shown in Fig. 1. Each machine has a condition state $c_i$, representing the condition of the machine. At the beginning of the simulation, every machine starts in the best possible condition. The transition probability with which a machine changes from its current condition state $c_i$ to $c_i + 1$ is given by the degradation rate $d_i$. The degradation process is triggered at each simulation time step in which the machine is in operation. If a machine reaches the breakdown state $c_{max}$ it breaks down. The Markov process is used within the simulation environment to obtain subsequent states and is not explicitly used as part of a dynamics model of the production system to determine the optimal strategy via dynamic programming. Since dynamic programming requires a tabular representation, it is not applicable for larger $\mathcal{S}$ and $\mathcal{A}$. Investigating model-based RL to use known dynamics models for planning purposes is part of future research. Before reaching the breakdown state, machines can receive CBM. If a machine is already in state $c_{max}$, only CM can be performed to get the machine working again. After a part has been processed by a machine, it is put into the downstream buffer, where it remains until the downstream machine can process it. For modelling the production system the discrete event simulation package SimPy is used.
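The per-step degradation described above can be sketched as a stochastic state update (a simplified sketch; the function and variable names are ours, not taken from the paper's implementation):

```python
import random

def degrade(condition, rate, breakdown_state, rng=random.random):
    """One step of the discrete Markov degradation process: while the
    machine operates, its condition worsens from c to c + 1 with
    probability `rate`; once the breakdown state is reached, the
    machine stays broken until it is maintained."""
    if condition >= breakdown_state:
        return condition           # broken: no further degradation
    if rng() < rate:
        return condition + 1       # stochastic transition to the next state
    return condition
```

With rate 1.0 the machine degrades every operating step, with rate 0.0 never, which makes the sketch easy to check deterministically.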
The goal of the RL agent is to schedule maintenance activities for the previously described production system in order to maximize a production-specific objective function. Since the availability of maintenance resources is constrained in practice, we do not allow parallel execution of maintenance activities. The term maintenance resource is used to indicate the availability of both a skilled maintenance employee and all necessary technical equipment, such as spare parts, to successfully carry out the job. To control the interaction of the agent with the production environment, so-called decision points are defined. If a decision point occurs, the simulation stops and does not start again until the agent has performed an action. The simulation then continues to run until the next decision point is reached. A decision point occurs if two conditions are fulfilled: (i) a maintenance resource is available and (ii) the condition state of a machine is above a critical state $c_{crit}$. The critical state of a machine is introduced to benchmark the RL policies against a FIFO-policy (First In First Out), where the machines are maintained in the sequence in which they request maintenance. The value for $c_{crit}$ is defined empirically. For training the RL policies, the threshold state is set to the minimum to allow the greatest possible scope for decision-making; therefore, only condition (i) is relevant in the RL setting. At a decision point, the agent can either perform a maintenance action on one of the machines or choose the idle action, where nothing is done. For $n$ machines the size of the action space is thereby given by $|\mathcal{A}| = n + 1$. The state representation of the MDP consists of the current condition $c_i$ of every machine and the buffer levels of all machines. With machine-specific buffer sizes $b_i$, the size of $\mathcal{S}$ grows as the product over all machines of the number of condition states and possible buffer levels.
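The state and action encoding described above can be sketched as follows (a sketch of the representation, not the paper's exact encoding; names are ours):

```python
def build_state(conditions, buffer_levels):
    """MDP state: the current condition of every machine followed by
    the fill level of every buffer."""
    return tuple(conditions) + tuple(buffer_levels)

def action_space_size(n_machines):
    """One maintenance action per machine plus the idle action."""
    return n_machines + 1
```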
For the implementation of the agent, a DDQN is used. A DDQN is a variant of a DQN with a target network and experience replay, but slightly differs in the way actions are evaluated and selected. Compared to a DQN, the values of the target network are only used for the evaluation of the current greedy action; the values obtained by the online network are instead used to select the action. This procedure reduces the common problem of overestimating Q-values due to estimation errors. The target Q-value for the update procedure of the DDQN is given by

$y_t = r_t + \gamma \, Q_{\theta^{-}}\big(s_{t+1}, \arg\max_{a} Q_{\theta}(s_{t+1}, a)\big),$

where $\theta$ denotes the parameters of the online network and $\theta^{-}$ those of the target network. DQNs and their variants, such as the DDQN, have shown great achievements in various environments with discrete state and action spaces, such as Atari games, and in various applications in the production domain. Since the problem formulated in this work is modelled as a discrete environment with discrete state and action spaces, a DDQN is used for the agent implementation.
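The decoupling of action selection and evaluation in the DDQN target can be sketched with numpy, where the Q-value arrays stand in for the outputs of the online and target networks for the next state (a simplified sketch with made-up values, not the paper's implementation):

```python
import numpy as np

def ddqn_target(reward, gamma, q_online_next, q_target_next, done=False):
    """Double DQN target: the online network selects the greedy next
    action, the target network evaluates it."""
    if done:
        return float(reward)
    a_star = int(np.argmax(q_online_next))            # selection: online net
    return float(reward + gamma * q_target_next[a_star])  # evaluation: target net
```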
III-C Reward Design
The goal of maintenance scheduling is to ensure a high level of machine availability while incurring as little maintenance-related cost as possible. Since ensuring a high level of availability of the production system comes with the cost of regularly performing maintenance activities, it often conflicts with the short-term operational goal of high utilization on the shop floor. These considerations must be taken into account when designing the reward for the RL agent. Many different reward functions are possible depending on the operational focus. For the purpose of this paper, two different reward functions are defined and compared. The first reward function only focuses on the output quantity of the production system: after every action, the intermediate reward is given by the difference between the produced parts at the simulation time of the current decision point and the following one. In contrast, the second reward function is more sophisticated and also considers the maintenance costs besides the output of the production system.
The second reward function is based on three different scenarios (scenarios A-C) and comprises three different cost terms: the CM costs, the CBM costs and a cost factor for the production loss, which is scaled by the simulation duration between the current decision point and the next decision point. The scaling is done to prevent disproportionate penalization of maintenance actions due to the production downtime costs incurred in each simulation step. Without this scaling, there is a risk that the agent will wait only to avoid being penalized. In addition, a penalty factor is defined. Scenarios A and B occur if the agent chooses the idle action and does not perform maintenance. Since the idle action is not favorable if machines are broken, the agent gets a negative reward equal to the penalty factor if at least one machine is in the breakdown state (scenario B) but is not punished if none of the machines are broken down at the time the next decision point is reached (scenario A). Scenario C covers the case where the agent schedules either CBM or CM actions; the reward is then calculated as a cost function of the CM, CBM and production loss cost factors. The values for the cost factors and the penalty factor are chosen iteratively through comprehensive experiments.
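Our reading of the three reward scenarios can be sketched as follows; all cost values below are placeholders, since the paper's tuned values are not reproduced here, and the function name is ours:

```python
def cost_based_reward(idle, any_breakdown, corrective,
                      c_cm=80.0, c_cbm=20.0, c_p=1.0, dt=1, penalty=100.0):
    """Sketch of the scenario-based cost reward:
    A: idle action, no machine broken       -> 0
    B: idle action, at least one breakdown  -> -penalty
    C: CM or CBM scheduled                  -> -(maintenance cost
         + production loss scaled by the time dt to the next decision point)"""
    if idle:
        return -penalty if any_breakdown else 0.0
    maintenance_cost = c_cm if corrective else c_cbm
    return -(maintenance_cost + c_p * dt)
```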
In order to show the applicability of the methodology to a wider range of flow line systems, the production system is implemented in two different configurations, shown in Fig. 2. In configuration I the machines are characterized by high variance in their process times; the degradation rates and buffer sizes also differ significantly between the machines. In contrast to configuration I, a classical, synchronized flow line system with low variation in $p_i$, $d_i$ and $b_i$ is considered in configuration II. The values for $p_i$, $d_i$ and $b_i$ are chosen such that they fit the duration of the maintenance activities and degradation occurs at moderate simulation times. The durations of the maintenance actions are adapted from the literature, with CM taking more simulation time steps than CBM. The durations differ because preventive maintenance can usually be performed faster, while CM takes longer since it is more likely that the reason for the breakdown is unknown or that necessary equipment is not in stock. The duration of the idle action is one simulation time step. The threshold for the critical state $c_{crit}$ of the FIFO-policy is determined empirically by performing trials with all possible values of the threshold. The threshold that leads to the highest production rate or lowest maintenance costs is then used for the FIFO-policy; the best performing threshold differs between configuration I and configuration II.
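The FIFO benchmark described above can be sketched as a simple selection rule (a sketch under our reading of the policy; names and the `None`-as-idle convention are ours):

```python
def fifo_action(request_queue, conditions, critical_state):
    """FIFO benchmark policy: machines that exceeded the critical
    threshold are served in the order in which they requested
    maintenance; if no request is pending, the idle action (None)
    is chosen."""
    for machine in request_queue:                  # oldest request first
        if conditions[machine] >= critical_state:  # still above threshold
            return machine
    return None
```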
In order to train the DDQN algorithm, 3,000 episodes are performed, each covering a fixed simulation horizon. The training results for configuration I are shown in Fig. 3, where the total number of produced parts and the generated maintenance costs per simulation run over the training episodes are smoothed over 100 episodes.
For the online network and the target network, fully connected DNNs with two hidden layers and ReLU activation are used. The hyperparameters (HP) of the DDQNs are selected using Bayesian optimization with a Gaussian process as surrogate for the objective function. As objective, the maximum total reward smoothed over 100 episodes is optimized. The optimization is performed for both reward functions, and therefore two DDQNs with different HP are used for comparison. For one reward setting the number of neurons of the hidden layers is set to 17 and 11, for the other to 14 and 18. The size of the replay memory is set to 100,000 for both DDQNs, with batch sizes of 151 and 137, respectively. During training, the target network parameters are updated every 200 episodes in one setting and every 98 episodes in the other. The discount factor $\gamma$ is set to 0.870 and 0.993, respectively. As optimization algorithm, Adam is used to minimize the mean squared error loss, with the initial learning rates tuned separately for the two settings. To ensure exploration during the training phase, an $\epsilon$-greedy policy is used. The value of $\epsilon$ describes the probability with which the agent executes a random action instead of acting greedily with respect to $Q$. At the beginning of training the agent starts with $\epsilon = 1$, and $\epsilon$ then decays by a constant factor every episode, tuned separately for the two settings, until a minimum value is reached. During training with one of the reward functions, the performance drops after around 1,300 episodes, see Fig. 3(a). This phenomenon is known as catastrophic forgetting. To avoid negative implications of catastrophic forgetting, the model parameters are constantly saved and only updated if the performance increases. In the training with the other reward function, catastrophic forgetting does not occur; the reward constantly increases and finally converges after around 1,500 episodes. It is noticeable that the number of decision points, i.e. the number of actions performed by the agent, correlates with the performance of the agent. This is due to the increased selection of the idle action as the agent learns to wait for the optimal time to perform maintenance actions. The training of the policies for configuration II is carried out in the same manner using the same HP as for the DDQN trained for configuration I.
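The multiplicative epsilon decay with a lower bound described above can be sketched as follows (decay factor and floor are placeholders; the paper tunes both separately for the two reward settings):

```python
def epsilon(episode, start=1.0, decay=0.995, floor=0.01):
    """Epsilon-greedy exploration schedule: start at `start`, multiply
    by `decay` every episode, and never drop below `floor`."""
    return max(floor, start * decay ** episode)
```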
To compare the learned policies, they are applied for maintenance scheduling in both configurations over 100 different simulation episodes. As a benchmark, the FIFO-policy and a random policy are additionally applied. Fig. 4 shows for every episode in (a), (b) the total number of produced parts and in (c), (d) the total maintenance costs for configuration I and configuration II. The average performance over all 100 episodes and the resulting production rates are shown in Table I. Assuming ideal machine conditions without degradation, the maximum output quantity of each system configuration is calculated as the simulation time divided by the longest machine process time, i.e. the bottleneck of the line. For all three metrics the policy trained with the cost-aware reward achieves the best results in both configurations, see Table I. The policy trained on output alone shows two main disadvantages. First, while it acts sufficiently well in configuration I, it acts poorly in the synchronized flow line configuration II. Second, the associated maintenance costs are the highest of all four policies. This behavior is due to the simple implementation of the reward function, which does not consider costs.
Table I reports the production rate, the number of produced parts (#parts) and the maintenance costs of the compared policies.
IV-B Policy Evaluation
A major challenge for the use of RL in real-world applications is the desire of system operators for explainable policies and actions. Maintenance scheduling is nowadays performed by human operators who use experiential knowledge and easy-to-understand heuristics for their decision-making. To use RL systems for maintenance scheduling, it is essential that human operators are able to understand the behavior of an autonomously learned policy. This is especially relevant if the system finds an alternative or unexpected approach to solving the scheduling task. The behavior of the learned policies is therefore evaluated in two dimensions: the number of maintenance actions performed and their timing.
IV-B1 Number of Maintenance Actions
Technical components and production machines vary in their useful life and their vulnerability to breakdowns. For distributing limited maintenance resources, this has to be taken into account by maintenance scheduling policies. In configuration I the machines have different degradation rates, whereas in configuration II all machines degrade at the same rate. Fig. 5 shows that for the learned policies and the FIFO-policy, the average number of performed CBM actions per simulation run correlates with the degradation rate of the individual machines. The number of CBM actions with the output-only policy is significantly higher than for the cost-aware policy and the FIFO-policy. Since the associated costs are not considered in the output-only reward, this behavior is expected. The cost-aware policy, on the other hand, accounts for maintenance-associated costs directly and does not show this behavior. In conclusion, the cost-aware policy achieves the highest production rate while generating the lowest costs at the same time, see Table I. Moreover, it has the lowest average number of unwanted CM actions, see Table II.
IV-B2 Timing of Maintenance Actions
The timing of the performed maintenance actions is another dimension for evaluating the behavior of the learned maintenance scheduling policies. For preventive actions, the machine condition at which the CBM action is executed is crucial. In Fig. 6 the average machine condition at the time a CBM action is executed is displayed for configurations I and II. The cost-aware policy schedules CBM actions at an average condition state of 6.1 to 7.7, whereas the average condition of the benchmark FIFO-policy is between 5.7 and 6.5. The cost-aware policy therefore waits longer until CBM actions are scheduled and makes the best use of the remaining useful life (RUL) of the single machines in both configurations. The output-only policy, in contrast, schedules CBM actions at a condition state between 2.5 and 7.6 and does not use the RUL sufficiently. This phenomenon can be attributed to the fact that the output-only policy never chooses the idle action and performs CBM actions instead. While the cost-aware policy performs on average 176 actions (46.1 CBM, 2.2 CM, 127.7 idle) in configuration I, the output-only policy performs only up to 73 actions (70.6 CBM, 2.4 CM) and never chooses the idle action, see Table II.
Another difference between the policies is the timing of the performed corrective maintenance actions, which have to be executed when a machine breaks down. In Fig. 7 the performed CM actions for a representative episode are displayed over the simulation steps. The cost-aware policy uses only one CM action in configuration I and two in configuration II and therefore shows the lowest number of corrective actions. For the FIFO-policy, a maintenance holdup can be observed at the end of the simulation run: CM actions increasingly arise in both production configurations towards the end of the simulation. This behavior is not observed for the cost-aware policy. In Fig. 8 the machine conditions and performed actions for the last 150 simulation time steps of configuration II in Fig. 7, indicated by the red box, are shown. In order to avoid maintenance holdup situations, the cost-aware policy (Fig. 8(a)) prioritizes necessary CBM actions. When one machine breaks down, the policy first maintains the other machines that are also in higher condition states, in order to prevent a series of CM actions, and only executes the CM action for the broken machine after all other machines are in non-critical states. The remaining policies are not competitive and are therefore not considered in the detailed analysis.
In this paper, reinforcement learning is used to learn condition-oriented maintenance scheduling strategies for a flow line system, which are evaluated in a synchronous and an asynchronous production system configuration. The best learned policy achieves better results than a benchmark FIFO-policy in both configurations. Modelling of the reward is crucial both for obtaining the intended behavior and for understanding the learned policies. The evaluation of the policies shows that neither RL-based policy exhibits anomalous or odd behavior, and the actions can be reconstructed and understood by human operators. Further research will focus on methods to evaluate learned maintenance policies in a more general setting and on leveraging prior knowledge as well as model-based RL to apply maintenance scheduling to more complex production settings with multi-variant products, towards the goal of using RL for maintenance scheduling in real-world applications.
This work was supported by the Baden-Wuerttemberg Ministry for Economic Affairs, Labour and Housing (Project »KI-Fortschrittszentrum LERNENDE SYSTEME«).
The implementation of the simulation model and the performed experiments are available at: https://github.com/ral94/rlcbm
-  M. Huber, “Predictive maintenance,” in Data Science, ser. Edition TDWI, U. Haneke, S. Trahasch, M. Zimmer, and C. Felden, Eds. Heidelberg: dpunkt, 2021, pp. 255–244.
-  A. K. Jardine, D. Lin, and D. Banjevic, “A review on machinery diagnostics and prognostics implementing condition-based maintenance,” Mechanical Systems and Signal Processing, vol. 20, no. 7, pp. 1483–1510, 2006.
-  R. S. Sutton and A. Barto, Reinforcement learning: An introduction, second edition ed., ser. Adaptive computation and machine learning. Cambridge, MA and London: The MIT Press, 2018.
-  D. P. Bertsekas, Dynamic programming and optimal control, fourth ed. ed., ser. Athena scientific optimization and computation series. Belmont, Mass.: Athena Scientific, 2016, vol. 1.
-  K. Arulkumaran, M. P. Deisenroth, M. Brundage, and A. A. Bharath, “Deep reinforcement learning: A brief survey,” IEEE Signal Processing Magazine, vol. 34, no. 6, pp. 26–38, 2017.
-  A. Lazaridis, A. Fachantidis, and I. Vlahavas, “Deep reinforcement learning: A state-of-the-art walkthrough,” Journal of Artificial Intelligence Research, vol. 69, pp. 1421–1471, 2020.
-  V. Mnih, K. Kavukcuoglu, D. Silver, A. Graves, I. Antonoglou, D. Wierstra, and M. Riedmiller, “Playing atari with deep reinforcement learning.” [Online]. Available: http://arxiv.org/pdf/1312.5602v1
-  J. Schulman, F. Wolski, P. Dhariwal, A. Radford, and O. Klimov, “Proximal policy optimization algorithms.” [Online]. Available: http://arxiv.org/pdf/1707.06347v2
-  X. Wang, H. Wang, C. Qi, and A. I. Sivakumar, “Reinforcement learning based predictive maintenance for a machine with multiple deteriorating yield levels,” in Journal of Computational Information Systems, 2014, pp. 1553–9105.
-  X. Wang, H. Wang, and C. Qi, “Multi-agent reinforcement learning based maintenance policy for a resource constrained flow line system,” Journal of Intelligent Manufacturing, vol. 27, no. 2, pp. 325–333, 2016.
-  W. Zheng, Y. Lei, and Q. Chang, “Reinforcement learning based real-time control policy for two-machine-one-buffer production system,” in Proceedings of the ASME 2017 12th International Manufacturing Science and Engineering Conference, ser. Volume 3: Manufacturing Equipment and Systems. American Society of Mechanical Engineers, 2017.
-  Z. Ling, X. Wang, and F. Qu, “Reinforcement learning-based maintenance scheduling for resource constrained flow line system,” in 2018 IEEE 4th International Conference on Control Science and Systems Engineering (ICCSSE). IEEE, 2018, pp. 364–369.
-  M. Knowles, D. Baglee, and S. Wermter, “Reinforcement learning for scheduling of maintenance,” in Research and Development in Intelligent Systems XXVII, M. Bramer, M. Petridis, and A. Hopgood, Eds. London: Springer London, 2011, pp. 409–422.
-  J. Huang, Q. Chang, and N. Chakraborty, “Machine preventive replacement policy for serial production lines based on reinforcement learning,” in 2019 IEEE 15th International Conference on Automation Science and Engineering (CASE). IEEE, 2019, pp. 523–528.
-  G. Dulac-Arnold, D. Mankowitz, and T. Hester, “Challenges of real-world reinforcement learning,” in Proceedings of the 36th International Conference on Machine Learning (ICML), ser. Proceedings of Machine Learning Research, PMLR, Ed., Long Beach, California USA, 2019.
-  A. Kuhnle, J. Jakubik, and G. Lanza, “Reinforcement learning for opportunistic maintenance optimization,” Production Engineering, vol. 13, no. 1, pp. 33–41, 2019.
-  J. Huang, Q. Chang, and J. Arinez, “Deep reinforcement learning based preventive maintenance policy for serial production lines,” Expert Systems with Applications, vol. 160, p. 113701, 2020.
-  J. Zou, Q. Chang, Y. Lei, and J. Arinez, “Production system performance identification using sensor data,” IEEE Transactions on Systems, Man, and Cybernetics: Systems, vol. 48, no. 2, pp. 255–264, 2018.
-  H. van Hasselt, A. Guez, and D. Silver, “Deep reinforcement learning with double q-learning,” in Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence (AAAI-16), AAAI Press, Ed., Phoenix, Arizona, 2016, pp. 2094–2100.
-  V. Mnih, K. Kavukcuoglu, D. Silver, A. A. Rusu, J. Veness, M. G. Bellemare, A. Graves, M. Riedmiller, A. K. Fidjeland, G. Ostrovski, S. Petersen, C. Beattie, A. Sadik, I. Antonoglou, H. King, D. Kumaran, D. Wierstra, S. Legg, and D. Hassabis, “Human-level control through deep reinforcement learning,” Nature, vol. 518, no. 7540, pp. 529–533, 2015.
-  B. Waschneck, A. Reichstaller, L. Belzner, T. Altenmüller, T. Bauernhansl, A. Knapp, and A. Kyek, “Optimization of global production scheduling with deep reinforcement learning,” Procedia CIRP, vol. 72, pp. 1264–1269, 2018.
-  D. P. Kingma and J. Ba, “Adam: A method for stochastic optimization,” in 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings, Y. Bengio and Y. LeCun, Eds., 2015.