Supply chains are coordinated flows of materials from suppliers to the locations where they are consumed. As one of the major supply chain simulation methodologies, DES models a system as it evolves over time by a representation in which the state variables change instantaneously at distinct points in time Law (2014). The method is commonly used to analyze complex processes that are challenging for closed-form analytical methods. DES is widely used in supply chain management analysis, such as manufacturing process and logistics planning Jahangirian et al. (2010); Lu et al. (2012); Ziarnetzky et al. (2014). Simulations enable the design of the supply chain and the evaluation of supply chain management prior to the implementation of the system, supporting what-if analysis Thierry et al. (2008).
A DES model is rarely run only once; multiple simulation runs are usually required for various purposes. As input parameters, e.g., the processing time of a product, are often random variables, multiple runs with different realizations of the random input variables are required in order to obtain statistically meaningful outputs. Furthermore, if a sensitivity analysis is applied to a simulation model to select the input variables that have the largest impact on the response variables, another layer of multiple runs is needed to vary input parameters such as different distributions of processing times Li (1989); Montevechi et al. (2007). Optimization is another technique that can be combined with DES to determine optimal input control variables, e.g., production capacity. Each iteration of an optimization requires multiple simulation runs for a set of system parameters Fu et al. (2014); Kapuscinski & Tayur (1999); Rosenblatt et al. (1993); Yan & Wang (2007).
In summary, a large number of DES runs is often required for an analysis task. As the scale of supply chains grows, for example, due to globalization and inter-enterprise collaboration Baldwin (2012); Simatupang & Sridharan (2002), some simulation models may take hours to complete one run. Therefore, the time to perform an analysis with thousands, sometimes hundreds of thousands, of DES runs for a complex supply chain can be prohibitively long when standard MC is used.
As an approximation of DES, the full simulated time can be divided into periods of a given length, called time buckets. Time bucket based simulation does not model the occurrence of each event; instead, it counts the number of events happening in each time bucket, at the end of which the system state is updated using the model equations. Therefore, in this approach, events can be considered to occur instantaneously at the beginning of a period Thierry et al. (2008). Note that our terminology "time bucket" is consistent with part of the supply chain literature, e.g., Thierry et al. (2008), while it could equivalently be denoted "time interval", "time leap", etc. The size of the time bucket can be defined either as a fixed value or in a time-dependent fashion. When the size of a time bucket is small enough that each bucket contains at most one event, the model is equivalent to DES. The advantage of the time bucket method is that it is more scalable than DES when the size of a time bucket is relatively large. The disadvantage is that, due to the aggregation of multiple events, some interactions between events are lost; thus the model is not as accurate as DES, and is less commonly used. The τ-leap method Gillespie (2001); Cao et al. (2006); Moraes et al. (2014); Chatterjee et al. (2005) is essentially a stochastic time bucket method that has been widely used to accelerate the simulation of chemical reactions modeled by continuous-time Markovian processes. Rather than simulating every discrete event, the τ-leap method simulates the stochastic change of the system states at discrete time points, using a constant propensity function to sample the number of processes happening during a time bucket. Although the simulation results are biased due to the time buckets, significant acceleration can be achieved under an acceptable tolerance.
Recently, the D-leap method has been proposed to accelerate the simulation of delayed chemical reactions (Bayati et al., 2009) by introducing a queue of reactions to account for the delays.
Our innovations are as follows. First, we extend the D-leap method to consider features of manufacturing supply chains and logistics networks in operations research. The resulting L(logistic)-leap method is able to account for production time, transportation time, limited capacity, pull systems and priority production. Second, we use an MLMC method based on time buckets to propagate the uncertainties in a supply chain, where most of the computational work is shifted from the expensive models, e.g., DES, to the cheap models defined by large time buckets. The proposed approach is able to match the model accuracy of DES while overcoming its scalability limitation with the help of MLMC. To the best of our knowledge, this is the first time this type of leap method and multilevel Monte Carlo have been used in supply chain management, which opens the door to more applications in operations research.
Section 2 is a literature review of DES and leap methods. Section 3 describes the accelerated approximation of DES using the time bucket method and the detailed algorithms for the simulation of supply chain features. Section 4 introduces the L-leap method, a time bucket method specifically for simulating logistic systems driven by stochastic processes. Section 5 presents an MLMC method in which the samples are drawn from populations simulated using different sizes of time buckets. In Section 6 we show the accuracy and the gain in computational speed using extensive examples. The first example concerns a push system where the production does not depend on orders. The second example is a pull system with mixed orders of spare parts and final products, which also considers transportation delays. The third example considers the uncertainty propagation of the push system under parametric uncertainties. The fourth example considers the uncertainty propagation of the pull system under both parametric uncertainties and those driven by stochastic processes. The quantities of interest are the final delivery time of a fixed amount of orders and the number of deliveries over a specified time period. We show that the error of the predictive simulations with respect to (w.r.t.) the true solution provided by DES diminishes as we decrease the size of the time bucket. We achieve a speedup of several orders of magnitude in computing the expected quantities of interest using the MLMC method based on time buckets and L-leap, compared with standard MC sampling.
2 Literature review
2.1 Discrete event simulation in logistics and supply chains
DES is widely used in logistics and supply chain management as a tool to simulate the change of system states over a time period of interest. For example, it has been used for supply chain network structures Alfieri & Brandimarte (1997); Bhaskaran (1998); Byrne & Heavey (2006), inventory management Angulo et al. (2004); Beamon & Chen (2001); Biswas & Narahari (2004); Ceroni & Nof (2002); Dong & Chen (2005b, a); Fleisch & Tellkamp (2005); Fleischmann et al. (2003) and supplier selection Ding et al. (2005, 2006); Jain & Ervin (2005), etc. (see Tako & Robinson (2012) for a detailed survey on the application of DES in the context of logistics and supply chains). In DES, the system states change instantaneously at discrete time points when relevant events take place. While the definition of events is subject to the goal of the modeling, systematic approaches can be followed to design such a simulation Law et al. (1991). The dominant type of DES is next-event time-advance, where the time clock always leaps to the most imminent time among the times of occurrence of future events in an event list. The simulation complexity of a DES is therefore proportional to the number of events in the real system during a simulated period of time. Distributed computation can be used to accelerate a DES. Specifically, the implementation of numerical operators, such as the random number generator and the manipulation of the event list, can be parallelized. A network can also be decomposed into several sub-networks whose simulations can be parallelized. Many articles have been devoted to these topics; detailed surveys can be found in Fujimoto (1990); Mustafee et al. (2014); Fujimoto et al. (2017); Taylor (2019).
2.2 Time bucket method
In the time bucket method, the system clock advances by a fixed time bucket and the system state changes instantaneously only at the end of each time bucket, considering all the events occurring during the corresponding time bucket. The time bucket method can be viewed as a special case of next-event time-advance DES Law (2014); Thierry et al. (2008). However, the procedure and analysis of the time bucket method have rarely been elaborated in the operations research literature.
2.3 τ-leap method for the approximation of DES in chemical and biochemical systems
The τ-leap method Gillespie (2001) is a widely used time-bucket method in the simulation of discrete chemical reactions. Rather than advancing the system clock to the next time instance when a reaction process takes place (the Gillespie algorithm Gillespie (1976)), τ-leap predicts the number of reactions in a time interval using a random variable,

$K_j \sim \mathcal{P}\big(a_j(t)\,\tau\big), \qquad (1)$

where $K_j$ represents the total number of process-$j$ events happening during $[t, t+\tau)$, $\mathcal{P}\big(a_j(t)\,\tau\big)$ is a Poisson random variable with parameter $a_j(t)\,\tau$, and $a_j$ is the rate function evaluated at time $t$. Based on the number of completed processes, we can update the system states, e.g., the number of products. Note that if $a_j$ changes during the time period $[t, t+\tau)$, the method introduces a time discretization error. However, the total simulation complexity is proportional to the number of time intervals, and the method can be much faster than simulating every event for a given numerical tolerance. Efforts have been made to enhance the efficiency and accuracy of the original version of τ-leap, for instance, efficient time interval selection Cao et al. (2006), postleap checking Anderson (2008) and hybrid methods Moraes et al. (2014). In the τ-leap method, the reaction products are generated instantaneously, without delay, after molecules collide. Its extension to delayed chemical reactions leads to the D-leap method Bayati et al. (2009).
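To make the mechanics concrete, the following minimal sketch (our own illustration, not code from the cited works; the single decay channel with propensity 0.1·x is made up) applies one τ-leap update per bucket:

```python
import math
import random

def sample_poisson(lam, rng):
    # Knuth's multiplication method; adequate for the small rates used here
    threshold = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= threshold:
            return k
        k += 1

def tau_leap_step(x, propensity, nu, tau, rng):
    """One τ-leap bucket: the propensity a(x) is frozen over [t, t + tau),
    the number of events is Poisson(a(x) * tau), and each event changes
    the state by nu."""
    k = sample_poisson(propensity(x) * tau, rng)
    return x + k * nu

rng = random.Random(0)
x = 100
for _ in range(10):
    x = tau_leap_step(x, lambda s: 0.1 * s, -1, 0.5, rng)  # decay channel
```

Because the propensity is re-evaluated only once per bucket, the cost scales with the number of buckets rather than the number of events, which is exactly the trade-off described above.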
2.4 D-leap method for simulation of delayed chemical and biochemical systems
D-leap Bayati et al. (2009) is an extension of τ-leap in that it considers delayed chemical reactions. It counts the number $K_j$ of reactions happening during a time interval using (1), and the reactants are instantaneously consumed, hence the system state is updated by

$X_i \leftarrow X_i - \sum_j \nu_{ij}\, K_j, \quad i = 1, \ldots, N, \qquad (2)$

where $X = (X_1, \ldots, X_N)$ is the system state, $N$ is the number of system states, and $\nu_{ij}$ is the consumption of $X_i$ by a single event of reaction $j$. The earliest production time is the current time $t$ plus the given minimum delay of reaction $j$, while the latest finishing time of the $K_j$ units of reaction $j$ is $t$ plus the given maximum delay of reaction $j$. During any time interval which overlaps with the span between the earliest production time and the latest finishing time, the number of possibly accomplished reactions $j$, which is a fraction of $K_j$, is defined by a binomially distributed random variable. Consequently, the system state is updated in a similar fashion as (2); the production, however, leads to a positive change of the number of products. This constitutes the basis for our L-leap method in the logistics and supply chain context, where the lead time of a process is usually non-negligible.
2.5 Monte Carlo in supply chain management
The Monte Carlo method is widely used to propagate uncertainties from random inputs to a typical quantity of interest in a supply chain Schmitt & Singh (2009); Deleris & Erhun (2005); Wong et al. (2008); Kim et al. (2011); Jung et al. (2004). Many variance reduction techniques Kleijnen et al. (2013); Law et al. (1991), e.g., antithetic variates and control variates, have been applied together with DES to increase the statistical efficiency of the uncertainty propagation. MLMC emerged recently as a powerful sampling method to accelerate the computation of an expectation by drawing samples from a hierarchy of models (Giles, 2008, 2015); the control variate technique can be viewed as the simplest form of MLMC, consisting of two levels (Giles, 2015). In (Anderson & Higham, 2012; Moraes et al., 2016), multilevel Monte Carlo and τ-leap are applied to the stochastic simulation of chemical reactions to achieve better scalability.
3 Time bucket approximation of DES for supply chains
Supply chains transport materials from the suppliers to the places where they are consumed. The raw materials usually get consumed and transformed into intermediate products. We define $\mathcal{P}$ as the set of all parts, $\mathcal{S}$ as the set of all supplied raw materials and $\mathcal{F}$ as the set of all final products. E.g., in the supply chain of the first numerical example (see Figure 3), we have eight parts, among which three are raw materials and one is the final product. Hence, $|\mathcal{P}| = 8$, $|\mathcal{S}| = 3$, $|\mathcal{F}| = 1$. The actual supply chain can be modeled as discrete mass flows with limited capacities, i.e., the production rate of each process is bounded from above. Specifically, a supply chain can be defined by a set of processes, each of which can be described as follows
where for each process $j$, $M_j$ is the number of consumed parts, $N_j$ is the number of produced parts, $A_{jm}$ denotes the $m$-th consumed part and $B_{jn}$ denotes the $n$-th produced part. We use $\alpha_{jm}$ and $\beta_{jn}$ as the integer weights corresponding to parts $A_{jm}$ and $B_{jn}$, respectively. That is, if process $j$ happens once, it consumes $\alpha_{jm}$ units of part $A_{jm}$ and produces $\beta_{jn}$ units of part $B_{jn}$. Note that the symbols $A_{jm}$ and $B_{jn}$ are "local" w.r.t. process $j$; a part may have different local symbols in different processes, e.g., in the first example the same part appears under different local symbols in processes one and three. By definition, $\mathcal{P}$ contains all the parts in the system, hence we have $A_{jm} \in \mathcal{P}$ and $B_{jn} \in \mathcal{P}$ for all $j$, $m$ and $n$. We denote by $X \in \mathbb{N}^{|\mathcal{P}|}$ the state vector recording the number of parts, where $|\cdot|$ denotes the set cardinality. Note that the mapping between $\mathcal{P}$ and the set of components of $X$ is bijective. Based on the definitions of $A_{jm}$ and $B_{jn}$, $X_{A_{jm}}$ is the number of the part consumed as $A_{jm}$ in the $j$-th process and, similarly, $X_{B_{jn}}$ is the number of the part produced as $B_{jn}$ in the $j$-th process. For clarity, in the following text we use $X_{A_{jm}}$ and $X_{B_{jn}}$ to denote these quantities. At time $t$, the process $j$ occurs at a rate which is given by
$r_j(t) = \min\!\left(c_j,\ \min_{m = 1, \ldots, M_j}\ \left\lfloor \frac{X_{A_{jm}}(t)}{\alpha_{jm}\,\Delta t} \right\rfloor\right), \qquad (4)$

where $c_j$ is the maximum production rate (capacity) associated with the $j$-th process, $\Delta t$ is the size of the time bucket, and $\lfloor \cdot \rfloor$ is the floor function, which rounds down to the nearest integer. Equation (4) shows that the process can achieve its maximum rate if all its materials have enough inventory in this time bucket; otherwise, the rate is reduced to the value which prevents negative values of the consumed materials during this time bucket. Equation (4) denotes a deterministic production rate, while other alternatives are possible. For example, the consumption rate in (4) can be modeled by incorporating the expected arrivals of the consumed parts d'Apice et al. (2010), i.e., when one part is out of stock, its availability in the next time bucket may be estimated by checking the scheduled productions in the preceding processes over this time bucket. If the number of scheduled productions plus the current inventory is large enough, the maximum capacity $c_j$ can still be achieved; otherwise, the consumption rate can be adjusted to match the sum of the expected arrivals of the part and its current inventory. However, we use (4) in our approach since it is safer at preventing negative inventory values.
It is also worth mentioning that if a single part can be consumed by multiple processes, we need to define a distribution policy among those processes. In this case, one way to modify equation (4) is as follows

$r_j(t) = \min\!\left(c_j,\ \min_{m = 1, \ldots, M_j}\ \left\lfloor \frac{X_{A_{jm}}(t)}{\alpha_{jm}\, n_{A_{jm}}\,\Delta t} \right\rfloor\right), \qquad (5)$

where $n_{A_{jm}}$ is the number of processes consuming part $A_{jm}$; this assumes that the part is evenly consumed by all the processes requiring it.
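A minimal sketch of how such a capped rate can be computed (our own illustration; the dictionary-based state and the part names are hypothetical, not the paper's notation or data):

```python
import math

def process_rate(capacity, consumed, inventory, dt):
    """Deterministic production rate of one process in the spirit of eq. (4):
    the capacity is capped so that no consumed part can go negative during
    the time bucket of size dt.
    consumed: {part: weight alpha}; inventory: {part: units on hand}."""
    rate = capacity
    for part, alpha in consumed.items():
        rate = min(rate, math.floor(inventory[part] / (alpha * dt)))
    return rate

def process_rate_shared(capacity, consumed, inventory, consumers, dt):
    """Variant in the spirit of eq. (5): a part's inventory is split evenly
    among the consumers[part] processes that require the same part."""
    rate = capacity
    for part, alpha in consumed.items():
        rate = min(rate, math.floor(inventory[part] / (alpha * consumers[part] * dt)))
    return rate
```

For instance, a process with capacity 10 that consumes 2 units of a part per event is throttled to rate 3 when only 6 units are in stock for a unit-length bucket.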
The time bucket simulation of a supply chain process can be split into two major phases: 1) material consumption: each process consumes the necessary parts instantaneously according to its production rate $r_j$; 2) delayed production: due to the required processing time (lead time) of each process, all productions incur delays after the materials have been instantaneously consumed. Note that our consumption-delayed-production framework follows the modeling procedure of the D-leap method for delayed chemical reaction network simulation in Bayati et al. (2009). Importantly, in the context of logistics, we enrich the D-leap method with several salient features of supply chains: transportation, order-driven production (pull systems), and priority production. We describe the time-bucket simulation of consumption and production in detail in Sections 3.1 and 3.2.
The consumption of parts happening in each time bucket is instantaneously taken into account at the beginning of every time bucket. In each time bucket $[t, t + \Delta t)$, the total number of triggered process-$j$ events reads

$K_j = r_j(t)\,\Delta t. \qquad (6)$

The state vector is then updated by the following equation

$X_{A_{jm}} \leftarrow X_{A_{jm}} - \alpha_{jm}\, K_j, \quad m = 1, \ldots, M_j. \qquad (7)$
For the sake of conciseness, we omit the time variable and write $X$ instead of $X(t)$ in the remainder of this paper.
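The consumption step above can be sketched as follows (our own illustration under the assumption of a dictionary-based state; part names are hypothetical):

```python
def consume(state, processes, dt):
    """Instantaneous consumption at the start of a time bucket: for each
    process the event count is the (deterministic) rate times the bucket
    size, and the consumed parts are subtracted from the state.
    processes: list of (rate, {part: alpha}) pairs, with the rate already
    capped as in eq. (4)."""
    counts = []
    for rate, consumed in processes:
        k = int(rate * dt)  # triggered events of this process in the bucket
        for part, alpha in consumed.items():
            state[part] -= alpha * k
        counts.append(k)
    return counts
```

Because the rate was already capped by the available inventory, the subtraction cannot drive any part negative within the bucket.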
At each time point, we check whether the executions of the processes should be completed or not, and estimate the quantity of completions. In the implementation, a queue structure is created to store the necessary information for each of the process batches in the queue: the index of the delayed process, the number of delayed process events $K_j$, the earliest time $T_j$ at which the production can be completed, and the time span $\Delta T_j$ between the earliest and the latest times of the production being completed.
The earliest production time and the total production period of the processes can be computed as follows

$T_j = t + d_j^{\min}, \qquad \Delta T_j = d_j^{\max} - d_j^{\min}, \qquad (8)$

where $d_j^{\min}$ and $d_j^{\max}$ are the minimum and maximum lead times of process $j$, respectively. The definitions are schematically shown in Figure 1.
3.2 Delayed production
Productions are expected as long as the queue is not empty. The simulation algorithm should check if there is any scheduled production due to occur in the current time bucket, i.e., all the queued batches whose earliest completion time $T_j$ falls before the end of the bucket. Assuming that the productions are uniformly distributed over time, the number of completed productions is proportional to the fraction of the current time bucket w.r.t. the total span $\Delta T_j$. Consequently, we update the associated components of the state vector and the queue variables $K_j$, $T_j$ and $\Delta T_j$, respectively. The details of the computations related to delayed production are summarized in Algorithm 2, which is a deterministic version of the production algorithm in (Bayati et al., 2009).
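The release of queued productions can be sketched as follows (a simplified illustration of this step, not the paper's Algorithm 2; the dictionary-based queue entries are our own assumption):

```python
def release_productions(queue, state, t, dt):
    """Release the queued productions due in the bucket [t, t + dt), assuming
    completions are uniformly spread over [earliest, earliest + span].
    Each queue entry is a dict with keys part, beta, remaining, earliest, span."""
    for entry in queue:
        lo = entry["earliest"]
        hi = lo + entry["span"]
        if t + dt <= lo:
            continue  # the batch is not due yet in this bucket
        if t + dt >= hi:
            done = entry["remaining"]  # the whole remaining batch is due
        else:
            # fraction of the remaining span covered by this bucket
            overlap = (t + dt) - max(t, lo)
            done = round(entry["remaining"] * overlap / (hi - max(t, lo)))
        state[entry["part"]] += entry["beta"] * done
        entry["remaining"] -= done
    queue[:] = [e for e in queue if e["remaining"] > 0]
```

A batch of 10 events spread over 10 time units thus releases about half of its production during a bucket covering the first 5 units, and the rest once the bucket passes the latest finishing time.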
3.3 Push system
A supply chain push system, e.g., Material Requirements Planning Ptak & Smith (1994), controls the production flow moving from the supply end to the final retailer end: it first replenishes the raw materials at the supply end, and then starts the production procedure according to its prediction of demands.
Incorporating Algorithm 1 and Algorithm 2, we present Algorithm 3, which simulates a push-system supply chain. Note that we may need to adjust the length of the last time bucket to ensure the simulation stops at the end time $T$ of the simulation (lines 11-13 of Algorithm 3).
3.3.1 Inventory management
Inventory management is usually an important part of a push system. A safety stock is a popular and easy-to-implement remedy to mitigate disruptions in supply-chain operations Law (2014); Shapiro (2006), which can be caused by the temporal variations of product orders and the uncertainties in the supply. One strategy we can use to update the inventory is to add a back order quantity whenever the inventory drops below the safety stock, e.g., ordering up to the level $s_i + q_i$ whenever $X_i < s_i$, where $i$ is a raw material, $s_i$ is the safety stock, and $q_i$ is a constant used as a safeguard for the stock of part $i$. Another possible way is to place a back order of the full amount $s_i + q_i$, which is more resilient towards uncertainties in the supply chain network. On the other hand, when we increase the amount of inventory, we expect increased storage costs. Finding a good balance between the safety stock $s_i$, the safeguard $q_i$, the order delay and the costs remains challenging in practice. The optimal strategy for inventory management is problem specific, and an extensive literature has been devoted to this topic Grossmann (2005); You & Grossmann (2008); Daniel & Rajendran (2005); Shapiro (2006).
Let $t_i$ denote the time when the next supply of part $i$ arrives. Given a constant order lead time, our inventory management can be summarized as in Algorithm 4 for each time when we update the system state.
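An order-up-to sketch of this back-ordering policy (our own simplified illustration, not the paper's Algorithm 4; the function signature and part names are hypothetical):

```python
def reorder(inventory, safety, safeguard, on_order, part, t, lead):
    """Place a back order when the stock of a raw material drops below its
    safety level and no order for it is already pending: the order refills
    the stock up to safety + safeguard units.
    Returns (order_quantity, arrival_time); (0, None) if no order is placed."""
    if inventory[part] < safety[part] and not on_order.get(part, False):
        qty = safety[part] + safeguard[part] - inventory[part]
        on_order[part] = True  # block duplicate orders until arrival
        return qty, t + lead[part]
    return 0, None
```

The pending-order flag prevents the policy from re-ordering every bucket while a shipment is already in transit, which mirrors the role of the next-supply arrival time in the text.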
3.4 Pull system
A pull system, e.g., the Toyota Production System Ohno (1988), also known as just-in-time production or lean manufacturing, is a different policy design of manufacturing supply chains compared with a push design, in that its production and inventory management are driven by incoming orders. In this section, we describe the time bucket algorithms for order projection before we introduce the full time bucket algorithm of the pull system. The inventory management simulation remains the same as described in Section 3.3.1.
3.4.1 Projected order and pull system
Once a demand order is given, a supply chain system first checks whether sufficient inventory exists to meet the demand. If there is not enough inventory to fulfill the demand, the supply chain needs to start the production procedure in order to close the gap. Hence, we need to backtrack to see if the existing inventories of all the intermediate parts can satisfy their own demands.
To guarantee that all the demands are satisfied, the projected accumulated demand $D_i(t_k)$, which includes the number of units of part $i$ consumed in the intermediate processes, should be calculated by the following recursive function

$D_i(t_k) = O_i(t_k) + \sum_{j \,:\, i = A_{jm}} \alpha_{jm} \left\lceil \frac{D_{B_{jn}}(t_k)}{\beta_{jn}} \right\rceil, \qquad (11)$

where $O_i(t_k)$ is the total order of part $i$ accumulated in time up to $t_k$, the $t_k$ are the discrete time points in the simulation, $o_i(t_k)$ is the incoming order of part $i$ at time $t_k$, and $\lceil \cdot \rceil$ is the ceiling function, which rounds up to the nearest integer. The second term of (11) adds, to the direct order of part $i$, the demand associated with those of its "offspring" parts $B_{jn}$.
This recursive projection can be visualized as a process starting from the final product. For example, consider a small supply chain involving four parts, as shown in Figure 2. Assume we have some spare-part orders on parts B and D, each requiring 100 units. The projected demands for parts A, B, C and D can then be obtained by the backward recursion (11).
The projected value $D_i$ indicates the necessary quantity of part $i$ that needs to be produced to satisfy the given orders. The quantity $D_i - O_i$ then represents the least amount of part $i$ that should be consumed in the related processes. The numerical consumption may be larger than necessary for part $i$ during a given time bucket. In this connection, we introduce a boolean variable to control the consumption, whose value is decided by comparing the accumulated consumption of part $i$ with its projected demand; a false value implies that part $i$ has already been consumed sufficiently, and no more consumption should happen to it. The projected order and pull strategy is summarized in Algorithm 5.
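The backward recursion can be sketched as follows (our own illustration; the bill-of-materials encoding, part names and weights are made up and are not the paper's Figure 2 example):

```python
import math

def projected_demand(part, orders, bom):
    """Projected accumulated demand in the spirit of eq. (11): a part's demand
    is its direct order plus the demand induced by its "offspring" parts.
    orders: {part: accumulated direct order};
    bom: {child: [(parent, alpha, beta), ...]} meaning one batch of the process
    producing beta units of `parent` consumes alpha units of `child`."""
    demand = orders.get(part, 0)
    for parent, alpha, beta in bom.get(part, []):
        demand += alpha * math.ceil(projected_demand(parent, orders, bom) / beta)
    return demand
```

Starting from the final product and recursing toward the raw materials reproduces the backward pass described above: direct spare-part orders on intermediate parts simply add to the induced demand.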
3.5 Hybrid system
A hybrid system is a combination of push and pull strategies Hodgson & Wang (1991a, b); Geraghty & Heavey (2004); Ghrayeb et al. (2009). In a hybrid system, some of the production stages are organized by a push strategy due to the low level of uncertainty of the demand from their following stages, while the other stages, e.g., final assembly, are organized by a pull strategy due to a high level of demand uncertainty. The corresponding time bucket implementation is a straightforward system-level combination of the push and pull strategies described in the previous sections.
4 Stochastic time bucket method: L-leap
In the previous sections we presented the deterministic time bucket approximation of DES, where the number of processes happening during a fixed time interval is a deterministic value, i.e., $K_j = r_j(t)\,\Delta t$. Introducing randomness into the simulation also allows us to better understand the potential risks in the supply-chain system. Similar to D-leap (Bayati et al., 2009), we treat both consumption and delayed production as random variables. Note that our framework extends easily to the modeling of uncertainties from other sources, e.g., the demands and supplies.
We use the Poisson distribution to model the number of process events happening in $[t, t + \Delta t)$, with parameter $r_j(t)\,\Delta t$ Law et al. (1991); Bayati et al. (2009):

$K_j \sim \mathcal{P}\big(r_j(t)\,\Delta t\big). \qquad (12)$
Note that other distributions Law et al. (1991) could also be used to model the number of consumptions and productions, which is worth future research and comparison.
In addition, Algorithms 1 and 2 can be easily extended to their stochastic versions using (12) and (13). The stochastic consumption and production can be embedded in the simulation flow of Algorithm 6, which leads to a new stochastic simulation strategy. We call it the L(logistic)-leap method, where we use a constant average production rate and the boolean values associated with the inventory policies and the order projections at time $t$ to predict the productions during $[t, t + \Delta t)$. Note that, compared with the exact simulation of DES, the approximation used here gives us the flexibility to accelerate the computation under a prescribed numerical tolerance. Indeed, we will show that uncertainty propagation in supply chains can be dramatically accelerated without sacrificing accuracy if we use the time bucket simulation in a coordinated way. The L-leap method has a piecewise constant rate function, and its stability can be proved using the approach described in Cao et al. (2004).
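The stochastic counterpart of the deterministic bucket counts can be sketched as follows (our own illustration with a stdlib Poisson sampler; the rate values are made up):

```python
import math
import random

def stochastic_counts(rates, dt, rng):
    """Stochastic version of the bucket counts: K_j ~ Poisson(r_j * dt),
    as in eq. (12), instead of the deterministic K_j = r_j * dt."""
    counts = []
    for r in rates:
        lam = r * dt
        # inverse-transform sampling of a Poisson variate
        u, k, p = rng.random(), 0, math.exp(-lam)
        cdf = p
        while u > cdf:
            k += 1
            p *= lam / k
            cdf += p
        counts.append(k)
    return counts
```

Replacing the deterministic counts by these draws is the only change needed to turn the deterministic consumption step into its L-leap version; the delayed-production phase is randomized analogously.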
5 Uncertainty propagation using time bucket simulation and MLMC
In this section, we describe the problem of forward uncertainty propagation, the MC discretization of an expectation, and the MLMC approach to compute the expectation. MLMC was combined with τ-leap for uncertainty quantification in the context of stochastic chemical reactions in Anderson & Higham (2012); Moraes et al. (2016).
Forward uncertainty propagation is concerned with the estimation of the expected value of a quantity of interest (QoI) $g$; e.g., $g$ can be the delivery time of the final products. The standard MC estimator reads

$\mathbb{E}[g] \approx \frac{1}{N} \sum_{n=1}^{N} g\big(\theta^{(n)}, \omega^{(n)}\big), \qquad (14)$

where $\theta$ is the vector of random parameters, $\omega$ is the noise which perturbs the system states dynamically, and $N$ is the number of samples. The notation $Y_N = \mathcal{O}_{\mathbb{P}}(c_N)$ for a sequence of random variables $Y_N$ indexed by $N$ means that for any $\epsilon > 0$ there exist a finite $M(\epsilon)$ and a finite $N_0(\epsilon)$ such that, for any $N > N_0(\epsilon)$, the probability $\mathbb{P}\big(|Y_N| > M(\epsilon)\, c_N\big)$ is smaller than $\epsilon$. Assigning a tolerance $\varepsilon_s$ and a confidence level $1 - \alpha$ to the statistical error leads to

$\mathbb{P}\left( \left| \mathbb{E}[g] - \frac{1}{N} \sum_{n=1}^{N} g^{(n)} \right| \le \varepsilon_s \right) \ge 1 - \alpha. \qquad (15)$
Considering the Central Limit Theorem (CLT), we can equivalently express (15) using the distribution function of a standard normal. Consequently, we obtain the expected number of samples needed to control the statistical error in probability:

$N \ge \left( \frac{\Phi^{-1}(1 - \alpha/2)}{\varepsilon_s} \right)^{2} \mathbb{V}[g], \qquad (16)$

where $\mathbb{V}[g]$ is the variance of the quantity of interest, $\Phi^{-1}$ is the inverse distribution function of the standard normal distribution, and $\varepsilon_s$ is the tolerance on the absolute error committed by the MC estimator. Then, the total computational cost of a standard MC sampler is

$W_{\mathrm{MC}} = N\, w, \qquad (17)$

where $w$ is the average cost of a single DES.
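The CLT-based sample-size bound can be evaluated directly with the standard library (a small utility of our own, not the paper's code):

```python
from math import ceil
from statistics import NormalDist

def mc_sample_size(variance, tol, alpha=0.05):
    """Number of MC samples so that the statistical error is below tol with
    confidence 1 - alpha, via N >= (Phi^{-1}(1 - alpha/2))^2 * Var[g] / tol^2."""
    c = NormalDist().inv_cdf(1 - alpha / 2)
    return ceil(c * c * variance / (tol * tol))

def mc_total_cost(variance, tol, cost_per_sample, alpha=0.05):
    """Total cost of standard MC: N times the average cost of one DES run."""
    return mc_sample_size(variance, tol, alpha) * cost_per_sample
```

Note the quadratic dependence on the tolerance: halving the tolerance roughly quadruples the required number of DES runs, which is what makes standard MC so expensive here.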
MLMC is optimized in the sense that the total computational cost is minimized for a given tolerance on the numerical error. In the hierarchy of models, high-level models are more accurate and computationally more expensive than low-level models. Provided that the expectation and variance of the difference between the approximated and the true solutions diminish at certain rates as the level increases, we can construct an MLMC sampler which can be several orders of magnitude more efficient than the standard MC method. Note that the standard MC method would put all its samples on the highest level to control the bias of the estimator. Let $g_\ell$ denote the level-$\ell$ approximation of the quantity of interest $g$. Assume that the numerical discretization error is bounded uniformly in the probability space as follows

$|g_\ell - g| = \mathcal{O}\big(\Delta t_\ell^{\,s}\big), \qquad (18)$

where $\Delta t_\ell$ is the size of the time bucket on level $\ell$ and $s$ is the convergence rate of the numerical discretization; the notation $\mathcal{O}(\cdot)$ indexed by $\ell$ is the deterministic version of $\mathcal{O}_{\mathbb{P}}(\cdot)$, which means that there exist a finite $M$ and a finite $\ell_0$ such that $|g_\ell - g| \le M\,\Delta t_\ell^{\,s}$ for any $\ell > \ell_0$.
The expectation in (14) can be rewritten as a telescopic sum as follows

$\mathbb{E}[g_L] = \mathbb{E}[g_0] + \sum_{\ell=1}^{L} \mathbb{E}[g_\ell - g_{\ell-1}], \qquad (20)$

which leads to the MLMC estimator of $\mathbb{E}[g]$,

$\hat{g}_{\mathrm{ML}} = \sum_{\ell=0}^{L} \frac{1}{N_\ell} \sum_{n=1}^{N_\ell} \left( g_\ell^{(n)} - g_{\ell-1}^{(n)} \right), \qquad g_{-1} := 0, \qquad (21)$

whose total error splits into a statistical error and a numerical bias. A heuristic argument for the computational advantage of this estimator is the following: the variance of $g_\ell - g_{\ell-1}$ becomes very small as $\ell$ increases, hence we draw few high-level samples, while most of the samples are shifted to the lower levels where the computations are fast.
Next, we optimize the computational cost of the MLMC estimator for given tolerances on the bias and the statistical error, which read

$\big| \mathbb{E}[g - \hat{g}_{\mathrm{ML}}] \big| \le \varepsilon_b, \qquad \mathbb{P}\big( |\hat{g}_{\mathrm{ML}} - \mathbb{E}[\hat{g}_{\mathrm{ML}}]| \le \varepsilon_s \big) \ge 1 - \alpha, \qquad (23)$

where $\varepsilon_b$ is the tolerance on the bias and $\varepsilon_s$ is the tolerance on the statistical error. Note that we can use the CLT to convert (23) to the following variance constraint:

$\mathbb{V}[\hat{g}_{\mathrm{ML}}] \le \left( \frac{\varepsilon_s}{\Phi^{-1}(1 - \alpha/2)} \right)^{2}. \qquad (24)$
The optimal number of samples on each level can be obtained by minimizing the total cost under the constraint (24) on the variance of the estimator:

$\min_{N_0, \ldots, N_L} \ \sum_{\ell=0}^{L} N_\ell\, w_\ell + \lambda \left( \sum_{\ell=0}^{L} \frac{V_\ell}{N_\ell} - \left( \frac{\varepsilon_s}{\Phi^{-1}(1-\alpha/2)} \right)^{2} \right), \qquad (25)$

where $w_\ell$ is the average computational cost of $g_\ell - g_{\ell-1}$, $V_\ell$ is the variance of the random variable $g_\ell - g_{\ell-1}$, and $\lambda$ is a Lagrange multiplier (by an abuse of notation).
Solving the above minimization problem leads to

$N_\ell = \left( \frac{\Phi^{-1}(1-\alpha/2)}{\varepsilon_s} \right)^{2} \sqrt{\frac{V_\ell}{w_\ell}}\ \sum_{k=0}^{L} \sqrt{V_k\, w_k}. \qquad (26)$

Consequently, the optimal total computational cost of the multilevel estimator is

$W_{\mathrm{ML}} = \left( \frac{\Phi^{-1}(1-\alpha/2)}{\varepsilon_s} \right)^{2} \left( \sum_{\ell=0}^{L} \sqrt{V_\ell\, w_\ell} \right)^{2}. \qquad (27)$
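The optimal allocation $N_\ell \propto \sqrt{V_\ell / w_\ell}$ can be computed as follows (our own utility; the variance and cost values in the test are made up):

```python
from math import sqrt, ceil

def mlmc_allocation(variances, costs, tol_s, c_alpha=1.96):
    """Per-level sample sizes in the spirit of eq. (26): N_l proportional to
    sqrt(V_l / w_l), scaled so that the estimator variance sum(V_l / N_l)
    meets the statistical target (tol_s / c_alpha)^2."""
    target_var = (tol_s / c_alpha) ** 2
    s = sum(sqrt(v * w) for v, w in zip(variances, costs))
    return [ceil(sqrt(v / w) * s / target_var) for v, w in zip(variances, costs)]
```

With these sample sizes, the estimator variance constraint (24) holds by construction, and expensive levels with small difference variance receive only a handful of samples.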
It is common that the variance and cost have the asymptotic bounds $V_\ell = \mathcal{O}(\Delta t_\ell^{\,\beta})$ and $w_\ell = \mathcal{O}(\Delta t_\ell^{-\gamma})$, where $\beta$ and $\gamma$ are the rates which describe the algebraic decrease/growth of the variances and computational costs, respectively. In the case $\beta > \gamma$, the total cost is dominated by $\mathcal{O}(\varepsilon^{-2})$. In the case $\beta < \gamma$, the total cost is dominated by $\mathcal{O}(\varepsilon^{-2 - (\gamma - \beta)/s})$. In the case $\beta = \gamma$, the total cost is $\mathcal{O}(\varepsilon^{-2} (\log \varepsilon)^2)$. Note that in the literature it is common to impose a total tolerance on the mean square error of the MLMC estimator and split the error budget into two parts, $\varepsilon_b$ and $\varepsilon_s$, on the bias and variance (Maravelias & Grossmann, 2003; Giles, 2008, 2015). We give an explicit confidence level to the statistical error control in this study, which is consistent with the literature, e.g., Collier et al. (2015); Moraes et al. (2016); Beck et al. (2018).
When the finest level is chosen to meet the bias tolerance, the complexity of a standard MC sampler is $\mathcal{O}(\varepsilon^{-2 - \gamma/s})$, where the factor $\varepsilon^{-2}$ is proportional to the number of samples of a standard MC and the factor $\varepsilon^{-\gamma/s}$ is proportional to the computational cost of each sample on the highest level $L$. Therefore, the computational complexity of MC (17) is always asymptotically higher than those of the MLMC estimator, namely $\mathcal{O}(\varepsilon^{-2})$, $\mathcal{O}(\varepsilon^{-2 - (\gamma - \beta)/s})$ and $\mathcal{O}(\varepsilon^{-2} (\log \varepsilon)^2)$.
The $g_\ell$ and $g_{\ell-1}$ in (21) should always be computed using the same realization of the random parameters as their inputs, to ensure the correlation between $g_\ell$ and $g_{\ell-1}$. In the cases where the randomness is driven by stochastic processes, we adopt the coupling scheme proposed as Algorithm 2 in Anderson & Higham (2012). The key idea of this algorithm is to use the additivity property of Poisson processes to tightly correlate two processes on different levels.
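The additivity idea can be sketched for a single channel over one coarse bucket spanning several fine buckets (a simplified illustration of the coupling principle, not a transcription of the cited Algorithm 2):

```python
import math
import random

def poisson(lam, rng):
    # Knuth's method; returns 0 when lam == 0
    threshold = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= threshold:
            return k
        k += 1

def coupled_counts(fine_lams, coarse_lam_per_fine, rng):
    """Correlated event counts on two levels via the additivity of Poisson
    random variables: over each fine sub-interval, the common part
    min(fine, coarse) of the two intensities is sampled once and shared by
    both levels, so the level difference has a small variance."""
    k_fine = k_coarse = 0
    for lam_f in fine_lams:
        common = min(lam_f, coarse_lam_per_fine)
        shared = poisson(common, rng)
        k_fine += shared + poisson(lam_f - common, rng)
        k_coarse += shared + poisson(coarse_lam_per_fine - common, rng)
    return k_fine, k_coarse
```

When the fine and coarse intensities coincide, the two level counts are identical, so the telescoping difference contributes zero variance; the levels only decorrelate to the extent that the intensities differ.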
6 Numerical examples
We present in this section four numerical examples with increasing complexity. The first example is a manufacturing material flow simulated using the deterministic and stochastic time bucket methods. The second example is a pull system considering back-ordering, priority delivery, and transportation delays, simulated using time bucket methods. We carry out uncertainty propagation using MLMC in the third and fourth examples. We use MATLAB to implement the time bucket algorithms and build our MLMC code on the original version from https://people.maths.ox.ac.uk/gilesm/mlmc/.
6.1 Time bucket approximations of a simple push supply chain network
We consider a supply chain system for the manufacturing industry, which is schematically shown in Figure 3. It involves five processes and eight parts, and we show the consumption-production relationships in equations (25)-(29). The parts on the left-hand side of the equations are instantaneously consumed when the processes get started, while the parts on the right-hand side are produced after certain delays, characterized by the production time/lead time of each process. The production rate, which describes the capacity of a process, is the number of parts that get processed in a time unit, e.g., one day.
A push system starts production according to its prediction of demand. We assume the following initial conditions: , , , which prescribe the initial inventory levels of ,  and . For all , i.e., the intermediate and final products, we let . We first assume deterministic production rates, which read , , ,  and . We also assume deterministic processing times, i.e., , , specifically , , , , .
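The deterministic time bucket update can be sketched for a single hypothetical process that converts one raw part into one product; the function name, parameters, and the one-process network are illustrative stand-ins for the five-process system of equations (25)-(29). In each bucket, the amount started is limited both by the production capacity and by the available raw inventory, inputs are consumed instantly, and outputs appear after the processing delay.

```python
import math

def time_bucket_push(T, dt, rate, delay, raw0):
    """Deterministic time bucket simulation of one process turning a raw
    part into a product (a sketch, not the paper's full network).

    Hypothetical parameters: `rate` parts per day, a fixed processing
    `delay` in days, and an initial raw inventory `raw0`.  Started lots
    sit in a pipeline and are delivered `delay` days later; the function
    returns the number of products delivered within the horizon T."""
    n_buckets = int(round(T / dt))
    delay_buckets = int(math.ceil(delay / dt))
    raw, done = float(raw0), 0.0
    # lots indexed by the bucket in which they finish
    pipeline = [0.0] * (n_buckets + delay_buckets + 1)
    for k in range(n_buckets):
        started = min(rate * dt, raw)        # capacity- and material-limited
        raw -= started                       # inputs consumed instantly...
        pipeline[k + delay_buckets] += started  # ...outputs after the delay
        done += pipeline[k]                  # collect lots finishing now
    return done
```

With an ample raw inventory the delivered quantity is insensitive to the bucket size whenever the delay is a multiple of dt, which mirrors the convergence behavior discussed for Figure 5.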
Figure 4 shows the time histories of the state vector, which represents the number of each part in the system at any given time, simulated under two different values of the time bucket. Note that the time bucket approximation captures the main dynamical features of the system even when a coarse time bucket size,  days, is used. The monotonic decrease of  stops at  due to the initial inventory level of .  increases monotonically after an initial waiting period attributable to the production delays. The dynamics of the intermediate parts , ,  and  are mainly determined by their consumption and production rates.
Figure 5 shows that the time bucket method converges to the “ground truth” computed by DES as  decreases from  days to  days. The error is smaller than  when the time bucket is smaller than  days.
As shown in the left panel of Figure 6, the absolute error of the 200 days’ production decreases linearly with the time bucket size. The CPU time of the time bucket approximation increases linearly with the number of time buckets over the simulation horizon (the CPU time is averaged over repeated runs).
Next, we use the L-leap method to approximately simulate the stochastic system, in which the state vector is driven by Poisson processes. Figure 7 visualizes  trajectories computed from identical initial data.
In addition, we notice that the average trajectories shift from left to right when we reduce : for example,  is produced in around 300 days when  days, while it is produced in around 270 days when  days. This is due to the artificially delayed availability of upstream parts when the time bucket is coarse.
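A Poisson-driven bucket update of the kind discussed above can be sketched as follows. This is a tau-leap-style illustration in the spirit of the L-leap method, not the paper's full algorithm (which handles coupled reaction channels and production delays); the helper names and parameters are hypothetical.

```python
import numpy as np

def stochastic_bucket_step(raw, rate, dt, rng):
    """One Poisson-driven bucket update: the number of parts processed in
    a bucket of size dt is a Poisson variate with mean rate*dt, truncated
    so it never exceeds the available raw inventory."""
    started = min(int(rng.poisson(rate * dt)), raw)
    return raw - started, started

def mean_output(n_paths, T, dt, rate, raw0, seed=0):
    """Average number of parts completed by time T over n_paths sample
    paths (the processing delay is omitted for brevity)."""
    rng = np.random.default_rng(seed)
    n_buckets = int(round(T / dt))
    totals = []
    for _ in range(n_paths):
        raw, done = raw0, 0
        for _ in range(n_buckets):
            raw, started = stochastic_bucket_step(raw, rate, dt, rng)
            done += started
        totals.append(done)
    return float(np.mean(totals))
```

With an ample raw inventory the mean output approaches rate*T, while a small initial inventory caps every path, which is the truncation effect visible in the confidence bands of Figure 7.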
6.2 Time bucket approximations of a complex pull system
This example is a pull system dealing with mixed orders of spare parts and final products, and it also accounts for transportation. The system receives spare-part orders of , , and  every  days. Meanwhile, the following inventory policy is adopted to refill ,  and : when the inventory of a part falls below , back orders of ,  and  are placed for ,  and , respectively. The delivery delays are ,  and  days for ,  and , respectively, and each of them has an initial inventory of . On top of the spare-part orders, we place three orders of final products on the first day, the  day and the  day, where each order consists of  final products . Upon receipt of the orders of final products, the parts are used with priority for the production of the final products.
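The reorder-point back-order policy described above can be sketched for a single spare part; all numerical parameters in the example below (demand rate, reorder point, order quantity, lead time) are hypothetical stand-ins for the values given in the text.

```python
def pull_inventory(T, dt, demand_rate, reorder_point, order_qty,
                   lead_time, inv0):
    """Time bucket simulation of one part under a reorder-point policy:
    whenever the on-hand inventory falls below the reorder point and no
    back order is outstanding, place one for order_qty units arriving
    lead_time days later.  Returns the final inventory level and the
    number of back orders placed (a sketch with illustrative names)."""
    n_buckets = int(round(T / dt))
    lead_buckets = int(round(lead_time / dt))
    inv, n_orders, on_order = float(inv0), 0, False
    arrivals = {}                             # bucket index -> quantity due
    for k in range(n_buckets):
        arrived = arrivals.pop(k, 0.0)        # receive a back order due now
        if arrived:
            inv += arrived
            on_order = False
        inv -= min(demand_rate * dt, inv)     # spare-part demand this bucket
        if inv < reorder_point and not on_order:
            arrivals[k + lead_buckets] = float(order_qty)   # place back order
            on_order = True
            n_orders += 1
    return inv, n_orders
```

With unit daily demand, a reorder point of 5, an order quantity of 20, and a 4-day lead time, the inventory settles into a 20-day replenishment cycle, so five back orders are placed over a 100-day horizon.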
Additionally, transportation occurs between any two consecutive processes. The transportation of products is modeled as additional processes characterized by transportation rates and transportation delay times (analogous to the production rates and processing times of a production process). After renumbering and augmenting the original set of processes, the new system is shown in Figure 8, where the first five processes are the original ones and the second set of five processes are the transportation processes. For Processes -, the transportation rates  and constant transportation delay times are , , ,  and  days, respectively.
Figure 9 shows the simulated numbers of parts in the system as they evolve in time, using two different time bucket lengths, i.e.,  and  days. The start of the delivery of the final product  has been shifted to a later date compared with the case without transportation. Owing to the newly created transportation processes, we are able to simulate the number of goods in the buffers right after production, during transportation, and in the buffers before their instantaneous consumption by the following process.
Additionally, we simulate a stochastic pull system with mixed orders and transportation using the L-leap method. Figure 10 presents the average number of part  in  days together with its 95% confidence interval. Notably, very large uncertainties appear at the times when the inventory may be refilled.
6.3 Uncertainty propagation using MLMC - push system
We use MLMC to compute the expected number of  delivered in  days in the previous push system. We consider  random parameters: - are the production rates of processes -; , ,  are the initial inventories of -; and  are the processing times of processes -. The parameters are independent and uniformly distributed as follows: