The Port of Newcastle features three coal export terminals coordinated by the Hunter Valley Coal Chain Coordinator (HVCCC): the Kooragang Coal Terminal (KCT) and the Carrington Coal Terminal (CCT), both operated by Port Waratah Coal Services (PWCS), and the Newcastle Coal Infrastructure Group (NCIG) Coal Terminal (NCT). Together, these three terminals constitute the largest coal export operation worldwide by tonnage, with a throughput of over 160 million tonnes in 2014. Thirty-five coal mines, as far as 380 kilometres from the harbour, are connected to the terminals by a rail transportation system employing more than 40 coal trains and feeding over 1,600 coal vessels per year.
The terminals share, on their inbound side, a rail network that connects them to the mines’ load points, and, on their outbound side, a channel that connects them to the Pacific Ocean. Most of the mines in the Hunter Valley are open pits, where coal is mined and stored either at a railway siding located at the mine or at a coal loading facility that may be shared. The coal is then transported to one of the terminals at the Port of Newcastle, almost exclusively by rail, dumped at their dump stations, and stacked on a pad to form stockpiles. Coal extracted from various mines, with different characteristics, is mixed into blended stockpiles to meet particular customers’ specifications. Once a ship berths at a terminal, the appropriate stockpiles are reclaimed and loaded onto it. When fully loaded, the vessel may depart to its destination. Figure 1 illustrates the system under consideration.
The channel of the Port of Newcastle is quite narrow and shallow. For that reason, channel traffic must follow strict rules and procedures to avoid vessel clashes and damage to the ships’ hulls due to contact with the bottom of the channel, especially for larger vessels. That, along with the limited availability of outbound resources (i.e., the number of berths and ship loaders), significantly limits outbound movements at the terminals.
KCT and CCT operate as Cargo Assembly (CA) coal loading terminals. That means that they work in a “pull-based” manner, where the coal blends are assembled and stockpiled based on the demands of the arriving ships. Ideally, for this operation mode, the assembly of the stockpiles for a vessel completes at the time the vessel arrives at a berth (i.e., just-in-time assembly) and the reclaiming of the stockpiles commences immediately. Unfortunately, this does not always happen due to the limited capacities of the resources in the system, such as stockyard space, availability of stackers and reclaimers, in/outbound capacities and channel availability. NCT can operate in a “push-based” manner, where coal is kept pre-blended for longer periods in dedicated stockpiles, owned by specific customers and kept at fixed locations on the NCT stock pads, and is reclaimed and loaded when demand appears. The majority of NCT customers, i.e., mining companies, however, operate in CA mode in their contracted space.
Regarding previous works focused on the HVCCC setting, Savelsbergh and Smith (2014) consider Stockpile Location and Reclaimer Scheduling (SLARS) operations at KCT, accounting for pad assignment and placement, and reclaimer assignment with clash avoidance. Boland et al (2011, 2012) also attempt to solve SLARS, combining construction and mixed integer programming (MIP) based heuristics. These works, however, do not account for the shared resources consumed by coal passing through CCT and NCT, i.e., railing or channel usage. Thomas et al (2013), on the other hand, attempt to control the integrated system, with a single terminal and shared inbound and outbound resources, using a distributed algorithm based on Lagrangian relaxation. Their work models the operations at the terminal in a greatly simplified way and does not consider channel traffic, while including a more sophisticated model of the rail operations. Belov et al (2014) also tackle the integrated system, including in-terminal operations, coal arrival scheduling and channel traffic rules, using constraint programming. Since Thomas et al (2013) and Belov et al (2014) use time-indexed models, their approaches require a sufficiently fine time granularity to be of practical interest. Since the time slots are typically smaller than one hour, and the planning horizons under consideration are typically of the order of weeks, such methods tend to be very computationally demanding. Also, because they rely on external (commercial) optimisation solvers, their performance depends on the chosen solver, and their use is expensive.
This work describes a method that simultaneously schedules coal arrivals at the dump stations, determines build and load periods, and schedules arrival and departure times of the vessels obeying simple channel traffic rules. For KCT, which is the key terminal of the system and responsible for handling two thirds of the volume exported, our method also decides stockpile locations and schedules stockpile reclaiming (avoiding reclaimer clashes). The objective is to maximise the system’s throughput without causing unacceptable vessel delays, which is one of the main challenges faced by the HVCCC. Other interesting and related problems are described in Boland and Savelsbergh (2012).
The centre piece of our method is an enumerative algorithm to solve the SLARS subproblems for each terminal, which is based on the work of Savelsbergh and Smith (2014). It extends their algorithm by introducing rail network and channel considerations and accounting for CCT and NCT as shared resources consumers. A Parallel Genetic Algorithm is then used to introduce solution diversity and improve solution quality. Time is considered in a continuous fashion, and practical planning horizon sizes are handled efficiently. In addition, our approach does not rely on any external solvers, making it cost effective. The proposed algorithms generate very competitive solutions that match or improve solutions produced by state-of-the-art solvers, vastly outperforming them in terms of computational resources (i.e., both in memory usage and running time).
The remainder of this paper is organised as follows: Section 2 details the HVCCC system, our assumptions, and the problem under consideration. Section 3 elaborates on the methods proposed in this work: subsection 3.1 details an extended version of the greedy algorithm proposed by Savelsbergh and Smith (2014) and subsection 3.2 describes a Genetic Algorithm that exploits this method to obtain better solutions. Section 4 describes our experimental design to test the performance of our methods and Section 5 summarises our conclusions.
2 The HVCC
The three terminals in the Port of Newcastle share a rail network that connects the mines’ load points to the terminals, and a channel that allows the ships to access the terminals. In this work, we optimise system-wide operations, which includes the terminals’ inbound and outbound coal flows, taking into account critical restrictions at a sufficient level of detail to ensure that solutions of practical interest are obtained. These critical restrictions are discussed below.
2.1 The Rail Network
The rail network is the largest part of the supply chain infrastructure, connecting the train load points at open cut mines in the Hunter Valley to the terminals at the Port of Newcastle. For use in our methods, the rail network is modelled as a directed graph, with tonnes of coal flowing from the load points to the terminals along a unique path. In the few cases where there are multiple paths, the shortest path (in terms of number of arcs used) is always preferred. Each arc of the graph represents a relevant rail segment, and has a capacity given in tonnes per day. In practice, coal for a single stockpile is transported from a load point to a terminal using trains of a specific size, and rail capacity is given in both number of trains and tonnes per day. However, for simplicity, we choose to schedule tonnes of coal to be delivered daily, without accounting for the number of trains. Scheduling of actual trains is delegated to the above-rail operators and also includes scheduling of crews, fueling, and maintenance. Figure 2 depicts the relevant rail segments of the Hunter Valley. Travel times are also ignored: coal is assumed to be immediately available at the terminals at the requested times as long as enough rail capacity for the path under consideration is available.
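The path-selection rule above (a unique path per load point, with ties broken by the fewest arcs) can be sketched with a breadth-first search. The network and node names below are illustrative placeholders, not the actual Hunter Valley rail segments:

```python
from collections import deque

def shortest_path(arcs, source, sink):
    """Breadth-first search: returns the path from source to sink with the
    fewest arcs, matching the tie-breaking rule described in the text."""
    adj = {}
    for u, v in arcs:
        adj.setdefault(u, []).append(v)
    queue = deque([[source]])
    seen = {source}
    while queue:
        path = queue.popleft()
        if path[-1] == sink:
            return path
        for nxt in adj.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

# Illustrative network: two routes from the mine, the 2-arc one is preferred.
arcs = [("mineA", "junction1"), ("junction1", "port"),
        ("mineA", "junction2"), ("junction2", "junction1")]
print(shortest_path(arcs, "mineA", "port"))  # ['mineA', 'junction1', 'port']
```

In the actual model each arc would additionally carry a capacity in tonnes per day, consumed by the daily coal flows scheduled along these paths.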
2.2 The Channel
The channel in the Port of Newcastle is quite narrow and shallow: it ranges between about 330 and 670 metres in width and is only about 15 metres deep in some areas. For that reason, channel traffic must follow strict rules.
First of all, traffic can only flow on the channel in one direction at a time. All vessels are assumed to travel at the same constant speed, and must be at least 15 minutes apart from each other. Departures and arrivals at a terminal are allowed to happen simultaneously.
Since the channel is relatively narrow, maneuvering on it requires the assistance of tug boats. Therefore, to account for staff limitations, we assume that at most four vessels can be travelling on the channel at any given time.
To avoid damaging their hulls due to contact with the rocky bottom of the channel, large vessels (referred to as cape sized; those with a requested load of at least 100kt) can only depart during a tidal window [HT-90min, HT+30min), where HT is the time of the high tide. In this work, we consider real high tide times obtained using the third-party JTides software (http://www.arachnoid.com/JTides/).
Finally, we assume that vessels take 15 minutes to travel through the channel’s entry area, and 35, 55 and 85 minutes to travel from its end to CCT, KCT and NCT, respectively. The reverse trips also need to be allowed for and take the same amount of time. Figure 3 depicts the relevant channel information.
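The channel parameters above can be collected into a small sketch. The high-tide times used below are illustrative placeholders, not real tide data, and the function names are our own:

```python
# Channel parameters from the text (times in minutes).
ENTRY_TIME = 15                                # entry-area traversal
TRAVEL = {"CCT": 35, "KCT": 55, "NCT": 85}     # channel end to terminal

def in_tidal_window(t, high_tides):
    """A cape-sized vessel may depart only during [HT-90, HT+30)."""
    return any(ht - 90 <= t < ht + 30 for ht in high_tides)

def channel_time(terminal):
    """One-way time from the channel entrance to a terminal berth."""
    return ENTRY_TIME + TRAVEL[terminal]

high_tides = [360, 1100]                 # illustrative high-tide times (minutes)
print(in_tidal_window(300, high_tides))  # True: 300 lies in [270, 390)
print(in_tidal_window(400, high_tides))  # False
print(channel_time("KCT"))               # 70
```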
2.3 The Terminals
In order to obtain realistic solutions that can be used in practice by the HVCCC, certain key aspects, rules and restrictions associated with the system must be observed.
First of all, in any of the terminals, the amount of coal being dumped at the stations in a single day cannot exceed the Daily Inbound Throughput (DIT). Similarly, the amount of coal loaded into vessels cannot exceed the Daily Outbound Throughput (DOT).
Next, split cargoes are typically seen as an undesirable occurrence, and are therefore forbidden for the purposes of our methods. In other words, one cargo equals one stockpile. Also, vessels must be loaded in such a way that they maintain their physical balance in the water. Therefore, their stockpiles must be reclaimed in a pre-specified order.
In this work, we distinguish KCT from the other two terminals because it is responsible for the largest volume of coal exported, with larger inbound and outbound capacities and greater availability of equipment, making it a key terminal for the system. That also means that optimising operations in the terminal to meet the demands is itself a challenging problem. NCT is responsible for a significant volume of coal exported as well, but the HVCCC has only limited visibility and control over its operations, and, as a result, creating a detailed plan is impossible and not useful in practice. CCT is also modelled in coarse detail because only a small volume of coal passes through it. However, CCT needs to be considered as it shares the rail network and channel with the other two terminals, which are resources with limited capacity.
While for KCT we consider operations in the terminal in detail, for CCT and NCT we only aim to schedule coal deliveries, vessel arrivals and departures, and stockpile build and reclaim periods. How the stockpiles are positioned on the pads, and which equipment is used to build and reclaim the stockpiles is left for the planner to decide.
Since we propose a day-of-execution plan for cargoes in every terminal, stockpiles can start their stack periods at most ten days prior to their vessels’ Estimated Time of Arrival (ETA). Also, because train loads often cannot be delivered at the ideal time due to limited rail capacity, stacking can be preempted (i.e., days with no stacking are allowed after the stacking period has started). Exceedingly long build times are undesirable, as the stockpiles would occupy scarce pad space that could be used for other stockpiles. Therefore, build periods are restricted to a maximum of seven days. Finally, since train travel times are not considered in our model (i.e., coal is considered to be immediately available at the terminal upon request, subject only to rail capacity and DIT), unrealistically short stack periods are avoided by enforcing a minimum build period of three days. Maximum build times can also depend on the mines that provide the coal; e.g., because some of the mines are relatively close to the terminal, a 5 day limit is sufficient and more appropriate for them.
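These build-period rules can be expressed as a simple feasibility check. The function name and the day-based interface below are our own, chosen for illustration:

```python
def build_window_ok(start_day, end_day, eta_day, min_build=3, max_build=7):
    """Check the build-period rules described above: stacking may start at
    most ten days before the vessel's ETA, and the build period (which may
    contain idle days, since stacking is preemptable) must span between
    min_build and max_build days inclusive."""
    duration = end_day - start_day + 1
    return (start_day >= eta_day - 10) and (min_build <= duration <= max_build)

print(build_window_ok(start_day=5, end_day=9, eta_day=12))  # True: starts 7 days early, spans 5 days
print(build_window_ok(start_day=1, end_day=9, eta_day=12))  # False: starts more than 10 days early
```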
Reclaiming a stockpile can only start once it is fully built. Otherwise, reclaiming might have to be interrupted because of a lack of coal (if building the stockpile had not yet finished), forcing a vessel to remain berthed without being loaded. Such a situation is highly undesirable since berths are a very scarce resource.
Since manoeuvring in a narrow channel can be difficult and requires resources that could be used for other purposes, once a ship berths at a terminal, it cannot change berths. However, it is easy to see that, as long as the number of vessels berthed never exceeds the number of berths in the terminal, a first-come-first-served policy ensures that this requirement is satisfied.
2.3.1 The Kooragang Coal Terminal
The stockyard at KCT has four pads, A, B, C, and D, on which cargoes are assembled. Upon arrival at the terminal, a train dumps its contents into one of three stations. The coal is then transported on a conveyor to one of the pads where it is added to a stockpile by a stacker. Since all dump stations can send coal to any stacker stream, in this work we model them as a single dump station with the combined capacity. There are six stackers, two that serve pad A (S316 and S317), two that serve pad B and pad C (S358 and S359), and two that serve pad D (S321 and S322). A single stockpile is built from several train loads over three to seven days, as mentioned in the previous section. After a stockpile is completely built, it may dwell on its pad for some time until its destination vessel has arrived. Stockpiles are reclaimed using a bucket-wheel reclaimer and the coal is transferred to one of the four berths on a conveyor. The coal is then loaded onto the vessel by a ship loader. There are four reclaimers, two that serve pad A and pad B (R459 and R460) and two that serve pad C and pad D (R411 and R412). Figure 4 illustrates KCT’s layout, as modelled in this work.
Pads A, B, C and D are 2142m, 1905m, 2174m and 2156m long, respectively, and we assume that every stockpile occupies the full width of the pad. Equation 1, obtained by linear regression from actual data, is used to approximate the length of a stockpile, under this assumption, given its tonnage. Naturally, two stockpiles cannot occupy the same space on a pad at the same time (i.e., coal cannot be shared between stockpiles). Also, once the assembly of a stockpile has started, it is rare that the location of the stockpile in the stockyard is changed: relocating is time-consuming and requires resources that could be used to assemble or reclaim other stockpiles. For this reason, relocation is forbidden for our purposes. We also assume that the terminal has a DIT of 500kt/day.
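As a hedged illustration of what such a regression-based length estimate might look like, consider the sketch below. The coefficients are placeholders chosen for the example, not the actual fitted values of Equation 1:

```python
def stockpile_length(tonnage_kt, a=2.5, b=30.0):
    """Linear approximation of stockpile length (metres) from tonnage (kt),
    in the spirit of Equation 1. The coefficients a and b here are
    placeholders; the real values were fitted to actual KCT data."""
    return a * tonnage_kt + b

# A 100 kt stockpile under the placeholder coefficients:
print(stockpile_length(100.0))  # 280.0
```

In the model, this length is what a stockpile occupies along the (one-dimensional) pad, so it directly drives the placement decisions described later.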
We assume that stackers on KCT move at a speed of 1800m/h and operate at a rate of 139.2kt/day, which makes stacker travel and operation times (of the order of minutes) insignificant compared to train cycle times (of the order of days). For this reason, we do not model stacker operations. It is assumed that it will always be possible and straightforward to build the stockpiles within the stipulated stacking period, provided that the terminal’s capacities are respected, and this task is left for the planner. The middle stacker stream has two conveyor belts and has a Daily Stacker Stream Capacity (DSSC) of 288kt/day. That is twice the capacity of the outer stacker streams, which only have one conveyor belt.
We assume that reclaimers at KCT also move at a speed of 1800m/h and operate at a rate of 139.2kt/day. Reclaimers that serve the same pads cannot pass each other, as they travel on rails on the side of a pad. Reclaimers can only load coal from one stockpile at a time, and can only be assigned to stockpiles on pads that they serve. Also, since the terminal has only three ship loaders, only three reclaimers can operate at the same time. Reclaim jobs are not preemptive and, since there is a limited number of berths, it is desirable that a vessel is never idle whilst berthed and occupying precious space. Therefore, once loading of a vessel has started, it cannot be stopped (in between stockpiles) for more than 5 hours. The combined total reclaimer and ship loader capacities enforce a DOT of 390kt/day.
2.3.2 The NCIG and Carrington Coal Terminals
In this work, we do not model stockyard operations at CCT and NCT. Even though NCT is responsible for a significant volume of coal exported, the HVCCC has only limited visibility and control over its operations: NCIG customers, who have contracted dedicated stockpile space at the terminal, typically do not disclose their operational data, which makes detailed modelling impossible and impractical. CCT is also modelled in coarse detail because only a small volume of coal passes through it. However, it is important to consider it in the integrated system because it shares the rail network and channel with the other two terminals, which are limited capacity resources. Figure 5 illustrates CCT’s and NCT’s layouts, as modelled in this work.
Even though these terminals support dedicated stockpiles, it is known that most of the real estate owners at the terminals operate their stockpiles in CA mode. Therefore, we approximate their operations under this assumption, and do not model dedicated stockpiles at all.
It is assumed that CCT has a DIT of 96kt/day and a DOT of 94kt/day. NCT is assumed to have a DIT of 228kt/day and a DOT of 214kt/day. CCT features two berths only while NCT has three. We also assume maximum reclaim rates of 2200t/h and 5800t/h for CCT and NCT, respectively.
2.4 Shipping Stems
In this work, shipping stems are used to characterise the input data. A shipping stem is a list of vessels with information on their ETA, the terminal they are headed to, and their cargo details. Cargoes are specified in terms of their coal components, i.e., each cargo specification consists of a list of components, each with a tonnage and a load point of origin.
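A shipping stem could be represented as follows. The class and field names are our own, chosen for illustration; they are not taken from the HVCCC data format:

```python
from dataclasses import dataclass

@dataclass
class Component:
    tonnage: float          # tonnes
    load_point: str         # mine / load point of origin

@dataclass
class Cargo:
    components: list        # list of Component (one blended stockpile)

    @property
    def tonnage(self):
        return sum(c.tonnage for c in self.components)

@dataclass
class Vessel:
    eta: float              # estimated time of arrival (hours)
    terminal: str           # "KCT", "CCT" or "NCT"
    cargoes: list           # list of Cargo; one cargo equals one stockpile

# A one-vessel stem with a single two-component cargo:
stem = [Vessel(eta=10.0, terminal="KCT",
               cargoes=[Cargo([Component(40_000, "mineA"),
                               Component(60_000, "mineB")])])]
print(stem[0].cargoes[0].tonnage)  # 100000
```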
3 Algorithms to Optimise the Integrated System
This work describes a method that simultaneously schedules coal arrivals at the dump stations, determines stockpile build and load periods, and schedules arrival and departure times of the vessels. In the case of KCT, which is a key terminal of the system, the method also determines a stockpile placement (pad plus location) and assigns a reclaimer (accounting for the fact that two reclaimers operating on the same rail track cannot pass each other to reach their designated stockpiles). LABEL:tab:notation summarises the notation used throughout this paper.
|The set of vessels.|
|The Estimated Time of Arrival (ETA) of a vessel.|
|The earliest possible departure time of a vessel.|
|The set of stockpiles of a vessel.|
|The set of components of a stockpile.|
|The set of stockpiles.|
|The tonnage (weight) of a vessel (the total tonnage of its stockpiles).|
|The tonnage (weight) of a stockpile (the total tonnage of its components).|
|The tonnage (weight) of a component.|
|The set of rail segments, i.e., the arcs in the graph shown in Figure 2. For simplicity, throughout this paper, we may also refer to a rail segment by its descriptor rather than its index.|
|The capacity of a rail segment.|
|The set of high tides.|
|The set of tidal windows. Note that, throughout this work, time is given in hours.|
|The tidal window that contains a time, or the first tidal window after that time if it is not contained in any tidal window.|
|The highest reclaim rate of a terminal (5800t/h, except for CCT, which has a limit of 2200t/h).|
Our objective is to maximise the system’s throughput without causing unacceptable vessel delays. We define the earliest departure time of a vessel as its ideal departure time: as if the vessel could berth at the terminal exactly at its ETA, had all its stockpiles pre-built, had a reclaimer ready to load it without interruptions, and could depart immediately after it is fully loaded (or at the very beginning of the next tidal window, in the case of capes). Equation 2 formulates this concept in terms of the beginning of the first tidal window that happens during or after a given time, the highest reclaim rate of the terminal (i.e., 5800t/h for KCT and NCT, and 2200t/h for CCT), and the total tonnage of the vessel.
The system’s average vessel delay, formulated by Equation 3 as the mean difference between each vessel’s actual and earliest departure times, is minimised as a proxy for maximising the throughput.
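The two quantities above, the earliest departure time (Equation 2) and the average delay (Equation 3), can be sketched as follows. The tidal windows and the vessel data are illustrative, and the function names are our own:

```python
def next_tidal_time(t, tidal_windows):
    """Earliest time at or after t that lies inside a tidal window.
    Windows are (start, end) pairs in hours."""
    for lo, hi in sorted(tidal_windows):
        if t < hi:
            return max(t, lo)
    raise ValueError("no tidal window at or after t")

def earliest_departure(eta, tonnage, reclaim_rate, is_cape, tidal_windows):
    """Equation 2 in spirit: berth at the ETA, load at the terminal's top
    reclaim rate without interruption, then depart immediately (or at the
    next tidal window, for cape-sized vessels)."""
    loaded = eta + tonnage / reclaim_rate
    return next_tidal_time(loaded, tidal_windows) if is_cape else loaded

def average_delay(actual_departures, earliest_departures):
    """Equation 3 in spirit: mean of actual minus earliest departure times."""
    delays = [a - e for a, e in zip(actual_departures, earliest_departures)]
    return sum(delays) / len(delays)

windows = [(0.0, 2.0), (12.0, 14.0)]   # illustrative tidal windows, in hours
e = earliest_departure(eta=0.0, tonnage=58_000, reclaim_rate=5_800,
                       is_cape=True, tidal_windows=windows)
print(e)                               # 12.0: loading ends at 10.0, the cape waits for the 12.0 tide
print(average_delay([14.0], [e]))      # 2.0
```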
3.1 A Greedy Algorithm for Stockyard Management
The centre piece of our method is an enumerative algorithm to solve the stockyard management problem at a terminal, i.e., the placement, the stacking, and the reclaiming of stockpiles, captured in the SLARS sub-problems for each terminal. The procedure sequentially schedules vessels in a greedy fashion: once a vessel and its stockpiles are scheduled, this decision is only revisited if a feasible solution based on it cannot be found. LABEL:algo:slars() details this method. The important question of how to find a good input sequence for the SLARS sub-problems is discussed in Section 3.2 below.
In this work, we only work with feasible solutions, in the sense that every vessel schedule and stockpile placement satisfies every constraint described in Section 2. We refer to an incomplete solution as one in which not all vessels have been scheduled or stockpiles have been placed (yet).
Scheduling a vessel refers to setting its arrival and departure times. Since a vessel can only depart once it is fully loaded, this can only be done after all its stockpiles are placed. Placing a stockpile refers to setting its build and loading periods. In order to build a stockpile, we first schedule the delivery of coal for all its components. The latter is referred to as railing, and consists of determining the coal arrivals for every component. We assume that each component comes from a single mine, via a unique path on the graph shown in Figure 2. Since reclaiming of a stockpile can only start after it is fully built, the reclaiming period can start only after the railing has been completed. The details for scheduling a vessel and placing its stockpiles are given as pseudo-code. Note that LABEL:algo:getPadGaps considers all stockpiles from all ships that are already placed. However, the backtracking never needs to go back further than the current ship, as it is always possible to place all stockpiles at the end of the time axis.
In this work, capacities are given on a per-day basis. We consider three such capacities: (1) rail segment capacity, (2) terminal inbound capacity (DIT) and, for KCT, (3) stacking capacity (DSSC).
Railing is always performed in a greedy fashion, for stockpiles and components, in the order the stockpiles have to be loaded into the vessels. There is no particular reason for this ordering (the stockpiles may be built in any order); it was chosen for simplicity. We assume that arbitrary amounts of coal can be transferred on any day, subject to available rail capacity. Train travel times are not considered: as long as there is enough capacity, we assume that coal flows instantaneously from a load point to a terminal. Therefore, a coal arrival for a component is to be interpreted as a given number of tonnes of coal arriving at the terminal on a given day.
For a given stockpile, as many tonnes of coal as possible are scheduled to arrive as soon as possible, starting ten days before the vessel’s ETA. LABEL:algo:railing() details this approach.
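A minimal sketch of this greedy railing step, simplified to a single per-day capacity, could look as follows. The real method would check every rail segment on the component's path, the terminal's DIT and, at KCT, the stacker stream capacity; the function name and interface here are our own:

```python
def rail_component(tonnage, eta_day, capacity, schedule=None, earliest_offset=10):
    """Greedy railing sketch: schedule as many tonnes as possible on the
    earliest days, starting earliest_offset days before the vessel's ETA,
    subject to a single shared per-day capacity. Returns a {day: tonnes}
    arrival map and updates the shared usage map in place."""
    if schedule is None:
        schedule = {}
    arrivals, remaining = {}, tonnage
    day = eta_day - earliest_offset
    while remaining > 0:
        free = capacity - schedule.get(day, 0)
        if free > 0:
            t = min(free, remaining)
            arrivals[day] = t
            schedule[day] = schedule.get(day, 0) + t
            remaining -= t
        day += 1
    return arrivals

usage = {}
print(rail_component(250, eta_day=10, capacity=100, schedule=usage))
# {0: 100, 1: 100, 2: 50}
```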
3.1.2 Channel Traffic
In this work, we assume that vessels travel at the same constant speed. Under this assumption, channel traffic can be controlled by selecting appropriate arrival and departure times.
Assuming that two vessels are headed to the same terminal, the following must hold whenever we refer to the possibility of a vessel traversing the channel, i.e., in items (2) and (3) of LABEL:algo:loadingPeriod():
Any two consecutive arrivals must happen at least 15 minutes apart;
Any two consecutive departures must happen at least 15 minutes apart; and
If preceded by a departure, any arrival must wait until the departing vessel has cleared the channel. Therefore, two such consecutive events must be separated by at least the time required for a vessel to traverse the channel from the entry area to its destination terminal.
For simplicity, we assume that a departure may happen at the same time as a preceding arrival: as one vessel arrives at the berth, another leaves (if allowed by the rules above). Figure 6 illustrates these rules.
These rules are easily extended to multiple terminals by keeping projected events (arrivals or departures) for each of the terminals. Under the assumptions that the terminals are located along the channel, that the channel can be represented by a straight line, and that the vessels travel at the same constant speed, an arrival is registered at a terminal every time a vessel berths at or passes by it on its way to its destination. Similarly, a departure is registered at a terminal every time a vessel departs from or passes by it on its way back. If the terminal where the event is being recorded is not the vessel’s destination, i.e., it is a terminal encountered before the destination, the event time recorded is the time when the vessel passes the terminal. Naturally, each terminal’s event list must satisfy the channel rules at all times. Figure 7 illustrates this idea. Our methods implement these checks by sequential inspection. It is interesting to note that CCT, the terminal most conveniently located on the channel, is the oldest facility, with low capacity, and hence the least used.
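The sequential inspection of one terminal's event list might be sketched as follows. The event representation, (time, kind) pairs sorted by time, is our own, and the clearance rule is the simplified version stated above:

```python
SEPARATION = 0.25  # hours (15 minutes)

def events_feasible(events, clear_time):
    """Check one terminal's projected event list against the channel rules.
    events is a time-sorted list of (time, kind) pairs, kind being "arr" or
    "dep"; clear_time is the hours a departing vessel needs to clear the
    channel past this terminal."""
    for (t1, k1), (t2, k2) in zip(events, events[1:]):
        if k1 == k2 and t2 - t1 < SEPARATION:
            return False          # same-direction events too close together
        if k1 == "dep" and k2 == "arr" and t2 - t1 < clear_time:
            return False          # arrival must wait for the channel to clear
        # an arrival followed by a departure may coincide (no extra check)
    return True

evts = [(0.0, "arr"), (0.0, "dep"), (0.5, "dep"), (0.6, "arr")]
print(events_feasible(evts, clear_time=55/60))      # False: 0.6 arrival too close to 0.5 departure
print(events_feasible(evts[:3], clear_time=55/60))  # True
```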
3.1.3 Optimising Operations in KCT
Unlike CCT and NCT, for vessels headed to KCT, we also determine a pad and position on the pad for stockpiles and schedule the time the stockpile will occupy the assigned space and when and by which reclaimer the stockpile will be reclaimed. Since there may be multiple ways of doing so, LABEL:algo:slars() enumerates a finite set of possibilities and chooses the one that increases the total vessel delay the least. This is done by observing the geometrical aspects of the pads, in the same way as described by Savelsbergh and Smith (2014). This section describes the relevant procedures and we refer the reader to the original publication for more detailed information.
Under the assumption that the stockpiles occupy the entire width of a (rectangular) pad, the positioning of the stockpiles is uni-dimensional along the length of the pad. Consider a two dimensional plane with time along the horizontal axis and the position on a pad along the vertical axis. In this plane, a stockpile placement can be represented in time and space as a rectangle. In this representation, the coal ground period – which includes stacking, reclaiming, and dwell periods – is defined by horizontal boundaries of this rectangle on the section of the pad limited by its vertical boundaries.
The available time and space to place other stockpiles is then the area that lies outside these rectangles. This area, in turn, can be divided into rectangles of maximal size, referred to as pad gaps. A set of pad gaps is used as an algorithmically convenient structure to represent the available pad space. Figure 8 illustrates this concept, which is implemented by LABEL:algo:getPadGaps().
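A minimal sketch of placement feasibility in this time-space plane (whose free complement is what the pad gaps represent) could look as follows; the rectangle encoding and function names are our own:

```python
def overlaps(r1, r2):
    """Axis-aligned rectangle overlap in the time x pad-position plane.
    A rectangle is (t_start, t_end, pos_lo, pos_hi)."""
    t1s, t1e, p1l, p1h = r1
    t2s, t2e, p2l, p2h = r2
    return t1s < t2e and t2s < t1e and p1l < p2h and p2l < p1h

def placement_feasible(candidate, placed, pad_length):
    """A candidate stockpile rectangle is feasible if it lies on the pad
    and intersects no already-placed stockpile rectangle."""
    _, _, lo, hi = candidate
    if lo < 0 or hi > pad_length:
        return False
    return not any(overlaps(candidate, r) for r in placed)

placed = [(0.0, 5.0, 0.0, 800.0)]   # one stockpile: days 0-5, metres 0-800
print(placement_feasible((2.0, 6.0, 900.0, 1400.0), placed, 2142.0))  # True: disjoint in space
print(placement_feasible((2.0, 6.0, 700.0, 1200.0), placed, 2142.0))  # False: overlaps in time and space
```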
When a reclaimer is assigned to a stockpile, we must make sure the reclaimer is able to reach the stockpile in time. Assume that reclaimers travel at a constant speed and that a reclaimer positions itself at the centre of the stockpile it is serving. Then, after completing a reclaim job at a certain position on the pad, the locations reachable by the reclaimer can be represented in time and space as a parallelogram, with opposing vertices connecting two reclaim jobs (one vertex at the end of a finishing reclaim job and the other at the beginning of the next reclaim job). This area, illustrated in pink in Figure 10, is referred to as a reclaimer gap. A collection of reclaimer gaps is a convenient algorithmic representation of the area reachable by a reclaimer at any time, and LABEL:algo:getReclGaps() implements a function for this structure.
To be stacked and reclaimed a stockpile must be placed in a pad gap, and have its reclaim period inside a reclaimer gap. Therefore, feasible placements require that the pad and reclaimer gaps intersect, and reclaiming can start at any time in this intersection and can be placed in any position in it as well. Since we are interested in earliest reclaim times, we seek a point on the leftmost boundary of the intersection. Since there could be infinitely many points on this boundary, to obtain a finite set of positions to evaluate, we only consider extreme points on the boundary, which are referred to as Critical Heights. Figure 9 illustrates this concept, which is implemented by LABEL:algo:getCriticalHeights().
Since two reclaimers serve each pad and they cannot pass each other, besides making sure a reclaimer can reach a stockpile, we must also make sure it does not clash with the other reclaimer. Once a reclaim job is assigned to a stockpile, the reclaimer remains stationary, positioned at the centre of the stockpile, for the duration of the reclaim operation, and therefore blocks any position that lies “behind” it. Assuming that, after a reclaim job, a reclaimer may move out of the way to allow the other reclaimer to reach positions beyond it, the area in time and space that is blocked by a reclaimer can be represented as a trapezium, with its smaller base being the position of the stockpile being reclaimed (the blue area in Figure 10). LABEL:algo:getEarliestReclaimTime() implements a function to find the earliest time at which reclaiming can start in this intersection and at the given critical height, accounting for possible reclaimer clashes.
After enumerating every possible placement, LABEL:algo:slars() selects the best one according to the comparator detailed in LABEL:algo:comparePlacements. Intuitively, this would be the placement that yields the earliest reclaim end time. However, we use an improved comparator proposed by Savelsbergh and Smith (2014), which also considers, as a “look-ahead”, the future placement flexibility lost due to possible reclaimer clashes (illustrated by Figure 10), since they found that it substantially improved the solutions.
For further information on pad and reclaimer gaps, critical heights and flexibility losses, we refer the reader to Savelsbergh and Smith (2014).
3.2 Obtaining Improved Solutions Using a Genetic Algorithm
Since the main engine we use to obtain good solutions, the algorithm detailed in the previous sections, is a greedy construction heuristic, it bases its decisions on the current state of the solution, e.g., used capacities and allocation of resources. Therefore, the order in which vessels are scheduled impacts the final average delay.
Aiming to increase the search space, to add diversity to our pool of explored solutions, and ultimately to find better schedules, we propose a Genetic Algorithm (GA) that exploits this observation by exploring different vessel scheduling orders. Since SLARS() is deterministic, an ordering maps to exactly one solution. Under the assumption that good quality solutions share common placements, the proposed GA attempts to preserve interesting schedule traits (that is, good subsequences) while proposing changes in other parts.
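One standard way to recombine two vessel orderings while keeping contiguous subsequences intact is order crossover (OX). The sketch below is purely illustrative; it is not necessarily the operator used in our implementation, whose crossover is consensus-based.

```python
import random

def order_crossover(parent1, parent2, rng=random):
    """Order crossover (OX): copy a contiguous slice of parent1
    verbatim, then fill the remaining positions with parent2's vessels
    in parent2's relative order, so good subsequences from both
    parents tend to survive in the child ordering."""
    n = len(parent1)
    i, j = sorted(rng.sample(range(n), 2))
    kept = set(parent1[i:j + 1])
    child = [None] * n
    child[i:j + 1] = parent1[i:j + 1]
    fill = iter(v for v in parent2 if v not in kept)
    for k in range(n):
        if child[k] is None:
            child[k] = next(fill)
    return child
```

The child is always a valid permutation of the vessels, so every offspring ordering can be handed directly to the deterministic construction heuristic.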
3.2.1 Genetic algorithm
A GA is a population-based search method that uses principles also found in the theory of evolution to find solutions for complex computational problems (Goldberg and Sastry, 2010). In general, the method starts with a diverse set of solutions, the population, which is then “evolved” towards better solutions through the use of particular operators, i.e., crossover, mutation, fitness calculation, selection and insertion. The method does not guarantee global optimality but, if designed appropriately, it often leads to high quality solutions in shorter CPU times than those required by exact methods. Algorithm 1 illustrates the concept.
The first line of Algorithm 1 represents the creation of an initial population of random sequences and the assignment of an objective function value (fitness) to each of them, obtained by calling SLARS() for each sequence. The random initial population is strongly clustered around the order-of-arrival ordering to allow the GA to converge in a reasonable amount of time. In the next line, the algorithm enters its main loop, which runs for a pre-specified number of iterations.
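The exact perturbation used to cluster sequences around the arrival order is not spelled out here; one simple scheme consistent with the swap probabilities quoted for the GA (50%) and MS (30%) initialisations is to sweep the ETA-ordered sequence once and swap adjacent vessels with a fixed probability. The adjacent-swap mechanism is an assumption; only the probabilities come from the text.

```python
import random

def biased_random_sequence(num_vessels, swap_prob=0.5, rng=random):
    """A random vessel ordering clustered around the increasing-ETA
    order: start from 0..n-1 (vessels sorted by ETA) and swap each
    adjacent pair with probability swap_prob.  Small swap_prob keeps
    the sequence close to the chronological ordering."""
    seq = list(range(num_vessels))
    for i in range(num_vessels - 1):
        if rng.random() < swap_prob:
            seq[i], seq[i + 1] = seq[i + 1], seq[i]
    return seq
```

With swap_prob = 0 the ETA order is returned unchanged, and larger values progressively randomise the sequence while keeping most vessels near their chronological position.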
The main loop starts by calling a procedure to update the population structure. We use a single, small population organised in a ternary heap in which crossover only happens between adjacent nodes. This population structure helps to keep the number of individuals (and therefore the number of SLARS() calls) low, and enforces convergence (Buriol et al, 2014). That procedure is followed by the generation loop, which comprises four methods called in sequence: crossover, mutation, fitness calculation and insertion. Each generation creates 16 independent new SLARS subproblems (one for each physical core on our test machine). These are distributed evenly among the available processors and solved independently before the tree is updated.
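In an array-based ternary heap, the adjacency used for crossover can be computed with index arithmetic. A minimal sketch follows; reading "adjacent nodes" as parent-child pairs is our interpretation.

```python
def parent(i):
    """Index of the parent of node i in an array-based ternary heap."""
    return (i - 1) // 3

def children(i, size):
    """Indices of the (up to three) children of node i."""
    return [c for c in range(3 * i + 1, 3 * i + 4) if c < size]

def crossover_pairs(size):
    """All adjacent (parent, child) index pairs eligible for
    crossover in a heap-structured population of the given size."""
    return [(parent(i), i) for i in range(1, size)]
```

A 13-node ternary heap, for instance, yields exactly 12 parent-child pairs, which matches the small, tightly coupled population the method relies on.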
After a generation ends, a convergence check is performed to avoid spending too much time evolving a population composed of sequences that are too similar. If the population has converged, the algorithm triggers an elitist restart: it recreates the population at random, but preserves the sequence that led to the best known solution. When the algorithm reaches its maximum number of generations, it stops and reports the best solution found.
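The convergence test itself is not specified above; a simple proxy is the spread of fitness values in the population. Under that assumption, the elitist restart can be sketched as:

```python
def maybe_restart(population, fitnesses, best_sequence, make_random, tol=1e-6):
    """Elitist restart: if all fitnesses are (nearly) equal, rebuild
    the population at random but keep the sequence that produced the
    best known solution.  The fitness-spread convergence test is an
    assumption; make_random() yields a fresh random sequence."""
    if max(fitnesses) - min(fitnesses) >= tol:
        return population  # still diverse enough: keep evolving
    fresh = [make_random() for _ in range(len(population) - 1)]
    return [best_sequence] + fresh
```

Keeping the elite sequence guarantees the best known solution is never lost across restarts, while the fresh random individuals restore diversity.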
4 Computational Results
To test the performance of our method, we used ten instances created with the stem generator developed by Boland et al (2013). An instance specifies a vessel arrival stream for a period of 100 days plus one year. The first 100 days are used as a warm-up period to ensure that the system has reached its “normal” state before performance statistics are gathered. Due to the confidentiality agreement in place with our industry partners, we are not allowed to provide the instances or the parameters used to generate them. Table 2 summarises some of the instances’ characteristics.
[Table 2: instance characteristics; columns report vessel ETAs (h) and weights (t).]
We provide and discuss results obtained with the GA detailed in subsection 3.2 and with the Constraint Programming (CP) approach of Belov et al (2014). Their method is also designed to solve the problem described in Section 2, and differs from our method in that it schedules the activities in NCT and CCT in more detail, i.e., it also decides stockpile locations and schedules stacking and reclaiming. Because of the smaller volume of coal handled by these terminals and because the bottlenecks of the logistics system are more likely to be related to transport capacity (Belov et al, 2014), the average vessel delays should be comparable.
Additionally, we consider results obtained with a simple randomised Multi-Start (MS) heuristic to highlight the effectiveness of the proposed GA scheme. The MS algorithm simply runs SLARS() using random sequences obtained in a similar manner to initializePopulation(), dispatching one such run to a CPU as it becomes available, and reports the best solution found. A reduced chance of swapping positions, 30% rather than 50%, is used to ensure that the obtained sequences do not deviate too much from the increasing ETA order, which is known to be good. Given the vast search space, focussing the search in the vicinity of this chronological ordering is essential for finding good solutions in a reasonable amount of time; Singh et al (2012) showed, for a similarly large problem arising from the same supply chain, that a GA without such a targeted search is not competitive.
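The dispatch pattern, sending a run to a worker as one becomes available, can be sketched with a standard worker pool. Here evaluate() is a stand-in for one SLARS solve; the real implementation runs one solve per physical core.

```python
from concurrent.futures import ThreadPoolExecutor

def multi_start(sequences, evaluate, workers=16):
    """Evaluate each candidate vessel sequence on a pool of workers
    and return the best (sequence, delay) pair.  evaluate() stands in
    for a SLARS solve; a production version would use processes, one
    per physical core, rather than threads."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        delays = list(pool.map(evaluate, sequences))
    best = min(range(len(sequences)), key=delays.__getitem__)
    return sequences[best], delays[best]
```

Because each SLARS solve is independent, the restarts are embarrassingly parallel and the pool keeps every worker busy until the candidate list is exhausted.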
The tests with the GA and MS algorithms were run on dual octa-core 3.33GHz Intel Xeon E5-2667 v2 processors with 256GB RAM. For both of these algorithms, memory consumption was negligible (only a few MBs). The runs with CP were performed on an octa-core 3.40GHz Intel Core i7-2600 with 8GB RAM, kindly provided by the original authors of Belov et al (2014), who reported a memory usage of around 150MB for CP. It is also noteworthy that CP runs on a single thread.
Since both GA and MS have a random component, the reported results are the average of ten runs. CP, on the other hand, is deterministic and a single run is reported.
In the following experiments, each run of the GA comprises 100 generations. Since every generation calls SLARS() 16 times, 1600 iterations of MS are performed to match the number of SLARS() calls.
[Table 3: average vessel delays; columns report, per instance, Avg (h) and Std Dev for GA and MS, and Delay (h) for CP.]
[Table 4: running times; columns report, per instance, Avg (s) and Std Dev for GA and MS, and Time (s) for CP.]
In Table 3 we can see that the GA also always provided better solutions than CP. This is possibly a result of the fact that the GA considers a simplified model for CCT and NCT. Even though it is less restrictive, our model should be appropriate because most of the operational decisions at these terminals are made by the (independent) controllers of the companies that own specific terminal stockyard space, and little is known about their control strategies. The results in Belov et al (2014) also show that operations at CCT and NCT are not very restrictive for the system and that the biggest bottleneck for optimising the system lies in operations related to the channel. Furthermore, our model does not discretise time, which can lead to a better use (in time and space) of the pads at KCT, and the proposed GA always considers all vessels as a group, while CP optimises over a rolling visibility window of 15 vessels at a time, which may allow the GA to explore options not available to CP.
The results reported in Table 4 clearly show that both MS and GA vastly outperform CP in running time, finishing 100 generations of GA and 1600 restarts of MS within a few minutes, while CP often required a few hours of computation time. This significant improvement in running times is due to a combination of the efficient implementation of the SLARS solver and a fast-converging parallel GA framework. The fact that the average delays obtained by the GA are significantly lower than those obtained with a random multi-start procedure supports the claim that the GA was effective at guiding the search towards high quality solutions. Because the proposed model does not discretise time, relatively few decision variables are considered. The enumeration tree under consideration is also aggressively pruned by the hierarchical structure of the population and by the crossover operator, which seeks a consensus between arguably good solutions. Finally, the use of parallelism also helped to speed up the generation loops.
Looking further at Table 4, even though the GA required twice the time of MS to solve the same number of subproblems, its results are always better. The longer running times are due to the synchronisation step at the end of each GA generation, which ensures that the solution hierarchy is consistent for that generation and never contains individuals belonging to previous or subsequent generations. The significant decrease in average delay, however, suggests that the GA successfully exploited common interesting solution characteristics to guide the search towards better solutions, rather than wasting iterations solving subproblems that had already been solved.
Figure 12: Time-to-Target plots. The figure shows the cumulative probabilities that the GA will reach or outperform the average vessel delays obtained by CP (as shown in Table 3), for each of the instances in our test bed. The plots follow the methodology proposed by Aiex et al (2007) and depict 50 runs of the GA for each instance.
Figure 12 illustrates the times the GA is likely to need to match CP’s results, using Time-to-Target plots (TTT plots, as proposed by Aiex et al (2007)) generated from 50 runs for each instance, with the target set to the value obtained by CP (as seen in Table 3). Figure 12 shows that the GA matches or improves upon the CP solution for every instance of our test bed, using only a fraction of the time required by CP.
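Following Aiex et al (2007), each TTT plot is an empirical distribution of the time-to-target: the n observed running times are sorted and the i-th smallest is plotted against probability (i - 1/2)/n. A sketch of the point computation:

```python
def ttt_points(times_to_target):
    """Empirical time-to-target distribution (Aiex et al, 2007):
    for the i-th smallest running time t_(i), the estimated
    probability of reaching the target within t_(i) is (i - 1/2)/n."""
    ts = sorted(times_to_target)
    n = len(ts)
    return [(t, (i - 0.5) / n) for i, t in enumerate(ts, start=1)]
```

Plotting these points for the 50 GA runs of each instance yields the cumulative probability curves shown in Figure 12.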
5 Conclusions
In this work, we propose a high-performance, cost-effective (in the sense that it does not rely on external solvers) algorithm to simultaneously schedule train and vessel arrivals and stockpile build and load periods in a system with three coal export terminals, while also positioning the stockpiles on specific pads and scheduling the reclaimers for the largest terminal. Given its fast run times and modelling detail, it can be used by the HVCCC to support both strategic and tactical decision making, allowing the analysis of various “what if” scenarios. Such scenarios include different resource capacities and, more importantly, different stems (vessel arrival streams). The study of the obtained solutions helps identify current (or future) bottlenecks in the coal chain and its operations, and provides insight into how to improve throughput with minimal investment, together with a quantitative estimate of the added value.
Our carefully implemented and tuned method outperforms the CP approach of Belov et al (2014), the best-performing method up to now, improving the best-known solutions for every instance in our test bed in less than one tenth of the time used by the CP approach.
Possible further work includes the development of more efficient asynchronous parallelisation strategies, to reduce waiting periods, and more intensive searches over pad placements, e.g., exploring different suboptimal placements during the greedy procedure, which may speed up convergence to better solutions.
- Aiex et al (2007) Aiex RM, Resende MG, Ribeiro CC (2007) TTT plots: a perl program to create time-to-target plots. Optimization Letters 1(4):355–366
- Belov et al (2014) Belov G, Boland N, Savelsbergh M, Stuckey P (2014) Local search for a cargo assembly problem. In: Simonis H (ed) Integration of AI and OR Techniques in Constraint Programming, Lecture Notes in Computer Science, vol 8451, Springer International Publishing, Cham, pp 159–175, DOI 10.1007/978-3-319-07046-9
- Boland and Savelsbergh (2012) Boland N, Savelsbergh M (2012) Optimizing the Hunter Valley Coal Chain. In: Gurnani H, Mehrotra A, Ray S (eds) Supply Chain Disruptions, Springer London, London, pp 275–302, DOI 10.1007/978-0-85729-778-5
- Boland et al (2011) Boland N, Gulczynski D, Jackson M, Savelsbergh M, Tam M (2011) Improved stockyard management strategies for coal export terminals at Newcastle. In: 19th International Congress on Modelling and Simulation (MODSIM), Perth, pp 718–724
- Boland et al (2012) Boland N, Gulczynski D, Savelsbergh M (2012) A stockyard planning problem. EURO Journal on Transportation and Logistics 1(3):197–236, DOI 10.1007/s13676-012-0011-z
- Boland et al (2013) Boland N, Savelsbergh M, Waterer H (2013) Shipping Data Generation for the Hunter Valley Coal Chain. Tech. rep., Optimization Online
- Buriol et al (2014) Buriol L, Franca P, Moscato P (2014) A New Memetic Algorithm for the Asymmetric Traveling Salesman Problem. Journal of Heuristics 10:483–506
- Goldberg and Sastry (2010) Goldberg D, Sastry K (2010) Genetic Algorithms: The Design of Innovation, 2nd edn. Springer, USA
- Savelsbergh and Smith (2014) Savelsbergh M, Smith O (2014) Cargo assembly planning. EURO Journal on Transportation and Logistics DOI 10.1007/s13676-014-0048-2
- Singh et al (2012) Singh G, Sier D, Ernst AT, Gavriliouk O, Oyston R, Giles T, Welgama P (2012) A mixed integer programming model for long term capacity expansion planning: A case study from the Hunter Valley Coal Chain. European Journal of Operational Research 220(1):210–224
- Thomas et al (2013) Thomas A, Singh G, Krishnamoorthy M, Venkateswaran J (2013) Distributed optimisation method for multi-resource constrained scheduling in coal supply chains. International Journal of Production Research 51(9):2740–2759, DOI 10.1080/00207543.2012.737955