1 Introduction
The Multiobjective Shortest Path (MOSP) Problem is an extension of the classical Shortest Path Problem in which every arc bears a multidimensional vector of cost attributes. The multidimensional cost of a path is then the sum of the cost vectors along its arcs. A model for shortest path problems in which the relevant cost components for users or companies are considered separately is a powerful tool, and it motivates the research efforts made in the field. In contrast to its single-criterion sibling, the classical Shortest Path Problem, the MOSP Problem is theoretically hard. However, many MOSP models arising from real-world scenarios can be solved efficiently in practice. In recent years, significant advances have been published in the field. Sedeño-Noda and Colebrook
[12] introduced a new Biobjective Dijkstra Algorithm (BDA) for the Biobjective Shortest Path Problem that improved the known complexity bounds for this problem and also proved to be very efficient in practice. Maristany et al. [7] introduced the Multiobjective Dijkstra Algorithm (MDA). It generalizes the BDA to the multidimensional case, and the obtained running time and space complexity bounds improve the state of the art in theory and in practice. Often, the One-to-One MOSP Problem is considered. In this problem variant, the output is expected to be a set of optimal paths between two designated nodes in the input graph. For this scenario, general MOSP algorithms can be equipped with different pruning techniques that help to discard irrelevant paths early in the search (e.g., [9], [3], [12], [7]). In addition, the search can be guided towards the target, as done by Mandow and Pérez de la Cruz [6, 5]. This technique resembles the A* algorithm known from the classical One-to-One Shortest Path Problem: paths are not processed according to their costs but according to the sum of their costs and a feasible heuristic value. For each node in the graph, the latter shall underestimate the costs of any path from that node to the designated target node. Formally speaking, the heuristic values form a multidimensional node potential. This potential is then used to build the reduced costs of the original arc cost vectors. The resulting (reduced) costs define a new arc cost function and hence a new MOSP instance in which paths ending at nodes with small heuristic values are favored. The algorithm designed by Mandow and Pérez de la Cruz [5] is based on the classical label-setting algorithm for MOSP by Martins [8].
In Section 2 we give a formal definition of the One-to-One Multiobjective Shortest Path Problem. In Section 3.2 we describe the Multiobjective Dijkstra A* (MD) algorithm, prove its correctness, and analyze its theoretical space and running time complexity bounds as the main contribution of this paper. The MD algorithm is based on the MDA and is designed with practical performance in mind. Theoretically, its space complexity bound is worse than that of the MDA, but the running time complexity bounds of both algorithms are the same. On the practical side, the increased memory demand leads to improved running times, as shown in Section 4. To the best of our knowledge, this is the first time that extensive computational experiments for a multiobjective A*-like algorithm are discussed in the literature.
2 One-to-One Multiobjective Shortest Path Problem
Consider a digraph $G=(V,A)$. Given multidimensional arc cost vectors $c(a) \in \mathbb{R}^{d}_{\geq 0}$, $d \in \mathbb{N}$, for every $a \in A$, the costs of a path $p$ are defined as the sum of its arc costs, i.e., $c(p) = \sum_{a \in p} c(a)$. For a set of paths $P$, we use $c(P)$ to refer to the set of costs of all paths in $P$, i.e., $c(P) = \{c(p) \mid p \in P\}$. The following definition formalizes the notion of optimality in the multiobjective case.
Definition 1 (Dominance and Efficiency).
Let $p$ and $q$ be two paths in $G$. Then, $p$ dominates $q$ if $c_i(p) \leq c_i(q)$ for all $i \in \{1,\dots,d\}$ and there is a $j \in \{1,\dots,d\}$ s.t. $c_j(p) < c_j(q)$. A path $p$ is called efficient if there is no other path in $G$ that dominates it. In this case, the cost vector $c(p)$ of $p$ is called nondominated.
Let $\mathcal{E}$ be the set of all efficient $s$-$t$ paths in $G$. In general, for any nondominated cost vector $y$, there could be multiple paths $p \in \mathcal{E}$ with $c(p) = y$. In this paper, we want to find a minimum complete set of efficient paths, i.e., a set of paths such that for every nondominated cost vector in $c(\mathcal{E})$, the presented algorithms output exactly one efficient path.
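The dominance relation and the notion of a minimum complete set can be illustrated with a short, self-contained Python sketch; the function and variable names are our own and not taken from the paper:

```python
def dominates(c1, c2):
    """c1 dominates c2 (Definition 1): componentwise <= and strictly < at least once."""
    return all(a <= b for a, b in zip(c1, c2)) and any(a < b for a, b in zip(c1, c2))

def minimum_complete_set(paths_with_costs):
    """Keep exactly one efficient path per nondominated cost vector."""
    result, seen_costs = [], set()
    for p, c in paths_with_costs:
        if any(dominates(c2, c) for _, c2 in paths_with_costs):
            continue          # c is dominated, so p is not efficient
        if tuple(c) in seen_costs:
            continue          # one representative per nondominated cost vector
        seen_costs.add(tuple(c))
        result.append((p, c))
    return result
```

For example, among the paths with costs (1,3), (2,2), (3,3), and (1,3), the sketch keeps one path with costs (1,3) and one with costs (2,2): the path with costs (3,3) is dominated, and equal-cost duplicates are filtered.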
Definition 2 (One-to-One Multiobjective Shortest Path Problem).
Given a digraph $G=(V,A)$, nodes $s, t \in V$, and $d$-dimensional arc cost vectors $c(a) \in \mathbb{R}^{d}_{\geq 0}$, $a \in A$, the One-to-One Multiobjective Shortest Path Problem is to find a minimum complete set of efficient $s$-$t$ paths.
3 Multiobjective Dijkstra A*
We introduce the Multiobjective Dijkstra A* (MD) algorithm. It solves the One-to-One Multiobjective Shortest Path Problem. In Section 3.1, we introduce some concepts needed for the algorithm. In Section 3.2, we introduce the algorithm itself before proving its correctness in Section 3.3. Whenever we write $x \leq y$ for vectors $x, y \in \mathbb{R}^d$, we mean $x_i \leq y_i$ for all $i \in \{1,\dots,d\}$.
3.1 Heuristics and Path Pruning
The following definitions and results are a direct generalization of those valid for the single-criterion Shortest Path Problem.
Definition 3 (Admissible Heuristic).
A multiobjective heuristic function $\pi: V \to \mathbb{R}^{d}_{\geq 0}$ is admissible if, for a designated target node $t \in V$, $\pi(v) \leq c(p)$ holds for every node $v \in V$ and all efficient $v$-$t$ paths $p$.
Definition 4 (Monotone Heuristic).
Let $(G, c, s, t)$ be a MOSP instance. A multiobjective heuristic function $\pi: V \to \mathbb{R}^{d}_{\geq 0}$ is a monotone heuristic if
$\pi(u) \leq c(a) + \pi(v)$ for every arc $a=(u,v) \in A$
and
$\pi(t) = 0$.
The notation $\pi$ to refer to the potential/heuristic is motivated by the dual linear program of the single-criterion Shortest Path Problem. There, the set of feasible solutions is the set of monotone heuristics, and these are often denoted by $\pi$.
Lemma 5.
A monotone heuristic is also admissible.
Let $p$ be an $s$-$v$ path for some $v \in V$ and $\pi$ a heuristic. For the remainder of the paper, we use the notation $c_\pi(p) := c(p) + \pi(v)$.
Lemma 6.
Let $(G, c, s, t)$ be a MOSP instance and $\pi$ a monotone heuristic. Assume that node $u$ comes before node $v$ along a simple path $p$ starting at $s$, and let $p_u$ and $p_v$ be the subpaths of $p$ ending at $u$ and $v$, respectively. Then, the following relation holds: $c_\pi(p_u) \leq c_\pi(p_v)$.
Proof.
Since $\pi$ is a monotone heuristic, applying the monotonicity condition to every arc along the $u$-$v$ subpath $p_{uv}$ of $p$ and telescoping yields $\pi(u) \leq c(p_{uv}) + \pi(v)$. Hence, $c_\pi(p_u) = c(p_u) + \pi(u) \leq c(p_u) + c(p_{uv}) + \pi(v) = c(p_v) + \pi(v) = c_\pi(p_v)$.
∎
3.2 The Algorithm.
The MD algorithm is an adaptation of the One-to-One Multiobjective Dijkstra Algorithm [7]. In addition to a One-to-One MOSP instance, the algorithm requires a monotone heuristic $\pi$ and an upper bound $ub$ on the costs of any efficient path as part of its input. Paths are processed in lexicographically increasing order w.r.t. $c_\pi$ rather than w.r.t. $c$ until the output, a minimum complete set of efficient paths, is computed. To ensure that the output contains only one efficient path per nondominated cost vector, we use the following reflexive binary relation rather than the dominance checks from Definition 1.
Definition 7 (The relation $\preceq$).
Given two cost vectors $y^1, y^2 \in \mathbb{R}^d$, we write $y^1 \preceq y^2$ if and only if $y^1_i \leq y^2_i$ for all $i \in \{1,\dots,d\}$. Let $Y \subseteq \mathbb{R}^d$. Then we write $Y \preceq y^2$ if there exists a $y^1 \in Y$ such that $y^1 \preceq y^2$.
For two vectors $y^1, y^2 \in \mathbb{R}^d$, we write $y^1 <_{lex} y^2$ if and only if $y^1$ is lexicographically smaller than $y^2$.
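As a minimal illustration of how a priority queue ordered w.r.t. the heuristic-adjusted costs can be realized, the following sketch exploits the fact that Python tuples compare lexicographically; the names `reduced_cost` and `H` are illustrative, not taken from the paper:

```python
import heapq

# Python tuples compare lexicographically, so a binary heap keyed by the
# heuristic-adjusted cost vectors orders explored paths exactly as required.
def reduced_cost(path_cost, pi_v):
    """c_pi(p) = c(p) + pi(v), where v is the last node of p (illustrative)."""
    return tuple(a + b for a, b in zip(path_cost, pi_v))

H = []
heapq.heappush(H, (reduced_cost((3, 1), (1, 1)), "P1"))  # c_pi = (4, 2)
heapq.heappush(H, (reduced_cost((2, 4), (1, 1)), "P2"))  # c_pi = (3, 5)
heapq.heappush(H, (reduced_cost((2, 4), (2, 0)), "P3"))  # c_pi = (4, 4)
lex_min = heapq.heappop(H)  # the entry with c_pi = (3, 5) comes out first
```

Note that the lexicographic minimum w.r.t. $c_\pi$ need not coincide with the lexicographic minimum w.r.t. $c$, which is precisely how the potential biases the processing order.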
For any node $v \in V$, we consider two types of paths. The first set contains the already found efficient $s$-$v$ paths. Any path in it is called permanent. The second type of paths are explored paths. Let $p$ be an $s$-$v$ path with $a=(u,v)$ as its last arc. $p$ is called an explored path if its $s$-$u$ subpath is already permanent and $p$ itself has already been considered by the algorithm without it having been possible to decide whether $p$ is a relevant subpath of an efficient path that will be part of the output.
In the first lines of the MD algorithm, we initialize the data structures:

Line 1: The priority queue, initially empty, stores explored paths during the algorithm’s execution. The paths in it are sorted lexicographically w.r.t. $c_\pi$.

Line 1: The lists of efficient (permanent) paths, one for every node. These lists are initially empty.

Line 1: The lists of explored paths via $a$ for every arc $a \in A$. These lists are initially empty. During the algorithm, they contain explored paths that are not in the priority queue and whose last arc is $a$.
The last step before the main loop of the algorithm is to insert the empty path at $s$ into the priority queue. It represents a path from $s$ to itself with zero costs. For a node $v$, the set of cost vectors of its permanent paths collects the costs of the efficient paths found so far at $v$. Using the relation $\preceq$ from Definition 7 against this set, we can check whether an $s$-$v$ path is dominated by a permanent path or whether there is such a path with costs equivalent to those of the new path.
The main loop of the MD algorithm terminates when the priority queue is empty. Until then, every iteration performs three tasks.
Extraction and storage.
The lexicographically minimal path w.r.t. $c_\pi$ is extracted from the priority queue and inserted into the list of permanent paths at its last node, i.e., it is made permanent.
Propagation.
The extracted path $p$, ending at a node $v$, is propagated along the outgoing arcs $a = (v, w) \in A$. For such an arc, we call the resulting path $p'$. In case $p'$ is dominated by a permanent path at $w$, by a permanent $s$-$t$ path, or violates the upper bound, the new path is no longer considered. If $p'$ turns out to not be dominated, we distinguish several cases:

If there is an $s$-$w$ path $q$ in the priority queue and $c_\pi(q) <_{lex} c_\pi(p')$, then $p'$ is inserted at the end of the list of promising paths via $a$.

If there is an $s$-$w$ path $q$ in the priority queue and $c_\pi(p') <_{lex} c_\pi(q)$, then $p'$ replaces $q$ in the priority queue via a decrease-key operation. Assume $a'$ is the last arc of $q$. $q$ is inserted at the beginning of the list of promising paths via $a'$. I.e., even though $q$ is no longer in the priority queue, it might re-enter it later.

If there is no $s$-$w$ path in the priority queue, $p'$ is inserted into it.
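The propagation cases above can be sketched as follows. This is a simplified, illustrative model of the procedure: the data structures and names are our own, the heap entries are tuples of (cost, node, last arc), and the decrease-key is emulated by a linear-time removal:

```python
import heapq

def propagate(p_cost, arcs_out, H, queued, promising, dominated):
    """Sketch of the propagation step. All names are illustrative:
    p_cost   - heuristic-adjusted cost vector of the extracted path p at node v
    arcs_out - list of (arc_id, w, arc_cost) for v's outgoing arcs, with arc
               costs already given in reduced form
    H        - binary heap of entries (cost, node, last_arc_id);
               queued[w] is w's current entry in H, if any
    promising[arc_id] - list of deferred explored paths via that arc
    dominated(cost, w) - dominance/pruning test for a new path ending at w"""
    for arc_id, w, arc_cost in arcs_out:
        new_cost = tuple(a + b for a, b in zip(p_cost, arc_cost))
        if dominated(new_cost, w):
            continue                          # discarded forever
        q = queued.get(w)
        entry = (new_cost, w, arc_id)
        if q is None:                         # no s-w path in H yet
            queued[w] = entry
            heapq.heappush(H, entry)
        elif new_cost < q[0]:                 # lex. smaller: decrease-key; the
            H.remove(q)                       # replaced path q goes to the
            heapq.heapify(H)                  # front of its list and may
            promising[q[2]].insert(0, q)      # re-enter H later
            queued[w] = entry
            heapq.heappush(H, entry)
        else:                                 # q is lex. smaller: defer new path
            promising[arc_id].append(entry)
```

The tuple comparison `new_cost < q[0]` realizes the lexicographic check $c_\pi(p') <_{lex} c_\pi(q)$; a production implementation would use an addressable heap with a true decrease-key instead of remove-and-heapify.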
Next candidate path.
A key invariant of the MD algorithm is that at any point in time during the execution, the priority queue contains at most one path ending at $v$ for any $v \in V$. If the queue contains a path $p$ ending at $v$, $p$ has to fulfill:

(i) It is not a permanent path.

(ii) Its cost vector is not dominated by the cost vector of any permanent path at $v$.

(iii) Its cost vector is not dominated by the cost vector of any permanent path at the target node $t$.

(iv) It is lexicographically minimal w.r.t. $c_\pi$ among all explored and non-permanent paths ending at $v$.
After a path $p$ ending at $v$ is extracted from the priority queue, it is made permanent, hence there is no $s$-$v$ path in the queue anymore. The nextCandidatePath procedure aims to find a new $s$-$v$ path fulfilling the conditions listed above. The search is performed among the lists of promising paths via the incoming arcs of $v$. These lists are sorted in lexicographically increasing order w.r.t. $c_\pi$ (see Lemma 9). For every predecessor node of $v$, the procedure explores the paths in the corresponding list starting with the first one, since it is lexicographically minimal w.r.t. $c_\pi$ in the list. Paths are removed from the list until the first path fulfilling conditions (i)-(iii) from the enumeration above is found, if such a path exists. Thus, the procedure identifies at most one candidate path via each predecessor node of $v$. Overall, if there is at least one candidate, the path returned by the search is a lexicographically smallest one among them. It then fulfills the listed conditions and is inserted into the priority queue.
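The search just described can be sketched as follows; again, the names and the shape of the entries are our own, illustrative choices:

```python
def next_candidate_path(in_arcs, promising, is_permanent, dominated):
    """Sketch of the nextCandidatePath search for a node v (names illustrative):
    in_arcs      - arc ids of v's incoming arcs
    promising[a] - lexicographically sorted list of entries (cost, node, arc_id)
    is_permanent / dominated - the feasibility checks from conditions (i)-(iii)
    Returns the lexicographically smallest feasible candidate, or None."""
    best_arc = None
    for a in in_arcs:
        lst = promising[a]
        # drop infeasible paths from the front; since the list is sorted,
        # the first surviving entry is the candidate via this arc
        while lst and (is_permanent(lst[0]) or dominated(lst[0][0])):
            lst.pop(0)
        if lst and (best_arc is None or lst[0][0] < promising[best_arc][0][0]):
            best_arc = a
    if best_arc is None:
        return None
    # the overall lexicographic minimum among the per-arc candidates is
    # removed from its list; the caller inserts it into the priority queue
    return promising[best_arc].pop(0)
```

Because every list is sorted, each call inspects at most one surviving entry per incoming arc after the pruned prefix is removed, which is the source of the efficiency discussed in Section 3.4.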
3.3 Correctness
We prove the correctness of the MD algorithm in three steps. First, we show that at any node, the elements of the lists of promising paths are sorted in lexicographically increasing order w.r.t. $c$ and w.r.t. $c_\pi$ (Lemma 10). Secondly, we prove that every path that is extracted from the priority queue is efficient (Lemma 11). Lastly, the correctness follows from Theorem 12. It states that the set of permanent paths at $t$ corresponds to a minimum complete set of efficient paths at the end of the algorithm.
Lemma 8. Suppose that throughout the MD algorithm, the lists of promising paths are sorted in lexicographically increasing order w.r.t. $c_\pi$. Then, paths are extracted from the priority queue in lexicographically nondecreasing order w.r.t. $c_\pi$.
Proof.
Let $p$ be the path extracted from the priority queue in iteration $i$ of the MD algorithm. Assume that in a prior iteration $j < i$, a path $q$ was extracted such that $c_\pi(p) <_{lex} c_\pi(q)$.
The MD algorithm extracts a lexicographically minimal path among those in the priority queue in every iteration. Thus, $p$ cannot be in the queue in iteration $j$. However, there must have been at least one node with an explored subpath of $p$ in consideration when $q$ was extracted. Let $u$ be the last such node along $p$ and $p_u$ the corresponding subpath. By Lemma 6 we know that $p_u$ is lexicographically smaller than $p$ w.r.t. $c_\pi$. Thus, $p_u$ cannot have been in the queue in iteration $j$, because otherwise we would have $c_\pi(p_u) \leq_{lex} c_\pi(p) <_{lex} c_\pi(q)$.
Since $p_u$ and $q$ would both be in the priority queue in iteration $j$, the last inequalities would contradict $q$’s extraction.
Recall that $p_u$ is an explored path but it is not in the priority queue at iteration $j$, since there is at most one $s$-$u$ path in the queue at any point in time. We analyze the different cases that could lead to another $s$-$u$ path being in the queue instead of $p_u$.

$p_u$ is in the queue at some point but is replaced via the decrease-key step of the propagate procedure. This cannot happen, since the lexicographic check guarding this decrease-key operation is necessary for it: it tests whether the newly explored path is lexicographically smaller than the queued path. As this is not the case here, the replacement would not take place.

Another $s$-$u$ path is in the queue in the iteration wherein $p_u$ is explored for the first time. In this situation, the decrease-key step of the propagate procedure is reached and $p_u$ would replace that path in the queue. After this happens, $p_u$ only leaves the queue if it is either extracted or if a newly explored path is lexicographically smaller. Thus, it cannot happen that $q$ is extracted from the queue before $p_u$.

In some iteration before iteration $j$, an $s$-$u$ path is extracted from the priority queue and a new candidate path is searched for by the nextCandidatePath procedure. Considering the last arcs of $p_u$ and of the path that won the search, we can assume that both paths are in the corresponding lists of promising paths when the procedure is called. Since the winning path is in the queue in the current iteration and will be extracted from it at some point (if not, $q$ would not be extracted later), both paths do not fulfill the pruning conditions checked at the front of the lists and they pass the dominance check of the procedure. However, the path computed by the search is a lexicographic minimum among the feasible candidates. Given that the ordering of the lists is assumed in the statement of the lemma, it cannot happen that the lexicographically greater path wins the search at the end if $p_u$ is in its list as assumed.
We conclude that the existence of the explored path $p_u$ prevents $q$, or any other path that is lexicographically greater, from being in the priority queue in iteration $j$. However, there must be an $s$-$u$ path in the queue during this iteration. This implies that $q$ would not be extracted from the queue, since the $s$-$u$ path in it is lexicographically smaller. ∎
Lemma 9. At any point during the MD algorithm, the lists of promising paths via each arc are sorted in lexicographically increasing order w.r.t. $c_\pi$.
Proof.
Clearly, as long as the lists contain at most one element, they are sorted. Let the current iteration be the first one in which a second path is added to such a list, say the list via an arc $a = (u, v)$.
Suppose the new path $p$ is added at the beginning of the list in the push-front step of the propagate procedure, i.e., $p$ is the path that is being replaced in the priority queue by a lexicographically smaller path. Let $q$ be the path that is already in the list. Regardless of whether $p$ or $q$ was explored first (i.e., regardless of which of the two paths entered the priority queue first), $p$ was in the priority queue instead of $q$ because it is lexicographically smaller than $q$ w.r.t. $c_\pi$. Thus, now adding $p$ to the beginning of the list does not alter its ordering.
The second option is that the new path $p$ is added at the end of the list in the push-back step of the propagate procedure. Thus, $p$ is explored for the first time in the current iteration but does not enter the priority queue because a lexicographically smaller path is already in it. By construction, $a$ is the last arc of $p$ and of the path $q$ that is already in the list. $p$ is being explored for the first time, which implies that its $s$-$u$ subpath is extracted from the priority queue in the current iteration. Recall that until now, the lists were sorted and thus, Lemma 8 guarantees that paths are extracted from the queue in lexicographically nondecreasing order w.r.t. $c_\pi$. As a consequence, the $s$-$u$ subpath of $q$, which must have been extracted before, is lexicographically smaller than the $s$-$u$ subpath of $p$. Since both subpaths are expanded along $a$ to obtain $q$ and $p$, the same lexicographic ordering holds for $q$ and $p$. Thus, adding $p$ after $q$ to the list does not alter its ordering.
Clearly, we can repeat the same arguments inductively (the first time a list gets a third, fourth, … element) to prove the lemma. ∎
The last two lemmas show that the ordering of the lists and of the priority queue influence each other. We can combine both statements to obtain the following key lemma for the correctness of the MD algorithm.
Lemma 10. At any point during the MD algorithm, the lists of promising paths are sorted in lexicographically increasing order w.r.t. $c_\pi$, and paths are extracted from the priority queue in lexicographically nondecreasing order w.r.t. $c_\pi$.
Proof.
Lemma 8 trivially holds until the first iteration wherein a list of promising paths gets a second path. Then, Lemma 9 guarantees that this path is inserted correctly into its list, which then satisfies the condition of Lemma 8 to guarantee that in the upcoming iterations, paths are still extracted in lexicographically nondecreasing order w.r.t. $c_\pi$. Repeating these arguments until the last iteration of the MD algorithm proves the lemma. ∎
As a consequence of the last lemma, the permanent paths at every node are also sorted in lexicographically nondecreasing order w.r.t. $c_\pi$. Moreover, since for any two paths $p$ and $q$ ending at the same node $v$ we have $c_\pi(p) \leq_{lex} c_\pi(q) \Leftrightarrow c(p) \leq_{lex} c(q)$ (both costs are shifted by the same potential $\pi(v)$), these sets are also sorted in lexicographically nondecreasing order w.r.t. $c$.
To finally prove the correctness of Algorithm 1, we still need to show that all permanent paths are efficient (Lemma 11) and that the set of permanent paths at $t$ is a minimum complete set of efficient paths at the end of Algorithm 1 (Theorem 12).
Lemma 11. Every path that is extracted from the priority queue during the MD algorithm is an efficient path.
Proof.
We distinguish three cases:

Suppose the extracted path $p$ is dominated by a path $q$ that was previously added to the permanent sets. We show that in this case, $p$ would not be extracted from the priority queue. By Lemma 10, we know that $c_\pi(q) \leq_{lex} c_\pi(p)$, since $q$ was extracted from the queue, and thus made permanent, before $p$ was extracted. Since the queue contains at most one path per node at a time, there is no queued path ending at $p$’s last node directly after $q$ is extracted. In particular, $p$ is inserted into the queue by the propagate procedure or by the nextCandidatePath search after $q$ is made permanent. But both insertions only happen if the dominance check against the permanent paths succeeds, i.e., if $p$ is not dominated by a permanent path. Hence, the dominated path $p$ would not be inserted into the queue and thus also not extracted from it.

Suppose there exists an unexplored path $q$ that dominates $p$. Then, we also have $c_\pi(q) \leq_{lex} c_\pi(p)$. By Lemma 6, each subpath of $q$ is lexicographically smaller w.r.t. $c_\pi$ than $q$. $q$ is unexplored because the extension of one of its subpaths along an arc of $q$ was discarded. This could happen if the subpath is dominated by some permanent path, but due to subpath optimality this would contradict $q$ being efficient. The other reason for which the subpath might be ignored is that it does not fulfill the pruning conditions checked in the nextCandidatePath and propagate procedures. However, since $q$ dominates $p$, any bound that prunes a subpath of $q$ applies to $p$ as well. Thus, $p$ would also violate the pruning conditions and would not have entered the priority queue prior to its extraction.

By Lemma 10, all paths extracted after $p$ are lexicographically greater w.r.t. $c_\pi$ and thus also w.r.t. $c$. Hence, no such path can dominate $p$.
Thus, any path that is extracted from the priority queue is an efficient path. ∎
Theorem 12.
At the end of Algorithm 1, the set of permanent paths at the target node $t$ is a minimum complete set of efficient paths.
Proof.
By Lemma 11 it is clear that the set of permanent paths at $t$ contains only efficient paths. Let $y$ be a nondominated point.
It is easy to see that the set does not contain two paths $p$ and $q$ with $c(p) = c(q)$. W.l.o.g., assume $p$ to be added to the set first. Then, whenever $q$ is processed as a candidate to enter the priority queue, the dominance checks in the propagate procedure or in the nextCandidatePath search would fail, since the relation $\preceq$ from Definition 7 is reflexive and thus discards equal-cost paths. Thus, $q$ cannot be inserted into the set.
Now assume there is no permanent path $p$ at $t$ such that $c(p) = y$. Since $y$ is a nondominated point, there exists at least one efficient $s$-$t$ path $q$ with costs $y$. Then, at the end of Algorithm 1, there is exactly one arc $a = (u, v)$ along $q$ such that the $s$-$u$ subpath of $q$ is permanent and there is no permanent path at $v$ with costs $c(q_v)$, where $q_v$ is the $s$-$v$ subpath of $q$. The extreme case would be $v = t$.
Let $q$ be an efficient $s$-$t$ path with $c(q) = y$ and $q_v$ its $s$-$v$ subpath as above. Since the node potentials are admissible, we have $c_\pi(q_v) = c(q_v) + \pi(v) \leq c(q_v) + c(q_{vt}) = c(q) = y$, where $q_{vt}$ denotes the $v$-$t$ subpath of $q$. Thus, since $y$ is a nondominated point, it must be true that $c_\pi(q_v) \leq y < ub$. This implies that the pruning conditions checked in the propagate procedure and in the nextCandidatePath search will not discard $q_v$ as a candidate to enter the priority queue. Finally, since $q$ is an efficient path, $q_v$ is also an efficient path by subpath optimality. As a consequence of Lemma 11, the dominance checks in the propagate procedure and in the nextCandidatePath search will not prevent $q_v$ from being processed unless there is a different permanent path at $v$ with equivalent costs. In any case, this is a contradiction to the choice of the arc $a = (u, v)$ described earlier. ∎
3.4 The complexity of the searches for next candidate paths
Let $n$ and $m$ be the number of nodes and arcs in $G$, respectively. Additionally, we set $N$ to be the total number of paths extracted during the MD algorithm and $N_{\max}$ the maximum cardinality of a list of permanent paths at the end of the algorithm.
In the original MDA (cf. [12], [7]), the propagate and nextCandidatePath routines worked slightly differently than in this paper. There were no lists of promising paths for every arc. Thus, in propagate, a newly explored path that would not be added to the priority queue, because a lexicographically smaller path was already in it, was simply discarded. However, it was not discarded forever: the next time a path ending at the same node was extracted from the queue, the original nextCandidatePath procedure would iterate over the permanent paths of the node’s predecessors and expand these paths along the node’s incoming arcs. During these expansions, the discarded path would be rebuilt and reconsidered as a candidate path to enter the queue.
The MDA worked correctly as described in the last paragraph. For every arc, it stored an index that told the nextCandidatePath searches at which path in the list of permanent paths to start looking for a new path. In the new version of the search, this index is equivalent to the beginning of the list of promising paths via the arc, since we delete dominated paths from the front of these lists in the nextCandidatePath procedure.
Both methods differ regarding their space complexity. Storing the indices is done in $\mathcal{O}(m)$ space. As already mentioned, the lists of promising paths avoid having to repeat the extension of explored paths from their permanent predecessor paths. Thus, for every node $v$, we do not only store its permanent paths: for each outgoing arc of $v$, the corresponding list stores the expansions of the permanent paths along that arc that are still nondominated. In the worst case, these are the expansions of all permanent paths at $v$. Thus, if $N_{\max}$ is the maximum cardinality of a list of permanent paths, the lists require an additional storage space of $\mathcal{O}(m N_{\max})$.
Using the lists does not impact the running time bounds of the original MDA or of the MD. In the original MDA, the explored paths need to be expanded from the permanent predecessor paths in every call to nextCandidatePath; the sums required to build the new cost vectors can be neglected when analyzing the running time. The lists need to remain sorted, but as shown in Lemma 9, their ordering can be maintained by always adding new paths either at the beginning or at the end of the lists. Thus, there is no extra computational effort required to keep the lists sorted.
In contrast to the One-to-All case, it is not possible to design an algorithm for the One-to-One MOSP Problem that runs in polynomial time w.r.t. the input size and the output size of any given instance (cf. [1]). Still, it is worth noting that the running time bounds derived in [12] and [7] for the MDA, both in the biobjective case ($d=2$) and in the multiobjective case ($d>2$), remain valid for the new MD despite the increased memory consumption caused by the lists.
3.5 Obtaining Monotone Heuristics and Bounds on the Paths’ Costs
The MD algorithm requires a monotone node potential $\pi$ and an upper bound $ub$ as part of its input. In this section we show how to obtain both given a One-to-One MOSP instance with $d$-dimensional arc costs.
For a set $Y \subseteq \mathbb{R}^d$, let $Y_N$ be the set of nondominated vectors in $Y$. Then, the ideal point of $Y$, defined by $y^{I}_i := \min\{y_i \mid y \in Y_N\}$ for $i \in \{1,\dots,d\}$,
and the nadir point of $Y$, defined by $y^{N}_i := \max\{y_i \mid y \in Y_N\}$ for $i \in \{1,\dots,d\}$,
are a lower and an upper bound on the values of the efficient vectors in $Y$, respectively (cf. [4]).
Definition 13 (Set of Preprocessing Orders).
A set $O$ is called a set of preprocessing orders if

all elements in $O$ are permutations of $\{1,\dots,d\}$, and

for every $i \in \{1,\dots,d\}$ there is exactly one permutation vector $o \in O$ s.t. $o_1 = i$.
For any $v \in V$, we denote the set of efficient $v$-$t$ paths in $G$ by $\mathcal{E}_v$. Moreover, given a permutation $o$ of $\{1,\dots,d\}$, we call a run of the classical one-to-all Dijkstra algorithm (cf. [2]) on the reversed digraph of $G$ rooted at $t$ a lexicographic $t$-to-all Dijkstra query if it processes the explored paths in lexicographic order w.r.t. $o$.
Theorem 14.
Given a One-to-One MOSP instance and a set of preprocessing orders $O$, the ideal points of all sets $c(\mathcal{E}_v)$, $v \in V$, can be computed by running a lexicographic $t$-to-all Dijkstra query for every $o \in O$.
Proof.
For any $v \in V$ and $o \in O$, let $y^{v,o}$ be the cost vector of the lexicographically minimal $v$-$t$ path found during the corresponding lexicographic $t$-to-all Dijkstra query. Since this query minimizes the costs of a path according to the first cost component in $o$, $y^{v,o}$ is the cost vector of a cost-minimal $v$-$t$ path w.r.t. the component $o_1$. Since $O$ is a set of preprocessing orders, the lexicographic Dijkstra queries compute one cost-minimal path w.r.t. each cost dimension. Arranging the cost components correctly, we can construct the ideal point of $c(\mathcal{E}_v)$ for all $v \in V$ after the $d$ lexicographic Dijkstra queries. ∎
Example 15.
Table 1 shows how the ideal point for a node $v$ is built after running the lexicographic Dijkstra queries, one for each element of a set of preprocessing orders $O$.
The following statement makes clear why we need the ideal points of the sets $c(\mathcal{E}_v)$.
Proposition 16.
The potential $\pi$ that assigns to every node $v \in V$ the ideal point of $c(\mathcal{E}_v)$ is a monotone heuristic.
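The preprocessing phase can be sketched end to end in Python. This is an illustrative implementation under our own naming: a standard Dijkstra with lexicographic tuple keys on the reversed digraph, repeated once per cost dimension using cyclic preprocessing orders, and the per-component minima assembled into the potential:

```python
import heapq

def lex_dijkstra_to_all(rev_adj, t, order):
    """t-to-all lexicographic Dijkstra on the reversed digraph:
    rev_adj[v] = list of (u, cost_tuple) representing an original arc (u, v).
    Returns, per node v, the cost vector of a lex-minimal v-t path w.r.t. order."""
    def key(c):
        return tuple(c[i] for i in order)  # compare components in query order
    dist = {t: (0,) * len(order)}
    H = [(key(dist[t]), t)]
    done = set()
    while H:
        _, v = heapq.heappop(H)
        if v in done:
            continue
        done.add(v)
        for u, c in rev_adj[v]:
            cand = tuple(x + y for x, y in zip(dist[v], c))
            if u not in dist or key(cand) < key(dist[u]):
                dist[u] = cand
                heapq.heappush(H, (key(cand), u))
    return dist

def ideal_point_potential(rev_adj, nodes, t, d):
    """pi(v): component i is taken from the query whose order starts with i
    (here, the cyclic orders (i, i+1, ..., i-1) form a set of preprocessing orders)."""
    pi = {v: [0] * d for v in nodes}
    for i in range(d):
        order = [(i + k) % d for k in range(d)]
        dist = lex_dijkstra_to_all(rev_adj, t, order)
        for v in nodes:
            if v in dist:
                pi[v][i] = dist[v][i]
    return {v: tuple(p) for v, p in pi.items()}
```

On a toy graph with arcs $s \to a$ of cost (1,5), $a \to t$ of cost (1,1), and $s \to t$ of cost (3,2), the sketch yields $\pi(s) = (2,2)$, $\pi(a) = (1,1)$, $\pi(t) = (0,0)$, and one can check the monotonicity condition $\pi(u) \leq c(a) + \pi(v)$ on every arc.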
Computing the nadir points of the sets $c(\mathcal{E}_v)$ cannot be done with only $d$ lexicographic Dijkstra queries. Instead, a superpolynomial number of such queries would be needed. We want to avoid this superpolynomial effort in a preprocessing phase of the MD algorithm. However, the queries run to obtain the ideal points of the sets $c(\mathcal{E}_v)$ can also be used to construct a feasible upper bound on the costs of the efficient paths in $G$.
Proposition 17 ([4], Section 2.2).
The point $y$ with components $y_i := \max_{o \in O} y^{s,o}_i$, $i \in \{1,\dots,d\}$, where $y^{s,o}$ is the cost vector of the lexicographically minimal $s$-$t$ path w.r.t. $o$,
is a feasible upper bound on the costs of the efficient $s$-$t$ paths in $G$.
Following the proposition, we can set $ub := y + (1,\dots,1)$ to obtain the desired upper bound on the set of efficient paths. The following example explains why we need to increase every dimension of $y$ by one to obtain a feasible strict upper bound.
Example 18 (The bound $y$ can be tight).
Suppose the graph only contains the nodes $s$ and $t$ and an arc connecting both. Regardless of the chosen set of preprocessing orders $O$, we have that $y$ equals the cost vector of this arc, which is the cost of the only efficient path in $G$. Hence, using $y$ itself as a strict upper bound would prune the only efficient path.
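Assuming the reading of Proposition 17 above, i.e., the bound is the componentwise maximum over the queries’ optimal $s$-$t$ cost vectors, the strict upper bound can be sketched in a few lines (the function name is our own):

```python
def strict_upper_bound(lex_opt_costs_at_s):
    """Componentwise maximum of the lexicographically optimal s-t cost vectors
    (one per preprocessing order), plus one in every dimension, so that an
    efficient path whose costs attain the maximum is not pruned."""
    return tuple(max(component) + 1 for component in zip(*lex_opt_costs_at_s))
```

For instance, with optimal cost vectors (2, 6) and (3, 2) at $s$, the sketch returns (4, 7).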
All in all, we have shown that $d$ lexicographic Dijkstra queries in a preprocessing phase of the MD algorithm are enough to compute a monotone heuristic $\pi$ and a feasible strict upper bound $ub$ for any One-to-One MOSP instance. Both are used in the pruning steps of the propagate and nextCandidatePath procedures to discard provably irrelevant paths before they are further expanded.
4 Experiments
4.1 Benchmark Algorithms
In Section 4.3, we test the practical performance of the MD algorithm against the Multiobjective Dijkstra Algorithm (MDA) introduced in [7]. The MDA is originally a one-to-all MOSP algorithm; using some pruning techniques, it is adapted to perform well in the One-to-One scenario. As discussed in Section 3.4, there are two ways of managing the lists of explored paths used in the nextCandidatePath searches. We chose to implement both algorithms using the lists introduced in this paper. Even though the required space complexity is much higher in theory, the additional memory consumption in our implementation was not remarkable, and using the lists made both algorithms faster, which motivated our decision.
The algorithms are implemented in C++. We used the gcc 7.5.0 compiler with the optimization flag -O3 to build the binaries. The experiments were performed on a computer with an Intel Xeon E5-2670 v2 2.50GHz processor. The time limit for every instance was set to 7,200 seconds.
4.2 Instance Description
NetMaker Instances
NetMaker graphs are synthetic graphs with 5,000 to 30,000 nodes. In every such graph, all nodes are connected via a Hamiltonian cycle to ensure connectivity. Then, shortcuts between the nodes along the cycle are added randomly. Arcs have 3-dimensional costs, and the three cost components of every arc are drawn randomly from three different ranges. These graphs have been used in many publications, e.g., [13], [10], and [11]. We refer to them for more details about the generation of costs and arcs. We group the NetMaker graphs by the number of nodes of the graphs. Within each group, graphs have varying densities, i.e., different numbers of arcs. We use the same $s$-$t$ pairs as those used in [7]. This means that for each graph, we consider randomly generated $s$-$t$ pairs. A group of graphs in which all graphs have $n$ nodes is referred to as NetMaker $n$.
Road Networks
We use the well-known road networks of (parts of) the United States available via the DIMACS Implementation Challenge on Shortest Paths. The original instances are two-dimensional: each arc in the network represents a street, and the costs of the arc are the street’s length and traversal time. We add a third cost component that is equal to one on every arc. In Table 2 we list the sizes of the used networks. For every network we utilize randomly generated $s$-$t$ pairs.
Instance Name  Nodes  Arcs  Considered pairs 
NY  20  
BAY  
COL  
FLA  
NE  
LKS 
4.3 MDA vs MDA*
Like the MD, the MDA also tries to discard provably irrelevant paths early during the algorithm. To do so, it uses the upper bound $ub$ and the front of already found efficient $s$-$t$ paths. The pruning using the upper bound, as done in the propagate and nextCandidatePath procedures, is usually not very aggressive, since the upper bound is weak in most practical instances. Clearly, as long as the set of permanent $s$-$t$ paths remains empty, it cannot be used to discard paths. Hence, to efficiently discard irrelevant paths, it is crucial that this set starts to fill as early as possible. Any path therein provides a better bound than $ub$. This is precisely what the monotone heuristic is used for: in the MD, the first $s$-$t$ paths are extracted and added to the set of permanent paths much earlier than in the MDA because the modified path costs guide the search towards the target. This general remark explains the superiority of the MD algorithm over the MDA in our experiments.
Table 3 shows an overview of the results obtained on the NetMaker instances. The size of the solution set can be used as a guideline to understand the size of the considered instances. Both algorithms managed to solve all instances within the time limit. The implementation of the MDA has improved compared to the one used in [7]: there, the MDA solved all instances from these groups but was slower than the current MDA implementation, even though a faster processor was used. Despite this improvement, the MD is between two and four times faster on average than the MDA. The speedup increases with the size of the networks. On the biggest instances from the NetMaker 25000 and NetMaker 30000 groups, the maximum speedup is more than two orders of magnitude. Figure 1 to Figure 6 show how the running times of both algorithms behave in each group of NetMaker instances.
Group  MDA solved  MD solved    Solution set size  MDA time [s]  MD time [s]  Speedup
NetMaker 5000  240  240  Min.  2.00  0.00  0.00  0.64  
Avg.  1142.58  3.08  1.43  2.16  
Max.  7730.00  291.22  177.95  9.17  
NetMaker 10000  220  220  Min.  11.00  0.00  0.00  0.60  
Avg.  1477.02  8.46  2.86  2.95  
Max.  6679.00  980.52  630.29  44.29  
NetMaker 15000  240  240  Min.  86.00  0.01  0.00  0.53  
Avg.  1726.07  19.22  5.95  3.23  
Max.  7227.00  2855.07  2371.97  58.13  
NetMaker 20000  240  240  Min.  39.00  0.00  0.01  0.51  
Avg.  1890.36  30.18  10.14  2.98  
Max.  8907.00  4025.96  2601.10  27.14  
NetMaker 25000  200  200  Min.  13.00  0.00  0.00  0.73  
Avg.  1830.56  33.09  8.97  3.69  
Max.  9584.00  5620.83  4256.61  100.73  
NetMaker 30000  240  240  Min.  36.00  0.01  0.01  1.08  
Avg.  2091.42  49.31  12.70  3.88  
Max.  11566.00  6952.06  2664.96  160.67 
In Table 4 we summarize the comparison of the MDA and the MD algorithm on the considered road networks. If one of the algorithms does not solve an instance within the time limit, we set the corresponding solution time to the time limit. Moreover, we only consider instances that were solved by at least one algorithm. As the size of the instances grows, the difficulty of both algorithms to solve a considerable percentage of the instances becomes apparent. On all considered networks, the MD solves at least as many instances as the MDA. On the smaller NY, BAY, and COL networks, the MD solves almost every instance, and the average speedup w.r.t. the MDA is more than one order of magnitude. In Figure 7 to Figure 9 it becomes clear that the speedup increases as the number of computed solutions grows. On the FLA, NE, and LKS networks, the MD solves at most half of the considered instances; the MDA solves even fewer. The achieved average speedups are always greater than two orders of magnitude, and a speedup of three orders of magnitude is common on the bigger instances.
Instances | MDA solved | MD solved | Stat. | Sol. set size | MDA time [s] | MD time [s] | Speedup
NY        | 12         | 19        | Min.  |      12.00    |      0.02    |      0.01   |     1.07
          |            |           | Avg.  |    6380.47    |    150.22    |      7.87   |    19.10
          |            |           | Max.  |   33536.00    |   7200.00    |   6729.78   |   503.63
BAY       | 13         | 19        | Min.  |     138.00    |      1.22    |      0.03   |     3.35
          |            |           | Avg.  |    5378.32    |    423.43    |     10.88   |    38.90
          |            |           | Max.  |   15702.00    |   7200.00    |   2147.32   |   671.97
COL       | 11         | 14        | Min.  |      36.00    |      0.04    |      0.02   |     1.77
          |            |           | Avg.  |    4497.43    |     53.43    |      1.22   |    43.78
          |            |           | Max.  |   36616.00    |   7200.00    |   3919.84   | 10825.18
FLA       | 2          | 10        | Min.  |    1274.00    |     34.58    |      1.16   |     1.83
          |            |           | Avg.  |    5754.30    |   3231.27    |     45.77   |    70.60
          |            |           | Max.  |   18513.00    |   7200.00    |   3925.29   |  6228.37
NE        | 5          | 8         | Min.  |     266.00    |      2.39    |      0.11   |     2.17
          |            |           | Avg.  |    6764.50    |    184.05    |     13.22   |    13.92
          |            |           | Max.  |   21757.00    |   7200.00    |   3312.57   |   139.53
LKS       | 2          | 4         | Min.  |     173.00    |      0.69    |      0.16   |     3.93
          |            |           | Avg.  |    5536.00    |    242.53    |     13.58   |    17.85
          |            |           | Max.  |   16164.00    |   7200.00    |   1831.39   |    93.73
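The aggregation rule described above (clipping unsolved runs to the time limit and discarding instances that neither algorithm solved) can be sketched as follows. This is a minimal illustration in Python, not the evaluation script used for the experiments; the timing values are made up for the example:

```python
TIME_LIMIT = 7200.0  # time limit in seconds, as used in the experiments


def aggregate(mda_times, md_times, limit=TIME_LIMIT):
    """Return (min, avg, max) speedups of MD over MDA.

    Unsolved runs are encoded as None and clipped to the time limit;
    instances solved by neither algorithm are excluded entirely.
    """
    speedups = []
    for t_mda, t_md in zip(mda_times, md_times):
        if t_mda is None and t_md is None:
            continue  # solved by neither algorithm: not considered
        t_mda = limit if t_mda is None else t_mda
        t_md = limit if t_md is None else t_md
        speedups.append(t_mda / t_md)
    return min(speedups), sum(speedups) / len(speedups), max(speedups)


# Illustrative data: three instances; the second is unsolved by the MDA,
# the third is unsolved by both algorithms.
mda = [10.0, None, None]
md = [2.0, 36.0, None]
lo, avg, hi = aggregate(mda, md)  # speedups: [5.0, 200.0]
```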
5 Conclusion
We introduced the Multiobjective Dijkstra (MD), a new algorithm for the One-to-One Multiobjective Shortest Path (MOSP) Problem. The MD is a variant of the Multiobjective Dijkstra Algorithm (MDA) that guides the search towards the target using a monotone heuristic. The underlying idea is similar to the one used in the design of the A* algorithm for the classical One-to-One Shortest Path Problem. The monotone heuristic needed by the MD can be computed in the same preprocessing phase that was already applied to the MDA in its original publication; thus, in this paper we could focus on comparing solution times only. To the best of our knowledge, this is the first time that large-scale computational experiments (e.g., on road networks) with A*-like algorithms for MOSP problems with more than two objectives have been conducted. Our experiments show that the MD clearly outperforms the MDA. On the synthetic NetMaker graphs, the average speedup was around 3. On the road network instances that were solvable by both algorithms, the average speedup was around an order of magnitude.
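To illustrate the guidance idea, the following sketch (our own minimal Python rendition, not the authors' implementation; all function names are ours) computes a componentwise lower-bound potential by running one one-to-all Dijkstra search per objective on the reversed graph. Because these heuristic values are exact single-objective shortest-path distances to the target, they underestimate the cost of any path to the target in every component, which is what the MD's monotone heuristic requires:

```python
import heapq


def reverse_dijkstra(arcs, n, target, obj):
    """One-to-all Dijkstra on the reversed graph for a single objective.

    dist[v] is the cheapest cost (in objective `obj`) of any v-target path
    and hence a valid lower bound for the heuristic at v.
    """
    INF = float("inf")
    # Reversed adjacency: an arc (u, v, cost) is stored at its head v.
    radj = [[] for _ in range(n)]
    for u, v, cost in arcs:
        radj[v].append((u, cost[obj]))
    dist = [INF] * n
    dist[target] = 0.0
    heap = [(0.0, target)]
    while heap:
        d, v = heapq.heappop(heap)
        if d > dist[v]:
            continue  # stale heap entry
        for u, w in radj[v]:
            if d + w < dist[u]:
                dist[u] = d + w
                heapq.heappush(heap, (dist[u], u))
    return dist


def ideal_point_heuristic(arcs, n, target, dim):
    """Componentwise (feasible) heuristic: one reverse Dijkstra per objective."""
    per_obj = [reverse_dijkstra(arcs, n, target, k) for k in range(dim)]
    return [tuple(per_obj[k][v] for k in range(dim)) for v in range(n)]


# Toy instance: arcs given as (tail, head, (cost_1, cost_2)).
arcs = [(0, 1, (1, 4)), (0, 2, (3, 1)), (1, 3, (1, 4)), (2, 3, (3, 1))]
h = ideal_point_heuristic(arcs, n=4, target=3, dim=2)
# A guided search would then key a label with cost vector c at node v by
# tuple(ci + hi for ci, hi in zip(c, h[v])) instead of by c alone.
```

This per-objective preprocessing matches the kind of one-to-all searches on the reversed graph mentioned above; how the resulting potential is folded into the label ordering is specific to the MD and omitted here.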
References

[1] (2018) Output-sensitive complexity of multiobjective combinatorial optimization with an application to the multiobjective shortest path problem. Ph.D. Thesis, Technische Universität Dortmund.
[2] (1959) A note on two problems in connexion with graphs. Numerische Mathematik 1 (1), pp. 269–271.
[3] (2015) An exact method for the biobjective shortest path problem for large-scale road networks. European Journal of Operational Research 242 (3), pp. 788–797.
[4] (2005) Multicriteria optimization. Springer-Verlag.
[5] (2010) Multiobjective A* search with consistent heuristics. Journal of the ACM 57 (5), pp. 1–25.
[6] (2005) A new approach to multiobjective A* search. In IJCAI.
[7] (2021) An improved multiobjective shortest path algorithm. Computers & Operations Research, 105424.
[8] (1984) On a multicriteria shortest path problem. European Journal of Operational Research 16 (2), pp. 236–245.
[9] (2014) Multiobjective shortest path problems with lexicographic goal-based preferences. European Journal of Operational Research 239 (1), pp. 89–101.
[10] (2009) A comparison of solution strategies for biobjective shortest path problems. Computers & Operations Research 36 (4), pp. 1299–1331.
[11] (2018) Extensions of labeling algorithms for multiobjective uncertain shortest path problems. Networks 72 (1), pp. 84–127.
[12] (2019) A biobjective Dijkstra algorithm. European Journal of Operational Research 276 (1), pp. 106–118.
[13] (2000) A label correcting approach for solving bicriterion shortest-path problems. Computers & Operations Research 27 (6), pp. 507–524.