Introducing meta-reasoning techniques into search is a research direction that has recently proved useful for many search algorithms. All search algorithms have decision points on how to continue the search. Traditionally, tailored rules for these decisions are hard-coded into the algorithms. However, applying meta-reasoning techniques based on value of information or other ideas can significantly speed up the search. This was shown for depth-first search in CSPs, for Monte-Carlo tree search, and recently for A*. In this paper we apply meta-reasoning techniques to speed up IDA* when several admissible heuristics are available.
IDA* is a linear-space simulation of A*. Thus it makes sense to examine how such a speed-up was achieved for A* in a similar context, as was done in Lazy A* (LA* for short) by reducing the time spent on computing heuristics. A* is a best-first heuristic search algorithm guided by the cost function f(n) = g(n) + h(n). A* maintains OPEN and CLOSED lists and always expands the minimal-cost node from OPEN, generates its children, and moves the expanded node to CLOSED. When more than one admissible heuristic is available, one can clearly evaluate all these heuristics and use their maximum as an admissible heuristic. The problem with this naive maximization is that all the heuristics are computed for all generated nodes, resulting in increased overhead, which can be overcome as follows.
With two (or more) admissible heuristics, when a node n is generated, Lazy A* computes only one heuristic, h1(n), and adds n to OPEN. Only when n re-emerges at the top of OPEN is the other heuristic, h2(n), evaluated; if g(n) + h2(n) exceeds n's current f-value, n is re-inserted into OPEN with the updated value. If the goal is reached before n's re-emergence, the computation of h2(n) is never performed, thereby saving time, especially if h2 is computationally heavy. In Rational Lazy A* (RLA*), the ideas of lazy heuristic evaluation and of trading additional node expansions for decreased heuristic computation time were combined. RLA* is based on rational meta-reasoning, and uses a myopic regret criterion to decide whether to compute h2(n) or to bypass that computation and expand n immediately when it re-emerges from OPEN. RLA* aims at reduced search time, even at the expense of more node expansions than LA*.
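The lazy-evaluation loop described above can be sketched as follows (a minimal sketch, not the paper's implementation; the interface, with a successor function succ yielding (child, cost) pairs and heuristic callables h1 and h2, is an assumption):

```python
import heapq, itertools

def lazy_astar(start, goal_test, succ, h1, h2):
    # Sketch of Lazy A*: on generation only the cheap heuristic h1 is
    # computed; the expensive h2 is evaluated only when a node first
    # re-emerges at the top of OPEN.
    tie = itertools.count()                      # heap tie-breaker
    open_list = [(h1(start), next(tie), 0, start, False)]
    closed = set()
    while open_list:
        f, _, g, n, h2_done = heapq.heappop(open_list)
        if n in closed:
            continue
        if not h2_done:
            f2 = max(f, g + h2(n))               # lazy evaluation of h2
            if f2 > f:                           # estimate rose: re-insert, delay expansion
                heapq.heappush(open_list, (f2, next(tie), g, n, True))
                continue
        if goal_test(n):
            return g                             # cost of an optimal solution
        closed.add(n)
        for m, c in succ(n):
            if m not in closed:
                heapq.heappush(open_list, (g + c + h1(m), next(tie), g + c, m, False))
    return None
```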
The memory consumption of A* is linear in the number of generated nodes, which is typically exponential in the size of the problem description and may thus be unacceptable. In contrast to A*, IDA* is a linear-space algorithm which emulates A* by performing a series of depth-first searches from the root, each with an increasing cost threshold, thus re-expanding nodes multiple times. IDA* is typically used in domains and problem instances where A* requires more than the available memory and thus cannot be run to completion. If the heuristic is admissible (never overestimates the real cost to the goal), then the set of nodes expanded by A* is both necessary and sufficient to find the optimal path to the goal. Similar guarantees hold for IDA* under some additional reasonable assumptions. Thus, techniques used to develop RLA* can in principle be applied to IDA*, the focus of this paper. However, IDA* has a different logical structure and needs a completely different treatment. In particular, in A* one needs to assign an f-value to each generated node, while in IDA* one only needs to know whether the f-value is below or above the current threshold.
The first thing to consider for IDA* is lazy evaluation of the heuristics. In order to reduce the time spent on heuristic computations, Lazy IDA* evaluates the heuristics one at a time, lazily. When h1 causes a cutoff, there is no need to evaluate h2. Unlike Lazy A*, where lazy evaluation incurs an overhead (re-inserting a node into the OPEN list), Lazy IDA* (LIDA*) is straightforward and has no immediate overhead.
The main contribution of this paper is Rational Lazy IDA* (RLIDA*), which uses meta-reasoning at runtime. (This paper is an extended version of a short (2-page) paper to appear in the ECAI 2014 proceedings; in addition to containing all the analysis that could not fit into the short version, it includes additional experimental results and a comparison to additional related work.) We analyze IDA* and provide a criterion, based on a myopic expected regret, which decides whether to evaluate a heuristic or to bypass that evaluation and expand the node right away. We then provide experimental results on sliding-tile puzzles and on the container relocation problem, showing that RLIDA* outperforms both IDA* and LIDA*.
2 Lazy IDA*
We begin by describing IDA*, and the minor change needed to make it use the heuristics lazily, thus implementing lazy IDA*.
Throughout this paper we assume for clarity that we have two available admissible heuristics, h1 and h2. Unless stated otherwise, we assume that h1 is faster to compute than h2 but that h2 is weakly more informed, i.e., h1(n) < h2(n) for the majority of the nodes n, although counter cases where h1(n) > h2(n) are possible. We say that h2 dominates h1 if such counter cases do not exist and h2(n) >= h1(n) for all nodes n. We use f1 to denote g + h1, and f2 to denote g + h2. We denote the cost of the optimal solution by C*. Additionally, we denote the computation times of h1 and of h2 by t1 and t2, respectively. Unless stated otherwise, we assume that t2 is much greater than t1. We thus mainly aim to reduce the number of times h2 is computed.
2.2 Why use lazy IDA*?
Let T be the IDA* threshold. After f1(n) is evaluated, if f1(n) > T, then n is pruned and IDA* backtracks to n's parent. Given both h1 and h2, a naive implementation of IDA* will evaluate both and use their maximum in comparing against T. Lazy IDA* (LIDA*) is based on the simple fact that in an or-condition of the form (A or B), if A is true then B becomes irrelevant (a don't-care) and need not be computed, as the entire condition is surely true. In the context of IDA*, if f1(n) > T then the search can backtrack without the need to compute h2(n). This simple observation is probably recognized by most implementers of IDA*. Thus, it is likely that LIDA* is a common way to implement IDA* when more than one heuristic is present.
The pseudo-code for LIDA* is depicted as Algorithm 1. In lines 13-14 we check whether f1(n) is already above the threshold, in which case the search backtracks. h2(n) is only calculated (in lines 15-16) if f1(n) <= T. The "optional condition" in line 1 is needed for the Rational Lazy IDA* algorithm, described below, which entails adding appropriate conditions that aim at computing h2 only if its usefulness outweighs its computational overhead on average. In the standard version of Lazy IDA*, the "optional condition" in line 1 is always true, and the respective heuristics are always evaluated at this juncture. We also note that lines 9-10 are needed to ensure that the goal test at lines 11-12 only returns an optimal solution. This check is particularly needed for Rational Lazy IDA*, as described below.
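The control flow described above can be sketched as follows (our own reconstruction with a hypothetical interface, not the paper's Algorithm 1; the g <= bound guard on the goal test mirrors the role of lines 9-12):

```python
import math

def lida_star(root, goal_test, succ, h1, h2):
    # Sketch of Lazy IDA*: h2 is evaluated only when the cheap h1 fails
    # to prune; in RLIDA* the call to h2 would additionally be guarded
    # by the "optional condition" discussed in the text.
    def dfs(n, g, bound):
        # Returns (solution cost or None, smallest f-value exceeding the bound).
        f1 = g + h1(n)
        if f1 > bound:                    # cheap cutoff: h2 is never computed here
            return None, f1
        f2 = g + h2(n)                    # lazy second heuristic
        if f2 > bound:
            return None, f2
        if g <= bound and goal_test(n):   # g-check keeps the returned solution optimal
            return g, math.inf
        nxt = math.inf
        for m, c in succ(n):
            sol, t = dfs(m, g + c, bound)
            if sol is not None:
                return sol, math.inf
            nxt = min(nxt, t)
        return None, nxt

    bound = max(h1(root), 0)
    while True:
        sol, t = dfs(root, 0, bound)
        if sol is not None:
            return sol
        if t == math.inf:
            return None                   # search space exhausted, no solution
        bound = t                         # next threshold: smallest pruned f-value
```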
2.3 Issues in Lazy IDA*
Several additional obvious improvements to LIDA* are possible. Here we examine some such potential enhancements, as well as possible pitfalls.
2.3.1 Heuristic bypassing
Heuristic bypassing (HBP) is a technique that in many cases allows bypassing the computation of a given heuristic without causing any other change in the course of the algorithm. In A* one needs to compute an f-value for each node, while in IDA* one only needs to know whether the f-value is below or above the threshold. First, it is important to note that Lazy IDA*, as described above, is a special case of HBP: when f1(n) > T there is no need to consult h2 and we bypass the computation of h2(n). Another variant of HBP for LIDA* is applicable for a node n under the following two preconditions: (1) the operator between n and its parent p is bidirectional, and (2) both heuristics are consistent. Suppose that node n was generated, that p is the parent of n, that the cost of the edge between them is c, and that f1(p) + 2c <= T. Since p was expanded, we know that f1(p) <= T. Since h1 is consistent, we know that h1(n) <= h1(p) + c, and therefore f1(n) <= f1(p) + 2c <= T. Thus, in such cases, one can skip the computation of h1(n) and go directly to h2(n). Nevertheless, the savings here are negligible, as we assumed that t2 is much greater than t1, and our aim is thus to decrease the number of times h2 is computed. We also note that HBP needs additional effort for bookkeeping.
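The bypass condition can be captured in a one-line test (a sketch under the stated preconditions of a bidirectional operator and a consistent h1; the function name is ours):

```python
def h1_may_prune(f1_parent, edge_cost, bound):
    # HBP test (sketch): for a consistent h1 and a bidirectional edge of
    # cost c, |h1(p) - h1(n)| <= c, hence f1(n) <= f1(p) + 2c.  When even
    # that upper bound is within the threshold, h1(n) cannot cause a
    # cutoff, so its computation can be bypassed and h2 consulted directly.
    return f1_parent + 2 * edge_cost > bound
```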
When the heuristic is inconsistent, a mechanism called bidirectional pathmax (BPMX) can be used to propagate heuristic values from parents to children and vice versa. Exhaustively evaluating all heuristics, even when f1(n) already exceeds the threshold, can potentially help in propagating larger heuristic values to the neighborhood of n. Nevertheless, experiments showed that even in this context, lazy evaluation of the heuristics is faster than exhaustive evaluation.
2.3.2 Extra iterations of Lazy IDA*
In rare cases, LIDA* can cause extra DFS iterations. Suppose that the current threshold is T and that the current value of the next threshold (NT) is T', because some node was seen in the current iteration with an f-value of T'. Now we generate a node n with T < f1(n) < T', and thus set NT = f1(n) and bypass h2(n). However, if f2(n) > T', then consulting h2(n) would have left NT at T'. With LIDA*, we may now start a new and redundant DFS iteration with threshold f1(n).
While Lazy A* is always as informative as A* using the maximum of the heuristics, this is not the case for Lazy IDA*. Nevertheless, since there is potentially an exponential number of nodes in the frontier of a DFS iteration, such scenarios are quite rare, and Lazy IDA* outperforms regular IDA* despite this worst case.
3 Rational Lazy IDA*
A general theory for applying rational meta-reasoning to search algorithms was presented in earlier work. Under the principles of rational meta-reasoning, every algorithm action (heuristic function evaluation, node expansion, open-list operation) should in theory be treated as an action in a sequential decision-making meta-level problem: actions should be chosen so as to achieve the minimal expected search time. However, the resulting general meta-reasoning problem is extremely hard to define precisely and to solve optimally. In order to apply it in practice, specific assumptions and simplifications must be added.
In this paper we focus on just one decision type, made in the context of IDA*: deciding whether to evaluate or to bypass the computation of h2(n). In order to choose rationally, we define a criterion based on the regret for bypassing h2(n) in this context. We define regret here as the value lost (in terms of increased runtime) due to bypassing the computation of h2(n), i.e., how much the runtime increases due to the bypass. We wish to compute h2(n) only if this regret is positive on average. Some of the ideas behind Rational Lazy IDA* are borrowed from Rational Lazy A* (RLA*). However, the assumptions of RLA* are different and cannot be used for IDA*, as they were made under the assumption that there exists an OPEN list and that an f-value should be stored within each node. In contrast, in IDA* there is no OPEN list and we only need to know whether f(n) is below or above the threshold T. Therefore IDA* needs a different treatment.
In IDA*, each iteration is a depth-first search up to a gradually increasing threshold T, until a solution is found. For each node n, we say that evaluating h2(n) is helpful if f2(n) > T. That is, the heuristic helped in the sense that node n is pruned, rather than expanded, in this iteration.
The only addition of Rational Lazy IDA* to Lazy IDA* is the option to bypass the computation of h2(n) (line 1). In this case, n is expanded right away. (It is important to note that in such cases, f(n) might be greater than T. For this reason we added lines 9-10 in the pseudo-code above, to ensure that the solution returned is always optimal.) Suppose that we choose to compute h2(n); this results in one of the following outcomes:
1. h2(n) is not helpful (f2(n) <= T), and n is immediately expanded.
2. h2(n) is helpful (f2(n) > T), pruning n, which is not expanded in the current IDA* iteration.
Observe that computing h2(n) can be beneficial only in outcome 2, with the additional condition that the time saved by pruning the subtree below n outweighs the time needed to compute h2(n), i.e., t2. However, whether outcome 2 will take place at a given state is not known to the algorithm until after h2(n) is computed. The algorithm must decide whether to evaluate h2(n) according to what it believes to be the probability of each outcome. The time wasted by being sub-optimal in deciding whether to evaluate h2(n) is called the regret of the decision. We derive a rational policy for deciding when to evaluate h2(n), under the following assumptions:
1. The decision is made myopically: we work under the belief that the algorithm continues to behave like Lazy IDA* starting with the children of n.
2. h2 is consistent: if evaluating h2 is beneficial at n, it is also beneficial at any successor of n.
3. As a first approximation, we also assume that h1 will not cause pruning in any of the children of n.
If Rational Lazy IDA* is indeed better than Lazy IDA*, the first assumption results in an upper bound on the regret. Note that these meta-reasoning assumptions are made in order to derive decisions and, as is common in research on meta-reasoning, they do not actually hold in practice. Nevertheless, if the violation of the assumptions is not "too severe", the resulting algorithms still show significant improvement. Without such assumptions, the model becomes far too complicated and one cannot move ahead at all. For example, the myopic assumption trivially fails by design: applying it strictly at runtime would mean using the rational decision rule only at the root, which does not make sense in practice. Violating this assumption results in an actual expected runtime that is lower than the one computed under the assumption. As far as we know, the other two simplifying assumptions do not have this nice property, and one would prefer to drop them. This non-trivial issue remains for future research.
If h2(n) is not helpful but we decide to compute it, the effort invested in evaluating h2(n) turns out to be wasted. On the other hand, if h2(n) is helpful but we decide to bypass it, we needlessly expand n. Due to the myopic and other assumptions, Rational Lazy IDA* would then evaluate both h1 and h2 for all children of n, and due to the consistency of h2, these children would not be expanded in this IDA* iteration.
Table 1 summarizes the regret of each possible decision, for each possible future outcome; each column in the table represents a decision, while each row represents a future outcome. In the table, te is the time to evaluate h1 and expand n, and b(n) is the local branching factor at node n (taking parent pruning into account). Computing h2(n) needlessly "wastes" t2 time. Bypassing the computation of h2(n) when it would have been helpful "wastes" te + b(n)(t1 + t2) time, but because computing h2(n) would have cost t2, the regret is te + b(n)(t1 + t2) - t2.
Let us denote the probability that h2(n) is helpful by ph. The expected regret of computing h2(n) is thus (1 - ph) t2. On the other hand, the expected regret of bypassing h2(n) is ph (te + b(n)(t1 + t2) - t2). As we wish to minimize the expected regret, we should thus evaluate h2(n) just when:

    (1 - ph) t2 < ph (te + b(n)(t1 + t2) - t2)    (1)

Rearranging terms, this is equivalent to:

    t2 (1 - ph b(n)) < ph (te + b(n) t1)    (2)
If ph > 1/b(n) (the left side of Equation 2 is negative), then the expected regret is minimized by always evaluating h2(n), regardless of the values of t1, t2, and te. A simple decision rule would be to evaluate h2(n) exactly in these cases.
For ph < 1/b(n), the decision of whether to evaluate h2(n) depends on the values of t1, t2, and te: evaluate h2(n) just when

    t2 < ph (te + b(n) t1) / (1 - ph b(n))    (3)
The right-hand side of Equation 3 depends on the potentially unknown probability ph, making it difficult to reach the optimal decision. However, if our goal is just to do better than Lazy IDA*, then it is safe to replace ph by an upper bound on ph. We discuss this next.
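The resulting decision rule can be sketched as a direct transcription of Equations 2-3 (the function name and parameter order are ours; p_h, t1, t2, t_e, and b are as defined in the text):

```python
def should_evaluate_h2(p_h, t1, t2, t_e, b):
    # Regret-based decision sketch: evaluate h2 iff the expected regret of
    # bypassing it exceeds the expected regret of computing it.
    if p_h * b >= 1:                 # left side of Eq. 2 is non-positive
        return True                  # always evaluate h2 in this regime
    return t2 < p_h * (t_e + b * t1) / (1 - p_h * b)
```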
3.1 Bounding the probability that h2 is helpful
Search time can be saved by evaluating h2 selectively, only at nodes where the probability that the evaluation is helpful is "high enough". In particular, in the case of two heuristics, h1 and h2, the decision whether to evaluate h2(n) can be made based on h1(n) and on the prior history of evaluations of h1 and h2 on the same or "similar" nodes. One can try to estimate ph, either online or offline, in order to use decision boundaries such as Equation 3 based on these empirical frequencies directly.
Nevertheless, we examine another possibility here, based on the rationale that our goal in RLIDA* is to do better than simple LIDA*, and that we wish to trade off computation times "safely", i.e., with little risk of doing worse than LIDA*. One way to estimate the probability that the evaluation is helpful "safely" is to bound this probability using concentration inequalities.
Concentration inequalities bound the probabilities of certain events for a bounded random variable, that is, a variable X for which 0 <= X <= 1; we therefore need to construct such a variable. Let X be:

    X = h1(n) / h2(n)    (4)
It is easy to see that 0 <= X <= 1 (whenever h1(n) <= h2(n)) and that X increases with h1(n). The condition f2(n) > T (i.e., h2 is helpful) is equivalent to the condition X < x0, where:

    x0 = h1(n) / (T - g(n))    (5)
We need to bound the probability that X < x0 given the prior history of evaluations of X (that is, of h1 and h2). Denote by X̄ the average of N such samples:

    X̄ = (1/N) Σ_{i=1..N} X_i    (6)
The probability Pr(X < x0) is at most the probability that the mean E[X] of the random variable is below some value μ, plus the probability that X < x0 given that E[X] >= μ (the union bound):

    Pr(X < x0) <= Pr(E[X] < μ) + Pr(X < x0 | E[X] >= μ)    (7)
Denote δ = X̄ - μ; we will obtain the bound as a function of δ and then select the δ that minimizes the bound. According to the Hoeffding inequality:

    Pr(E[X] < X̄ - δ) <= exp(-2Nδ²)    (8)
and according to the Markov inequality (applied to 1 - X):

    Pr(X < x0 | E[X] >= X̄ - δ) <= (1 - X̄ + δ) / (1 - x0)    (9)
An upper bound for the probability Pr(X < x0) is thus the following function of δ:

    Pr(X < x0) <= exp(-2Nδ²) + (1 - X̄ + δ) / (1 - x0)    (10)
The bound can be minimized over δ by setting the derivative of (10) to zero, but a closed-form solution does not generally exist. However, a reasonable value for δ is easily found. Choosing

    δ = sqrt(ln N / (2N))    (11)
and substituting into (10), we obtain

    p̂h = 1/N + (1 - X̄ + sqrt(ln N / (2N))) / (1 - x0)    (12)
In the bound (12), the second term is tantamount to the Markov inequality when the sample average coincides with the mean; the first term does not depend on x0 and approaches zero as N approaches infinity. Although the concentration inequalities hold for i.i.d. samples, a condition that does not necessarily hold for the heuristic values sampled during the search, the bound is a usable first-order approximation. We use p̂h as defined in Equation 12 as an estimate of an upper bound on ph.
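As an illustration, the estimate of Equation 12 might be computed as follows (a sketch; the function name and the guard clauses for empty histories and for a degenerate x0 are our own additions):

```python
import math

def p_helpful_upper_bound(samples, h1_n, slack):
    # samples: past ratios X_i = h1/h2, each in [0, 1].
    # h1_n: h1(n) at the current node; slack: T - g(n).
    # x0 = h1(n) / (T - g(n)): h2 is helpful iff X < x0 (Section 3.1).
    N = len(samples)
    if N == 0 or slack <= 0:
        return 1.0                       # no evidence: fall back to the trivial bound
    x0 = h1_n / slack
    if x0 >= 1:
        return 1.0                       # Markov term degenerates; assume helpful
    x_bar = sum(samples) / N
    delta = math.sqrt(math.log(N) / (2 * N))
    bound = 1 / N + (1 - x_bar + delta) / (1 - x0)   # Equation 12
    return min(1.0, bound)
```

The returned value can be plugged into the decision rule of Equation 3 in place of the unknown ph.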
4 Empirical evaluation
The greatest advantage of IDA* over A* is its storage complexity. However, IDA* has a number of limitations. First, the number of nodes expanded by IDA* is typically much greater than that of A*, because IDA* is unable to detect transpositions and because every iteration repeats all former iterations. In addition, IDA* performs very poorly if a large number of distinct f-costs below C* is encountered during the search (leading to a large number of iterations), as occurs in domains such as TSP.
Therefore, we selected for empirical evaluation domains that are known to be IDA*-friendly (such as the 15-puzzle), or where recent work has shown IDA* to perform well, such as the container relocation problem. Regretfully, most planning problems (from the planning competitions) used in previous work are inappropriate for IDA* due to multiple transpositions in the search space. Another requirement we had is the availability of known informative admissible heuristics for the domain (otherwise it does not pay to compute them) that are costly to compute (if they are very cheap, we might as well always compute them). In domains where the latter requirements do not hold, elaborate meta-reasoning about whether to compute a heuristic will obviously not achieve any significant improvement.
The above restrictions are obvious limitations to the applicability of the scheme proposed in this paper, and should be considered when trying to apply our methods. Nevertheless, as stated in Section 5.1, our scheme should be extensible to other IDA*-like algorithms where the large number of f-costs is not a problem.
4.1 Sliding tile puzzles
We first provide evaluations on the 15-puzzle and its weighted variant, where the cost of moving each tile is equal to the number on the tile. Note that there is another weighted version of the 15-puzzle in which the cost of a tile move is the reciprocal of the number on the tile. However, the number of possible f-costs below f* in this reciprocal variant is typically very large, so IDA* is expected to perform abysmally, rendering it inapplicable to this domain. Indeed, some preliminary runs confirmed this expectation, and we therefore dropped the reciprocal-weights version from our evaluation.
For consistency of comparison, as test cases for the 15-puzzle we used 98 of Korf's 100 instances: all those solved in less than 20 minutes by standard IDA* with the Manhattan Distance (MD) heuristic. (All experiments were performed in Java, on a 3.3GHz AMD Phenom II X6 1100T processor, with 64-bit Ubuntu 12.04, and with sufficient memory to avoid paging.) As the more informative heuristic we used the linear-conflict heuristic (LC), which adds a value of 2 to MD for each pair of tiles that are in the same row (or the same column) as their respective goal positions but in reversed order; one of these tiles will need to move out of the row (or column) to let the other pass.
Since the runtime of both heuristics is nearly constant across states (i.e., t1(n) ≈ t1 and t2(n) ≈ t2 for some constants t1, t2), it turns out that the decision of whether to compute h2 is stable across a wide range of ph values, and thus a constant value of ph performs well in this domain. Results are presented for an assumed constant ph, estimated offline from trial runs of RLIDA* on a few problem instances. Average results for IDA* with only MD, IDA* with LC, Lazy IDA* using both heuristics, and Rational Lazy IDA* are shown in Table 2. The advantage of Rational Lazy IDA* is evident: even though it expands many more nodes than Lazy IDA*, its runtime is significantly lower, as it saves even more time on evaluations of LC. LIDA* evaluated LC 21,886,093 times, of which only 6,561,972 evaluations were helpful; much time was thus wasted on unhelpful evaluations. In contrast, RLIDA* chose to evaluate LC only 8,106,832 times, of which 4,413,050 evaluations were helpful. The bottom Clairvoyant row is an unrealizable scheme that uses an oracle, not achievable in practice, whose runtime is better than that of any achievable decision policy on whether to evaluate h2. Its numbers were estimated from the LIDA* results, assuming that LC was computed only at the 6,561,972 helpful nodes and bypassed otherwise. As can be seen, the runtime of our version of RLIDA* is closer to Clairvoyant than to LIDA*, showing that much of the potential of RLIDA* was indeed exploited by our version.
Table 3 shows similar results on the weighted 15-puzzle, for 82 of the previous initial positions: those solved within 20 minutes by IDA* (the weighted 15-puzzle is harder). In this domain, Rational Lazy IDA* also achieves a significant speedup and was much closer to Clairvoyant than to LIDA*.
For the heuristics used in our tests and for a constant ph, it turns out that the decision of whether to evaluate h2 depends only on the branching factor: evaluate h2 only for b(n) = 3 (excluding the parent), i.e., for cases where the blank was in the middle. Applying the bounds from Section 3.1 to estimate ph did not achieve significant further improvement over RLIDA* with a constant ph (not shown in the tables), because this simple decision rule is stable across a relatively wide range of ph values. We thus expect the same rule to work for sliding-tile puzzles of other dimensions, and tried the same scheme on rectangular tile puzzles: 3*5 (numbers from 1 to 14) and 3*6 (numbers from 1 to 17). Since the fraction of nodes with 3 children in these puzzles is lower than in the 4*4 puzzle, we expect RLIDA* to do better than in the 4*4 puzzle. As we did not have access to standard benchmark instances, we generated instances using random walks of 45 to 80 steps from the goal state.
Tables 4 and 5 show that the improvement factor due to Rational Lazy IDA* in both domains is similar to that obtained in the (4*4) 15-puzzle. However, the gap between RLIDA* and the unrealizable Clairvoyant scheme is smaller than for the 4*4 puzzle, so RLIDA* seems to be making better decisions in these latter variants, as expected. Though indicative, one caveat is that the instances for the rectangular versions were generated differently from those of the 4*4 puzzle, and the general shape of the search space may also differ.
4.2 Container relocation problem
The container relocation problem is an abstraction of a planning problem encountered when retrieving stacked containers for loading onto a ship in seaports. We are given S stacks of containers, where each stack holds up to H containers, stacked on top of one another. In the initial state there are N containers, arbitrarily numbered from 1 to N. The rules for stacking and moving containers are the same as for blocks in the well-known blocks-world domain, i.e., a container can be moved only if no container is on top of it. However, unlike blocks-world planning, the objective function is different, as follows.
The goal is to retrieve all containers in order of their numbers, from 1 to N, where "retrieving" can be seen as placing a container on an additional, special, always-empty stack where the container disappears (in the application domain this "special stack" is actually a freight truck that takes the container away to be loaded onto a ship). The objective function to minimize is the number of container moves until all containers are gone ("loaded onto the truck"). The complication comes from the fact that a container can only be retrieved when it is at the top of one of the stacks; the containers on top of it must thus first be moved away. Optimally solving this problem is NP-hard.
Although there are various variants of this problem, we assume here the version where each container ("block" in blocks-world terminology) is uniquely numbered. Another assumption typically made is that a stack s currently holding H containers is "full", and no additional containers can be placed on s until some container is moved away from it. We also address only the "restricted" version of the problem, where the only relocations allowed are of containers currently on top of the smallest-numbered container. Finally, since a solution always involves removing all containers, and each container is moved to the truck exactly once, it is customary to count only moves from stack to stack (called "relocations"), ignoring the final moves of containers to the truck.
The heuristics we used for the experiments are as follows. Every container that is above at least one container with a smaller number must be moved from its stack in order to allow the smaller container to be retrieved. The number of such containers in a state can be computed quickly, and forms an admissible heuristic, denoted LB1 in the literature. A more complicated heuristic adds one relocation for each container that must be relocated a second time because any place to which it can currently be moved would block some other container. Following prior work, we denote this heuristic by LB3. (To guarantee admissibility, we made some minor notation changes relative to how this heuristic is formally stated in the original paper.) LB3 requires much more computation time than LB1, and additionally its runtime depends heavily on the state.
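The simple counting heuristic described above can be sketched as follows (our own illustration; the function name and the bottom-to-top list representation of stacks are assumptions):

```python
def simple_lower_bound(stacks):
    # Count the containers that sit above at least one smaller-numbered
    # container in their stack: each of them must be relocated at least
    # once before the smaller container can be retrieved, so the count is
    # an admissible lower bound on the number of relocations.
    count = 0
    for stack in stacks:
        smallest_below = float('inf')
        for c in stack:                  # traverse bottom to top
            if c > smallest_below:       # c blocks a smaller container below it
                count += 1
            smallest_below = min(smallest_below, c)
    return count
```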
In the experiments, we used as instances the 49 hardest tests among those solved in less than 20 minutes with the LB3 heuristic, from the CVS test suite described in [1, 7], retrieved from http://iwi.econ.uni-hamburg.de/IWIWeb/Default.aspx?tabId=1083&tabindex=4. The instances actually used had either 5 or 6 stacks, and from 6 to 10 tiers. Results are shown in Table 6. In this domain, Rational Lazy IDA* shows some performance improvement even when ph was assumed constant. However, in this problem the branching factor is almost constant (equal to the number of stacks minus 1) during much of the search, leaving room for improvement through better estimation of ph. Indeed, using the bounds developed in Section 3.1 to estimate ph dynamically achieves a significant additional speedup, as shown by the RLIDA* row with the dynamic estimate. Because the runtimes of the heuristics have a large variance and are hard to predict precisely, using Equation 3 did not achieve good results; the results reported in the table are therefore for the simplified decision rule that computes h2 only when ph > 1/b(n), as mentioned after Equation 2.
5.1 Related work
Other elaborate schemes for deciding on heuristics at runtime appear in the research literature. Domshlak et al. also noted that although theoretically taking the maximum of admissible heuristics is best within the context of A*, the overhead may not be worth it. Instead, their idea is to select which heuristic to compute at runtime. Based on this idea, they formulated selective max (Sel-MAX) for A*, an online learning scheme which chooses one heuristic to compute at each state. In principle, Sel-MAX could be adapted to run in IDA*. However, the domains used in our experiments had a heuristic h1 whose computation time is negligible and which should thus always be computed. Sel-MAX is aimed at cases where there is a real need for selection, i.e., where the time for computing each heuristic is non-negligible.
Automatically selecting combinations of heuristics for A* and IDA* from a large set of available heuristics has also been examined. Selecting a combination of heuristics is in some sense orthogonal to the work presented in this paper: once such a selection is made, one might still further optimize the actual scheme for computing the selected heuristics. The heuristics can be evaluated lazily, and some of them can be rationally omitted conditional on the results of heuristics previously computed at the same node. Generalizing both methods, one could try to optimize a policy for computing heuristics at the nodes, rather than just find the best combination, but how to do so is non-trivial. That is because the number of policies is at least doubly exponential in the number of heuristics under consideration, whereas the number of combinations is "only" exponential in the number of heuristics.
A related line of research on meta-reasoning for IDA*-like algorithms concerns choosing the threshold for the next iteration. In basic IDA*, the next threshold is strictly defined as the smallest f-value among the nodes that were pruned. Learning and decision-making techniques have been applied to choose a different threshold such that time is saved while optimality of the algorithm is still maintained [14, 12, 18]. This issue is orthogonal to the problem addressed in this paper. In fact, our method for trading off time spent on computing heuristics against time spent on expanding additional nodes should be extensible to other IDA*-like algorithms. As in some of these algorithms the f-limit is not the next f-cost, such an extension would overcome one of the major stumbling blocks to the further applicability of our method stated in Section 4.
5.2 Summary and future work
Rational Lazy IDA* and its analysis can be seen as an instance of the rational meta-reasoning framework. While this framework is very general, it is extremely hard to apply in practice. Recent work applies meta-reasoning in DFS algorithms for CSPs and in Monte-Carlo tree search. This paper applies these methods successfully to a variant of IDA*.
We discussed two schemes for decreasing the time spent on computing heuristics during search. Lazy IDA* is very simple and a natural implementation of IDA* in the presence of two or more heuristics, especially if one is dominant but more costly. Rational Lazy IDA* allows additional cuts in the number of h2 computations, at the expense of being less informed and thereby generating more nodes. However, due to its rational tradeoff, this results in an additional speedup, and Rational Lazy IDA* achieves the best overall performance in our domains.
Experimental results on several domains show the advantage of RLIDA*. The unrealizable Clairvoyant scheme discussed in Section 4 serves as a bound on the potential gain from RLIDA*. We note that the most important term in some of the domains is ph, the probability that h2 will indeed cause a cutoff. In this paper we provided a rudimentary method for bounding ph based on previous samples. Future work might find better ways to estimate ph, hopefully getting closer to the Clairvoyant ideal. One such direction is to use recently introduced type systems, e.g., those that measure the correlation of a given heuristic between neighboring nodes [19, 11].
Another direction is to relax some of the meta-reasoning assumptions, especially those frequently violated in practice, and develop appropriate decision rules. In particular, consider the assumption that h1 does not prune any of the children. Preliminary runs on the tile puzzles showed that this assumption is violated in about 40% of the nodes, which seems to be a significant violation. Despite this, RLIDA* achieved most of the potential gain, so even though relaxing this assumption may further improve the runtime, the extra effort (and possible runtime overhead) may not be worthwhile. However, for the container relocation problem, this assumption was violated in about 60% of the nodes, and there is also a considerable gap between RLIDA* and Clairvoyant, so for this domain relaxing the assumption may be worth the effort.
Although the techniques used in this paper may be applicable to other IDA*-like algorithms (e.g., RBFS or DFBnB), the assumptions made here are rather delicate; such algorithms would require a different set of assumptions and thus different meta-level decision schemes, another interesting item for future work.
-  Marco Caserta, Stefan Voß, and Moshe Sniedovich, ‘Applying the corridor method to a blocks relocation problem’, OR Spectr., 33(4), 915–929, (October 2011).
-  R. Dechter and J. Pearl, ‘Generalized best-first search strategies and the optimality of A*’, Journal of the ACM, 32(3), 505–536, (1985).
-  Carmel Domshlak, Erez Karpas, and Shaul Markovitch, ‘Online speedup learning for optimal planning’, JAIR, 44, 709–755, (2012).
-  A. Felner, U. Zahavi, R. Holte, J. Schaeffer, N. Sturtevant, and Z. Zhang, ‘Inconsistent heuristics in theory and practice’, Artificial Intelligence, 175(9-10), 1570–1603, (2011).
-  Santiago Franco, Michael W. Barley, and Patricia J. Riddle, ‘A new efficient in situ sampling model for heuristic selection in optimal search’, in Australasian Conference on Artificial Intelligence, eds., Stephen Cranefield and Abhaya C. Nayak, volume 8272 of Lecture Notes in Computer Science, pp. 178–189. Springer, (2013).
-  Nicholas Hay, Stuart Russell, David Tolpin, and Solomon Eyal Shimony, ‘Selecting computations: Theory and applications’, in UAI, eds., Nando de Freitas and Kevin P. Murphy, pp. 346–355. AUAI Press, (2012).
-  Bo Jin, Andrew Lim, and Wenbin Zhu, ‘A greedy look-ahead heuristic for the container relocation problem’, in IEA/AIE, eds., Moonis Ali, Tibor Bosse, Koen V. Hindriks, Mark Hoogendoorn, Catholijn M. Jonker, and Jan Treur, volume 7906 of Lecture Notes in Computer Science, pp. 181–190. Springer, (2013).
-  R. E. Korf, ‘Depth-first iterative-deepening: An optimal admissible tree search’, Artificial Intelligence, 27(1), 97–109, (1985).
-  Richard E. Korf, Michael Reid, and Stefan Edelkamp, ‘Time complexity of iterative-deepening-A*’, Artif. Intell., 129(1-2), 199–218, (2001).
-  Richard E. Korf and Larry A. Taylor, ‘Finding optimal solutions to the twenty-four puzzle’, in AAAI, pp. 1202–1207, (1996).
-  Levi H. S. Lelis, Sandra Zilles, and Robert C. Holte, ‘Predicting the size of IDA*’s search tree’, Artif. Intell., 196, 53–76, (2013).
-  Alexander Reinefeld and Tony A. Marsland, ‘Enhanced iterative-deepening search’, IEEE Trans. Pattern Anal. Mach. Intell., 16(7), 701–710, (July 1994).
-  Stuart Russell and Eric Wefald, ‘Principles of metareasoning’, Artificial Intelligence, 49, 361–395, (1991).
-  Uttam K. Sarkar, Partha P. Chakrabarti, Sujoy Ghose, and S. C. De Sarkar, ‘Reducing reexpansions in iterative-deepening search by controlling cutoff bounds’, Artif. Intell., 50(2), 207–221, (1991).
-  Jordan T. Thayer and Wheeler Ruml, ‘Bounded suboptimal search: A direct approach using inadmissible estimates’, in Proceedings of the Twenty-second International Joint Conference on Artificial Intelligence (IJCAI-11), (2011).
-  D. Tolpin, T. Beja, S. E. Shimony, A. Felner, and E. Karpas, ‘Towards rational deployment of multiple heuristics in A*’, in IJCAI, (2013).
-  David Tolpin and Solomon Eyal Shimony, ‘Rational deployment of CSP heuristics’, in IJCAI, ed., Toby Walsh, pp. 680–686. IJCAI/AAAI, (2011).
-  Benjamin W. Wah and Yi Shang, ‘A comparative study of IDA*-style searches’, in ICTAI, pp. 290–296, (1994).
-  Uzi Zahavi, Ariel Felner, Neil Burch, and Robert C. Holte, ‘Predicting the performance of IDA* using conditional distributions’, J. Artif. Intell. Res. (JAIR), 37, 41–83, (2010).
-  Huidong Zhang, Songshan Guo, Wenbin Zhu, Andrew Lim, and Brenda Cheang, ‘An investigation of IDA* algorithms for the container relocation problem’, in Proceedings of the 23rd International Conference on Industrial Engineering and Other Applications of Applied Intelligent Systems - Volume Part I, IEA/AIE’10, pp. 31–40, Berlin, Heidelberg, (2010). Springer-Verlag.