The A* algorithm [Hart et al.1968] is a best-first heuristic search algorithm guided by the cost function f(n) = g(n) + h(n). If the heuristic h(n) is admissible (never overestimates the real cost to the goal) then the set of nodes expanded by A* is both necessary and sufficient to find the optimal path to the goal [Dechter and Pearl1985].
This paper examines the case where we have several available admissible heuristics. Clearly, we can evaluate all these heuristics and use their maximum as an admissible heuristic, a scheme we call A*max. The problem with naive maximization is that all the heuristics are computed for all the generated nodes. In order to reduce the time spent on heuristic computations, Lazy A* (or LA*, for short) evaluates the heuristics one at a time, lazily. When a node n is generated, LA* computes only one heuristic, h1(n), and adds n to Open. Only when n re-emerges as the top of Open is another heuristic, h2(n), evaluated; if this results in an increased heuristic estimate, n is re-inserted into Open. This idea was briefly mentioned by Zhang and Bacchus [2012] in the context of the MAXSAT heuristic for planning domains. LA* is as informative as A*max, but can significantly reduce search time, as we will not need to compute h2 for many nodes. In this paper we provide a deeper examination of LA*, and characterize the savings that it can lead to. In addition, we describe several technical optimizations for LA*.
LA* reduces the search time, while maintaining the informativeness of A*max. However, as noted by Domshlak et al. [2012], if the goal is to reduce search time, it may be better to compute a fast heuristic on several nodes, rather than to compute a slow but informative heuristic on only one node. Based on this idea, they formulated selective max (Sel-MAX), an online learning scheme which chooses one heuristic to compute at each state. Sel-MAX chooses to compute the more expensive heuristic for node n
when its classifier predicts that h2(n) - h1(n) is greater than some threshold, which is a function of heuristic computation times and the average branching factor. Felner et al. [2011] showed that randomizing a heuristic and applying bidirectional pathmax (BPMX) might sometimes be faster than evaluating all heuristics and taking the maximum. This technique is only useful in undirected graphs, and is therefore not applicable to some of the domains in this paper. Both Sel-MAX and Random compute the resulting heuristic once, before each node is added to Open, while LA* computes the heuristics lazily, in different steps of the search. In addition, both randomization and Sel-MAX save heuristic computations and thus reduce search time in many cases. However, they might be less informed than pure maximization and, as a result, expand a larger number of nodes.
We then combine the ideas of lazy heuristic evaluation and of trading off more node expansions for less heuristic computation time into a new variant of LA* called Rational Lazy A* (RLA*). RLA* is based on rational meta-reasoning, and uses a myopic value-of-information criterion to decide whether to compute h2(n) or to bypass the computation of h2 and expand n immediately when it re-emerges from Open. RLA* aims to reduce search time, even at the expense of more node expansions than A*max.
Empirical results on variants of the 15-puzzle and on numerous planning domains demonstrate that LA* and RLA* lead to state-of-the-art performance in many cases.
Throughout this paper we assume for clarity that we have two available admissible heuristics, h1 and h2. Extension to multiple heuristics is straightforward, at least for LA*. Unless stated otherwise, we assume that h1 is faster to compute than h2 but that h2 is weakly more informed, i.e., h1(n) <= h2(n) for the majority of the nodes n, although counter cases where h1(n) > h2(n) are possible. We say that h2 dominates h1 if such counter cases do not exist and h2(n) >= h1(n) for all nodes n. We use f1(n) to denote g(n) + h1(n). Likewise, f2(n) denotes g(n) + h2(n), and fmax(n) denotes g(n) + max(h1(n), h2(n)). We denote the cost of the optimal solution by C*. Additionally, we denote the computation time of h1 and of h2 by t1 and t2, respectively, and denote the overhead of an insert/pop operation in Open by t_o. Unless stated otherwise we assume that t2 is much greater than t1. LA* thus mainly aims to reduce computations of h2.
The pseudo-code for LA* is depicted as Algorithm 1, and is very similar to A*. In fact, without lines 7–10, LA* would be identical to A* using the h1 heuristic. When a node n is generated we only compute h1(n) and n is added to Open (Lines 11–13), without computing h2(n) yet. When n is first removed from Open (Lines 7–10), we compute h2(n) and reinsert it into Open, this time with fmax(n).
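The lazy loop described above can be sketched in a few lines of code (a minimal illustration under our own naming, not the paper's Algorithm 1; duplicate Open entries are handled by lazy deletion):

```python
import heapq

def lazy_astar(start, successors, is_goal, h1, h2):
    """Sketch of LA*: h1 is computed when a node is generated; the more
    expensive h2 only when the node first reaches the top of Open."""
    g = {start: 0}
    closed = set()
    tie = 0
    # Open entries: (f, tie-breaker, node, h used for f, h2 evaluated?)
    open_list = [(h1(start), tie, start, h1(start), False)]
    while open_list:
        f, _, n, h_n, h2_done = heapq.heappop(open_list)
        if n in closed or f > g[n] + h_n:   # stale entry from an old, longer path
            continue
        if not h2_done:
            h_max = max(h_n, h2(n))         # lazy evaluation of h2
            if h_max > h_n:                 # estimate grew: re-insert with f_max
                tie += 1
                heapq.heappush(open_list, (g[n] + h_max, tie, n, h_max, True))
                continue
        if is_goal(n):
            return g[n]
        closed.add(n)
        for child, cost in successors(n):   # children are generated with h1 only
            if child not in g or g[n] + cost < g[child]:
                g[child] = g[n] + cost
                tie += 1
                heapq.heappush(open_list,
                               (g[child] + h1(child), tie, child, h1(child), False))
    return None
```

A node whose f1-value never reaches the top of Open is never charged for h2, which is exactly the source of LA*'s savings.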
It is easy to see that LA* is as informative as A*max, in the sense that both A*max and LA* expand a node n only if fmax(n) is the best f-value in Open. Therefore, LA* and A*max generate and expand the same set of nodes, up to differences caused by tie-breaking.
In its general form A* generates many nodes that it does not expand. These nodes, called surplus nodes [Felner et al.2012], are in Open when we expand the goal node with f = C*. All nodes in Open with f > C* are surely surplus, but some nodes with f = C* may also be surplus. The number of surplus nodes in Open can grow exponentially in the size of the domain, resulting in significant costs.
LA* avoids h2 computations for many of these surplus nodes. Consider a node n that is generated with f1(n) > C*. This node is inserted into Open but will never reach the top of Open, as the goal node will be found with f = C*. In fact, if Open breaks ties in favor of small h-values, the goal node with f = C* will be expanded as soon as it is generated, and such savings of t2 will be obtained for some nodes with f1(n) = C* too. We refer to such nodes, where we saved the computation of h2, as good nodes. Other nodes, those with f1(n) < C* (and some with f1(n) = C*), are called regular nodes, as we apply both heuristics to them.
A*max computes both h1 and h2 for all generated nodes, spending time t1 + t2 on each of them. By contrast, for good nodes LA* only spends t1, and saves t2. In the basic implementation of LA* (as in Algorithm 1), regular nodes are inserted into Open twice, first with f1 (Line 13) and then with fmax (Line 9), while good nodes only enter Open once (Line 13). Thus, LA* has some extra overhead of Open operations for regular nodes. We distinguish between three classes of nodes:
(1) expanded regular (ER) — nodes that were expanded after both heuristics were computed.
(2) surplus regular (SR) — nodes for which h2 was computed but which were still in Open when the goal was found.
(3) surplus good (SG) — nodes for which only h1 was computed by LA* when the goal was found.
3 Enhancements to Lazy A*
Several enhancements can improve basic LA* (Algorithm 1); they are especially effective when t1 and t_o are not negligible.
3.1 Open bypassing
Suppose node n was just generated, and let fbest denote the best f-value currently in Open. LA* evaluates h1(n) and then inserts n into Open. However, if f1(n) <= fbest, then n will immediately reach the top of Open and h2(n) will be computed. In such cases we can choose to compute h2(n) right away (after Line 12 in Algorithm 1), thus saving the overhead of inserting n into Open and popping it again at the next step (2 x t_o). For such nodes, LA* is identical to A*max, as both heuristics are computed before the node is added to Open. This enhancement is called Open bypassing (OB). It is reminiscent of the immediate expand technique applied to generated nodes [Stern et al.2010, Sun et al.2009]. The same technique can be applied when n again reaches the top of Open after evaluating h2(n): if fmax(n) <= fbest, expand n right away. With OB, LA* will incur the extra overhead of two Open cycles only for nodes where f1(n) > fbest and, later, fmax(n) > fbest.
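As an illustration, the generation step with OB might look like the following sketch (the push callback and all names are hypothetical, not from the paper):

```python
def generate_with_ob(n, g_n, h1, h2, f_best, push):
    """Open bypassing (OB): if the freshly generated node would immediately
    return to the top of Open (f1(n) <= best f currently in Open), evaluate
    h2 right away and insert the node once, with f_max, saving two Open
    operations (an insert and a pop)."""
    h1_n = h1(n)
    f1 = g_n + h1_n
    if f1 <= f_best:
        # n would be popped next anyway: behave like A*max for this node
        push(n, g_n + max(h1_n, h2(n)), h2_evaluated=True)
    else:
        push(n, f1, h2_evaluated=False)   # regular lazy insertion
```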
3.2 Heuristic bypassing
Heuristic bypassing (HBP) is a technique that allows A* to omit evaluating one of the two heuristics. HBP is probably used by many implementers, although to the best of our knowledge it has never appeared in the literature. HBP works for a node n under the following two preconditions: (1) the operator between n and its parent p is bidirectional, and (2) both heuristics are consistent [Felner et al.2011].
Let c be the cost of the operator between n and its parent p. Since the heuristic h is consistent, we know that |h(p) - h(n)| <= c. Therefore, h(p) provides the following upper and lower bounds on h(n): h(p) - c <= h(n) <= h(p) + c. We thus denote lb(n, h) = h(p) - c and ub(n, h) = h(p) + c.
To exploit HBP in A*, we simply skip the computation of h1(n) whenever ub(n, h1) <= lb(n, h2), and vice versa. For example, consider the path n1, n2, n3, n4 in Figure 1, where all operators cost 1, h1(n1) = 6, and h2(n1) = 10. Based on our bounds, ub(n2, h1) = 7 and lb(n2, h2) = 9. Thus, there is no need to check h1(n2), as h2(n2) will surely be the maximum. We can propagate these bounds further to node n3: ub(n3, h1) = 8 while lb(n3, h2) = 8, and again there is no need to evaluate h1. Only at the last node n4 do we get ub(n4, h1) = 9 and lb(n4, h2) = 7; since ub(n4, h1) > lb(n4, h2), h1(n4) can potentially return the maximum and should thus be evaluated.
HBP can be combined with LA* in a number of ways. We describe the variant we used. LA* aims to avoid needless computations of h2. Thus, when lb(n, h2) >= ub(n, h1), we skip the computation of h1, delay the computation of h2, and add n to Open with g(n) + lb(n, h2), continuing as in LA*. In this case, we saved t1, delayed t2, and used lb(n, h2), which is more informative than h1. If, however, lb(n, h2) < ub(n, h1), then we compute h1(n) and continue regularly. We note that HBP incurs the time and memory overheads of computing and storing four bounds, and should only be applied if there is enough memory and if t1, and especially t2, are large relative to these overheads.
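The bound bookkeeping behind HBP can be sketched as follows (the helper names are ours, and only the two bounds needed for skipping h1 are shown; the numbers in the test mirror the running example, assuming unit-cost bidirectional operators):

```python
def child_bounds(lb_parent, ub_parent, cost):
    """Consistency along a bidirectional operator of cost c gives
    |h(n) - h(parent)| <= c, so both bounds widen by c at each step."""
    return lb_parent - cost, ub_parent + cost

def h1_needed(ub_h1, lb_h2):
    """h1 can contribute to max(h1, h2) only if its upper bound
    exceeds h2's lower bound; otherwise its evaluation is skipped."""
    return ub_h1 > lb_h2
```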
4 Rational Lazy A*
LA* offers us a very strong guarantee: it expands the same set of nodes as A*max. However, often we would prefer to expand more states, if it means reducing search time. We now present Rational Lazy A* (RLA*), an algorithm which attempts to optimally manage this tradeoff.
Under the principles of rational meta-reasoning [Russell and Wefald1991], every algorithm action (heuristic function evaluation, node expansion, open-list operation) should in theory be treated as an action in a sequential decision-making meta-level problem: actions should be chosen so as to achieve minimal expected search time. However, the appropriate general meta-reasoning problem is extremely hard to define precisely and to solve optimally.
Therefore, we focus on just one decision type, made in the context of LA*, when n re-emerges from Open (Line 7). We have two options: (1) evaluate the second heuristic h2(n) and add the node back to Open (Lines 7–10) like LA*, or (2) bypass the computation of h2(n) and expand n right away (Lines 11–13), thereby saving time by not computing h2, at the risk of additional expansions and evaluations of h1. In order to choose rationally, we define a criterion based on the value of information (VOI) of evaluating h2(n) in this context.
The only addition of RLA* to LA* is the option to bypass h2 computations (Lines 7–10).
Suppose that we choose to compute h2 — this results in one of the following outcomes:
1: n is still expanded, either now or eventually.
2: n is re-inserted into Open, and the goal is found without ever expanding n.
Computing h2 is helpful only in outcome 2, where potential time savings are due to pruning a search subtree at the expense of the time needed to evaluate h2. However, whether outcome 2 takes place after a given state is not known to the algorithm until the goal is found, and the algorithm must decide whether to evaluate h2 according to what it believes to be the probability of each of the outcomes. We derive a rational policy for when to evaluate h2, under the myopic assumption that the algorithm continues to behave like LA* afterwards (i.e., it will never again consider bypassing the computation of h2).
The time wasted by being sub-optimal in deciding whether to evaluate h2 is called the regret of the decision. If h2(n) is not helpful and we decide to compute it, the effort for evaluating h2 turns out to be wasted. On the other hand, if h2(n) is helpful but we decide to bypass it, we needlessly expand n. Due to the myopic assumption, RLA* would evaluate both h1 and h2 for all successors of n.
Table 2 summarizes the regret of each possible decision, for each possible future outcome; each column in the table represents a decision, while each row represents a future outcome. In the table, t_d is the time to compute h2 and re-insert n into Open, thus delaying the expansion of n; t_e is the time to remove n from Open, expand n, evaluate h1 on each of the b(n) ("local branching factor") children of n, and insert them into the open list. Computing h2 needlessly wastes time t_d. Bypassing the h2 computation when h2 would have been helpful wastes t_e + t_d time, but because computing h2 would have cost t_d, the regret is t_e.
Let us denote the probability that h2 is helpful by p_h. The expected regret of computing h2 is thus (1 - p_h) · t_d. On the other hand, the expected regret of bypassing h2 is p_h · t_e. As we wish to minimize the expected regret, we should thus evaluate h2 just when:

(1 - p_h) · t_d < p_h · t_e    (4)
If p_h > 1/2 and t_d <= t_e, inequality (4) always holds, and the expected regret is minimized by always evaluating h2, regardless of the exact values of t_d and t_e. In these cases, RLA* cannot be expected to do better than LA*. For example, in the 15-puzzle and its variants, the effective branching factor is roughly 2. Therefore, if h2 is expected to be helpful for more than half of the nodes on which LA* evaluates h2, then one should simply use LA*.
For smaller values of p_h, the decision of whether to evaluate h2 depends on the values of t_d and t_e.
Denote by t_c the time to generate the children of n. Then t_d = t2 + t_o and t_e = t_o + t_c + b(n) · (t1 + t_o), and substituting into (4), we should evaluate h2 just when:

(1 - p_h) · (t2 + t_o) < p_h · (t_c + t_o + b(n) · (t1 + t_o))    (5)
The factor (1 - p_h)/p_h depends on the potentially unknown probability p_h, making it difficult to reach the optimum decision. However, if our goal is just to do better than LA*, then it is safe to replace p_h by an upper bound on p_h. Note that the values t1, t2, t_o, and t_c may actually be variables that depend in complicated ways on the state of the search. Despite that, the very crude model we use, assuming that they are setting-specific constants, is sufficient to achieve improved performance, as shown in Section 5.
We now turn to implementation-specific estimation of the runtimes. Open in A* is frequently implemented as a priority queue, and thus we have, approximately, t_o = τ · log N for some constant τ, where N is the size of Open. Evaluating h1 is cheap in many domains, as is the case with Manhattan Distance (MD) in discrete domains; in such cases t_o is the most significant part of t_e, and rule (5) can be approximated as (6):

(1 - p_h) · t2 < p_h · (1 + b(n)) · τ · log N    (6)
Rule (6) recommends evaluating h2 mostly at late stages of the search, when the open list is large, and at nodes with a higher branching factor.
In other domains, such as planning, both t1 and t2 are significantly greater than t_o and t_c, and rule (5) can instead be approximated as (7):

(1 - p_h) · t2 < p_h · b(n) · t1    (7)

The right-hand side of (7) grows with b(n), and here it is beneficial to evaluate h2 only for nodes with a sufficiently large branching factor.
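Putting the pieces together, the myopic decision rule can be sketched as a single predicate (the timing arguments in the test are illustrative constants, not measured values from the paper):

```python
def should_evaluate_h2(p_h, t1, t2, t_o, t_c, b):
    """Evaluate h2 iff the expected regret of computing it,
    (1 - p_h) * t_d, is smaller than that of bypassing it, p_h * t_e."""
    t_d = t2 + t_o                      # compute h2, re-insert n into Open
    t_e = t_o + t_c + b * (t1 + t_o)    # pop n, expand, h1 + insert for b children
    return (1 - p_h) * t_d < p_h * t_e
```

As the derivation predicts, the rule fires more readily when p_h or the branching factor b is large, and never when computing h2 can bring no benefit (p_h = 0).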
5 Empirical evaluation
We now present our empirical evaluation of LA* and RLA*, on variants of the 15-puzzle and on planning domains.
5.1 Weighted 15-puzzle
We first provide evaluations on the weighted 15-puzzle variant [Thayer and Ruml2011], where the cost of moving each tile is equal to the number on the tile. We used a subset of 36 problem instances (out of the 100 instances of Korf [1985]) which could be solved with 2 GB of RAM and a 15-minute timeout using the Weighted Manhattan Distance heuristic (WMD) for h1. As the expensive and informative heuristic h2 we use a heuristic based on lookaheads [Stern et al.2010]. Given a lookahead bound k, we applied a bounded depth-first search from a node n and backtracked at leaf nodes l for which g(l) + WMD(l) > g(n) + WMD(n) + k. The f-values from the leaves were propagated back to n.
Table 3 presents the results averaged over all instances solved. The runtimes are reported relative to the time of A* with WMD (and no lookahead), which generated 1,886,397 nodes (not reported in the table). The first 3 columns of Table 3 show the results for A* with the lookahead heuristic for different lookahead depths. The best time is achieved for lookahead 6 (0.588 of the time of A* with WMD). The time does not continue to decrease with deeper lookaheads: although the resulting heuristic improves as a function of lookahead depth (expanding and generating fewer nodes), the increasing overhead of computing the heuristic eventually outweighs the savings due to fewer expansions.
The next 4 columns show the results for LA* with WMD as h1 and lookahead as h2, for different lookahead depths. The Good1 column presents the number of nodes where LA* saved the computation of h2, while the h2 column presents the number of nodes where h2 was computed. A large fraction of the nodes were Good1, and since t2 was the dominant time cost, most of this saving is reflected in the timing results. The best results are achieved for lookahead 8, with a runtime of 0.527 of that of A* with WMD.
The final columns show the results of RLA*, with the parameters of its decision rule calibrated for each lookahead depth using a small subset of problem instances. The Good2 column counts the number of times that RLA* decided to bypass the h2 computation. Observe that RLA* outperforms LA*, which in turn outperforms A*, for most lookahead depths. The lowest time with RLA* (0.371 of the time of A* with WMD) was obtained for lookahead 10. This is achieved because the more expensive heuristic is computed less often, reducing its effective computational overhead, with some adverse effect on the number of expanded nodes. Although LA* expanded fewer nodes, RLA* performed far fewer h2 computations, as can be seen in the table, resulting in lower overall runtimes.
5.2 Planning domains
[Table 4: Problems Solved | Planning Time (seconds) | GOOD]
We implemented LA* and RLA* on top of the Fast Downward planning system [Helmert2006], and experimented with two state-of-the-art heuristics: the admissible landmarks heuristic [Karpas and Domshlak2009] (used as h1), and the landmark cut heuristic [Helmert and Domshlak2009] (used as h2). On average, h2 computation is 8.36 times more expensive than h1 computation. We did not implement HBP in the planning domains, as the heuristics we use are not consistent and, in general, the operators are not invertible. We also did not implement OB, as the cost of Open operations in planning is trivial compared to the cost of heuristic evaluations.
We experimented with all planning domains without conditional effects and derived predicates (which the heuristics we used do not support) from previous IPCs. We compare the performance of LA* and RLA* to that of A* using each of the heuristics individually, as well as to their max-based combination (A*max), and their combination using selective max (Sel-MAX) [Domshlak et al.2012]. The search was limited to 6 GB of memory, and 5 minutes of CPU time on a single core of an Intel E8400 CPU with 64-bit Linux OS.
When applying RLA* in planning domains we evaluate rule (7) at every state. This rule involves two unknown quantities: t2/t1, the ratio between heuristic computation times, and p_h, the probability that h2 is helpful. Estimating t2/t1 is quite easy: we simply use the average computation times of both heuristics, which we measure as the search progresses.
Estimating p_h is not as simple. While it is possible to empirically determine the best value for p_h, as done for the weighted 15-puzzle, this does not fit the paradigm of domain-independent planning. Furthermore, planning domains are very different from each other, and even problem instances in the same domain are of varying size, and thus a single value of p_h which works well for many problems is difficult to find. Instead, we vary our estimate of p_h adaptively during search. To understand this estimate, first note that if n is a node at which h2 was helpful, then we computed h2 for n but did not expand n. Thus, we can use the number of states for which we computed h2 that were not yet expanded (denoted by Nh), divided by the number of states for which we computed h2 (denoted by Nc), as an approximation of p_h. However, this ratio is not likely to be a stable estimate at the beginning of the search, as Nh and Nc are both small. To overcome this problem, we "imagine" we have observed k examples with an initial estimate p0 of p_h, and use a weighted average between these imaginary examples and the observed ones; that is, we estimate p_h by (Nh + k · p0) / (Nc + k). In our empirical evaluation, we used fixed values for k and p0.
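The adaptive estimate described above might be implemented as in the following sketch (the defaults for k and p0 are hypothetical; the paper's actual constants are not restated here):

```python
class PhEstimator:
    """Smoothed online estimate of p_h: (Nh + k*p0) / (Nc + k), where Nc
    counts states for which h2 was computed and Nh those among them that
    were not (yet) expanded."""
    def __init__(self, k=100, p0=0.5):
        self.k, self.p0 = k, p0
        self.n_computed = 0       # Nc
        self.n_not_expanded = 0   # Nh

    def on_h2_computed(self):
        self.n_computed += 1
        self.n_not_expanded += 1  # counted as "not expanded" until expansion

    def on_expanded_after_h2(self):
        self.n_not_expanded -= 1

    def estimate(self):
        return (self.n_not_expanded + self.k * self.p0) / (self.n_computed + self.k)
```

The imaginary examples pull the estimate toward p0 early on and matter less as real observations accumulate.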
Table 4 depicts the experimental results. The leftmost part of the table shows the number of solved problems in each domain. As the table demonstrates, RLA* solves the most problems, and LA* solves the same number of problems as Sel-MAX. Thus, both LA* and RLA* are state-of-the-art in cost-optimal planning. Looking more closely at the results, note that Sel-MAX solves 10 more problems than LA* and RLA* in the freecell domain. Freecell is one of only three domains in which h1 is more informed than h2 (the other two are nomystery-opt11 and visitall-opt11), violating the basic assumption behind LA* that h2 is more informed than h1. If we ignore these domains, both LA* and RLA* solve more problems than Sel-MAX.
The middle part of Table 4 shows the geometric mean of planning time in each domain, over the commonly solved problems (i.e., those that were solved by all 6 methods). RLA* is the fastest overall, with LA* second. It is important to note that both LA* and RLA* are very robust, and even in cases where they are not the best they are never too far from the best. For example, consider the miconic domain. Here, h1 is very informative and thus the variant that only computed h1 is the best choice (but a bad choice overall). Observe that both LA* and RLA* saved 86% of the h2 computations, and were very close to the best algorithm in this extreme case. In contrast, the other algorithms that consider both heuristics (max and Sel-MAX) performed very poorly here (more than three times slower).
The rightmost part of Table 4 shows the average fraction of nodes for which LA* and RLA* did not evaluate the more expensive heuristic h2, over the problems solved by both these methods. This is shown in the GOOD columns. Our first observation is that this fraction varies between different domains, indicating why LA* works well in some domains but not in others. Additionally, we can see that in domains where this number differs between LA* and RLA*, RLA* usually performs better in terms of time. This indicates that when RLA* decides to skip the computation of the expensive heuristic, it is usually the right decision.
Finally, Table 5 shows the total number of expanded and generated states over all commonly solved problems. LA* is indeed as informative as A*max (the small difference is caused by tie-breaking), while RLA* is a little less informed and expands slightly more nodes. However, RLA* is much more informative than its "intelligent" competitor, Sel-MAX, as these are the only two algorithms in our set which selectively omit some heuristic computations. RLA* generated almost half the number of nodes generated by Sel-MAX, suggesting that its decisions are better.
5.3 Limitations of LA*: 15-puzzle example
Some domains and heuristic settings will not achieve a time speedup with LA*. An example is the regular, unweighted 15-puzzle. Results for A*max and LA*, with and without HBP, on the 15-puzzle are reported in Table 6. The h1 and h2 columns count the number of nodes where HBP pruned the need to compute h1 (resp. h2). OB is the number of nodes where OB was helpful. Bad is the number of nodes that went through two Open cycles. Finally, Good is the number of nodes where the computation of h2 was saved due to LA*.
In the first experiment, Manhattan distance (MD) was divided into two heuristics, used as h1 and h2. Results are averaged over 100 random instances with an average solution depth of 26.66. As seen from the first two lines, HBP applied on top of A*max saved about 36% of the heuristic evaluations. Next are results for LA* and LA*+HBP. Many nodes are pruned by HBP or OB. The number of good nodes dropped from 28% (Line 3) to as little as 11% when HBP was applied. Timing results (in ms) show that all variants performed equally. The reason is that the time overhead of the h1 and h2 heuristics is very small, so the saving on these 28% or 11% of the nodes was not significant enough to outweigh the HBP overhead of maintaining the upper and lower bounds.
The next experiment uses MD as h1 and a variant of the additive 7-8 PDBs [Korf and Felner2002] as h2. Here we can observe an interesting phenomenon. For LA*, most nodes were caught by either HBP (when applicable) or by OB. Only 4% of the nodes were good nodes. The reason is that the 7-8 PDB heuristic always dominates MD and is always the maximum of the two. Thus, the 7-8 PDB was needed at early stages (e.g., by OB), and MD itself almost never caused nodes to be added to Open and remain there until the goal was found.
These results indicate that on such domains, LA* has limited merit. Due to uniform operator costs and heuristics that are consistent and simple to compute, very little room is left for improvement with good nodes. We thus conclude that LA* is likely to be effective when there is a significant difference between t1 and t2, and/or operators that are not bidirectional and/or have non-uniform costs, allowing for more good nodes and significant time savings.
[Table 6, top: MD split into h1 and h2, average depth = 26.66]
[Table 6, bottom: Manhattan distance as h1, 7-8 PDB as h2, average depth = 52.52]
We discussed two schemes for decreasing heuristic evaluation times. LA* is very simple to implement and is as informative as A*max. LA* can significantly speed up the search, especially if t2 dominates the other time costs, as seen in the weighted 15-puzzle and in planning domains. Rational LA* allows additional cuts in h2 evaluations, at the expense of being less informed than A*max. However, due to a rational tradeoff, this allows for an additional speedup, and Rational LA* achieves the best overall performance in our domains.
RLA* is simpler to implement than its direct competitor, Sel-MAX, but its decision can be more informed. When RLA* has to decide whether to compute h2 for some node n, it already knows that f1(n) <= C*. By contrast, although Sel-MAX uses a much more complicated decision rule, it makes its decision when n is first generated, and does not know whether h1 will be informative enough to prune n. RLA* outperforms Sel-MAX in our planning experiments.
RLA* and its analysis can be seen as an instance of the rational meta-reasoning framework [Russell and Wefald1991]. While this framework is very general, it is extremely hard to apply in practice. Recent work exists on meta-reasoning in DFS algorithms for CSP [Tolpin and Shimony2011] and in Monte-Carlo tree search [Hay et al.2012]. This paper applies these methods successfully to a variant of A*. There are numerous other ways to use rational meta-reasoning to improve A*, starting from generalizing RLA* to handle more than two heuristics, to using the meta-level to control decisions in other variants of A*. All these potential extensions provide fruitful ground for future work.
The research was supported by the Israeli Science Foundation (ISF) under grant #305/09 to Ariel Felner and Eyal Shimony and by the Lynne and William Frankel Center for Computer Science.
- [Dechter and Pearl1985] R. Dechter and J. Pearl. Generalized best-first search strategies and the optimality of A*. Journal of the ACM, 32(3):505–536, 1985.
- [Domshlak et al.2012] Carmel Domshlak, Erez Karpas, and Shaul Markovitch. Online speedup learning for optimal planning. JAIR, 44:709–755, 2012.
- [Felner et al.2011] A. Felner, U. Zahavi, R. Holte, J. Schaeffer, N. Sturtevant, and Z. Zhang. Inconsistent heuristics in theory and practice. Artificial Intelligence, 175(9-10):1570–1603, 2011.
- [Felner et al.2012] A. Felner, M. Goldenberg, G. Sharon, R. Stern, T. Beja, N. R. Sturtevant, J. Schaeffer, and R. Holte. Partial-expansion A* with selective node generation. In AAAI, pages 471–477, 2012.
- [Hart et al.1968] P. E. Hart, N. J. Nilsson, and B. Raphael. A formal basis for the heuristic determination of minimum cost paths. IEEE Transactions on Systems Science and Cybernetics, SSC-4(2):100–107, 1968.
- [Hay et al.2012] Nicholas Hay, Stuart Russell, David Tolpin, and Solomon Eyal Shimony. Selecting computations: Theory and applications. In Nando de Freitas and Kevin P. Murphy, editors, UAI, pages 346–355. AUAI Press, 2012.
- [Helmert and Domshlak2009] Malte Helmert and Carmel Domshlak. Landmarks, critical paths and abstractions: What’s the difference anyway? In ICAPS, pages 162–169, 2009.
- [Helmert2006] Malte Helmert. The Fast Downward planning system. JAIR, 26:191–246, 2006.
- [Karpas and Domshlak2009] Erez Karpas and Carmel Domshlak. Cost-optimal planning with landmarks. In IJCAI, pages 1728–1733, 2009.
- [Korf and Felner2002] R. E. Korf and A. Felner. Disjoint pattern database heuristics. Artificial Intelligence, 134(1-2):9–22, 2002.
- [Korf1985] R. E. Korf. Depth-first iterative-deepening: An optimal admissible tree search. Artificial Intelligence, 27(1):97–109, 1985.
- [Russell and Wefald1991] Stuart Russell and Eric Wefald. Principles of metareasoning. Artificial Intelligence, 49:361–395, 1991.
- [Stern et al.2010] Roni Stern, Tamar Kulberis, Ariel Felner, and Robert Holte. Using lookaheads with optimal best-first search. In AAAI, pages 185–190, 2010.
- [Sun et al.2009] X. Sun, W. Yeoh, P. Chen, and S. Koenig. Simple optimization techniques for A*-based search. In AAMAS, pages 931–936, 2009.
- [Thayer and Ruml2011] Jordan T. Thayer and Wheeler Ruml. Bounded suboptimal search: A direct approach using inadmissible estimates. In Proceedings of the Twenty-second International Joint Conference on Artificial Intelligence (IJCAI-11), 2011.
- [Tolpin and Shimony2011] David Tolpin and Solomon Eyal Shimony. Rational deployment of CSP heuristics. In Toby Walsh, editor, IJCAI, pages 680–686. IJCAI/AAAI, 2011.
- [Zhang and Bacchus2012] Lei Zhang and Fahiem Bacchus. Maxsat heuristics for cost optimal planning. In AAAI, 2012.