Towards Rational Deployment of Multiple Heuristics in A*

05/22/2013 ∙ by David Tolpin, et al. ∙ Ben-Gurion University of the Negev

The obvious way to use several admissible heuristics in A* is to take their maximum. In this paper we aim to reduce the time spent on computing heuristics. We discuss Lazy A*, a variant of A* where heuristics are evaluated lazily: only when they are essential to a decision to be made in the A* search process. We present a new rational meta-reasoning based scheme, rational lazy A*, which decides whether to compute the more expensive heuristics at all, based on a myopic value of information estimate. Both methods are examined theoretically. Empirical evaluation on several domains supports the theoretical results, and shows that lazy A* and rational lazy A* are state-of-the-art heuristic combination methods.


1 Introduction

The A* algorithm [Hart et al.1968] is a best-first heuristic search algorithm guided by the cost function f(n) = g(n) + h(n). If the heuristic h(n) is admissible (i.e., never overestimates the real cost to the goal), then the set of nodes expanded by A* is both necessary and sufficient to find the optimal path to the goal [Dechter and Pearl1985].

This paper examines the case where we have several available admissible heuristics. Clearly, we can evaluate all these heuristics and use their maximum as an admissible heuristic, a scheme we call A*_MAX. The problem with naive maximization is that all the heuristics are computed for all the generated nodes. In order to reduce the time spent on heuristic computations, Lazy A* (or LA*, for short) evaluates the heuristics one at a time, lazily. When a node n is generated, LA* computes only one heuristic, h1(n), and adds n to Open. Only when n re-emerges as the top of Open is another heuristic, h2(n), evaluated; if this results in an increased heuristic estimate, n is re-inserted into Open. This idea was briefly mentioned by [Zhang and Bacchus2012] in the context of the MAXSAT heuristic for planning domains. LA* is as informative as A*_MAX, but can significantly reduce search time, as we will not need to compute h2 for many nodes. In this paper we provide a deeper examination of LA*, and characterize the savings that it can lead to. In addition, we describe several technical optimizations for LA*.

LA* reduces the search time, while maintaining the informativeness of A*_MAX. However, as noted by [Domshlak et al.2012], if the goal is to reduce search time, it may be better to compute a fast heuristic on several nodes, rather than to compute a slow but informative heuristic on only one node. Based on this idea, they formulated selective max (Sel-MAX), an online learning scheme which chooses one heuristic to compute at each state. Sel-MAX chooses to compute the more expensive heuristic h2 for node n when its classifier predicts that h2(n) - h1(n) is greater than some threshold, which is a function of heuristic computation times and the average branching factor. [Felner et al.2011] showed that randomizing a heuristic and applying bidirectional pathmax (BPMX) might sometimes be faster than evaluating all heuristics and taking the maximum. This technique is only useful in undirected graphs, and is therefore not applicable to some of the domains in this paper. Both Sel-MAX and the randomization scheme compute the resulting heuristic once, before each node is added to Open, while LA* computes the heuristics lazily, at different steps of the search. In addition, both randomization and Sel-MAX save heuristic computations and thus reduce search time in many cases. However, they might be less informed than pure maximization and as a result expand a larger number of nodes.

We then combine the ideas of lazy heuristic evaluation and of trading off more node expansions for less heuristic computation time into a new variant of A* called Rational Lazy A* (RLA*). RLA* is based on rational meta-reasoning, and uses a myopic value-of-information criterion to decide whether to compute h2(n) or to bypass the computation of h2 and expand n immediately when n re-emerges from Open. RLA* aims to reduce search time, even at the expense of more node expansions than A*_MAX.

Empirical results on variants of the 15-puzzle and on numerous planning domains demonstrate that LA* and RLA* lead to state-of-the-art performance in many cases.

2 Lazy A*

Throughout this paper we assume for clarity that we have two available admissible heuristics, h1 and h2. Extension to multiple heuristics is straightforward, at least for LA*. Unless stated otherwise, we assume that h1 is faster to compute than h2 but that h2 is weakly more informed, i.e., h1(n) ≤ h2(n) for the majority of the nodes n, although counter cases where h1(n) > h2(n) are possible. We say that h2 dominates h1 if such counter cases do not exist and h2(n) ≥ h1(n) for all nodes n. We use f1(n) to denote g(n) + h1(n). Likewise, f2(n) denotes g(n) + h2(n), and fmax(n) denotes g(n) + max(h1(n), h2(n)). We denote the cost of the optimal solution by C*. Additionally, we denote the computation time of h1 and of h2 by t1 and t2, respectively, and denote the overhead of an insert/pop operation in Open by to. Unless stated otherwise we assume that t2 is much greater than t1 and to. LA* thus mainly aims to reduce computations of h2.

Input: Start
1   Apply all heuristics to Start
2   Insert Start into Open
3   while Open not empty do
4       n ← best node from Open
5       if Goal(n) then
6           return trace(n)
7       if h2 was not applied to n then
8           Apply h2 to n
9           insert n into Open
10          continue        // next node in Open
11      foreach child c of n do
12          Apply h1 to c
13          insert c into Open
14      Insert n into Closed
15  return FAILURE
Algorithm 1: Lazy A*

The pseudo-code for LA* is depicted in Algorithm 1, and is very similar to A*. In fact, without lines 7–10, LA* would be identical to A* using the h1 heuristic. When a node n is generated, we compute only h1(n) and add n to Open (Lines 11–13), without computing h2(n) yet. When n is first removed from Open (Lines 7–10), we compute h2(n) and reinsert n into Open, this time with fmax(n).
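To make Algorithm 1 concrete, the following is a minimal Python sketch of LA* over an implicit graph. It is our illustration rather than the authors' implementation: successors, h1, and h2 are assumed callables, node reopening is omitted (safe for consistent heuristics), and trace(n) is replaced by returning the solution cost.

import heapq
from itertools import count

def lazy_astar(start, is_goal, successors, h1, h2):
    # Open entries: (f, tiebreak, g, node, h2_done). h1 is cheap, h2 expensive.
    tie = count()
    open_list = [(max(h1(start), h2(start)), next(tie), 0, start, True)]  # lines 1-2
    g_best = {start: 0}
    closed = set()
    while open_list:                                            # line 3
        f, _, g, n, h2_done = heapq.heappop(open_list)          # line 4
        if n in closed or g > g_best[n]:
            continue                                            # stale duplicate entry
        if is_goal(n):                                          # line 5
            return g                                            # line 6: solution cost
        if not h2_done:                                         # line 7
            f2 = max(f, g + h2(n))                              # line 8: f becomes fmax(n)
            heapq.heappush(open_list, (f2, next(tie), g, n, True))  # line 9
            continue                                            # line 10
        closed.add(n)                                           # line 14
        for child, cost in successors(n):                       # line 11
            gc = g + cost
            if child not in closed and gc < g_best.get(child, float("inf")):
                g_best[child] = gc                              # line 12: only h1 here
                heapq.heappush(open_list,
                               (gc + h1(child), next(tie), gc, child, False))  # line 13
    return None                                                 # line 15: FAILURE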

It is easy to see that LA* is as informative as A*_MAX, in the sense that both A*_MAX and LA* expand a node n only if fmax(n) is the best f-value in Open. Therefore, A*_MAX and LA* generate and expand the same set of nodes, up to differences caused by tie-breaking.

In its general form A* generates many nodes that it does not expand. These nodes, called surplus nodes [Felner et al.2012], are in Open when we expand the goal node with f = C*. All nodes in Open with f > C* are surely surplus, but some nodes with f = C* may also be surplus. The number of surplus nodes in Open can grow exponentially in the size of the domain, resulting in significant costs.

LA* avoids h2 computations for many of these surplus nodes. Consider a node n that is generated with f1(n) > C*. This node is inserted into Open but will never reach the top of Open, as the goal node will be found with f = C*. In fact, if Open breaks ties in favor of small h-values, the goal node with f = C* will be expanded as soon as it is generated, and such savings of h2 will be obtained for some nodes with f1 = C* too. We refer to such nodes where we saved the computation of h2 as good nodes. Other nodes, those with f1 < C* (and some with f1 = C*), are called regular nodes, as we apply both heuristics to them.

A*_MAX computes both h1 and h2 for all generated nodes, spending time t1 + t2 on each of them. By contrast, for good nodes LA* spends only t1, and saves t2. In the basic implementation of LA* (as in Algorithm 1), regular nodes are inserted into Open twice, first with f1 (Line 13) and then with fmax (Line 9), while good nodes only enter Open once (Line 13). Thus, LA* has some extra overhead of Open operations for regular nodes. We distinguish between 3 classes of nodes:
(1) expanded regular (ER): nodes that were expanded after both heuristics were computed.
(2) surplus regular (SR): nodes for which h2 was computed but which were still in Open when the goal was found.
(3) surplus good (SG): nodes for which only h1 was computed by LA* when the goal was found.

Alg       ER               SR               SG
A*_MAX    t1 + t2 + to     t1 + t2 + to     t1 + t2 + to
LA*       t1 + t2 + 2to    t1 + t2 + 2to    t1 + to
Table 1: Time overhead for A*_MAX and for LA*

The time overhead of A*_MAX and LA* is summarized in Table 1. LA* incurs more Open-operation overhead, but saves t2 computations for the SG nodes. When t2 is significantly greater than both t1 and to, there is a clear advantage for LA*, as seen in the SG column: A*_MAX pays t2 there, while LA* does not.

3 Enhancements to Lazy A*

Several enhancements can improve basic LA* (Algorithm 1); these are effective especially when t1 and to are not negligible.

3.1 Open bypassing

Suppose node n was just generated, and let fbest denote the best f-value currently in Open. LA* evaluates h1(n) and then inserts n into Open. However, if f1(n) ≤ fbest, then n will immediately reach the top of Open and h2(n) will be computed. In such cases we can choose to compute h2(n) right away (after Line 12 in Algorithm 1), thus saving the overhead of inserting n into Open and popping it again at the next step (2 × to). For such nodes, LA* is identical to A*_MAX, as both heuristics are computed before the node is added to Open. This enhancement is called Open bypassing (OB). It is reminiscent of the immediate expand technique applied to generated nodes [Stern et al.2010, Sun et al.2009]. The same technique can be applied when n again reaches the top of Open after evaluating h2(n): if fmax(n) ≤ fbest, expand n right away. With OB, LA* will incur the extra overhead of two Open cycles only for nodes where f1(n) > fbest and then later fmax(n) > fbest.
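In terms of the earlier sketch, OB is a single comparison at generation time. A hypothetical drop-in replacement for the child-push step (same entry layout as before; open_list[0][0] peeks at the best f-value in Open):

import heapq

def push_child_with_ob(open_list, tie, gc, child, h1, h2):
    # Open bypassing (OB): if the child's f1 cannot exceed the current best
    # f-value in Open, it would pop immediately anyway, so evaluate h2 now
    # and save one insert/pop cycle (2 * to).
    f1 = gc + h1(child)
    f_best = open_list[0][0] if open_list else float("inf")
    if f1 <= f_best:
        entry = (max(f1, gc + h2(child)), next(tie), gc, child, True)  # h2 up front
    else:
        entry = (f1, next(tie), gc, child, False)                      # h2 deferred
    heapq.heappush(open_list, entry)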

3.2 Heuristic bypassing

Heuristic bypassing (HBP) is a technique that allows A* to omit evaluating one of the two heuristics. HBP is probably used by many implementers, although to the best of our knowledge it has never been described in the literature. HBP works for a node n under the following two preconditions: (1) the operator between n and its parent p is bidirectional, and (2) both heuristics are consistent [Felner et al.2011].

Figure 1: Example of HBP

Let c be the cost of the operator between n and its parent p. Since a heuristic h is consistent, we know that |h(p) - h(n)| ≤ c. Therefore, h(p) provides the following upper and lower bounds on h(n): h(p) - c ≤ h(n) ≤ h(p) + c. We thus denote lb(n, h) = h(p) - c and ub(n, h) = h(p) + c.

To exploit HBP in A*, we simply skip the computation of h1(n) if ub(n, h1) ≤ lb(n, h2), and vice versa. For example, consider node n1 in Figure 1, where all operators cost 1, h1(p) = 5, and h2(p) = 10. Based on our bounds, ub(n1, h1) = 6 and lb(n1, h2) = 9. Thus, there is no need to compute h1(n1), as h2(n1) will surely be the maximum. We can propagate these bounds further to node n2: ub(n2, h1) = 7 while lb(n2, h2) = 8, and again there is no need to evaluate h1. Only at the last node n3 do we get ub(n3, h1) = 8 and lb(n3, h2) = 7; since ub(n3, h1) > lb(n3, h2), h1(n3) can potentially be the maximum and should thus be evaluated.

HBP can be combined with LA* in a number of ways. We describe the variant we used. LA* aims to avoid needless computations of h2. Thus, when lb(n, h2) ≥ ub(n, h1), we skip the computation of h1(n), add n to Open with f = g(n) + lb(n, h2), and continue as in LA*. In this case, we saved t1, delayed t2, and used lb(n, h2), which is more informative than h1(n). If, however, lb(n, h2) < ub(n, h1), then we compute h1(n) and continue regularly. We note that HBP incurs the time and memory overhead of computing and storing four bounds, and should only be applied if there is enough memory and if t1, and especially t2, are substantial.
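The HBP bookkeeping can be sketched as follows; this is our illustration of the variant just described (skip h1 when lb(n, h2) ≥ ub(n, h1)), with bound propagation assuming consistent heuristics and bidirectional operators of known cost:

def hbp_value(parent_bounds, cost, h1, child):
    # parent_bounds = ((lb1, ub1), (lb2, ub2)): bounds on h1 and h2 at the parent.
    # Consistency across a bidirectional operator gives |h(p) - h(child)| <= cost.
    (lb1, ub1), (lb2, ub2) = parent_bounds
    lb1, ub1 = lb1 - cost, ub1 + cost
    lb2, ub2 = lb2 - cost, ub2 + cost
    if lb2 >= ub1:
        # h2 surely dominates: skip computing h1, use lb(child, h2) as the
        # (still admissible) value, and defer h2 itself as usual in LA*.
        return lb2, ((lb1, ub1), (lb2, ub2))
    h1_val = h1(child)                    # bounds overlap: evaluate h1 regularly
    return h1_val, ((h1_val, h1_val), (lb2, ub2))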

4 Rational Lazy A*

LA* offers us a very strong guarantee: it expands the same set of nodes as A*_MAX. However, we would often prefer to expand more states if doing so reduces overall search time. We now present Rational Lazy A* (RLA*), an algorithm which attempts to optimally manage this tradeoff.

Using principles of rational meta-reasoning [Russell and Wefald1991], theoretically every algorithm action (heuristic function evaluation, node expansion, open list operation) should be treated as an action in a sequential decision-making meta-level problem: actions should be chosen so as to achieve the minimal expected search time. However, the appropriate general meta-reasoning problem is extremely hard to define precisely and to solve optimally.

Therefore, we focus on just one decision type, made in the context of LA*: the decision made when n re-emerges from Open (Line 7). We have two options: (1) evaluate the second heuristic h2(n) and add the node back to Open (Lines 7–10) as LA* does, or (2) bypass the computation of h2(n) and expand n right away (Lines 11–13), thereby saving time by not computing h2, at the risk of additional expansions and evaluations of h1. In order to choose rationally, we define a criterion based on the value of information (VOI) of evaluating h2(n) in this context.

The only addition of RLA* to LA* is the option to bypass the h2 computation (Lines 7–10). Suppose that we choose to compute h2; this results in one of the following outcomes:
  1: n is still expanded, either now or eventually.
  2: n is re-inserted into Open, and the goal is found without ever expanding n.

Computing h2 is helpful only in outcome 2, where the potential time savings are due to pruning a search subtree, at the expense of the time t2 for evaluating h2. However, whether outcome 2 will take place for a given state is not known to the algorithm until the goal is found, and the algorithm must decide whether to evaluate h2 according to what it believes to be the probability of each of the outcomes. We derive a rational policy for when to evaluate h2, under the myopic assumption that the algorithm continues to behave like LA* afterwards (i.e., it will never again consider bypassing the computation of h2).

The time wasted by being sub-optimal in deciding whether to evaluate h2 is called the regret of the decision. If h2(n) is not helpful and we decide to compute it, the effort spent evaluating h2 turns out to be wasted. On the other hand, if h2(n) is helpful but we decide to bypass it, we needlessly expand n. Due to the myopic assumption, RLA* would then evaluate both h1 and h2 for all successors of n.

                  Compute h2    Bypass h2
h2 helpful        0             te - td
h2 not helpful    td            0
Table 2: Regret in Rational Lazy A*

Table 2 summarizes the regret of each possible decision, for each possible future outcome; each column in the table represents a decision, while each row represents a future outcome. In the table, td is the time to compute h2 and re-insert n into Open, thus delaying the expansion of n; te is the time to remove n from Open, expand n, evaluate h1 on each of the b(n) (“local branching factor”) children of n, and insert them into the open list. Computing h2 needlessly wastes time td. Bypassing the h2 computation when h2 would have been helpful wastes te time, but because computing h2 would have cost td, the regret is te - td.

Let us denote the probability that h2 is helpful by pH. The expected regret of computing h2 is thus (1 - pH) · td. On the other hand, the expected regret of bypassing h2 is pH · (te - td). As we wish to minimize the expected regret, we should thus evaluate h2 just when:

(1 - pH) · td < pH · (te - td)    (1)

or equivalently:

td < pH · te    (2)

Whenever pH ≥ td/te holds across the search, the expected regret is minimized by always evaluating h2, regardless of the individual values of td and te. In these cases, RLA* cannot be expected to do better than LA*. For example, in the 15-puzzle and its variants, the effective branching factor is approximately 2. Therefore, if h2 is expected to be helpful for more than half of the nodes on which LA* evaluates h2, then one should simply use LA*.

                 A*_MAX               LA*                                          RLA* (using Rule (6))
lookahead   generated   time    generated   Good1     h2         time    generated   Good1     Good2     h2         time
2           1,206,535   0.707   1,206,535   391,313   815,213    0.820   1,309,574   475,389   394,863   439,314    0.842
4           1,066,851   0.634   1,066,851   333,047   733,794    0.667   1,169,020   411,234   377,019   380,760    0.650
6           889,847     0.588   889,847     257,506   632,332    0.533   944,750     299,470   239,320   405,951    0.464
8           740,464     0.648   740,464     196,952   543,502    0.527   793,126     233,370   218,273   341,476    0.377
10          611,975     0.843   611,975     145,638   466,327    0.671   889,220     308,426   445,846   134,943    0.371
12          454,130     0.927   454,130     95,068    359,053    0.769   807,846     277,778   428,686   101,378    0.429
Table 3: Weighted 15-puzzle: comparison of A*_MAX, Lazy A*, and Rational Lazy A*

For td < te, the decision of whether to evaluate h2 depends on the values of pH and the ratio td/te: evaluate h2 if

pH > td / te    (3)

Denote by tc the time to generate the children of n. Then:

te = to + tc + b(n) · (t1 + to)    (4)

By substituting (4) into (3), we obtain that we should evaluate h2 if:

t2 + to < pH · (to + tc + b(n) · (t1 + to))    (5)

The decision threshold depends on the potentially unknown probability pH, making it difficult to reach the optimal decision. However, if our goal is just to do better than LA*, then it is safe to replace pH by an upper bound on pH. Note that the quantities t1, t2, tc, and to may actually be variables that depend in complicated ways on the state of the search. Despite that, the very crude model we use, which assumes they are setting-specific constants, is sufficient to achieve improved performance, as shown in Section 5.

We now turn to implementation-specific estimation of the runtimes. Open in A* is frequently implemented as a priority queue, and thus we have, approximately, to = c · log N for some constant c, where N is the current size of Open. Evaluating h1 is cheap in many domains, as is the case with Manhattan Distance (MD) in discrete domains; in such domains, to is the most significant part of te. In such cases, rule (5) can be approximated as rule (6):

t2 < pH · b(n) · c · log N    (6)

Rule (6) recommends evaluating h2 mostly at late stages of the search, when the open list is large, and at nodes with a higher branching factor.

In other domains, such as planning, both t1 and t2 are significantly greater than both to and tc, and terms not involving t1 or t2 can be dropped from (5), resulting in Rule (7):

t2 / t1 < pH · b(n)    (7)

The right-hand side of (7) grows with b(n), so here it is beneficial to evaluate h2 only at nodes with a sufficiently large branching factor.
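Both rules translate directly into bypass tests. A sketch follows, with pH replaced by an estimate or upper bound as discussed above, and c_open a measured priority-queue constant (both names are ours, not from the original implementation):

import math

def evaluate_h2_rule6(t2, p_h, b_n, open_size, c_open):
    # Rule (6): h1 cheap, Open operations dominate; to ~ c_open * log(|Open|).
    return t2 < p_h * b_n * c_open * math.log(max(open_size, 2))

def evaluate_h2_rule7(t1, t2, p_h, b_n):
    # Rule (7): heuristic evaluation dominates (planning-style domains).
    return t2 / t1 < p_h * b_n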

5 Empirical evaluation

We now present our empirical evaluation of LA* and RLA*, on variants of the 15-puzzle and on planning domains.

5.1 Weighted 15 puzzle

We first provide evaluations on the weighted 15-puzzle variant [Thayer and Ruml2011], where the cost of moving each tile is equal to the number on the tile. We used a subset of 36 problem instances (out of the 100 instances of [Korf1985]) which could be solved with 2GB of RAM and a 15-minute timeout using the Weighted Manhattan Distance heuristic (WMD) as h1. As the expensive and informative heuristic h2 we use a heuristic based on lookaheads [Stern et al.2010]. Given a bound L, we applied a bounded depth-first search from a node n and backtracked at leaf nodes l for which f(l) > f(n) + L. f-values from the leaves were propagated back to n.
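A sketch of such a lookahead heuristic, following the description above: a bounded DFS below n that prunes at f(l) > f(n) + L and backs up the minimal frontier f-value. Goal detection inside the lookahead and transposition handling are omitted, and positive edge costs are assumed.

def lookahead_h(n, g_n, bound, successors, base_h):
    # Returns an admissible h2(n): explore below n, treat nodes whose f-value
    # exceeds f(n) + bound as leaves, and back up the minimal leaf f-value.
    f_limit = g_n + base_h(n) + bound
    best_f = float("inf")

    def dfs(m, g_m):
        nonlocal best_f
        f_m = g_m + base_h(m)
        if f_m > f_limit:                 # frontier of the lookahead
            best_f = min(best_f, f_m)
            return
        children = list(successors(m))
        if not children:                  # dead end: also a leaf
            best_f = min(best_f, f_m)
            return
        for child, cost in children:
            dfs(child, g_m + cost)

    dfs(n, g_n)
    return best_f - g_n                   # backed-up f turned into an h-value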

Table 3 presents the results, averaged over all instances solved. The runtimes are reported relative to the time of A* with WMD (with no lookahead), which generated 1,886,397 nodes (not reported in the table). The first three columns of Table 3 show the results for A*_MAX with the lookahead heuristic, for different lookahead depths. The best time is achieved for lookahead 6 (0.588 of the time of A* with WMD). The time does not continue to decrease with deeper lookaheads because, although the resulting heuristic improves as a function of lookahead depth (expanding and generating fewer nodes), the increasing overhead of computing the heuristic eventually outweighs the savings from fewer expansions.

The next four columns show the results for LA* with WMD as h1 and lookahead as h2, for different lookahead depths. The Good1 column presents the number of nodes where LA* saved the computation of h2, while the h2 column presents the number of nodes where h2 was computed. Roughly a third of the generated nodes were Good1, and since t2 was the dominant time cost, most of this saving is reflected in the timing results. The best results are achieved for lookahead 8, with a runtime of 0.527 relative to A* with WMD.

The final columns show the results of RLA*, with the constants of Rule (6) calibrated for each lookahead depth using a small subset of problem instances. The Good2 column counts the number of times that RLA* decided to bypass the h2 computation. Observe that RLA* outperforms LA*, which in turn outperforms A*_MAX, for most lookahead depths. The lowest time with RLA* (0.371 of A* with WMD) was obtained for lookahead 10. This happens because the more expensive heuristic is computed less often, reducing its effective computational overhead, with some adverse effect on the number of expanded nodes. Although LA* expanded fewer nodes, RLA* performed far fewer h2 computations, as can be seen in the table, resulting in lower overall runtimes.

5.2 Planning domains

                     Problems Solved                         Planning Time (seconds)                    GOOD
Domain               hLA  lmcut  max  selmax  LA*  RLA*      hLA  lmcut  max  selmax  LA*  RLA*        LA*  RLA*
airport 25 24 26 25 29 29 0.29 0.57 0.5 0.33 0.38 0.38 0.48 0.67
barman-opt11 4 0 0 0 0 3 N/A N/A N/A N/A N/A N/A N/A N/A
blocks 26 27 27 27 28 28 1.0 0.65 0.73 0.81 0.67 0.67 0.19 0.21
depot 7 6 5 5 6 6 2.27 2.69 3.17 3.14 2.73 2.75 0.06 0.06
driverlog 10 12 12 12 12 12 2.65 0.29 0.33 0.36 0.3 0.31 0.09 0.09
elevators-opt08 12 18 17 17 17 17 14.14 4.21 4.84 4.85 3.56 3.64 0.27 0.27
elevators-opt11 10 14 14 14 14 14 26.97 8.03 9.28 9.28 6.64 6.78 0.28 0.28
floortile-opt11 2 6 6 6 6 6 8.52 0.44 0.6 0.58 0.5 0.52 0.02 0.02
freecell 54 10 36 51 41 41 0.16 7.34 0.22 0.24 0.18 0.18 0.86 0.86
grid 2 2 1 2 2 2 0.1 0.16 0.18 0.34 0.15 0.15 0.17 0.17
gripper 7 6 6 6 6 6 0.84 1.53 2.24 2.2 1.78 1.25 0.01 0.4
logistics00 20 17 16 20 19 19 0.23 0.57 0.68 0.27 0.47 0.47 0.51 0.51
logistics98 3 6 6 6 6 6 0.72 0.1 0.1 0.11 0.1 0.1 0.07 0.07
miconic 141 140 140 141 141 141 0.13 0.55 0.58 0.57 0.16 0.16 0.87 0.88
mprime 16 20 20 20 21 20 1.27 0.5 0.51 0.5 0.44 0.45 0.25 0.25
mystery 13 15 15 15 15 15 0.71 0.35 0.38 0.43 0.36 0.37 0.3 0.3
nomystery-opt11 18 14 16 18 18 18 0.18 1.29 0.58 0.25 0.33 0.33 0.72 0.72
openstacks-opt08 15 16 14 15 16 16 2.88 1.68 3.89 3.03 2.62 2.64 0.44 0.45
openstacks-opt11 10 11 9 10 11 11 13.59 6.96 19.8 14.44 12.03 12.06 0.43 0.43
parcprinter-08 14 18 18 18 18 18 0.92 0.36 0.37 0.38 0.37 0.37 0.17 0.26
parcprinter-opt11 10 13 13 13 13 13 2.24 0.56 0.6 0.61 0.58 0.59 0.14 0.17
parking-opt11 1 1 1 3 2 2 9.74 22.13 17.85 7.11 6.33 6.43 0.64 0.64
pathways 4 5 5 5 5 5 0.5 0.1 0.1 0.1 0.1 0.1 0.1 0.12
pegsol-08 27 27 27 27 27 27 1.01 0.84 1.2 1.1 1.06 0.95 0.04 0.42
pegsol-opt11 17 17 17 17 17 17 4.91 3.63 5.85 5.15 4.87 4.22 0.04 0.38
pipesworld-notankage 16 15 15 16 15 15 0.5 1.48 1.12 0.85 0.9 0.91 0.42 0.42
pipesworld-tankage 11 8 9 9 9 9 0.36 2.24 1.02 0.47 0.69 0.71 0.62 0.62
psr-small 49 48 48 49 48 48 0.15 0.2 0.21 0.19 0.19 0.18 0.17 0.49
rovers 6 7 7 7 7 7 0.74 0.41 0.45 0.45 0.41 0.42 0.47 0.47
scanalyzer-08 6 13 13 13 13 13 0.37 0.25 0.27 0.27 0.26 0.26 0.06 0.06
scanalyzer-opt11 3 10 10 10 10 10 0.59 0.64 0.75 0.73 0.67 0.68 0.05 0.05
sokoban-opt08 23 25 25 24 26 27 3.94 1.76 2.19 2.96 1.9 1.32 0.04 0.4
sokoban-opt11 19 19 19 18 19 19 7.26 2.83 3.66 5.19 3.1 2.02 0.03 0.46
storage 14 15 14 14 15 15 0.36 0.44 0.49 0.45 0.44 0.42 0.21 0.28
tidybot-opt11 14 12 12 12 12 12 3.03 16.32 17.55 9.35 15.67 15.02 0.11 0.18
tpp 6 6 6 6 6 6 0.39 0.22 0.23 0.23 0.22 0.22 0.32 0.4
transport-opt08 11 11 11 11 11 11 1.45 1.24 1.41 1.54 1.25 1.26 0.04 0.04
transport-opt11 6 6 6 6 6 6 12.46 8.5 10.38 11.13 8.56 8.61 0.0 0.0
trucks 7 9 9 9 9 9 4.49 1.34 1.52 1.44 1.41 1.42 0.07 0.07
visitall-opt11 12 10 13 12 13 13 0.2 0.34 0.19 0.18 0.18 0.18 0.38 0.38
woodworking-opt08 12 16 16 16 16 16 1.08 0.71 0.75 0.75 0.66 0.67 0.56 0.56
woodworking-opt11 7 11 11 11 11 11 5.7 2.86 3.15 3.01 2.55 2.58 0.52 0.52
zenotravel 8 11 11 11 11 11 0.38 0.14 0.14 0.14 0.14 0.14 0.17 0.19
OVERALL 698 697 722 747 747 750 1.18 0.98 0.98 0.89 0.79 0.77 0.27 0.34
Table 4: Planning Domains — Number of Problems Solved, Total Planning Time, and Fraction of Good Nodes

We implemented LA* and RLA* on top of the Fast Downward planning system [Helmert2006], and experimented with two state-of-the-art heuristics: the admissible landmarks heuristic hLA (used as h1) [Karpas and Domshlak2009], and the landmark cut heuristic lmcut (used as h2) [Helmert and Domshlak2009]. On average, lmcut computation is 8.36 times more expensive than hLA computation. We did not implement HBP in the planning domains, as the heuristics we use are not consistent and in general the operators are not invertible. We also did not implement OB, as the cost of Open operations in planning is trivial compared to the cost of heuristic evaluations.

We experimented with all planning domains from previous IPCs without conditional effects and derived predicates (which the heuristics we used do not support). We compare the performance of LA* and RLA* to that of A* using each of the heuristics individually, as well as to their max-based combination, and to their combination using selective max (Sel-MAX) [Domshlak et al.2012]. The search was limited to 6GB of memory and 5 minutes of CPU time, on a single core of an Intel E8400 CPU with 64-bit Linux OS.

When applying RLA* in planning domains we evaluate Rule (7) at every state. This rule involves two unknown quantities: t2/t1, the ratio between heuristic computation times, and pH, the probability that h2 is helpful. Estimating t2/t1 is quite easy: we simply use the average computation times of both heuristics, measured as the search progresses.

Estimating pH is not as simple. While it is possible to empirically determine the best value for pH, as done for the weighted 15-puzzle, this does not fit the paradigm of domain-independent planning. Furthermore, planning domains are very different from each other, and even problem instances in the same domain vary in size, so finding a single value of pH which works well for many problems is difficult. Instead, we vary our estimate of pH adaptively during search. To understand this estimate, first note that if n is a node at which h2 was helpful, then we computed h2 for n but did not expand n. Thus, we can use the number of states for which we computed h2 that were not yet expanded (denoted by A), divided by the number of states for which we computed h2 (denoted by B), as an approximation of pH. However, A/B is not likely to be a stable estimate at the beginning of the search, when A and B are both small. To overcome this problem, we “imagine” we have observed k examples, which give us an initial estimate of pH, and use a weighted average between these imagined examples and the observed ones; that is, we estimate pH by (k · pinit + A) / (k + B). In our empirical evaluation, we used fixed values for k and for the initial estimate pinit.
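A sketch of this adaptive estimate; the class and the constants are illustrative placeholders (the actual values of k and the initial estimate used in the experiments are fixed but not reproduced here):

class PHelpfulEstimator:
    # Adaptive estimate of pH. A = nodes where h2 was computed but which were
    # never expanded ("helpful so far"); B = nodes where h2 was computed.
    # k imagined examples at p_init smooth the early, unstable ratio A/B.
    def __init__(self, k=100, p_init=0.5):      # placeholder constants
        self.k, self.p_init = k, p_init
        self.computed = 0       # B
        self.expanded = 0       # nodes expanded after h2 was computed

    def h2_computed(self):
        self.computed += 1

    def expanded_after_h2(self):
        self.expanded += 1

    def estimate(self):
        a = self.computed - self.expanded       # A: computed but not expanded
        return (self.k * self.p_init + a) / (self.k + self.computed)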

Table 4 depicts the experimental results. The leftmost part of the table shows the number of solved problems in each domain. As the table demonstrates, RLA* solves the most problems, and LA* solves the same number of problems as Sel-MAX. Thus, both LA* and RLA* provide state-of-the-art performance in cost-optimal planning. Looking more closely at the results, note that Sel-MAX solves 10 more problems than LA* and RLA* in the freecell domain. Freecell is one of only three domains in which h1 is more informed than h2 (the other two are nomystery-opt11 and visitall-opt11), violating the basic assumption behind LA* that h2 is more informed than h1. If we ignore these domains, both LA* and RLA* solve more problems than Sel-MAX.

The middle part of Table 4 shows the geometric mean of planning time in each domain, over the commonly solved problems (i.e., those that were solved by all 6 methods).

RLA* is the fastest overall, with LA* second. It is important to note that both LA* and RLA* are very robust; even in cases where they are not the best, they are never far from the best. For example, consider the miconic domain. Here, h1 is very informative, and thus the variant that computes only h1 is the best choice (but a bad choice overall). Observe that both LA* and RLA* saved 86% of the h2 computations, and were very close to the best algorithm in this extreme case. In contrast, the other algorithms that consider both heuristics (max and Sel-MAX) performed very poorly here (more than three times slower).

The rightmost part of Table 4 shows the average fraction of nodes for which LA* and RLA* did not evaluate the more expensive heuristic h2, over the problems solved by both of these methods. This is shown in the GOOD columns. Our first observation is that this fraction varies between domains, indicating why LA* works well in some domains but not in others. Additionally, in domains where this fraction differs between LA* and RLA*, RLA* usually performs better in terms of time. This indicates that when RLA* decides to skip the computation of the expensive heuristic, it is usually the right decision.

Alg       Expanded       Generated
hLA       183,320,267    1,184,443,684
lmcut     23,797,219     114,315,382
max       22,774,804     108,132,460
selmax    54,557,689     193,980,693
LA*       22,790,804     108,201,244
RLA*      25,742,262     110,935,698
Table 5: Total Number of Expanded and Generated States

Finally, Table 5 shows the total number of expanded and generated states over all commonly solved problems. LA* is indeed as informative as A*_MAX (the small difference is caused by tie-breaking), while RLA* is slightly less informed and expands slightly more nodes. However, RLA* is much more informed than its “intelligent” competitor, Sel-MAX; these are the only two algorithms in our set which selectively omit some heuristic computations. RLA* generated almost half as many nodes as Sel-MAX, suggesting that its decisions are better.

5.3 Limitations of LA*: 15-puzzle example

Some domains and heuristic settings do not yield a time speedup with LA*. An example is the regular, unweighted 15-puzzle. Results for A* and LA*, with and without HBP, are reported in Table 6. HBP1 (HBP2) counts the number of nodes where HBP pruned the need to compute h1 (resp. h2). OB is the number of nodes where OB was helpful. Bad is the number of nodes that went through two Open cycles. Finally, Good is the number of nodes where the computation of h2 was saved due to LA*.

In the first experiment, Manhattan distance (MD) was divided into two heuristics, each summing the distances of a disjoint half of the tiles, used as h1 and h2. Results are averaged over 100 random instances with an average solution depth of 26.66. As seen from the first two lines, HBP applied on top of A* saved about 36% of the heuristic evaluations. Next are results for LA* and LA*+HBP. Many nodes are pruned by HBP or OB. The number of good nodes dropped from 28% (line 3) to as little as 11% when HBP was applied. Timing results (in ms) show that all variants performed equally. The reason is that the time overhead of the h1 and h2 heuristics is very small, so the savings on these 28% or 11% of nodes were not significant enough to outweigh the HBP overhead of maintaining the upper and lower bounds.

The next experiment uses MD as h1 and a variant of the additive 7-8 PDBs [Korf and Felner2002] as h2. Here we can observe an interesting phenomenon. For LA*, most nodes were caught by either HBP (when applicable) or by OB. Only 4% of the nodes were good nodes. The reason is that the 7-8 PDB heuristic always dominates MD and is always the maximum of the two. Thus, the 7-8 PDB value was needed already at early stages (e.g., by OB), and MD alone almost never caused nodes to be added to Open and remain there until the goal was found.

These results indicate that in such domains, LA* has limited merit. Due to uniform operator costs and heuristics that are consistent and cheap to compute, very little room is left for improvement via good nodes. We thus conclude that LA* is likely to be effective when there is a significant difference between t1 and t2, and/or when operators are not bidirectional or have non-uniform costs, allowing for more good nodes and significant time savings.

Alg. Generated HBP1 HBP2 OB Bad Good time
MD split into h1 and h2, Depth = 26.66
A* 1,085,156 0 0 0 0 0 415
A*+HBP 1,085,156 216,689 346,335 0 0 0 417
LA* 1,085,157 0 0 734,713 37,750 312,694 417
LA*+HBP 1,085,157 140,746 342,178 589,893 37,725 115,361 416
Manhattan distance, 7-8 PDB, Depth 52.52
A* 43,741 0 0 0 0 0 34.7
A*+HBP 43,804 30,136 1,285 0 0 0 33.6
LA* 43,743 0 0 42,679 47 1,017 34.2
LA*+HBP 43,813 7,669 1,278 42,271 21 243 33.3
Table 6: Results on the 15 puzzle

6 Conclusion

We discussed two schemes for decreasing heuristic evaluation times. LA* is very simple to implement and is as informative as A*_MAX. LA* can significantly speed up the search, especially if t2 dominates the other time costs, as seen in the weighted 15-puzzle and in planning domains. Rational LA* allows additional cuts in h2 evaluations, at the expense of being less informed than A*_MAX. However, due to a rational tradeoff, this allows for an additional speedup, and Rational LA* achieves the best overall performance in our domains.

RLA* is simpler to implement than its direct competitor, Sel-MAX, but its decisions can be more informed. When RLA* has to decide whether to compute h2 for some node n, it already knows that n has re-emerged from Open, i.e., that f1(n) is currently the lowest f-value in Open. By contrast, although Sel-MAX uses a much more complicated decision rule, it makes its decision when n is first generated, and does not know whether h1(n) alone will be informative enough to prune n. Rational LA* outperformed Sel-MAX in our planning experiments.

RLA* and its analysis can be seen as an instance of the rational meta-reasoning framework [Russell and Wefald1991]. While this framework is very general, it is extremely hard to apply in practice. Recent work exists on meta-reasoning in DFS algorithms for CSP [Tolpin and Shimony2011] and in Monte-Carlo tree search [Hay et al.2012]. This paper applies these methods successfully to a variant of A*. There are numerous other ways to use rational meta-reasoning to improve A*, from generalizing RLA* to handle more than two heuristics, to using the meta-level to control decisions in other variants of A*. All these potential extensions provide fruitful ground for future work.

7 Acknowledgments

This research was supported by the Israeli Science Foundation (ISF) under grant #305/09 to Ariel Felner and Eyal Shimony, and by the Lynne and William Frankel Center for Computer Science.

References

  • [Dechter and Pearl1985] R. Dechter and J. Pearl. Generalized best-first search strategies and the optimality of A*. Journal of the ACM, 32(3):505–536, 1985.
  • [Domshlak et al.2012] Carmel Domshlak, Erez Karpas, and Shaul Markovitch. Online speedup learning for optimal planning. JAIR, 44:709–755, 2012.
  • [Felner et al.2011] A. Felner, U. Zahavi, R. Holte, J. Schaeffer, N. Sturtevant, and Z. Zhang. Inconsistent heuristics in theory and practice. Artificial Intelligence, 175(9-10):1570–1603, 2011.
  • [Felner et al.2012] A. Felner, M. Goldenberg, G. Sharon, R. Stern, T. Beja, N. R. Sturtevant, J. Schaeffer, and R. Holte. Partial-expansion A* with selective node generation. In AAAI, pages 471–477, 2012.
  • [Hart et al.1968] P. E. Hart, N. J. Nilsson, and B. Raphael. A formal basis for the heuristic determination of minimum cost paths. IEEE Transactions on Systems Science and Cybernetics, SSC-4(2):100–107, 1968.
  • [Hay et al.2012] Nicholas Hay, Stuart Russell, David Tolpin, and Solomon Eyal Shimony. Selecting computations: Theory and applications. In Nando de Freitas and Kevin P. Murphy, editors, UAI, pages 346–355. AUAI Press, 2012.
  • [Helmert and Domshlak2009] Malte Helmert and Carmel Domshlak. Landmarks, critical paths and abstractions: What’s the difference anyway? In ICAPS, pages 162–169, 2009.
  • [Helmert2006] Malte Helmert. The Fast Downward planning system. JAIR, 26:191–246, 2006.
  • [Karpas and Domshlak2009] Erez Karpas and Carmel Domshlak. Cost-optimal planning with landmarks. In IJCAI, pages 1728–1733, 2009.
  • [Korf and Felner2002] R. E. Korf and A. Felner. Disjoint pattern database heuristics. Artificial Intelligence, 134(1-2):9–22, 2002.
  • [Korf1985] R. E. Korf. Depth-first iterative-deepening: An optimal admissible tree search. Artificial Intelligence, 27(1):97–109, 1985.
  • [Russell and Wefald1991] Stuart Russell and Eric Wefald. Principles of metareasoning. Artificial Intelligence, 49:361–395, 1991.
  • [Stern et al.2010] Roni Stern, Tamar Kulberis, Ariel Felner, and Robert Holte. Using lookaheads with optimal best-first search. In AAAI, pages 185–190, 2010.
  • [Sun et al.2009] X. Sun, W. Yeoh, P. Chen, and S. Koenig. Simple optimization techniques for A*-based search. In AAMAS, pages 931–936, 2009.
  • [Thayer and Ruml2011] Jordan T. Thayer and Wheeler Ruml. Bounded suboptimal search: A direct approach using inadmissible estimates. In Proceedings of the Twenty-second International Joint Conference on Artificial Intelligence (IJCAI-11), 2011.
  • [Tolpin and Shimony2011] David Tolpin and Solomon Eyal Shimony. Rational deployment of CSP heuristics. In Toby Walsh, editor, IJCAI, pages 680–686. IJCAI/AAAI, 2011.
  • [Zhang and Bacchus2012] Lei Zhang and Fahiem Bacchus. Maxsat heuristics for cost optimal planning. In AAAI, 2012.