We extend a result by Valtorta [17] on the complexity of heuristic estimates for the A* algorithm in a way that makes it applicable to some problems that have arisen in recent work in planning and search [6, 12]. We start by giving an informal description of the new result that contrasts it with the original one. Definitions and proofs will be given in the body of the paper.
Both our paper and [17] present complexity results in the state-space model of problem-solving. Both use the number of node expansions performed by the A* algorithm as a measure of the cost of solving a (state-space) problem using a (heuristic or blind) search algorithm. (By a “blind” search algorithm we mean one that uses no cost estimates or uninformative cost estimates.) Both consider the cost of solving a problem as the total cost of computing the heuristic plus the cost of using A* with that heuristic. In both cases, the heuristics are viewed as solutions of relaxed subproblems [13], also known as auxiliary problems [5, 11].¹

¹ The relaxed subproblems of [13] and the auxiliary problems of [5, 11] are special cases of the abstracted problems of [12, 14]. We do not claim that our results apply for the more general definition of [12, 14], except in the special cases where that definition coincides with the older ones.
Valtorta’s 1984 work was motivated by the attempt to spur research on automating the efficient computation of heuristics by showing that, with (certain classes of) inefficiently computed heuristics, A* was no better than brute-force search (Dijkstra’s algorithm). This paper is motivated by the need to determine how best to use (in a sense to be explained later) a set of available heuristics.
Let Π, Π′, Π″ be three problems such that Π″ aux Π′ aux Π, where aux is the auxiliarity relation. Informally, this notation indicates that Π is the base problem, Π′ is a relaxed subproblem of Π, and Π″ is a relaxed subproblem of Π′. A natural question is that of how to choose a relaxed subproblem (of which there are many) so that the cost of computing the heuristic (by solving the relaxed subproblem) plus the cost of solving the base problem (by using the heuristic) is minimum. This is the question considered in [16] and [17], whose key result we summarize after the introduction of necessary notation.
Let A ≽ B denote the fact that algorithm A “largely dominates” algorithm B, i.e., in a certain technical sense (to be explained later), A is at least as efficient as B.
Let A*(Π, X(Π′)) denote the A* algorithm, applied to the problem Π, informed by heuristics derived from the relaxed model Π′, where instances of Π′ are solved by an algorithm X. If instances of Π′ are themselves solved by an A* search, using Π″, we use the notation A*(Π, A*(Π′, X(Π″))). The pattern of this notation allows us to abbreviate a deeper hierarchy as A*(Π, A*(Π′, A*(Π″, …, X(Π⁽ᵏ⁾) …))), i.e., as a hierarchy of relaxed models, where the last relaxed model is solved algorithmically. (See Figure 1.)
Let Φ denote an element of the class of most-relaxed models, i.e., a state-space graph in which there is a path of zero cost between every pair of states. A*(Π, Φ) is said to be a blind or brute-force algorithm, as it uses a completely uninformative heuristic. Since Dijkstra’s algorithm is optimal in a large class of blind search algorithms, as shown in [2], by A*(Π, Φ) we mean A* with the tie-breaking rule (for node expansions) that makes it equivalent to Dijkstra’s algorithm.
The original result [17] is:

A*(Π, Φ) ≽ A*(Π, A*(Π′, Φ)). (1)
The new result (which summarizes both Theorems 1 and 2) is:

A*(Π, X(Π″)) ≽ A*(Π, A*(Π′, X(Π″))), (2)
and as a simple corollary,

A*(Π, X(Π⁽ᵏ⁾)) ≽ A*(Π, A*(Π′, A*(Π″, …, X(Π⁽ᵏ⁾) …))). (3)
In words, while Valtorta showed that there is no efficient way of solving a problem with A* when the cost of computing the heuristic using a blind method is counted, we now show that the most efficient way of using a hierarchy of relaxed models, culminating in a heuristic that can be computed algorithmically, is to ignore the hierarchy and use that heuristic directly. Generating a relaxed model is useful only when an efficient algorithm accompanies it.
The rest of the paper is organized as follows: In the second section we provide background material on state-space search, the A* algorithm, and relaxed models. The aim of the third section is to motivate the need for our new result, by describing an instance from the recent literature to which it can be usefully applied. In the fourth section, we prove our main result (corresponding to (2) above). The fifth section, which concludes the paper, is devoted to an interpretation of the main result, including some remarks on the measure of cost (“largely dominates”) used to evaluate the comparative efficiency of state-space search algorithms.
2.1 State-Space Search
The state-space approach to problem-solving considers a problem as a quadruple, Π = (S, O, s₀, G). S is the set of possible states of the problem. O is the set of operators, or transitions from state to state. s₀ is the one initial state of a problem instance, and G is the set of goal states. Search problems can be represented as a state-space graph, where the states are nodes, and the operators are directed, weighted arcs between nodes (the weight associated with each operator, o, is the cost of applying it, c(o)). The problem consists of determining a sequence of operators, o₁, o₂, …, oₙ, that, when applied to s₀, yields a state in G. Such a sequence is called a solution path (or solution), with length n and cost Σᵢ c(oᵢ). A solution with minimum cost is called optimal.
Solutions to a given problem may be found by brute-force search over the state-space. However, as the sizes of the state-spaces of most problems are prohibitively large, the only hope of finding an optimal solution in reasonable time is to use an intelligent method of guiding a search through the state-space. Typically, such methods take the form of branch-and-bound, wherein partial solutions (equivalently, classes of solutions) are enumerated (“branch”), and perhaps eliminated from future consideration by an estimate of solution cost (“bound”). One such method, the celebrated A* algorithm (originally presented in [7]; see also [13]), orders the search by associating with each node n two values: g(n), the length of the shortest path found from the initial state to n, and h(n), an estimate of the length of the shortest path from n to any goal state (the actual length is h*(n)). In brief, A* is an ordered best-first search algorithm that always examines the successors of the “most promising” node in the search tree, based on the evaluation function f(n) = g(n) + h(n). Thus the number of nodes expanded is a function of the heuristic function used.
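The ordered best-first scheme just described can be sketched as follows. The graph encoding, function names, and the (cost, expansions) return value are our own illustrative choices, not notation from the paper:

```python
import heapq

def astar(graph, start, goals, h):
    """A* search: `graph` maps a node to a list of (successor, edge_cost)
    pairs, `goals` is a set of goal nodes, and `h` estimates the cheapest
    remaining cost.  Returns (cost, expansions) for an optimal path."""
    open_heap = [(h(start), 0, start)]       # entries are (f, g, node)
    best_g = {start: 0}
    expansions = 0
    while open_heap:
        f, g, node = heapq.heappop(open_heap)
        if g > best_g.get(node, float("inf")):
            continue                          # stale queue entry
        if node in goals:
            return g, expansions              # first goal expansion is optimal
        expansions += 1
        for succ, cost in graph.get(node, []):
            g2 = g + cost
            if g2 < best_g.get(succ, float("inf")):
                best_g[succ] = g2
                heapq.heappush(open_heap, (g2 + h(succ), g2, succ))
    return float("inf"), expansions
```

With h ≡ 0 this degenerates to the blind, uniform-cost behavior discussed later; with an informative admissible h it expands fewer nodes while still returning an optimal cost.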
The following definitions will simplify the discussion:
A heuristic function, h, is said to be admissible if h(n) ≤ h*(n) for every node n.
A heuristic function, h, is said to be monotone if h(n) ≤ c(n, n′) + h(n′) for every node n and every successor n′ of n (recall that the edge cost c(n, n′) is determined by the operator that maps n to n′). Monotonicity implies admissibility [13].
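Both definitions are mechanical enough to check directly on a small state-space graph. The sketch below is ours (the paper defines no such procedure); it tests the edge-wise monotonicity condition and, on a small acyclic graph, compares h against h*:

```python
def is_monotone(graph, h, goals):
    """Check h(n) <= c(n, n') + h(n') on every edge, and h(g) == 0 at goals.
    `graph` maps a node to a list of (successor, edge_cost) pairs."""
    ok_edges = all(h(n) <= c + h(m)
                   for n, succs in graph.items() for m, c in succs)
    return ok_edges and all(h(g) == 0 for g in goals)

def is_admissible(graph, h, goals):
    """Check h(n) <= h*(n), with h* computed by exhaustive path
    enumeration (adequate for small acyclic graphs only)."""
    def h_star(n):
        if n in goals:
            return 0
        costs = [c + h_star(m) for m, c in graph.get(n, [])]
        return min(costs) if costs else float("inf")
    return all(h(n) <= h_star(n) for n in graph)
```

A heuristic that passes `is_monotone` also passes `is_admissible`, which is the implication stated above; the converse does not hold in general.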
A* terminates when it expands (examines) a goal state for the first time. If it uses an admissible heuristic, A* will have found the optimal path to this goal state, i.e., a path of cost C*, the cost of an optimal solution. As it orders the nodes by f, A* with a monotone heuristic surely expands all nodes that satisfy f(n) < C*, and possibly expands some of the nodes that satisfy f(n) = C*, but expands none of the nodes that satisfy f(n) > C*. The surely expanded nodes form a connected component in the state-space graph (containing the initial state), and those nodes, together with the possibly expanded nodes, form a larger connected component (containing both the initial and goal states).
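This characterization can be made concrete: given the cheapest-path costs g*(n) (computed here by Dijkstra's algorithm) and the optimal cost C*, the surely and possibly expanded sets fall out of comparing f(n) = g*(n) + h(n) against C*. The code is our illustration, not the paper's:

```python
import heapq

def expansion_sets(graph, start, goals, h):
    """Partition reachable nodes by f(n) = g*(n) + h(n) versus the optimal
    cost C*: 'surely expanded' if f(n) < C*, 'possibly expanded' if
    f(n) = C*.  g*(n) is the cheapest-path cost from `start`, found by
    Dijkstra's algorithm; `h` should be monotone for the classification
    in the text to apply."""
    g = {start: 0}
    heap = [(0, start)]
    while heap:
        d, n = heapq.heappop(heap)
        if d > g.get(n, float("inf")):
            continue                          # stale entry
        for m, c in graph.get(n, []):
            if d + c < g.get(m, float("inf")):
                g[m] = d + c
                heapq.heappush(heap, (d + c, m))
    c_star = min(g[n] for n in goals if n in g)   # optimal solution cost C*
    surely   = {n for n, gn in g.items() if gn + h(n) < c_star}
    possibly = {n for n, gn in g.items() if gn + h(n) == c_star}
    return surely, possibly, c_star
```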
2.3 Relaxed Models
As the accuracy of the heuristic function determines the time complexity (measured in node expansions) of A*, many researchers have attempted to automate the learning of heuristic functions. One proposed method of developing good heuristics is to “consult simplified models of the problem domain” [4, 5, 11, 13]. These simplified models are generated via constraint-deletion, i.e., ignoring selected constraints on the applicability of operators. Recently, there has been renewed interest in this and other uses of abstraction in problem-solving [9, 10, 12, 15], including a generalization of the notion of simplification [12, 14] that will be described below.
Pearl [13] discusses the use of constraint-deletion to generate three known heuristics for the familiar Eight Puzzle problem. He formalizes the problem in terms of domain predicates, and then describes the single operator in the state-space – MOVE(t, x, y), moving a tile t from position x to position y – as follows:
preconditions : ON(t, x), CLEAR(y), ADJ(x, y)
add list : ON(t, y), CLEAR(x)
delete list : ON(t, x), CLEAR(y)
Removing preconditions or constraints for this operator creates a relaxed model of the problem. Deleting different sets of constraints yields different relaxed models.
Specifically, when constraints are removed from a problem, new edges and nodes are introduced into the state-space graph, yielding a “relaxed” state-space graph (a supergraph of the original). If the states and operators of the original problem, Π, are denoted by the set S and the relation O, then the relaxed problem, Π′, consists of S′ and O′, where S ⊆ S′ and O ⊆ O′. Note that relaxed models may permit certain “speedup transformations,” such as factoring into independent subproblems [12]: our main result underscores the importance of discovering such optimizations, as discussed in Section 5.
An optimal (i.e., shortest) path between any two given states in Π′ cannot be longer than the shortest path between the same two states in Π, since all paths in Π are also paths in Π′. Thus the length of such a relaxed solution is a lower bound on the length of an optimal solution to the original problem, and this information can be used as an admissible heuristic to speed up a branch-and-bound search algorithm (e.g., A*). Note that, while our results apply to “traditional” [4, 5, 11, 13, 16, 17] relaxed problems obtained by constraint-deletion, other transformations, more general than constraint-deletion, may be used to produce admissible heuristics. Prieditis [14] gives conditions for transformations that guarantee the generation of problems whose solution can be used to compute an admissible heuristic. He calls these transformations abstracting and the resulting problems abstracted problems. Our results do not necessarily apply to abstracted problems.
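The supergraph argument can be checked mechanically: compute exact goal distances in both the original graph and a relaxed supergraph, and confirm that the relaxed distances never exceed the original ones. The graphs and names below are our own toy example:

```python
import heapq

def dist_to_goal(graph, goal):
    """Cheapest cost from every node to `goal`, via Dijkstra on reversed edges."""
    rev = {}
    for n, succs in graph.items():
        for m, c in succs:
            rev.setdefault(m, []).append((n, c))
    dist = {goal: 0}
    heap = [(0, goal)]
    while heap:
        d, n = heapq.heappop(heap)
        if d > dist.get(n, float("inf")):
            continue
        for m, c in rev.get(n, []):
            if d + c < dist.get(m, float("inf")):
                dist[m] = d + c
                heapq.heappush(heap, (d + c, m))
    return dist

# Original problem and a relaxation: same nodes, a superset of edges.
original = {'s': [('a', 2)], 'a': [('g', 3)], 'g': []}
relaxed  = {'s': [('a', 2), ('g', 1)], 'a': [('g', 3)], 'g': []}

h_true    = dist_to_goal(original, 'g')   # exact remaining cost h*
h_relaxed = dist_to_goal(relaxed, 'g')    # admissible heuristic from the supergraph
```

Every path of the original graph survives in the relaxed one, so `h_relaxed[n] <= h_true[n]` holds for all n, which is exactly the admissibility property used above.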
Of course, the length of a solution to a further relaxed model Π″ provides a lower bound on the length of a solution to Π′. Thus, if no efficient algorithm² is known for solving Π′, we may use Π″ to provide an admissible heuristic for an A* search of Π′. We may conceive of a search algorithm (Figure 2) that performs A* search at many levels of a relaxed model hierarchy, each level providing heuristic estimates for the level above.

² To be precise, we describe as efficient algorithms any techniques that perform no search in a relaxed model.
3 A Motivating Example
The following example of the application of our result concerns a heuristic for the Eight Puzzle. Please refer to the previous section for a formalization of the puzzle in terms of domain predicates and operators. The example is adapted from [6].
In the Eight Puzzle, the tile positions form a bipartite graph of positions – each move shifts the blank from one side of the bipartite graph to the other. If one colors the puzzle like a checkerboard and connects adjacent squares by edges, the red squares will form one side of the bipartite graph and the black squares the other. In the Eight Puzzle problem, the blank is constrained to move to only a small subset of the other side of the graph (i.e., the adjacent positions). One may relax this constraint by allowing the blank to move to any of the positions in the other side of the bipartite graph.
This Checkerboard Relaxed Adjacency (Check-RA) relaxed model may be thought of as being somewhere between the original problem and the Relaxed Adjacency (RA) model [4], because one has deleted only a part of the adjacency requirement (the RA model results from deleting the adjacency requirement in moving tiles into the blank position). In the Check-RA model, one can think of any given tile position as being “adjacent” to half of the other positions. To formalize this, we define two new STRIPS-like predicates (where ⊕ denotes exclusive-or):
RED(x) : x is a red position
BLACK(x) : x is a black position
and change the preconditions for MOVE:
preconditions : ON(t, x), CLEAR(y), RED(x) ⊕ RED(y)
add list : ON(t, y), CLEAR(x)
delete list : ON(t, x), CLEAR(y)
In short, if a black position is blank (i.e., clear), only tiles in red positions may move into it, and vice versa. There is no known algorithm to solve this relaxed model, except for one that searches the corresponding state space. However, other relaxed problems for the Eight Puzzle are more relaxed than Check-RA, e.g., Relaxed Adjacency (RA), for which an efficient solution algorithm is known [4, 6]. We can therefore solve the Check-RA problem by using A* with a heuristic computed by solving the RA problem. A natural question to ask is whether it is more efficient to use the heuristic from RA indirectly (to solve Check-RA as just described) rather than to use it directly to solve the Eight Puzzle. Clearly, the Check-RA heuristic is never smaller than the RA heuristic and can therefore prevent A* from expanding some nodes that it would expand using the RA heuristic. On the other hand, a secondary search procedure (using A*) must be carried out to compute the Check-RA heuristic. Is it possible to predict which one of the two uses of the RA heuristic is the most efficient? We will answer this question in the affirmative.
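For concreteness, here is a sketch of the kind of search-free procedure that makes RA efficiently solvable: since an RA move may swap the blank with any tile, a shortest RA solution can be built greedily. The board encoding and the function are our reconstruction in the spirit of the algorithm cited in [4, 6], not code from the paper:

```python
def ra_heuristic(state, goal):
    """Cost of the Relaxed Adjacency (RA) model, in which a move swaps the
    blank (0) with *any* tile.  Solved without search: repeatedly move into
    the blank's position the tile that belongs there; if the blank already
    sits in its goal position, swap it with any misplaced tile.
    `state` and `goal` are tuples: index = board position, value = tile."""
    state = list(state)
    moves = 0
    while state != list(goal):
        b = state.index(0)                  # current blank position
        if goal[b] == 0:                    # blank is home: break open a cycle
            j = next(i for i in range(len(state)) if state[i] != goal[i])
        else:                               # bring home the tile that belongs at b
            j = state.index(goal[b])
        state[b], state[j] = state[j], state[b]
        moves += 1
    return moves
```

Each non-cycle-breaking swap places one tile permanently, so the loop terminates; e.g., two swapped tiles with the blank already home cost three RA moves.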
4 The Limitations of Abstraction
We prove a result that allows us to collapse arbitrary relaxed model hierarchies to two levels.
Consider that some constraints of a base-level problem Π have been deleted to yield a relaxed model Π′, whose solutions provide a heuristic estimate h′ for the base-level problem. Unfortunately, there may be no efficient algorithmic solution to this relaxed model Π′, and to compute h′ one may have to perform an A* search in Π′, informed by a heuristic h″ derived from a further relaxed model Π″ that is solved by an efficient algorithm X. We will denote this algorithm by A*(Π, A*(Π′, X(Π″))). Consider also the algorithm A*(Π, X(Π″)), that uses h″ directly as a heuristic for searching Π. The two algorithms are depicted in Figure 1.
Note that A*(Π, A*(Π′, X(Π″))) performs two distinct heuristic searches, one in Π, and one in Π′. A*(Π, X(Π″)) performs only the search in Π. We will prove that A*(Π, X(Π″)) largely dominates A*(Π, A*(Π′, X(Π″))), i.e., all nodes surely expanded by A*(Π, X(Π″)) are also surely expanded by A*(Π, A*(Π′, X(Π″))), and all nodes possibly expanded by A*(Π, X(Π″)) are also possibly expanded by A*(Π, A*(Π′, X(Π″))).
Theorem 1. If a node n is surely expanded by algorithm A*(Π, X(Π″)), then node n is surely expanded by algorithm A*(Π, A*(Π′, X(Π″))).
Proof. Assume not. Consider a node n surely expanded by A*(Π, X(Π″)), but not surely expanded by algorithm A*(Π, A*(Π′, X(Π″))). Consider an ancestor m of n that is the first node on a cheapest path from the initial state s to n that is not surely expanded by the top-level search of A*(Π, A*(Π′, X(Π″))):

k(s, m) + h′(m) ≥ C*,

where k(s, m) denotes the cost of the cheapest path from s to m. (Since m is either the initial state or the successor of a surely expanded node, m is generated, and h′(m) is indeed computed by a secondary A* search of Π′ starting at m.) By assumption, computation of h′(m) does not surely expand n:

k′(m, n) + h″(n) ≥ h′(m),

where k′(m, n) denotes the cost of the cheapest path from m to n in the relaxed model Π′. Clearly, k′(m, n) ≤ k(m, n), since every path in Π is also a path in Π′. Recalling that m lies on a cheapest path from s to n, the inequalities combine into

C* ≤ k(s, m) + h′(m) ≤ k(s, m) + k(m, n) + h″(n) = k(s, n) + h″(n).

But by assumption, n is surely expanded by A*(Π, X(Π″)):

k(s, n) + h″(n) < C*,

which contradicts the preceding inequality. ∎
Theorem 2. If a node n is possibly expanded by algorithm A*(Π, X(Π″)), then node n is possibly expanded by algorithm A*(Π, A*(Π′, X(Π″))).
Proof. Assume not. Consider a node n possibly expanded by A*(Π, X(Π″)), but not possibly expanded by algorithm A*(Π, A*(Π′, X(Π″))). Consider an ancestor m of n that is the first node on a cheapest path from s to n that is not possibly expanded by the top-level search of A*(Π, A*(Π′, X(Π″))):

k(s, m) + h′(m) > C*.

By assumption, computation of h′(m) does not possibly expand n:

k′(m, n) + h″(n) > h′(m).

Since k′(m, n) ≤ k(m, n), and m lies on a cheapest path from s to n,

C* < k(s, m) + h′(m) ≤ k(s, m) + k(m, n) + h″(n) = k(s, n) + h″(n).

But by assumption, n is possibly expanded by A*(Π, X(Π″)):

k(s, n) + h″(n) ≤ C*,

which contradicts the preceding inequality. ∎
5 Interpretation of Results
As anticipated at the beginning of the previous section, Theorems 1 and 2 imply that A*(Π, X(Π″)) largely dominates A*(Π, A*(Π′, X(Π″))). This is precisely the sense in which A* is said to be optimal [13, p. 85]: given a monotone heuristic h, A* largely dominates every admissible algorithm that uses h. The relation ≽ misses some important distinctions. For example, greedy algorithms, which may operate in polynomial time, can be viewed as heuristic searches with optimal tie-breaking rules to choose among the possibly expanded nodes. Large domination does not imply that all nodes expanded by A*(Π, X(Π″)) will necessarily be expanded by A*(Π, A*(Π′, X(Π″))). In fact there are some tie-breaking rules (for ordering expansion of nodes with equal f values) for which A*(Π, A*(Π′, X(Π″))) will expand fewer nodes than A*(Π, X(Π″)). Traditionally, however, large domination is equated with superior efficiency, as the number of nodes for which f(n) = C* is assumed to be small: for example, if either g or h are continuous valued functions [13, p. 85] or the problem instance is non-pathological (as defined in [2, p. 522]), such ties are rare or non-existent.
The theorems extend results of Valtorta [17], who proved that brute-force, uniform-cost search (Dijkstra’s algorithm) is never less efficient than a heuristic search that relies on uniform-cost search of relaxed models for the computation of heuristics. This follows from our results by considering the special case in which h″ is uniformly zero, yielding uniform-cost search. However, Valtorta’s original result is for a sharper notion of “more efficient than”: the number of node expansions, rather than large domination. The reason for this discrepancy lies in the fact that Dijkstra’s algorithm expands all and only the nodes for which g(n) < C*, while A* with monotone heuristics may expand some nodes for which f(n) = C*: for Dijkstra’s algorithm, the set of possibly expanded nodes coincides with the set of surely expanded nodes. Analogously, while Dijkstra’s algorithm is optimal with respect to the measure “number of node expansions” within the class of blind forward unidirectional algorithms [2], A* is optimal in that it largely dominates all informed algorithms that use consistent heuristics. “Large domination” is a weaker form of optimality than “fewer node expansions,” as is explained by Dechter and Pearl in [2], and especially in [3].
We observe that A* with monotone heuristics, just like Dijkstra’s algorithm, never expands the same node twice. Since all heuristics computed by solving a relaxed problem are monotone [13, 17], we can use “the number of expanded nodes” and “the number of node expansions” interchangeably. Therefore our results hold for the cost measure “number of node expansions,” as well as for the cost measure “number of expanded nodes.”
While Valtorta [17] showed that there is no advantage to using A* in certain situations, we show that, in certain situations, there is no advantage to using a relaxed-model hierarchy within an A* algorithm. Theorems 1 and 2 indicate that in reducing the cost of finding optimal solutions, a heuristic is effective only when it is computable by an efficient algorithm – the cost of computing a heuristic function by a secondary search exceeds the savings offered by the heuristic. This possibility was recognized in the design of ABSOLVER [12], where “speedup transformations” are used, as well as in other recent work [14]. Our result shows (in a precise and general way) that speedup transformations are not only useful, but necessary.
Some readers may object to our use of the word “search” in the preceding paragraph, since, after all, a search in a much reduced state space is sufficient to turn an inefficient procedure into a practical, efficient one. In the spirit of [5, 17] we use the word search to signify that a relaxed problem is solved by search in its state space (called underlying graph in [11] and skeleton in [5]) and view the construction of a reduced state space as an instance of a “speedup transformation.” A way of reducing the state space for a (relaxed) problem is to factor (or decompose) it into independent subproblems. Decomposability is possible when the goal state of a state-space search problem can be achieved by solving its subgoals independently. As discussed in [12, 13], the state-space for each independent subproblem can be so much smaller than the original state-space that computation of the heuristic by searching the factored problem is computationally advantageous over blind search. We illustrate these considerations with an example adapted from [12].
Mostow and Prieditis [12] describe a heuristic (called X-Y) that is never smaller than the well-known Manhattan Distance (MD) heuristic. It corresponds to a relaxed problem in which “a horizontal move is allowed only into a column containing the blank, and a vertical move is allowed only into the row containing the blank. X-Y is therefore more accurate than Manhattan Distance, which ignores the blank completely.” This relaxed problem can be obtained from a representation of the tiles in the Cartesian plane³, by dropping all information about the position of the blank in the Y coordinate (YLOCB) from moves in the X coordinate (XMove) and all information about the position of the blank in the X coordinate (XLOCB) from moves in the Y coordinate (YMove). The X-Y relaxed problem can be decomposed into two subproblems (or factors), corresponding to each coordinate. The goal of the first subproblem is to place each tile in its correct column, while the goal of the second subproblem is to place each tile in its correct row. Since we dropped from the list of preconditions of the move operator in a coordinate all information about the position of the blank in the other coordinate, we obtain two independent subproblems, as described in Figure 3.

³ We use the traditional goal state with the blank in the middle and tiles arranged clockwise around the border, starting with tile 1 in the top left corner. Mandrioli et al. [11] used the Cartesian representation for the Eight Puzzle in the first published examples of relaxed problems, but they did not provide the X-Y heuristic as an example.
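The coordinate factoring just described is easy to see in code for the MD case, which decomposes the same way: MD is exactly the sum of an X (column) factor and a Y (row) factor, each computable tile by tile while ignoring the blank. The function and its board encoding are our illustration, not code from [12]:

```python
def manhattan(state, goal, width=3):
    """Manhattan Distance for a sliding-tile puzzle, written as the sum of
    two independent 'factors': column (X) displacement plus row (Y)
    displacement.  `state`/`goal` are tuples: index = board position,
    value = tile, 0 = blank."""
    pos_goal = {tile: i for i, tile in enumerate(goal)}
    dx = dy = 0
    for i, tile in enumerate(state):
        if tile == 0:
            continue                          # MD ignores the blank entirely
        j = pos_goal[tile]
        dx += abs(i % width - j % width)      # column (X) subproblem
        dy += abs(i // width - j // width)    # row (Y) subproblem
    return dx + dy
```

X-Y sharpens this by reinstating one blank-related precondition in each factor, which is what forces its factors to be solved by search rather than by a closed-form sum like the one above.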
No algorithm is known to solve either of the independent subproblems without searching the corresponding state-space. However, since MD is a relaxed problem of X-Y for which a very efficient solution algorithm is known, it is possible to solve X-Y by using A* with a heuristic computed by solving MD. Now consider the cost of computing X-Y as just outlined. This computation involves search. However, the search is in the reduced state-spaces of two independent subproblems of the X-Y relaxed problem, and therefore the results described in this paper are not applicable. In other words, the theorems we have proved in this paper are not strong enough to predict when a heuristic derived from relaxation and factoring is cost effective, and it is not possible to conclude whether the computation of the X-Y heuristic is cost effective simply on the basis of our results. In fact, it has been determined empirically that computation of the X-Y heuristic as described is not cost effective. Mostow and Prieditis report that “unfortunately, even with such guidance [i.e., the heuristic computed by solving MD] the overall search time turns out to be about six times slower than using Manhattan distance alone.” In other words, direct use of MD is more efficient than indirect use of MD to compute X-Y. This shows that heuristics computed by searching factors of relaxed problems need not be cost effective. On the other hand, the Box Distance heuristic of the Rooms World problem, when computed by search of factors of a relaxed problem, is cost effective [6]. This shows that heuristics computed by searching factors of relaxed problems can be cost effective. Additional work is needed to find conditions under which heuristics computed by searching factors of relaxed problems are cost effective, and conditions under which they are not.
In conclusion, we have shown that relaxed model hierarchies (chains of successively more abstract problems) collapse to only two levels – the base-level, and the least-relaxed model for which algorithmic solutions are known. Exploiting the power of abstraction will require either the development of techniques for synthesizing efficient algorithms from relaxed-model problem-descriptions, or the development of new problem relaxation transformations that are not subject to the fundamental limitations of the theorems presented here [8, 12, 14]. The most promising example of these approaches is the ABSOLVER system [12, 14], which exploits subproblem factoring transformations to reduce the cost of search in relaxation hierarchies.
We would like to thank two anonymous referees for their insightful and detailed comments on earlier drafts of this paper and Armand Prieditis for several interesting conversations. Marco Valtorta thanks Marco Somalvico and Giovanni Guida for introducing him to research on auxiliary problems and Jack Mostow for a conversation in which he shared some (at the time) unpublished details of the ABSOLVER system.
[1] R. Dechter and J. Pearl. The Optimality of A* Revisited. In Proceedings of the Third National Conference on Artificial Intelligence, Washington, 1983, 95–99.
[2] R. Dechter and J. Pearl. Generalized Best-First Search Strategies and the Optimality of A*. Journal of the Association for Computing Machinery, 32, 3 (July 1985), 506–536.
[3] R. Dechter and J. Pearl. The Optimality of A*. In Search in Artificial Intelligence, eds. L. Kanal and V. Kumar, Springer-Verlag, New York, 1988, 166–199.
[4] J. Gaschnig. A Problem Similarity Approach to Devising Heuristics: First Results. In Proceedings of the Sixth International Joint Conference on Artificial Intelligence, Tokyo, 1979, 301–307.
[5] G. Guida and M. Somalvico. A Method for Computing Heuristics in Problem Solving. Information Sciences, 19:251–259, 1979.
[6] O. Hansson, A. Mayer, and M. Yung. Relaxed Models Yield Powerful Admissible Heuristics. Information Sciences, to appear.
[7] P. Hart, N. Nilsson, and B. Raphael. A Formal Basis for the Heuristic Determination of Minimum Cost Paths. IEEE Transactions on Systems Science and Cybernetics, 2:100–107, 1968.
[8] D. Kibler. Generation of Heuristics by Transforming the Problem Representation. Technical Report TR-85-20, Information and Computer Science Department, University of California at Irvine, July 1985.
[9] C. A. Knoblock. Automatically Generating Abstractions for Planning. In Proceedings of the International Workshop on Change of Representation and Inductive Bias, Philips Laboratories, Briarcliff Manor, New York, 1988.
[10] R. E. Korf. Planning as Search: a Quantitative Approach. Artificial Intelligence, 33:65–88, 1987.
[11] D. Mandrioli, A. Sangiovanni-Vincentelli, and M. Somalvico. Toward a Theory of Problem Solving. In Topics in Artificial Intelligence, ed. A. Marzollo, Springer-Verlag, Vienna, 1976, 48–167.
[12] J. Mostow and A. E. Prieditis. Discovering Admissible Heuristics by Abstracting and Optimizing: a Transformational Approach. In Proceedings of the Eleventh International Joint Conference on Artificial Intelligence, Detroit, 1989, 701–707.
[13] J. Pearl. Heuristics: Intelligent Search Strategies for Computer Problem Solving. Addison-Wesley, Reading, MA, 1984.
[14] A. Prieditis. Machine Discovery of Effective Admissible Heuristics. In Proceedings of the Twelfth International Joint Conference on Artificial Intelligence, Sydney, 1991, 720–725.
[15] J. Tenenberg. Abstraction in Planning. Ph.D. Thesis, University of Rochester, 1988.
[16] M. Valtorta. Un Contributo alla Teoria della Risoluzione dei Problemi: Rappresentazione Semantica, Proprietà Algebriche ed Algoritmi di Ricerca (A Contribution to the Theory of Problem Solving: Semantic Representation, Algebraic Properties, and Search Algorithms; in Italian). Tesi di Laurea, Istituto di Ingegneria Elettrotecnica ed Elettronica, Politecnico di Milano, Milan, 1980.
[17] M. Valtorta. A Result on the Computational Complexity of Heuristic Estimates for the A* Algorithm. Information Sciences, 34:48–59, 1984.