Model-Lite Case-Based Planning

There is increasing awareness in the planning community that depending on complete models impedes the applicability of planning technology in many real world domains where the burden of specifying complete domain models is too high. In this paper, we consider a novel solution for this challenge that combines generative planning on incomplete domain models with a library of plan cases that are known to be correct. While this was arguably the original motivation for case-based planning, most existing case-based planners assume (and depend on) from-scratch planners that work on complete domain models. In contrast, our approach views the plan generated with respect to the incomplete model as a "skeletal plan" and augments it with directed mining of plan fragments from library cases. We present the details of our approach and an empirical evaluation of our method in comparison to a state-of-the-art case-based planner that depends on complete domain models.


Introduction

Most work in planning assumes that complete domain models are given as input in order to synthesize plans. However, there is increasing awareness that building domain models at any level of completeness presents steep challenges for domain creators. Indeed, recent work in web-service composition (c.f. [Bertoli, Pistore, and Traverso2010, Hoffmann, Bertoli, and Pistore2007]) and workflow management (c.f. [Blythe, Deelman, and Gil2004]) suggests that dependence on complete models can well be the real bottleneck inhibiting applications of current planning technology.

There has thus been interest in the so-called “model-lite” planning approaches (c.f. [Kambhampati2007]) that aim to synthesize plans even in the presence of incomplete domain models. The premise here is that while complete models cannot be guaranteed, it is often possible for the domain experts to put together reasonable but incomplete models. The challenge then is to work with these incomplete domain models, and yet produce plans that have a high chance of success with respect to the “complete” (but unknown) domain model. This is only possible if the planner has access to additional sources of knowledge besides the incomplete domain model.

Interestingly, one of the original motivations for case-based planning was also the realization that in many domains complete domain models are not available. Over the years, however, case-based planning systems deviated from this motivation and focused instead on "plan reuse," where the motivation is to improve the performance of a planner operating with a complete domain model. In this paper, we return to the original motivation by considering "model-lite case-based planning." In particular, we consider plan synthesis when the planner has an incomplete domain theory, but has access to a library of plans that "worked" in the past. This plan library can thus be seen as providing additional knowledge of the domain over and above the incomplete domain theory.

Our task is to effectively bring this additional knowledge to bear on plan synthesis to improve the correctness of the plans generated. We adopt a two-stage process. First, we use the incomplete domain model to synthesize a "skeletal" plan. Next, with the skeletal plan in hand, we "mine" the case library for fragments of plans that can be spliced into the skeletal plan to increase its correctness. The plan improved this way is returned as the best-guess solution to the original problem. We describe the details of our framework, called ML-CBP, and present a systematic empirical evaluation of its effectiveness. We compare the effectiveness of our model-lite case-based planner with OAKPlan [Serina2010], the current state-of-the-art model-complete case-based planner.

We organize the paper as follows. We first review related work, and then present the formal details of our framework. After that, we give a detailed description of the ML-CBP algorithm. Finally, we evaluate ML-CBP in three planning domains, and compare its performance to OAKPlan.

Related Work

As the title implies, our work is related both to case-based planning and to model-lite planning. As mentioned in the introduction, our work is closest in spirit to the original case-based planning systems such as CHEF [Hammond1989] and PLEXUS [Alterman1986], which viewed the case library as an extensional representation of the domain knowledge. CHEF's use of case modification rules, for example, serves a purpose similar to our use of incomplete domain models. Our work, however, differs from CHEF in two ways. First, unlike us, CHEF assumes access to a (more) complete domain model during its debugging stage. Second, CHEF tries to adapt a specific case to the problem at hand, while our work expands a skeletal plan with relevant plan fragments mined from multiple library plans. The post-CHEF case-based planning work largely focused on having access to a from-scratch planner operating on complete domain models (c.f. [Kambhampati and Hendler1992, Veloso et al.1995]). The most recent of this line of work is OAKPlan [Serina2010], which we compare against.

The recent focus on planning with incomplete domain models originated with the work on "model-lite planning" [Kambhampati2007]. Approaches for model-lite planning must either consider auxiliary knowledge sources or depend on long-term learning. While our work views the case library as the auxiliary knowledge source, work by Nguyen et al. [Nguyen, Kambhampati, and Do2010] and Bryce and Weber [Bryce and Weber2011] assumes that domain writers are able to provide annotations about missing preconditions and effects. It would be interesting to see if these techniques can be combined with ours. One interesting question, for example, is whether the case library can be compiled over time into such possible precondition/effect annotations.

A third strand of research that is also related to our work is that of action model learning. Work such as [Yang, Wu, and Jiang2007, Zhuo et al.2010, Zettlemoyer, Pasula, and Kaelbling2005] focuses on learning action models directly from observed (or pre-specified) plan traces. The connection between this strand of work and our work can be seen in terms of the familiar up-front vs. demand-driven knowledge transfer: the learning methods attempt to condense the case library directly into STRIPS models before using it in planning, while we transfer knowledge from cases on a per-problem basis. Finally, in contrast, work such as [Amir2005], as well as much of the reinforcement learning work [Sutton and Barto1998], focuses on learning models from trial-and-error execution (the latter, in general, has to be limited to ergodic domains). This too can be complementary to our work in that execution failures can be viewed as opportunities to augment the case library (c.f. [Ihrig and Kambhampati1997]).

Problem Definition

Figure 1: An input example of the ML-CBP algorithm for the blocks domain

A planning problem can be described as a triple $(s_0, g, \Sigma)$, where $s_0$ is an initial state, $g$ is a goal, and $\Sigma$ is defined by $(S, A, \gamma)$, where $S$ is a set of states, $A$ is a set of action models, and $\gamma$ is a transition function defined by $\gamma: S \times A \rightarrow S$. A solution to a planning problem is an action sequence (or a plan) denoted by $p = \langle a_1, a_2, \ldots, a_n \rangle$, where $a_i$ is an action. An action model is defined as $\langle a, \mathrm{Pre}(a), \mathrm{Add}(a), \mathrm{Del}(a) \rangle$, where $a$ is an action name with zero or more parameters, $\mathrm{Pre}(a)$ is a precondition list specifying the condition under which $a$ can be applied, $\mathrm{Add}(a)$ is an adding list, and $\mathrm{Del}(a)$ is a deleting list indicating the effects of $a$. Notice that we focus on the STRIPS action model description [Fikes and Nilsson1971] in this paper. An action model is called "incomplete" when there are predicates missing in $\mathrm{Pre}(a)$, $\mathrm{Add}(a)$, or $\mathrm{Del}(a)$. A set of incomplete action models is denoted by $\tilde{A}$. An incomplete planning problem is denoted by $(s_0, g, \tilde{A})$. A plan example is composed of an initial state, a goal, and an action sequence that transits the initial state to the goal, i.e., $e = \langle s_0^e, a_1, a_2, \ldots, a_n, g^e \rangle$, where $s_0^e$ is the initial state, $a_i$ is an action, and $g^e$ is the goal. We denote a set of plan examples by $E$.

Our planning problem in this paper is defined as follows: given as input a quadruple $(s_0, g, \tilde{A}, E)$, where $s_0$ is an initial state and $g$ a goal as described above, $\tilde{A}$ is a set of incomplete action models, and $E$ is a plan example set, our ML-CBP algorithm outputs a solution plan that transits $s_0$ to $g$.

An example input of our planning problem in the blocks domain (from the AIPS-2000 competition, http://www.cs.toronto.edu/aips2000/) is shown in Figure 1, which is composed of three parts: incomplete action models (Figure 1(a)), an initial state and a goal (Figure 1(b)), and a plan example set (Figure 1(c)). In Figure 1(a), the dark parts indicate the missing predicates. In Figure 1(c), $e_1$ and $e_2$ are two plan examples, whose initial states and goals are bracketed. An example output is a solution to the planning problem given in Figure 1, i.e., "unstack(C A) putdown(C) pickup(B) stack(B A) pickup(C) stack(C B) pickup(D) stack(D C)".
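To make the setting concrete, the following is a minimal Python sketch (not the authors' code) of how the inputs of Figure 1 could be represented: incomplete STRIPS action models with possibly missing predicates, and plan examples consisting of an initial state, a goal, and a ground action sequence. All names here are illustrative assumptions.

from dataclasses import dataclass, field

@dataclass
class ActionModel:
    name: str                                   # e.g. "pickup"
    params: list                                # e.g. ["?x"]
    pre: set = field(default_factory=set)       # precondition list (possibly incomplete)
    add: set = field(default_factory=set)       # adding list (possibly incomplete)
    delete: set = field(default_factory=set)    # deleting list (possibly incomplete)

@dataclass
class PlanExample:
    init: set        # initial state: a set of ground propositions
    goal: set        # goal: a set of ground propositions
    actions: list    # ground action sequence transiting init to goal

# Hypothetical incomplete "pickup" model: the (handempty) precondition is missing.
pickup = ActionModel(
    "pickup", ["?x"],
    pre={("clear", "?x"), ("ontable", "?x")},
    add={("holding", "?x")},
    delete={("clear", "?x"), ("ontable", "?x"), ("handempty",)},
)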

Our ML-CBP Algorithm

Input: an incomplete planning problem $(s_0, g, \tilde{A})$ and a set of plan examples $E$.
Output: the plan $p$ for solving the problem.

1:  generate a set of causal pairs $L$ with $(s_0, g, \tilde{A})$;
2:  build a set of plan fragments $F$:      $F$ = build_fragments($L$, $E$);
3:  mine a set of frequent plan fragments $F^*$:      $F^*$ = freq_mining($F$);
4:  $p := \langle\rangle$;
5:  if concat_frag($p$, $L$, $F^*$, $(s_0, g, \tilde{A})$) = true then
6:     return  $p$;
7:  else
8:     return  NULL;
9:  end if
Algorithm 1 Our ML-CBP algorithm

An overview of our ML-CBP algorithm can be found in Algorithm 1. We first generate a skeletal plan, represented by a set of causal pairs, based on the incomplete action models. After that, we build a set of plan fragments based on the plan examples and causal pairs, and then mine a set of frequent plan fragments with a specific threshold. These frequent fragments are integrated together to form the final solution based on the causal pairs. Next, we describe each step in detail.
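As a roadmap, the following Python sketch mirrors the control flow of Algorithm 1. The four helper functions are hypothetical names for the steps described in the next subsections, not an actual API.

def ml_cbp(init, goal, incomplete_models, examples, theta):
    causal_pairs = generate_causal_pairs(init, goal, incomplete_models)   # step 1
    fragments = build_fragments(causal_pairs, examples, init, goal)       # step 2
    frequent = freq_mining(fragments, theta)                              # step 3
    plan = []                                                             # step 4: empty plan
    if concat_frag(plan, causal_pairs, frequent, (init, goal, incomplete_models)):
        return plan                                                       # steps 5-6
    return None                                                           # steps 7-8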

Generate causal pairs

Given the initial state $s_0$ and goal $g$, we generate a set of causal pairs $L$. A causal pair $\langle a_i, a_j \rangle$ is an action pair in which $a_i$ provides one or more conditions for $a_j$. The procedure to generate $L$ is shown in Algorithm 2.

input: initial state $s_0$, goal $g$, incomplete action models $\tilde{A}$. output: a set of causal pairs $L$.

1:  $L := \emptyset$;
2:  for each proposition $g_i \in g$ do
3:     generate a plan, denoted by a set of causal pairs $L_i$, to transit $s_0$ to $g_i$;
4:     $L := L \cup L_i$;
5:  end for
6:  return  $L$;
Algorithm 2 Generate causal pairs

Note that, in step 3 of Algorithm 2, $L_i$ is an empty set if $g_i$ cannot be achieved. In other words, skeletal plans may not provide any guidance for some top-level goals. The actions in causal pairs are viewed as a set of landmarks that help construct the final solution, as will be seen in the coming sections. We show an example of the generated causal pairs in Example 1.
Example 1: As an example, the causal pairs generated for the planning problem given in Figure 1 are {⟨pickup(B), stack(B A)⟩, ⟨unstack(C A), stack(C B)⟩, ⟨pickup(D), stack(D C)⟩}.
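A rough Python sketch of Algorithm 2 follows. Here plan_with stands in for a call to an off-the-shelf planner (such as FF) run on the incomplete model for a single goal proposition, and ground instantiates an ActionModel (as in the earlier sketch) for a ground action; both names are assumptions for illustration.

def generate_causal_pairs(init, goal, models):
    causal_pairs = set()
    for goal_prop in goal:                            # one sub-problem per top-level goal
        plan = plan_with(init, {goal_prop}, models)   # skeletal plan for this goal, or None
        if not plan:                                  # unreachable goal: no guidance here
            continue
        for i, a_i in enumerate(plan):
            add_i = ground(a_i, models).add
            for a_j in plan[i + 1:]:                  # a_i provides a condition consumed by a_j
                if add_i & ground(a_j, models).pre:
                    causal_pairs.add((a_i, a_j))
    return causal_pairs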

Creating Plan Fragments

In the procedure "build_fragments" of Algorithm 1, we would like to build a set of plan fragments by building mappings between "objects" in $(s_0, g)$ of the new problem and $(s_0^e, g^e)$ of a plan example $e$. In other words, a mapping, denoted by $m$, is composed of a set of pairs $(o^e, o)$, where $o^e$ is an object (i.e., an instantiated parameter) from plan example $e$, and $o$ is an object from the new problem. We can apply mapping $m$ to a plan example $e$, whose result is denoted by $m(e)$, such that $m(s_0^e)$ and $s_0$ share common propositions, and likewise for $m(g^e)$ and $g$. We measure a mapping by the number of propositions shared by the initial states $m(s_0^e)$ and $s_0$, and by the goals $m(g^e)$ and $g$. We denote the number of shared propositions by $\delta(m)$, i.e., $\delta(m) = |m(s_0^e) \cap s_0| + |m(g^e) \cap g|$.

An example that demonstrates how to calculate $\delta$ is given as follows.
Example 2: In Figure 1, a possible mapping $m$ between the objects of plan example $e_1$ and those of the new problem maps each block of $e_1$ to a block of the new problem (e.g., b3 to B). The result of applying $m$ to the initial state of $e_1$ is {(clear C)(clear A)(clear B)(clear D)(ontable C)(ontable A)(ontable B)(ontable D)(handempty)}. Likewise, we can calculate the result of applying $m$ to the goal of $e_1$, and count the propositions shared with $s_0$ and $g$ to obtain $\delta(m)$.

It is possible that there are many different mappings between $e$ and the new problem. We choose the mapping $m^*$ with the largest $\delta$ to maximally map $e$ to the new problem, i.e., $m^* = \arg\max_m \delta(m)$. We assume that all propositions are "equally" important in describing states. The more common propositions two states share, the more "similar" they are. Note that mappings between objects of the same types are subject to the constraint that they should have the same set of "features" in the domain, defined by the unary predicates of the corresponding types. For instance, "b3" can be mapped to "B" in our running example since both of them are blocks having the same features "on table" and "clear" in the two problems. In practice, we find that this requirement significantly reduces the number of mappings that need to be considered, allowing us to find $m^*$ in a reasonable running time.

We apply $m^*$ to $e$ to get a new plan example, which is denoted by $m^*(e)$. We scan the action sequence of $m^*(e)$ from the first action to the last to extract subsequences that satisfy the constraint that all the objects in the subsequences appear in the new problem. We call these subsequences plan fragments. We can build a set of plan fragments $F$ using all plan examples in $E$.
Example 3: In Example 2, applying the best mapping to $e_1$ gives the plan fragment "pickup(B) stack(B A) pickup(C) stack(C B) pickup(D) stack(D C)". For $e_2$ in Figure 1, applying its best mapping gives the plan fragment "unstack(B C) putdown(B) unstack(C A) putdown(C) pickup(B) stack(B A) pickup(C) stack(C B)".
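The mapping and fragment-extraction step can be sketched as follows; the brute-force search over object permutations is only for illustration, whereas the paper prunes candidate mappings with the type/feature constraint described above. Actions and propositions are tuples of strings, as in the earlier sketch.

from itertools import permutations

def rename(props, mapping):
    # apply an object mapping to a set of propositions (predicate names are untouched)
    return {tuple(mapping.get(t, t) for t in p) for p in props}

def best_mapping(example, init, goal, example_objs, problem_objs):
    # pick the mapping m maximizing delta(m): shared initial-state plus shared goal propositions
    best, best_score = {}, -1
    for perm in permutations(problem_objs, len(example_objs)):
        m = dict(zip(example_objs, perm))
        score = len(rename(example.init, m) & init) + len(rename(example.goal, m) & goal)
        if score > best_score:
            best, best_score = m, score
    return best

def fragments_of(example, mapping, problem_objs):
    # rename the example's actions, then cut out maximal contiguous subsequences
    # whose objects all occur in the new problem
    renamed = [tuple(mapping.get(t, t) for t in a) for a in example.actions]
    frags, cur = [], []
    for act in renamed:
        if all(o in problem_objs for o in act[1:]):
            cur.append(act)
        elif cur:
            frags.append(cur)
            cur = []
    if cur:
        frags.append(cur)
    return frags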

Mining Frequent Plan Fragments

In step 3 of Algorithm 1, we aim at building a set of frequent plan fragments using the procedure "freq_mining". Given that there will not be any function perfectly mapping the two planning problems, our intuition is that a plan fragment occurring multiple times in different plan examples increases our confidence in both the quality of the mapping between the objects involved and the success of reusing the fragment as part of a solution plan for the problem being solved. We thus borrow the notion of frequent patterns defined in [Zaki2001, Pei et al.2004] for mining our frequent plan fragments. The problem of mining sequential patterns can be stated as follows. Let $I = \{i_1, i_2, \ldots, i_k\}$ be a set of items. We call a subset $X \subseteq I$ an itemset and $|X|$ the size of $X$. A sequence is an ordered list of itemsets, denoted by $s = \langle X_1, X_2, \ldots, X_m \rangle$, where $X_j$ is an itemset. The size of a sequence is the number of itemsets in the sequence, i.e., $|s| = m$. The length of a sequence is defined as $l(s) = \sum_{j=1}^{m} |X_j|$. A sequence $s_a = \langle A_1, \ldots, A_n \rangle$ is a subsequence of another sequence $s_b = \langle B_1, \ldots, B_{m'} \rangle$ if there exist integers $1 \le j_1 < j_2 < \ldots < j_n \le m'$ such that $A_1 \subseteq B_{j_1}, A_2 \subseteq B_{j_2}, \ldots, A_n \subseteq B_{j_n}$, denoted by $s_a \sqsubseteq s_b$. A sequence database $D$ is a set of tuples $(sid, s)$, where $sid$ is a sequence id and $s$ is a sequence. A tuple $(sid, s)$ is said to contain a sequence $s_a$ if $s_a$ is a subsequence of $s$. The support of a sequence $s_a$ in a sequence database $D$ is the number of tuples in the database containing $s_a$, i.e., $\mathrm{support}(s_a) = |\{(sid, s) \in D \mid s_a \sqsubseteq s\}|$.

Given a positive integer $\theta$ as the support threshold, we call $s_a$ a frequent sequence if $\mathrm{support}(s_a) \ge \theta$. Given a sequence database and a support threshold, the frequent sequential pattern mining problem is to find the complete set of sequential patterns whose support is at least the threshold.

We view each action of a plan fragment as an itemset, and each plan fragment as a sequence, so that the set of plan fragments can be viewed as a sequence database. Note that in our case an itemset has only one element, and the itemsets of a subsequence must be contiguous in the containing sequence. We fix a threshold $\theta$ and use the SPADE algorithm [Zaki2001] to mine a set of frequent patterns. Many frequent patterns are subsequences of other frequent patterns. We eliminate these "subsequences" and keep the "maximal" patterns, i.e., those with the longest length, as the final set of frequent plan fragments $F^*$.
Example 4: In Example 3, if we set $\theta$ to be 2 and 1, respectively, the results are shown below (frequent plan fragments are separated by commas):

plan fragments:
1. pickup(B) stack(B A) pickup(C) stack(C B) pickup(D) stack(D C)
2. unstack(B C) putdown(B) unstack(C A) putdown(C) pickup(B) stack(B A) pickup(C) stack(C B)
frequent plan fragments ($\theta = 2$):
{pickup(B) stack(B A) pickup(C) stack(C B)}
frequent plan fragments ($\theta = 1$):
{pickup(B) stack(B A) pickup(C) stack(C B) pickup(D) stack(D C), unstack(B C) putdown(B) unstack(C A) putdown(C) pickup(B) stack(B A) pickup(C) stack(C B)}

Note that the following frequent patterns are eliminated when $\theta = 2$ (likewise when $\theta = 1$): {pickup(B), stack(B A), pickup(C), stack(C B), pickup(B) stack(B A), stack(B A) pickup(C), pickup(C) stack(C B), pickup(B) stack(B A) pickup(C), stack(B A) pickup(C) stack(C B)}.
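Because each itemset here is a single action and subsequences must be contiguous, the mining step can be illustrated without a full SPADE implementation: count every contiguous window once per fragment and keep the maximal windows whose support reaches the threshold. The following is only a sketch of this special case, not the SPADE algorithm itself.

from collections import Counter

def freq_mining(fragments, theta):
    counts = Counter()
    for frag in fragments:
        windows = set()
        for i in range(len(frag)):
            for j in range(i + 1, len(frag) + 1):
                windows.add(tuple(frag[i:j]))      # each window counted once per fragment
        counts.update(windows)
    frequent = [list(w) for w, c in counts.items() if c >= theta]

    def contained(a, b):                           # is a a proper contiguous subsequence of b?
        return len(a) < len(b) and any(
            b[k:k + len(a)] == a for k in range(len(b) - len(a) + 1))

    # keep only the maximal patterns, as described above
    return [f for f in frequent if not any(contained(f, g) for g in frequent)]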

Generating Final Solution

In steps 4-6 of Algorithm 1, we generate the final solution using the frequent plan fragments produced in step 3. The procedure concat_frag is given in Algorithm 3. In Algorithm 3, we scan each causal pair in $L$ and each frequent plan fragment in $F^*$; if a plan fragment contains an action (or both actions) of a causal pair, we append the plan fragment to the partial solution $p$ and remove all the causal pairs that are satisfied by the new $p$; we then recursively call the procedure concat_frag until a solution is found, i.e., the procedure returns true, or no solution is found, i.e., the procedure returns false.

input: a plan $p$, a set of causal pairs $L$, a set of frequent plan fragments $F^*$, and an incomplete problem $(s_0, g, \tilde{A})$;
output: true or false.

1:  if $L = \emptyset$ then
2:     repeatedly remove the first action of $p$ that cannot be applied in $s_0$;
3:     repeatedly remove the last action of $p$ that deletes propositions of $g$;
4:     if $p$ is executable from $s_0$ to $g$ based on $\tilde{A}$ then
5:        return  true;
6:     else
7:        return  false;
8:     end if
9:  end if
10:  for each pair $\langle a_i, a_j \rangle \in L$ and each $f \in F^*$ do
11:     if ($a_i \in f$ or $a_j \in f$) and share($p$, $f$) = true then
12:        $p'$ = append($p$, $f$);
13:        $L'$ = removelinks($L$, $p'$);
14:        ;
15:        if concat_frag($p'$, $L'$, $F^*$, $(s_0, g, \tilde{A})$) = true then
16:           return  true;
17:        end if
18:     end if
19:  end for
20:  return  false
Algorithm 3 concat_frag($p$, $L$, $F^*$, $(s_0, g, \tilde{A})$)

In step 2 of Algorithm 3, we repeatedly remove the first action of $p$ that cannot be applied in $s_0$. In step 3 of Algorithm 3, we repeatedly remove the last action of $p$ that deletes propositions of the goal $g$. After steps 2 and 3, if the remaining plan can be executed from $s_0$ to $g$ using $\tilde{A}$, the algorithm returns true; otherwise, it returns false. In step 11 of Algorithm 3, the procedure share returns true if $p$ is empty or $p$ and $f$ share a common action subsequence. That is to say, two plan fragments are concatenated only if they have some sort of connection, which is indicated by a common action subsequence. In step 12 of Algorithm 3, we concatenate $p$ and $f$ based on their maximal common action subsequence, which is viewed as the strongest connection between them. Note that the common action subsequence should start from the beginning of $f$ OR end at the end of $f$. In other words, $f$ can be concatenated at the end of $p$ or at the beginning, as shown in Figure 2. In step 13 of Algorithm 3, the procedure removelinks removes all causal pairs in $L$ that are "satisfied" by $p'$. The result is denoted by $L'$. Example 5 demonstrates how the final solution is generated.

Figure 2: (a) $f$ is concatenated at the end of $p$; (b) $f$ is concatenated at the beginning of $p$. Part (I) is the maximal common action subsequence; Part (II) is the action subsequence that is not part of the common subsequence.

Example 5: In Example 4, we have two frequent plan fragments by setting $\theta = 1$. We concatenate these two fragments together; the result is shown below. The boldfaced part is the set of actions shared by fragments 1 and 2. The concatenation result is shown in the third row. After concatenating, all the causal pairs in $L$ are satisfied and are removed according to step 13 of Algorithm 3. Furthermore, according to steps 2 and 3 of Algorithm 3, the first two actions are removed since they cannot be applied in $s_0$, and no action is removed at the end of the plan since no action deletes propositions of $g$. The result is shown in the fourth row. The result is executable from $s_0$ to $g$, which means it is the final solution.
fragment 1: pickup(B) stack(B A) pickup(C) stack(C B) pickup(D) stack(D C)
fragment 2: unstack(B C) putdown(B) unstack(C A) putdown(C) pickup(B) stack(B A) pickup(C) stack(C B)
result: unstack(B C) putdown(B) unstack(C A) putdown(C) pickup(B) stack(B A) pickup(C) stack(C B) pickup(D) stack(D C)
solution: unstack(C A) putdown(C) pickup(B) stack(B A) pickup(C) stack(C B) pickup(D) stack(D C)
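The splicing rule of step 12 (the append procedure) can be sketched as follows: a fragment is glued onto whichever end of the partial plan gives the larger contiguous overlap, keeping the shared actions only once. This is an illustrative reading of Figure 2, not the authors' exact implementation, and the recursive search of Algorithm 3 is omitted.

def overlap(p, f):
    # longest suffix of p that is a prefix of f, and longest suffix of f that is a prefix of p
    suffix = max((k for k in range(1, min(len(p), len(f)) + 1) if p[-k:] == f[:k]), default=0)
    prefix = max((k for k in range(1, min(len(p), len(f)) + 1) if f[-k:] == p[:k]), default=0)
    return suffix, prefix

def append_fragment(p, f):
    if not p:
        return list(f)
    suf, pre = overlap(p, f)
    if suf >= pre:                          # Figure 2(a): f glued onto the end of p
        return p + list(f[suf:])
    return list(f[:len(f) - pre]) + p       # Figure 2(b): f glued onto the front of p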

Experiments

Dataset and Criterion

We evaluate our ML-CBP algorithm in three planning domains: blocks, driverlog, and depots (http://planning.cis.strath.ac.uk/competition/). In each domain, we generate from 40 to 200 plan examples using a classical planner, specifically the FF planner (http://members.deri.at/joergh/ff.html), and solve 100 new planning problems based on different percentages of completeness of the domain models. For example, a completeness of 80% indicates that one predicate is missing among the five predicates of the domain.

We define the accuracy of our ML-CBP algorithm as the percentage of correctly solved planning problems. Specifically, we exploit ML-CBP to generate a solution to a planning problem, and execute the solution from the initial state toward the goal. If the solution can be successfully executed starting from the initial state, and the goal is achieved, then the number of correctly solved problems is increased by one. The accuracy, denoted by $acc$, is computed by $acc = N_c / N_t$, where $N_c$ is the number of correctly solved problems, and $N_t$ is the total number of testing problems. Note that when testing the accuracy of ML-CBP, we assume that we have complete domain models available for executing the generated solutions. It is easy to see that the larger the accuracy, the better our ML-CBP algorithm performs.
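For concreteness, the evaluation criterion can be sketched as follows: a problem counts as correctly solved only if the returned plan executes from the initial state under the complete model and achieves the goal. The helper ground_effects, which instantiates an action's precondition, add, and delete lists from the complete model, is a hypothetical name.

def executes(init, goal, plan, complete_models):
    state = set(init)
    for action in plan:
        pre, add, delete = ground_effects(action, complete_models)
        if not pre <= state:               # plan breaks: a precondition is unmet
            return False
        state = (state - delete) | add
    return goal <= state                    # solved only if the goal holds at the end

def accuracy(problems, solver, complete_models):
    correct = sum(1 for init, goal in problems
                  if (p := solver(init, goal)) is not None
                  and executes(init, goal, p, complete_models))
    return correct / len(problems)          # acc = N_c / N_t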

Experimental Results

We would like to evaluate ML-CBP in the following aspects: (1) the change of accuracy with respect to different numbers of plan examples; (2) the change of accuracy with respect to different percentages of completeness; (3) the change of accuracy with respect to different support thresholds $\theta$; (4) the average plan length; (5) the running time of ML-CBP. We compared our ML-CBP algorithm with the state-of-the-art case-based planning (CBP) system OAKPlan [Serina2010]. OAKPlan requires a complete domain model and a case library as input for a new planning problem. To make OAKPlan comparable with our ML-CBP algorithm, we fed it the same incomplete domain model given to ML-CBP instead of a complete domain model.

Varying the number of plan examples

We would like to test how the accuracy changes as the number of plan examples increases. We set the percentage of completeness to 60% and the threshold $\theta$ to 15. We varied the number of plan examples from 40 to 200 and ran ML-CBP to solve 100 planning problems, calculating the accuracy for each case. The result is shown in Figure 3.


Figure 3: Comparison between ML-CBP and OAKPlan with respect to different numbers of plan examples.

From Figure 3, we found that the accuracies of both ML-CBP and OAKPlan generally became larger as the number of plan examples increased. This is consistent with our intuition, since there is more knowledge to be used when more plan examples are available. On the other hand, we also found that ML-CBP generally had higher accuracy than OAKPlan in all three domains. This is because ML-CBP exploits the information of the incomplete domain models to mine multiple high-quality plan fragments, i.e., ML-CBP integrates the knowledge from both incomplete domain models and plan examples, which may help each other, to attain the final solution. In contrast, OAKPlan first retrieves a case and then adapts it using the given incomplete domain model, which may fail to make use of valuable information from other cases (or plan fragments) during adaptation.

By observation, we found that the accuracy of ML-CBP was no less than 0.8 when the number of plan examples was more than 160.

Varying the percentage of completeness

To test the change of accuracy with respect to different degrees of completeness, we varied the percentage of completeness from 20% to 100%, and ran ML-CBP with 200 plan examples and the threshold $\theta$ set to 15. We also compared the accuracy with OAKPlan. The result is shown in Figure 4.


Figure 4: Comparison between ML-CBP and OAKPlan with respect to different percentages of completeness.

We found that the accuracies of both ML-CBP and OAKPlan increased as the percentage of completeness increased, since more information is provided as the percentage grows. When the percentage is 100%, both ML-CBP and OAKPlan solve all the solvable planning problems successfully. Similar to Figure 3, ML-CBP performs better than OAKPlan. The reason is similar to that for Figure 3, i.e., simultaneously exploiting knowledge from both incomplete domain models and plan examples is helpful.

By observing all three domains in Figure 4, we found that ML-CBP functioned much better when the percentage was smaller. This indicates that exploiting multiple plan fragments, as ML-CBP does, plays a more important role when the percentage is smaller. OAKPlan does not consider this factor, i.e., it still retrieves only one case.

Average plan length

We calculated the average plan length over all problems successfully solved by ML-CBP when $\theta$ was 15, the percentage of completeness was 60%, and 200 plan examples were used. As a baseline, we used FF to solve the same problems with the corresponding complete domain models and calculated the average plan length. The result is shown in Table 1.

domains blocks driverlog depots
ML-CBP 46.8 83.4 95.3
FF 35.2 79.2 96.7
Table 1: Average plan length

From Table 1, we found that the plan length of ML-CBP was larger than FF in some cases, such as blocks and driverlog. However, it was also possible that ML-CBP had shorter plans than FF (e.g., depots), since high quality plan fragments could help acquire shorter plans.

Varying the support threshold

We tested different support thresholds to see how they affect the accuracy. We set the completeness to 60%. The result is shown in Table 2, where the bold entries indicate the highest accuracies. We found that the threshold should be neither too high nor too low, as shown in the blocks and driverlog domains. A high threshold may incur false negatives, i.e., "good" plan fragments are excluded when mining frequent plan fragments in step 3 of Algorithm 1. In contrast, a low threshold may incur false positives, i.e., "bad" plan fragments are introduced. Both cases may reduce the accuracy. We can see that the best choice for the threshold could be 15 (in depots, the accuracies of two of the threshold settings are close).

threshold blocks driverlog depots
0.80 0.78 0.73
0.88 0.84 0.79
0.83 0.75 0.80
Table 2: Accuracy with respect to different thresholds.

The running time

We show the average CPU time of our ML-CBP algorithm over 100 planning problems with respect to different numbers of plan examples in Figure 5. As can be seen from the figure, the running time increases polynomially with the number of input plan examples. This can be verified by fitting the relationship between the number of plan examples and the running time to a performance curve with a polynomial of order 2 or 3.
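One simple way to check such a polynomial-growth claim is to fit the measured CPU times against the number of plan examples, e.g., with numpy.polyfit; the arrays below are placeholders, not the paper's measurements.

import numpy as np

n_examples = np.array([40, 80, 120, 160, 200])          # hypothetical x-values
cpu_seconds = np.array([3.1, 11.8, 27.4, 48.9, 77.0])   # hypothetical y-values (placeholders)

coeffs = np.polyfit(n_examples, cpu_seconds, deg=2)      # quadratic fit
print(np.poly1d(coeffs))                                 # fitted performance curve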


Figure 5: The running time of our ML-CBP algorithm

Conclusion

In this paper, we presented a system called ML-CBP for model-lite case-based planning. ML-CBP integrates knowledge from both incomplete domain models and a library of plan examples to produce solutions to new planning problems. With the incomplete domain models, we first generate a skeletal plan using off-the-shelf planners, and then mine sequential information from plan examples to generate the final solution. Our experiments show that ML-CBP is effective in three benchmark domains compared to a case-based planner that relies on complete domain models. Our approach is thus well suited for scenarios where the planner is limited to incomplete models of the domain, but does have access to a library of plans correct with respect to the complete (but unknown) domain theory. Our work can be seen as a contribution both to model-lite planning, which is interested in plan synthesis under incomplete domain models, and to the original vision of case-based planning, which aimed to use a library of cases as an extensional representation of planning knowledge.

References

  • [Alterman1986] Alterman, R. 1986. An adaptive planner. In Proceedings of AAAI, 65–71.
  • [Amir2005] Amir, E. 2005. Learning partially observable deterministic action models. In Proceedings of IJCAI, 1433–1439.
  • [Bertoli, Pistore, and Traverso2010] Bertoli, P.; Pistore, M.; and Traverso, P. 2010. Automated composition of web services via planning in asynchronous domains. Artificial Intelligence Journal 174(3-4):316–361.
  • [Blythe, Deelman, and Gil2004] Blythe, J.; Deelman, E.; and Gil, Y. 2004. Automatically composed workflows for grid environments. IEEE Intelligent Systems 19(4):16–23.
  • [Bryce and Weber2011] Bryce, D., and Weber, C. 2011. Planning and acting in incomplete domains. In Proceedings of ICAPS.
  • [Fikes and Nilsson1971] Fikes, R., and Nilsson, N. J. 1971. STRIPS: A new approach to the application of theorem proving to problem solving. Artificial Intelligence Journal 2(3-4):189–208.
  • [Hammond1989] Hammond, K. J. 1989. Case-Based Planning: Viewing Planning as a Memory Task. San Diego, CA: Academic Press.
  • [Hoffmann, Bertoli, and Pistore2007] Hoffmann, J.; Bertoli, P.; and Pistore, M. 2007. Web service composition as planning, revisited: In between background theories and initial state uncertainty. In Proceedings of AAAI.
  • [Ihrig and Kambhampati1997] Ihrig, L. H., and Kambhampati, S. 1997. Storing and indexing plan derivations through explanation-based analysis of retrieval failures. Journal of Artificial Intelligence Research 7:161–198.
  • [Kambhampati and Hendler1992] Kambhampati, S., and Hendler, J. A. 1992. A validation-structure-based theory of plan modification and reuse. Artificial Intelligence Journal 55:193–258.
  • [Kambhampati2007] Kambhampati, S. 2007. Model-lite planning for the web age masses: The challenges of planning with incomplete and evolving domain theories. In Proceedings of AAAI.
  • [Nguyen, Kambhampati, and Do2010] Nguyen, T. A.; Kambhampati, S.; and Do, M. B. 2010. Assessing and generating robust plans with partial domain models. In ICAPS Workshop on Planning under Uncertainty.
  • [Pei et al.2004] Pei, J.; Han, J.; Mortazavi-Asl, B.; Wang, J.; Pinto, H.; Chen, Q.; Dayal, U.; and Hsu, M.-C. 2004. Mining sequential patterns by pattern-growth: The PrefixSpan approach. IEEE Transactions on Knowledge and Data Engineering 16(11):1424–1440.
  • [Serina2010] Serina, I. 2010. Kernel functions for case-based planning. Artificial Intelligence 174(16-17):1369–1406.
  • [Sutton and Barto1998] Sutton, R. S., and Barto, A. G. 1998. Reinforcement Learning: An Introduction. Cambridge, Massachusetts: MIT Press.
  • [Veloso et al.1995] Veloso, M.; Carbonell, J.; Pérez, A.; Borrajo, D.; Fink, E.; and Blythe, J. 1995. Integrating planning and learning: The PRODIGY architecture. Journal of Experimental and Theoretical Artificial Intelligence 7(1).
  • [Yang, Wu, and Jiang2007] Yang, Q.; Wu, K.; and Jiang, Y. 2007. Learning action models from plan examples using weighted MAX-SAT. Artificial Intelligence Journal 171:107–143.
  • [Zaki2001] Zaki, M. J. 2001. SPADE: An efficient algorithm for mining frequent sequences. Machine Learning 42:31–60.
  • [Zettlemoyer, Pasula, and Kaelbling2005] Zettlemoyer, L. S.; Pasula, H. M.; and Kaelbling, L. P. 2005. Learning planning rules in noisy stochastic worlds. In Proceedings of AAAI.
  • [Zhuo et al.2010] Zhuo, H. H.; Yang, Q.; Hu, D. H.; and Li, L. 2010. Learning complex action models with quantifiers and implications. Artificial Intelligence 174(18):1540–1569.