1 Introduction
General formalisms of action and change can provide a natural framework for the problem of planning. They can offer a high level of expressivity and a basis for the development of general purpose planning algorithms.
We study how one such formalism, the Language E [Kakas & Miller1997b, Kakas & Miller1997a], can form a basis for planning. To do this we exploit the reformulation [Kakas, Miller, & Toni1999] of the Language E into an argumentation framework and the associated proof theory offered by this reformulation. A simple extension of this argumentation-based proof theory with abduction forms the basis of planning algorithms within the framework of the Language E.
In this paper we will be particularly interested in addressing the specific problem of planning under incomplete information. This amounts to planning in cases where some information is missing, as for example when we do not have full knowledge of the initial state of the problem. In general, we assume that this missing information cannot be "filled in" by additional actions in the plan, as it may refer to properties that cannot be affected by any type of action in the theory, or to an initial time before which no actions can be performed. Instead, the planner needs to be able to reason despite this incompleteness and construct plans where this lack of information does not matter for achieving the final goal.
We define a planner, called the E-Planner, which is able to solve this type of planning problem with incomplete information. It works by first generating a conditional plan based on one possible set of arguments in the corresponding argumentation theory of the planning domain. These plans are called weak plans and may not be successful under every possibility for the missing information. The planner then uses further argumentation reasoning to extend the weak plan to a safe plan, which is able to achieve the planning goal irrespective of the particular status of the missing information.
Planning under incomplete information is a relatively new topic. In [Finzi, Pirri, & Reiter1999] this problem is called "Open World Planning" and is studied within the framework of the situation calculus. The incomplete information refers to the initial situation of the problem and a theorem prover is used to reason about properties in this situation. Other related work on planning within formal frameworks for reasoning about actions and change is [Levesque1996], which defines a notion of conditional plans, [Shanahan1997, Denecker, Missiaen, & Bruynooghe1992], with a formulation of abductive planning in the event calculus, and [Dimopoulos, Nebel, & Koehler1997, Lifschitz1999], which study "answer set planning" within extended logic programming.
2 A Review of the Basic Language E
The Language E is really a collection of languages. The particular vocabulary of each language depends on the domain being represented, but always includes a set Φ of fluent constants, a set Δ of action constants, and a partially ordered set Π of timepoints. For this paper, where we are interested in linear planning, we will assume that the order on Π is total. A fluent literal is either a fluent constant F or its negation ¬F.
Domain descriptions in the Language E are collections of statements of three kinds (where A is an action constant, T is a timepoint, F is a fluent constant, L is a fluent literal and C is a set of fluent literals): t-propositions ("t" for "timepoint"), of the form "L holds-at T"; h-propositions ("h" for "happens"), of the form "A happens-at T"; c-propositions ("c" for "causes"), of the form "A initiates F when C" or "A terminates F when C". When C is empty, the c-propositions are written as "A initiates F" and "A terminates F", resp.
The semantics of E is based on simple definitions of interpretations, defining the truth value of fluent constants at each particular timepoint, and of models. Briefly (see [Kakas & Miller1997a, Kakas & Miller1997b] for more details), these are given as follows:

An interpretation is a mapping H : Φ × Π → {true, false}, where Φ is the set of fluent constants and Π is the set of timepoints. Given a set of fluent literals C and a timepoint T, an interpretation H satisfies C at T iff for each fluent constant F in C, H(F, T) = true, and for each fluent constant F such that ¬F is in C, H(F, T) = false.

Given a timepoint T, a fluent constant F and an interpretation H, T is an initiation-point (termination-point, resp.) for F in H relative to a domain description D iff there is an action constant A such that (i) D contains both an h-proposition "A happens-at T" and a c-proposition "A initiates (terminates, resp.) F when C", and (ii) H satisfies C at T. Then, an interpretation H is a model of a given domain description D iff, for every fluent constant F and all timepoints T1 and T3 such that T1 ≤ T3:

If there is no initiation-point or termination-point T2 for F in H relative to D such that T1 ≤ T2 < T3, then H(F, T1) = H(F, T3).

If T1 is an initiation-point for F in H relative to D, and there is no termination-point T2 for F in H relative to D such that T1 < T2 < T3, then H(F, T3) = true.

If T1 is a termination-point for F in H relative to D, and there is no initiation-point T2 for F in H relative to D such that T1 < T2 < T3, then H(F, T3) = false.

For all t-propositions "F holds-at T" in D, H(F, T) = true, and for all t-propositions "¬F holds-at T" in D, H(F, T) = false.


A domain description D is consistent iff it has a model. Also, D entails (written D ⊨) the t-proposition "F holds-at T" ("¬F holds-at T", resp.), iff for every model H of D, H(F, T) = true (H(F, T) = false, resp.).
Note that the t-propositions, in effect, are like "static" constraints that interpretations must satisfy in order to be deemed models. We can extend the language with ramification statements, called r-propositions, of the form "L whenever C", where L is a fluent literal and C is a set of fluent literals. These are also understood as constraints on the interpretations, but with the difference of being "universal", i.e. applying to every timepoint. Formally, the definition of a model is extended with:


For all r-propositions "L whenever C" in D, and for all timepoints T, if H satisfies C at T then H satisfies L at T.

In addition, the complete formalization of ramification statements requires a suitable extension of the definitions of initiation-point and termination-point. The interested reader is referred to [Kakas & Miller1997a] for the details.
As an example, consider the following simple "car engine domain", with action constants including TurnOn and with fluents Running and Petrol:
(1)
(2)
(3)
(4)
It is easy to see, for example, that this domain entails Running holding at a timepoint after the TurnOn action, and that the domain extended via a further h-proposition does not.
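The persistence reasoning behind these model conditions can be sketched operationally. The following is a minimal encoding, not part of the Language E formalism itself: it assumes integer timepoints, a fully specified initial state, and represents c-propositions as (action, effect, fluent, preconditions) tuples; the rule "TurnOn initiates Running when Petrol holds" is an assumed reading of the car engine domain above.

```python
# Minimal sketch of Language E model computation (integer timepoints,
# completed initial state); the latest initiation-/termination-point
# before a timepoint decides a fluent's value there.

def holds(fluent, t, initial, happens, causes):
    """True iff `fluent` holds at timepoint t under persistence."""
    value = initial[fluent]
    for s in sorted(happens):
        if s >= t:
            break
        action = happens[s]
        for (a, effect, f, conds) in causes:
            if a != action or f != fluent:
                continue
            # the c-proposition fires only if its preconditions hold at s
            if all(holds(cf, s, initial, happens, causes) == cv
                   for cf, cv in conds):
                value = (effect == "initiates")
    return value

# Car engine domain (assumed reading): TurnOn initiates Running when Petrol.
causes = [("TurnOn", "initiates", "Running", [("Petrol", True)])]
happens = {2: "TurnOn"}                     # h-proposition: TurnOn happens-at 2
initial = {"Running": False, "Petrol": True}

print(holds("Running", 5, initial, happens, causes))   # True
initial_no_petrol = {"Running": False, "Petrol": False}
print(holds("Running", 5, initial_no_petrol, happens, causes))  # False
```

Under this encoding, removing the initial Petrol information makes the effect of TurnOn depend on the missing value, which is exactly the situation the planner must cope with later.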
3 Planning with E
The Language E, with its explicit reference to actions as h-propositions in its basic ontology, is naturally suited for the problem of planning. Let a goal be a set of t-propositions. Then, given a domain description and a goal, planning amounts to constructing a set of h-propositions such that the domain description extended with this set entails the goal.
In general, however, the extension of the domain description via the plan might be required to respect some given preconditions for the actions in the plan. These preconditions can be represented by a new kind of statement, called p-propositions ("p" for "preconditions"), of the form "A needs C", where A is an action constant and C is a non-empty set of fluent literals. Intuitively, the fluents in C are conditions that must hold at any time that the action A is performed. Note that, alternatively, preconditions could be encoded via additional conditions in the c-propositions already appearing in the domain descriptions. The use of p-propositions is, though, simpler and more modular.
Definition 1
An (E-)planning domain is a pair (D, P), where D is a domain description and P is a set of p-propositions.
The semantic interpretation of the new type of sentences is that of integrity constraints on the domain descriptions.
Definition 2
Given a planning domain (D, P), D satisfies P, written D ⊨ P, iff for all p-propositions "A needs C" in P, and for all h-propositions "A happens-at T" in D, D entails C at T, where "C at T" denotes the set of t-propositions obtained by transforming every fluent literal L in C into the respective t-proposition "L holds-at T".
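Definition 2 can be read as a simple check over a plan's h-propositions. The sketch below is an illustrative encoding, not from the paper: integer timepoints are assumed, and the truth of fluents at timepoints is supplied by a given function.

```python
# Sketch of Definition 2: every precondition of every performed action
# must hold at the timepoint the action happens.

def satisfies_preconditions(happens, preconds, holds_at):
    """happens: {timepoint: action};
    preconds: {action: set of (fluent, required_value)} (the p-propositions);
    holds_at: function (fluent, timepoint) -> bool (the entailed t-propositions)."""
    return all(holds_at(f, t) == v
               for t, a in happens.items()
               for f, v in preconds.get(a, ()))

# Car engine: TurnOn needs Petrol; first check a domain where Petrol
# always holds, then one where it never does.
preconds = {"TurnOn": {("Petrol", True)}}
print(satisfies_preconditions({2: "TurnOn"}, preconds,
                              lambda f, t: f == "Petrol"))   # True
print(satisfies_preconditions({2: "TurnOn"}, preconds,
                              lambda f, t: False))           # False
```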
The planning problem is then defined as follows.
Definition 3
Given a planning domain (D, P) and a goal G, a (safe) plan for G in (D, P) is a set Δ of h-propositions such that D ∪ Δ is consistent and:
D ∪ Δ entails G,
D ∪ Δ satisfies P.
Note that the initial state of the planning problem is assumed to be contained in the given domain description, and might amount to a set of t-propositions at some initial timepoint, or, more generally, a set of t-propositions over several timepoints, not necessarily all coinciding with a unique initial timepoint.
The above definition of (safe) plan provides the formal foundation of the E-Planner. It is easy to see that, through the properties of the model-theoretic semantics of E, a safe plan satisfies the requirements that (i) it achieves the given goal, and (ii) it is executable.
As an example, let us consider the simple "car engine planning domain" (D', P), with D' consisting of statements (1), (2) and (4) from section 2 as well as:
Fill initiates Petrol (5)
(6)
and P consisting of the p-proposition
(1)
Let the goal be that Running holds at some (final) time T. Then, a plan for this goal is given by a set containing a single h-proposition, performing the TurnOn action at some time before T.
This is a safe plan in the sense that, if we add it to the planning domain, then both the goal and the p-proposition are entailed by the augmented domain.
Consider now the domain obtained from D' by removing (4). Note that this domain has incomplete (initial) information about Petrol. Then, the above plan is no longer a safe plan, as there is no guarantee that the car will have petrol at the time when the TurnOn action is assumed to take place. A safe plan is now given by adding a Fill action before the TurnOn action. In the context of the reduced domain, the original plan will be called a weak plan. A weak plan is a set of h-propositions such that the extension it forms of the given domain description might not entail the given goal, but there is at least one model of the augmented domain description in which the goal holds true. A weak plan depends upon a set of assumptions, in the form of t-propositions, such that, if these assumptions were true (or could be made true) then the weak plan would be (or would become) a safe plan. In the example above, the original plan is weak as it depends on the assumption that Petrol holds when TurnOn is performed. The safe plan is obtained from it by adding the additional Fill action, ensuring that this assumption is entailed.
Definition 4
Given a planning domain (D, P) and a goal G, a weak plan for G is a set Δ of h-propositions s.t. there exists a model M of D ∪ Δ where:
G holds in M, and
the preconditions in P of all actions in D ∪ Δ hold in M.
Δ is conditional on (or depends on) the set of assumptions S iff S is a set of t-propositions such that Δ is not a safe plan for G in (D, P) but it is a safe plan for G in (D ∪ S, P).
Note that a safe plan is always a weak plan and that if a weak plan is not conditional on any assumptions then it is necessarily a safe plan.
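The weak/safe distinction can be illustrated concretely on the car engine domain with unknown initial Petrol. The simulator below is an assumed, hard-coded encoding of that domain (integer timepoints; Fill initiates Petrol, TurnOn initiates Running when Petrol holds):

```python
# Weak vs safe plans: a safe plan must achieve the goal under *every*
# completion of the missing initial information.

def running_at(t, petrol_initially, happens):
    """Forward simulation of the car engine domain (assumed encoding)."""
    petrol = petrol_initially
    running = False
    for s in sorted(happens):
        if s >= t:
            break
        if happens[s] == "Fill":
            petrol = True
        elif happens[s] == "TurnOn" and petrol:
            running = True
    return running

weak_plan = {2: "TurnOn"}
safe_plan = {1: "Fill", 2: "TurnOn"}

# Weak: succeeds in the model where Petrol holds initially...
print(running_at(5, True, weak_plan))    # True
# ...but fails in the one where it does not.
print(running_at(5, False, weak_plan))   # False
# Safe: succeeds under both completions of the missing information.
print(all(running_at(5, p, safe_plan) for p in (True, False)))  # True
```

Adding the Fill action discharges the assumption the weak plan was conditional on, which is exactly the move described in the text above.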
Computing conditional weak plans will form the basis for computing safe plans in the E-Planner that we will develop in section 5. In general, if we have a weak plan for a given goal, conditional on a set of assumptions S, then the original planning problem is reduced to the subsidiary problem of generating a plan for S. In effect, this process allows the planner to actively fill in, by further actions, the incompleteness in the domain description.
However, in some cases we may have incomplete information on fluents that cannot be affected by any further actions, or at a timepoint (e.g. an initial timepoint) before which we cannot perform actions. In this paper we will concentrate on incompleteness of this kind, and we will study how to appropriately generate safe plans from weak plans despite the lack of information.
Of course, this may not always be possible, but there are many interesting cases, such as the following "vaccine" domain, where a safe plan exists:
(1)
(2)
Here, the fluent TypeO cannot be affected by any action (we cannot change the blood type) and, although its truth value is not known (we have incomplete information on this), we can still generate a safe plan for the goal by performing both actions InjectA and InjectB before the time of the goal.
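This excluded-middle reasoning can be checked mechanically. The sketch below assumes c-propositions of the form "InjectA initiates protection when TypeO holds" and "InjectB initiates protection when TypeO does not hold", which is one plausible reading of the missing statements (1)-(2); the fluent name Protected is illustrative.

```python
# Reasoning by cases on the unknown, unchangeable fluent TypeO:
# a plan is safe iff it protects the patient under both truth values.

def protected_after(actions, type_o):
    """Assumed reading: InjectA protects TypeO blood, InjectB the rest."""
    for a in actions:
        if (a == "InjectA" and type_o) or (a == "InjectB" and not type_o):
            return True
    return False

plan = ["InjectA", "InjectB"]
# Safe: the goal holds whichever truth value TypeO actually has.
print(all(protected_after(plan, t) for t in (True, False)))        # True
# Either injection alone is only a weak plan.
print(all(protected_after(["InjectA"], t) for t in (True, False)))  # False
```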
4 An Argumentation Formulation of E
Argumentation has recently proved to be a unifying mechanism for most existing nonmonotonic formalisms [Bondarenko et al.1997, Dung1995]. In [Kakas, Miller, & Toni1999], we have adapted the [Dimopoulos & Kakas1995] argumentation framework to provide an equivalent reformulation of the original Language E presented in section 2 and to develop a proof theory for computing entailment of t-propositions in domain descriptions. This will form the computational basis for our E-Planner. In this section, we give a brief review of the argumentation formulation of E, concentrating on the methods and results that will be needed for the E-Planner.
Let a monotonic logic be a pair consisting of a formal language (equipped with a negation operator) and a monotonic derivability notion between sentences of the formal language. Then, an abstract argumentation program, relative to such a logic, is a quadruple consisting of

a background theory, i.e. a (possibly empty) set of sentences in the formal language,

an argumentation theory, i.e. a set of sentences in the formal language (the argument rules),

an argument base, and

a priority relation on the ground instances of the argument rules, specifying when one rule has lower priority than another.
Intuitively, any subset of the argument base can serve as a nonmonotonic extension of the (monotonic) background theory, if this extension satisfies some requirements. The sentences in the background theory can be seen as non-defeasible argument rules which must belong to any extension. One possible requirement that extensions of the background theory must satisfy is that they are admissible, namely that they are:

non-self-attacking and

able to counterattack any (set of) argument rules attacking them.
Informally, a set of argument rules from the argument base attacks another such set if the two sets are in conflict, by deriving in the underlying logic complementary literals, and the subset of the attacking set (minimally) responsible for the derivation of its literal is not overall lower in priority than the subset of the attacked set (minimally) responsible for the derivation of the complementary literal. A set of rules A is of lower priority than another set B iff A has a rule of lower priority than some rule in B and A does not contain any rule of higher priority than some rule in B.
Then any given sentence of the formal language is a credulous (sceptical, resp.) nonmonotonic consequence of an argumentation program iff it is derivable from the background theory extended with some (every, resp.) maximally admissible extension of the program.
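The attack and admissibility notions above can be sketched for the fully abstract case, with attacks given extensionally and priorities omitted (a simplification of the priority-sensitive attack relation described above):

```python
# Minimal admissibility check for an abstract argumentation framework;
# attacks are given extensionally as pairs (attacker, attacked).

def attacks(attack_pairs, s1, s2):
    """True iff some argument in s1 attacks some argument in s2."""
    return any((a, b) in attack_pairs for a in s1 for b in s2)

def is_admissible(args, attack_pairs, universe):
    if attacks(attack_pairs, args, args):          # must be non-self-attacking
        return False
    for b in universe:                              # must counterattack every attacker
        if attacks(attack_pairs, {b}, args) and not attacks(attack_pairs, args, {b}):
            return False
    return True

# a attacks b, b attacks a, b attacks c:
pairs = {("a", "b"), ("b", "a"), ("b", "c")}
print(is_admissible({"a"}, pairs, {"a", "b", "c"}))        # True
print(is_admissible({"c"}, pairs, {"a", "b", "c"}))        # False: c cannot defend itself
print(is_admissible({"a", "c"}, pairs, {"a", "b", "c"}))   # True: a counterattacks b
```

The last call mirrors the way the derivations of the proof theory extend a root set with further arguments until all attacks are counterattacked.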
A domain description D without t-propositions can be translated into an argumentation program such that there is a one-to-one correspondence between:
i) models of D and maximally admissible sets of arguments of the program;
ii) entailment in D and sceptical nonmonotonic consequences of the program.
These equivalence results continue to hold when D contains t-propositions or r-propositions, by simply considering only the admissible sets that confirm the truth of all such propositions in D.
The basic elements of the translation of domain descriptions into argumentation programs are as follows. All individual h- and c-proposition translations, as well as the relationships between timepoints, are included in the background theory, so that, for all timepoints and action constants:

the background theory contains the ordering relations between the timepoints of the domain,

it contains a happens-fact for an action constant and a timepoint iff the corresponding h-proposition is in the domain,

for each c-proposition "A initiates F when C" ("A terminates F when C", resp.) in the domain, it contains a corresponding rule deriving an initiation-point (termination-point, resp.) for F from the happening of A and from the conditions C holding at the time of the action (negative literals in C being translated accordingly).
As an example, consider the car engine domain description in section 2. Then, the background theory contains the corresponding happens-fact and the rules obtained from its c-propositions.
The remaining components of the argumentation program are independent of the chosen domain:

the argumentation theory consists of three kinds of argument rules: generation rules, persistence rules and assumptions;


the argument base consists of all the generation rules and assumptions only;

the priority relation is such that the effects of later events take priority over the effects of earlier ones. Thus persistence rules have lower priority than "conflicting" and "later" generation rules, and "earlier" generation rules have lower priority than "conflicting" and "later" generation rules. In addition, assumptions have lower priority than "conflicting" generation rules.
Given this translation of the language E, a proof theory can be developed by adapting the abstract, argumentation-based computational framework in [Kakas & Toni1999] to these argumentation programs. The resulting proof theory is defined in terms of derivations of trees, whose nodes are sets of arguments attacking the arguments in their parent nodes. Let the root be a (non-self-attacking) set of arguments deriving some literal that we want to prove to be entailed by the domain (such a set can be easily built by backward reasoning). Then, two kinds of derivations are defined:

Successful derivations, building, from a tree consisting only of the given root, a tree whose root is an admissible subset of the argument base containing the given root.

Finitely failed derivations, guaranteeing the absence of any admissible set of arguments containing the given root.
Then, the given literal is entailed by the domain if there exists a successful derivation with initial tree consisting only of the given root and, for every set of argument rules deriving (in the monotonic logic) the complement of the given literal, every derivation for that set is finitely failed.
This method is extended in the obvious way to handle conjunctions of literals rather than individual literals, by choosing the root and the derived literals appropriately. Also, when a domain contains t-propositions we simply conjoin these to the literals to be proved. A similar extension, requiring that all the r-propositions are satisfied together with the goal at hand, applies to domains containing such ramification statements.
The details of the derivations are not needed for the purposes of this paper. Informally, both kinds of derivation incrementally consider all attacks (sets of arguments) against the root and, whenever the root does not itself counterattack one of its attacks, a new set of arguments that can attack back this attack is generated and added to the root. Then, the process is repeated, until every attack has been counterattacked successfully by the extended root (successful derivation) or until some attack cannot possibly be counterattacked by any extension of the root (finitely failed derivation). During this process, the counterattacks are chosen in such a way that they do not attack the root. For example, for the domain in section 2, given a root monotonically deriving the goal literal, a successful derivation is constructed as follows:
The root is attacked by a conflicting set of arguments, which is trivially counterattacked by the root itself. Thus, in this simple example, no extension of the root is required.
5 The E-Planner
The argumentation-based techniques discussed in the previous section can be directly extended to compute plans for goals. (In the sequel, we will sometimes mix the original Language E formulation of problems and their corresponding formulation in the argumentation reformulation.) First, given a goal, in order to derive the (translation of the) goal in the underlying monotonic logic, a preliminary step needs to compute not only a set of argument rules, but (possibly) also a set of action facts, each recording that an action happens at a timepoint. This set of action facts can be seen as a preliminary plan for the goal, that needs to be extended first to a weak plan and then to a safe plan. Every time a new action fact is added to a plan, any preconditions of the action need to be checked and, possibly, enforced by adding further action facts. The computation of safe plans from weak ones requires blocking, if needed, any (weak) plan for the complement of any literal in the goal.
The following is a high-level definition of the E-Planner in terms of the argumentation-based reformulation of the language E:
Definition 5
Given a planning domain and a goal, an E-plan for the goal is a set of h-propositions corresponding to a set of action facts derived as follows:

Find a set of arguments and a set of action facts from which the goal is derivable in the underlying monotonic logic;

Construct a set of arguments and a set of action facts such that (i) they extend those found in step 1, and (ii) the set of arguments is admissible wrt the argumentation program augmented with the action facts.

If every assumption in the constructed set of arguments is a sceptical nonmonotonic consequence of the augmented argumentation program, then the plan is given by the action facts computed so far.

Otherwise, the plan is a set of action facts extending the ones computed so far and such that:

For every set of arguments deriving the complement of some literal in the goal, there exists no extension of it that is admissible wrt the argumentation program augmented with the extended set of action facts.

There exists an admissible extension of the constructed set of arguments wrt the resulting augmented argumentation program.

In the first two steps, the E-Planner computes a weak plan for the given goal. If this does not depend on any assumptions (step 3) then it is a safe plan, as no plan for the complement of the goal is possible. Otherwise (step 4), the planner attempts to extend the weak plan in order to block the derivation of the complement of the goal. In order to do so, it considers each possible set of arguments which would derive this complement (in the augmented background theory) and extends the plan so that no such set can belong to any admissible set of the resulting theory. A successful completion of step 4 means that the weak plan has been rendered into a safe plan.
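A naive generate-and-test reading of these steps can be sketched for a propositional encoding. This is not the argumentation-based E-Planner itself: the hypothetical `e_planner` function below simply enumerates action sequences and accepts the first one whose goal holds under every completion of the unknown fluents, i.e. the first safe plan.

```python
from itertools import product

def execute(initial, plan, causes):
    """Apply each action's effects (tuples (action, fluent, value, conds))
    in sequence, firing a rule only when its conditions hold."""
    state = dict(initial)
    for action in plan:
        for a, f, v, conds in causes:
            if a == action and all(state[c] == cv for c, cv in conds):
                state[f] = v
    return state

def safe(plan, goal, unknowns, base, causes):
    """Safe iff the goal holds under every completion of the unknowns."""
    gf, gv = goal
    for vals in product([True, False], repeat=len(unknowns)):
        init = dict(base, **dict(zip(unknowns, vals)))
        if execute(init, plan, causes)[gf] != gv:
            return False
    return True

def e_planner(goal, actions, unknowns, base, causes, depth=3):
    """Breadth-first generate-and-test over action sequences (hypothetical)."""
    frontier = [[]]
    for _ in range(depth):
        new = []
        for plan in frontier:
            if safe(plan, goal, unknowns, base, causes):
                return plan
            new += [plan + [a] for a in actions]
        frontier = new
    return None

# Car engine encoding: TurnOn initiates Running when Petrol; Fill initiates Petrol.
causes = [("TurnOn", "Running", True, [("Petrol", True)]),
          ("Fill", "Petrol", True, [])]
base = {"Running": False}
plan = e_planner(("Running", True), ["Fill", "TurnOn"], ["Petrol"], base, causes)
print(plan)  # ['Fill', 'TurnOn']
```

On this encoding the plan with TurnOn alone is rejected (it is only weak), and the Fill-then-TurnOn plan is returned, matching the discussion in section 3.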
The correctness of the planner is a direct consequence of the correctness of the argumentation proof theory on which it is based.
Theorem 1
Given a planning domain and a goal, let Δ be the set of action facts computed at step 2 in definition 5. Then, the corresponding set of h-propositions is a weak plan for the goal.
Proof: As every admissible set of arguments is contained in some maximally admissible set [Kakas, Miller, & Toni1999] and, as discussed in section 4, every maximally admissible extension of the argumentation-based reformulation of a domain description corresponds to a model of the original domain, the admissible set computed at step 2 in definition 5 corresponds to a model of the domain entailing the goal. Because of the way t-propositions in domains are handled, as additional conjuncts in goals, this implies that this model satisfies all preconditions of the actions in the plan. Thus, it is a model of the extended domain in which the goal and the preconditions hold, and the theorem is proven.
The following theorem can be proven in a similar way:
Theorem 2
Given a planning domain and a goal, let Δ be an E-plan for the goal. Then, Δ is a safe plan for the goal.
The high-level definition of the E-Planner given above in definition 5 can be mapped onto a more concrete planner by suitably extending the argumentation-based proof theory proposed in [Kakas, Miller, & Toni1999]. The action facts can be computed directly while computing the arguments, by an abductive process which reasons backwards with the sentences in the background theory. Also, one needs to define suitable:

extended successful derivations, for computing incrementally the weak plan at step 2, and the final safe plan at step 4.2 from the extension computed at step 4.1;

extended finitely failed derivations, for computing incrementally the required extension of the plan at step 4.1.
Like the original derivations, the extended ones incrementally consider all attacks against the root of the trees they build and augment the root so that all such attacks are counterattacked, until every attack has been counterattacked (successful derivations) or some attack cannot be counterattacked (failed derivations). Again, all nodes of the trees are sets of arguments.
In addition, both new kinds of derivation are integrated with abduction to generate action facts so that success and failure are guaranteed, respectively. The action facts are chosen to allow for counterattacks to exist (successful derivations) or for additional attacks to be generated (failed derivations). Thus, extended successful derivations return both a set of argument rules and a set of action facts, whereas extended failed derivations just return a set of action facts (the ones needed to guarantee failure).
Both kinds of derivations need to add to the given background theory (domain) the t-propositions that are preconditions of any abduced action. By the way t-propositions are handled, this amounts to extending dynamically the given goal to prove or disprove, respectively.
Finally, both kinds of derivation require the use of suspended nodes, namely nodes that could potentially attack or counterattack their parent node if some action facts were part of the domain. These nodes become actual attacks and counterattacks if and when the action facts are added to the accumulated set. If, at the end of the derivations, these action facts have not been added, then suspended nodes remain so and do not affect the outcome of the derivations.
Let us illustrate the intended behaviour of the extended derivations with a simple example. Consider the simple "car engine" planning domain in section 3, and let the goal be that the engine is running at some fixed final time. We will show how the safe plan, performing the TurnOn action before that time, is computed.

Steps 1 and 2 compute a set of arguments together with the action fact recording the TurnOn action.
The only possible attack against this set of arguments is trivially counterattacked by the set itself. Thus the set is admissible.

Let us examine the only assumption in the computed set, and try to prove that it holds in all admissible extensions of the given domain extended by the plan. Consider the complement of the assumption, and let us prove that it holds in no admissible extension. This can be achieved by an ordinary finitely failed derivation (without abducing any additional action fact), as there is an attack against the above complement which cannot be counterattacked.
Thus, the computed plan is a safe plan. Consider now the planning domain with incomplete information about Petrol. Then, step 3 above fails to prove that the given assumption is a sceptical nonmonotonic consequence of the augmented domain, and thus the computed plan is just a weak plan.

The complement of the goal is derivable via some set of arguments. This set is attacked by the arguments of the weak plan, which are in turn counterattacked by a set which, if added to the root, would provide an admissible extension in which the complement of the goal holds. An extended finitely failed derivation can be constructed to prevent this as follows: the extended root is attacked by a new attack once the plan is augmented with a Fill action performed early enough. As the new attack cannot be counterattacked, the derivation fails.
Thus, the plan augmented with the Fill action is a safe plan (for simplicity we omit step 4.2 here).
Note that the method outlined above relies upon the explicit treatment of non-ground arithmetical constraints over timepoints (see [Kakas, Michael, & Mourlas1998]).
6 Incomplete planning problems
In this section we will illustrate, through a series of examples, the ability of the E-Planner to produce safe plans for incompletely specified problems. In particular, we will consider problems where the incompleteness on some of the fluents is such that it cannot be affected by actions and hence our knowledge of them cannot be (suitably) completed by adding action facts to plans.
Let us consider again the "vaccine" domain at the end of section 3, and the goal of protection at some final time. We will show how, in this example, the planner reasons correctly with the "excluded middle rule" to produce a safe plan, despite the fact that it is not known whether the blood is of type O or not. A weak plan for the goal is given by one injection action alone. Indeed: the corresponding set of arguments is admissible (steps 1 and 2), and the plan is conditional on an assumption about the blood type (step 3). Note that there is no action that can affect the fluent TypeO, so the plan cannot be extended so that it derives the assumption. Nevertheless, we can extend it to a safe plan by constructing failed derivations for the complement of the goal, as illustrated below (step 4.1).
The only way to derive the complement of the goal is by means of a set of arguments that is attacked by the root itself; this attack can be counterattacked (only) if the root is extended via the assumption on the blood type.
The initial root, and thus the extended root, are attacked by a further set of arguments, provided we add to the plan the action fact for the second injection, giving a new plan with both injections. In order to successfully counterattack the new attack we need to add further to the root the complementary assumption on the blood type, obtaining a new root.
The new root is (newly) attacked by two further sets of arguments, one for each truth value of the blood type. (Note also that the two attacks are necessarily distinct, as otherwise an attack would attack itself.) These attacks can only be counterattacked via one generation rule for each of the two complementary literals, respectively. But no such generation rules are possible.
This concludes the construction of the only required finitely failed extended derivation. (Again, we omit step 4.2.) The computed plan is indeed a safe plan for the goal. Note that no p-propositions are present in this example and thus no extended domain is generated.
The following example illustrates the use of observations (in the form of t-propositions) to provide some (partial) implicit information on the domain.
Consider the following domain:
(1)
(2)
(3)
(4)
(5)
(6)
and the goal of protection at some final time.
The fluents TypeA and Weak are incompletely specified, but the observations (t-propositions) essentially give, indirectly, the information that either TypeA or Weak must hold. This then allows, similarly to the previous example, a safe plan for the goal to be generated by applying both actions InjectC and InjectD.
A weak plan is first generated (steps 1 and 2), conditional on a set of assumptions (step 3). Then, step 4.1 considers the derivation of the complement of the goal. The derived set of arguments is attacked by the weak plan's arguments and can only be defended if it is extended to an admissible superset. However, this superset needs to be extended further to confirm the t-propositions. To confirm the first observation, a corresponding assumption needs to be added. Moreover, to confirm the second observation, we need to add one of two candidate sets of arguments. But adding the first such set would render the resulting set non-admissible (as two complementary assumptions would belong to it). Hence, only the second extension is viable. This set is admissible (if we also abduce an additional action). To prevent this, we find an attack that cannot be counterattacked successfully, obtained by extending the plan. This attack can only be counterattacked by adding an assumption that renders the root self-attacking. Thus, the extended plan is a safe plan.
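The role of the observations can again be checked by enumeration: only the completions of TypeA and Weak that satisfy the observed disjunction need to be covered by the plan. The c-propositions linking InjectC to TypeA and InjectD to Weak are an assumed reading of the example:

```python
from itertools import product

# Observations as constraints on the possible worlds: only completions
# satisfying "TypeA or Weak" are considered.

def protected(world, plan):
    """Assumed reading: InjectC protects TypeA patients, InjectD the weak."""
    return (("InjectC" in plan and world["TypeA"]) or
            ("InjectD" in plan and world["Weak"]))

worlds = [dict(zip(("TypeA", "Weak"), v))
          for v in product([True, False], repeat=2)]
consistent = [w for w in worlds if w["TypeA"] or w["Weak"]]  # the observation

plan = {"InjectC", "InjectD"}
print(all(protected(w, plan) for w in consistent))  # True
print(all(protected(w, plan) for w in worlds))      # False: fails only in the
                                                    # world the observation excludes
```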
The next example shows how the planner exploits ramification information to generate safe plans for incompletely specified problems. Let the domain be:
(1)
(2)
(3)
and the goal of protection at some final time.
The fluents TypeO and Strong are incompletely specified. The ramification statement requires that either TypeO or Strong must hold (at any time).
Then, similarly to the above examples, the E-Planner can generate a safe plan performing both of the relevant actions before the final time.
During the computation of this plan, to render the only proof of the complement of the goal non-admissible, we generate, in addition to the attack given by the weak plan, an extra attack by adding the second action. The first attack can only be counterattacked by the assumption that TypeO does not hold, and the second only by the assumption that Strong does not hold. As these assumptions persist, it is not possible to satisfy the ramification statement at any time between the actions and the final time. Hence there is no admissible extension that can prove the complement of the goal, and the plan is safe.
7 Conclusions
We have shown how we can formulate planning within the framework of the Language E and have used the argumentation reformulation of this framework to define a planner that is able to solve problems with incomplete information.
A planner with similar aims has been defined in [Finzi, Pirri, & Reiter1999]. Both this planner and ours regress to a set of assumptions which, when entailed by the incomplete theory, guarantees the plan to be safe. However, [Finzi, Pirri, & Reiter1999] uses a classical theorem prover to check explicitly this entailment at the initial situation (in general, the required entailment is the nonmonotonic entailment of the action framework in which the planning problems are formulated). Instead, in the E-Planner the incompleteness, and hence the assumptions to which one regresses, need not refer to the initial state only. Moreover, the E-Planner uses these assumptions in the computation of the initial, possibly weak, plan and then to help in the extension of this to a safe plan. We are studying other planning algorithms (within the same argumentation formulation of the language E) which use more actively the assumptions on which weak plans are conditional. One such possibility is to try to extend the plan so as to re-prove the goal but now assuming a priori the contrary of these assumptions. The search space of this type of planning algorithm is different, and comparative studies of efficiency can be made.
[Smith & Weld1998] introduce the notion of conformant planning for problems with incomplete information about the initial state and for problems where the outcome of actions may be uncertain. Our safe plans correspond to conformant plans for problems of the first type. The emphasis of this work is on the development of an efficient extension of Graphplan to compute conformant plans, by first considering all possible worlds and, in each world, all possible plans, and then extracting a conformant plan by considering the interactions between these different plans and worlds.
[Giunchiglia2000] considers the problem of planning with incomplete information on the initial state within an action language. This is an expressive language that allows concurrent and nondeterministic actions together with ramification and qualification statements. Our safe plans correspond to the notion of valid plans, which in turn are conformant plans. To find a valid plan, the planner of [Giunchiglia2000] generates a possible plan and then tests, using a SAT solver, whether the generated plan can be executed in all the possible models. Possible plans can be seen as weak plans, but we allow in the "testing phase" for the dynamic expansion of the weak plan into a safe plan.
A general difference with both [Smith & Weld1998, Giunchiglia2000] is that our planner is goal-oriented, actively searching for actions both to satisfy the goal and to ensure that the generated plan is executable in any of the many possible worlds of the problem.
Another difference is at the level of expressiveness: we allow observations (not only at an initial state) and can exploit the indirect information they provide to help handle the incompleteness.
We are currently developing an implementation of the planner based on an earlier implementation of the language, and aim to carry out experiments with standard planning domains. In this initial phase of our study we have not considered efficiency issues, concentrating specifically on the problem of planning under incompleteness. In future work we need to address these issues by studying the problem of effective search in the space of solutions. One way to do this is to integrate constraint solving into the planner, as in constraint logic programming and its extension with abduction [Kakas, Michael, & Mourlas1998, Kowalski, Toni, & Wetzel1998].
We are considering several extensions of the planner to allow for more general plans, e.g. plans containing nondeterministic actions (or actions with uncertain effects). These extensions would require corresponding extensions of the expressiveness of the Language. Also, extending the planner to accommodate sensing, in the form of accepting further observations (t-propositions) in the problem description, is a natural problem for future work.
Acknowledgements
This research has been partially supported by the EC Keep-In-Touch Project "Computational Logic for Flexible Solutions to Applications". The third author has been supported by the UK EPSRC Project "Logic-based multi-agent systems".
References
 [Bondarenko et al.1997] Bondarenko, A.; Dung, P. M.; Kowalski, R. A.; and Toni, F. 1997. An abstract, argumentation-theoretic approach to default reasoning. Artificial Intelligence 93(1–2):63–101.
 [Denecker, Missiaen, & Bruynooghe1992] Denecker, M.; Missiaen, L.; and Bruynooghe, M. 1992. Temporal reasoning with abductive event calculus. In Proceedings of ECAI’92.
 [Dimopoulos & Kakas1995] Dimopoulos, Y., and Kakas, A. 1995. Logic programming without negation as failure. In Proceedings of ILPS’95, 369–383.
 [Dimopoulos, Nebel, & Koehler1997] Dimopoulos, Y.; Nebel, B.; and Koehler, J. 1997. Encoding planning problems in nonmonotonic logic programs. In Proceedings of ECP’97, Springer Verlag, 169–181.
 [Dung1995] Dung, P. 1995. On the acceptability of arguments and its fundamental role in nonmonotonic reasoning, logic programming and n-person games. Artificial Intelligence 77:321–357.
 [Finzi, Pirri, & Reiter1999] Finzi, A.; Pirri, F.; and Reiter, R. 1999. Open world planning in the situation calculus. In Technical Report, University of Toronto.
 [Giunchiglia2000] Giunchiglia, E. 2000. Planning as satisfiability with expressive action languages: Concurrency, constraints and nondeterminism. In Proceedings of KR’2000.
 [Kakas & Miller1997a] Kakas, A., and Miller, R. 1997a. Reasoning about actions, narratives and ramifications. Electronic Transactions on Artificial Intelligence 1(4), Linkoping University Electronic Press, http://www.ep.liu.se/ea/cis/1997/012/.
 [Kakas & Miller1997b] Kakas, A., and Miller, R. 1997b. A simple declarative language for describing narratives with actions. Journal of Logic Programming 31(1–3):157–200.
 [Kakas & Toni1999] Kakas, A., and Toni, F. 1999. Computing argumentation in logic programming. Journal of Logic and Computation 9(4):515–562, O.U.P.
 [Kakas, Michael, & Mourlas1998] Kakas, A.; Michael, A.; and Mourlas, C. 1998. ACLP: a case for nonmonotonic reasoning. In Proceedings of NMR’98, 46–56.
 [Kakas, Miller, & Toni1999] Kakas, A.; Miller, R.; and Toni, F. 1999. An argumentation framework for reasoning about actions and change. In Proceedings of LPNMR’99, Springer Verlag, 78–91.
 [Kowalski, Toni, & Wetzel1998] Kowalski, R.; Toni, F.; and Wetzel, G. 1998. Executing suspended logic programs. Fundamenta Informaticae 34(3):203–224.
 [Levesque1996] Levesque, H. 1996. What is planning in the presence of sensing? In Proceedings of AAAI’96, 1139–1146.
 [Lifschitz1999] Lifschitz, V. 1999. Answer set planning. In Proceedings of ICLP’99, 23–37.
 [Shanahan1997] Shanahan, M. 1997. Event calculus planning revisited. In Proceedings of ECP’97, Springer Verlag, 390–402.
 [Smith & Weld1998] Smith, D., and Weld, D. 1998. Conformant graphplan. In Proceedings of AAAI’98.