Sensor Synthesis for POMDPs with Reachability Objectives

09/29/2017 · by Krishnendu Chatterjee, et al.

Partially observable Markov decision processes (POMDPs) are widely used in probabilistic planning problems in which an agent interacts with an environment using noisy and imprecise sensors. We study a setting in which the sensors are only partially defined and the goal is to synthesize "weakest" additional sensors, such that in the resulting POMDP, there is a small-memory policy for the agent that almost-surely (with probability 1) satisfies a reachability objective. We show that the problem is NP-complete, and present a symbolic algorithm by encoding the problem into SAT instances. We illustrate trade-offs between the amount of memory of the policy and the number of additional sensors on a simple example. We have implemented our approach and consider three classical POMDP examples from the literature, and show that in all the examples the number of sensors can be significantly decreased (as compared to the existing solutions in the literature) without increasing the complexity of the policies.


1 Introduction

In this work we study the synthesis of sensor requirements for partially defined POMDPs, i.e., the required precision of sensors, the need for additional sensors, the minimal set of necessary sensors, etc.

POMDPs. Markov decision processes (MDPs) are a standard model for systems that have both probabilistic and nondeterministic behaviors [27], and they provide a framework to model and solve control and probabilistic planning problems [26, 40]. The various choices of control actions for the controller (or planner) are modeled as nondeterminism, while the stochastic response to the control actions is represented by the probabilistic behavior. In partially observable MDPs (POMDPs), the controller cannot view the precise current state; to resolve the nondeterministic choices in control actions, it only receives an observation of the current state [38]. POMDPs are a widely used model in several applications and research fields, such as computational biology [23], speech processing [37], image processing [22], software verification [11], robot planning [28], and reinforcement learning [29], to name a few.

Reachability objectives. One of the most basic objectives is the reachability objective, where, given a set of target states, the objective requires that some state in the target set is visited at least once. The classical computational questions for POMDPs with reachability objectives are as follows: (a) the quantitative question asks for the existence of a policy (that resolves the choice of control actions) that ensures the reachability objective with probability at least λ; and (b) the qualitative question is the special case of the quantitative question with λ = 1 (i.e., it asks that the objective is satisfied almost-surely).

Previous results. The quantitative question for POMDPs with reachability objectives is undecidable [39] (and the undecidability result even holds for any approximation [34]). In contrast, the qualitative question is EXPTIME-complete [15, 3]. The main algorithmic idea to solve the qualitative question (that originates from [16]) is as follows: first construct the belief-support MDP explicitly (which is an exponential-size perfect-information MDP where every state is the support of a belief), and then solve the qualitative analysis on the perfect-information MDP (which is in polynomial time [19, 18, 17]). This gives an EXPTIME upper bound for the qualitative analysis of POMDPs, and a matching EXPTIME lower bound has been established in [15].
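The first step of the algorithm above, computing successors in the belief-support MDP, can be sketched as follows. This is an illustrative sketch, not the paper's implementation: it assumes transitions and observations are given by their supports, as dictionaries `trans` and `obs`.

```python
from itertools import chain

def belief_successors(support, action, trans, obs):
    """Successor belief supports of `support` under `action`.

    trans maps (state, action) to the support of the transition distribution;
    obs maps a state to the support of its observation distribution. The
    result maps each observation that can be received with positive
    probability to the corresponding successor belief support.
    """
    succ = set(chain.from_iterable(trans[(s, action)] for s in support))
    by_obs = {}
    for s in succ:
        for o in obs[s]:  # the successor is consistent with observation o
            by_obs.setdefault(o, set()).add(s)
    return {o: frozenset(ss) for o, ss in by_obs.items()}
```

Iterating this computation from the initial support yields the (worst-case exponential) state space of the belief-support MDP.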

Modeling and analysis. In the design of systems there are two crucial phases, namely, the modeling phase, where a formal model of the system is constructed, and the analysis phase, where the model is analyzed for correctness. Currently, POMDPs are typically used only in the analysis phase: a fully specified POMDP of the system is constructed in the modeling phase and then analyzed (in model-checking terminology this is called a posteriori analysis or verification). However, POMDPs are seldom used in the modeling phase itself, where the model is not yet fully specified.

Partially specified POMDPs. In this work we consider the problem in which a POMDP is partially specified and can be used also in the modeling phase (i.e., a priori verification). To motivate our problem, consider the standard applications in robotics or planning, where the state space of the POMDP is obtained from valuations of the variables of the system, and the sensors are designed to obtain the observations. We consider a partially specified POMDP where the state space and the transitions are completely specified, but the observations are not. This corresponds to scenarios where (i) the state space of the system is designed but the sensors have not yet been designed [10, 36], or (ii) the sensors are designed and there is a possibility to augment and annotate the state space in order to make the task for the agent simpler. In both scenarios the goal is to synthesize the observations (that is, from the partially specified POMDP obtain a fully specified POMDP) such that in the resulting POMDP there is a policy that satisfies the reachability objective almost-surely. Since additional sensors increase complexity, one goal is to obtain as few additional observations as possible; and since policies represent controllers, another goal is to ensure that the resulting policies are not too complex [2]. Concretely, we consider the following problem: given a partially specified POMDP (where the observations are not completely specified), synthesize at most k additional observations such that in the resulting POMDP there is a policy with memory size at most m that ensures the reachability objective is satisfied almost-surely. Note that the problem we consider provides trade-offs between the number of additional observations (i.e., k) and the memory of the policy (i.e., m).

Significance of qualitative question. The qualitative question is of great importance, as in several applications it is required that the correct behavior happens with probability 1. For example, in the analysis of randomized embedded schedulers, the important question is whether every thread progresses with probability 1. Moreover, even when it is sufficient that the correct behavior arises with probability at least λ, the correct choice of the threshold λ is still challenging, due to simplifications and imprecisions introduced during modeling. Importantly, it has been shown recently [13] that for the fundamental problem of minimizing the total expected cost to reach the target set [5, 9, 31, 30] under positive cost functions (the stochastic shortest path problem), it suffices to first compute the almost-sure winning set, and then apply any finite-horizon algorithm for approximation. Moreover, the qualitative analysis problem also has a close connection with planning: while it differs from strong or contingent planning [35, 21, 1], it is equivalent to the strong cyclic planning problem [21, 4]. Thus results for qualitative analysis of POMDPs carry over to strong cyclic planning. Finally, besides the practical relevance, almost-sure convergence, like convergence in expectation, is a fundamental concept in probability theory, and provides the strongest probabilistic guarantee [24].

Our contributions. Our main contributions are as follows. First, we show that when k and m are constants, the problem we consider is NP-complete. Note that the unrestricted problem (without restrictions on k and m) is EXPTIME-complete, because we can use observations of at most the size of the state space, and the general qualitative analysis problem for fully specified POMDPs is EXPTIME-complete. Second, we present an efficient reduction of our problem to SAT instances. This results in a practical, symbolic algorithm for the problem we consider, and state-of-the-art SAT solvers, from artificial intelligence as well as many other fields [6, 41, 7], can be used for our problem. Then, we illustrate the trade-offs between the amount of memory of the policy and the number of additional sensors on a simple example. Finally, we present experimental results. We consider three classical POMDP examples from the literature, and show that in these examples the number of observations (hence the number of sensors in practice) can be significantly decreased as compared to the existing models in the literature, without increasing the memory size of the policies. We report scalability results on three examples showing that our implementation can handle POMDPs with ten thousand states.

2 Preliminaries

A probability distribution on a finite set X is a function d : X → [0, 1] such that Σ_{x ∈ X} d(x) = 1. We denote by D(X) the set of all probability distributions on X, and by U(Y) the uniform distribution over a finite set Y. For a distribution d we denote by Supp(d) = {x ∈ X : d(x) > 0} the support of d.

POMDPs. A Partially Observable Markov Decision Process (POMDP) is defined as a tuple P = (S, A, δ, O, s0, γ) where

  • (i) S is a finite set of states;

  • (ii) A is a finite alphabet of actions;

  • (iii) δ : S × A → D(S) is a probabilistic transition function that, given a state and an action, gives the probability distribution over the successor states, i.e., δ(s, a)(s′) denotes the transition probability from s to s′ given action a;

  • (iv) O is a finite set of observations;

  • (v) s0 ∈ S is the unique initial state;

  • (vi) γ : S → D(O) is a probabilistic observation function that maps every state to a probability distribution over observations.
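The tuple above can be represented directly in code. The following is a minimal sketch, not the paper's implementation; field names are illustrative, and distributions are dictionaries mapping outcomes to probabilities.

```python
from dataclasses import dataclass

@dataclass
class POMDP:
    states: set           # S
    actions: set          # A
    trans: dict           # delta: (state, action) -> {successor: probability}
    observations: set     # O
    init: object          # s0, the unique initial state
    obs_fn: dict          # gamma: state -> {observation: probability}

    def validate(self):
        """Check that all distributions sum to one and the initial state exists."""
        for dist in list(self.trans.values()) + list(self.obs_fn.values()):
            assert abs(sum(dist.values()) - 1.0) < 1e-9
        assert self.init in self.states
```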

Plays. A play (or a path) in a POMDP is an infinite sequence ρ = s0 a0 s1 a1 s2 a2 … of states and actions such that s0 is the initial state and, for all i ≥ 0, we have δ(s_i, a_i)(s_{i+1}) > 0. We write Ω for the set of all plays.

Policies. A policy (or a strategy) is a recipe to extend prefixes of plays. That is, a policy is a function that, given a finite history of observations and actions, selects a probability distribution over the actions to be played next. We present an alternative definition of policies with finite memory for POMDPs.

Policies with Memory. A policy with memory is a tuple σ = (M, σ_a, σ_u, m0) with the following elements:

  • M is a finite set of memory elements.

  • The function σ_a : M → D(A) is the action selection function that maps the current memory element to a probability distribution over actions.

  • The function σ_u : M × O × A → D(M) is the memory update function that, given the current memory element, the current observation and action, updates the memory element probabilistically.

  • The element m0 ∈ M is the initial memory element.

We say a policy has memory size m if the number of memory elements is m, i.e., |M| = m.
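Executing such a policy interleaves action selection, environment response, observation, and memory update. The following sketch is illustrative (the function names and the representation of σ_a and σ_u by their supports are assumptions of this example, not the paper's notation):

```python
import random

def run_policy(step, observe, sigma_a, sigma_u, m0, s0, steps):
    """Simulate a finite-memory policy for `steps` steps; return visited states.

    step(state, action) samples a successor state; observe(state) samples an
    observation; sigma_a maps a memory element to the support of its action
    distribution; sigma_u maps (memory, observation, action) to the support
    of the next-memory distribution.
    """
    state, mem, visited = s0, m0, [s0]
    for _ in range(steps):
        action = random.choice(sorted(sigma_a[mem]))        # uniform over support
        state = step(state, action)                          # environment moves
        obs = observe(state)                                 # agent observes
        mem = random.choice(sorted(sigma_u[(mem, obs, action)]))
        visited.append(state)
    return visited
```

With singleton supports the simulation is deterministic, which matches the support-based view of policies used in the qualitative analysis below.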

Probability Measure. Given a policy σ and a starting state s, the unique probability measure obtained given σ is denoted as Pr_s^σ(·) [8, 32].

Reachability Objectives. Given a set T ⊆ S of target states, a reachability objective in a POMDP is a measurable set of plays defined as follows: Reach(T) = {(s0 a0 s1 a1 …) ∈ Ω : ∃ i ≥ 0 . s_i ∈ T}, i.e., the set of plays such that a state from the set T of target states is visited at least once.

In the remainder of the paper, we assume that the set of target states consists of a single goal state, i.e., T = {goal}. This assumption is w.l.o.g. because it is always possible to add a state goal with transitions from all target states in T. Note that there are no costs or rewards associated with transitions.

Almost-Sure Winning. A policy σ is almost-sure winning for a POMDP P with a reachability objective Reach(T) iff Pr^σ(Reach(T)) = 1. In the sequel, whenever we refer to a winning policy, we mean an almost-sure winning policy.
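For a fixed finite-memory policy, the POMDP induces a finite Markov chain, and almost-sure reachability in a finite Markov chain depends only on the graph of positive-probability transitions: the target is reached with probability 1 from the initial state iff every state reachable from the initial state can itself reach the target. A minimal sketch of this check (the dictionary-based edge representation is an assumption of this example):

```python
from collections import deque

def _reach(edges, sources):
    """States reachable from `sources` in the graph edges: state -> set of states."""
    seen, frontier = set(sources), deque(sources)
    while frontier:
        u = frontier.popleft()
        for v in edges.get(u, ()):
            if v not in seen:
                seen.add(v)
                frontier.append(v)
    return seen

def almost_sure_reach(edges, init, targets):
    """True iff targets are reached with probability 1 from init."""
    forward = _reach(edges, {init})
    reverse = {}                       # backward edges, to find states that can reach a target
    for u, vs in edges.items():
        for v in vs:
            reverse.setdefault(v, set()).add(u)
    can_reach_target = _reach(reverse, set(targets))
    return forward <= can_reach_target
```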

3 Partially Defined Observation Functions

Traditionally, POMDPs are equipped with a fully defined observation function that assigns to every state of the POMDP a probability distribution over observations. In order to model a partially defined observation function, we assume the input POMDP P is given with a partially defined observation function γ̃. The probability distributions in the range of the function γ̃ contain an additional symbol ⊥, and whenever for a state s we have γ̃(s)(⊥) > 0, we say that the state s has its observations only partially defined.

Observation function completions. We say a fully defined observation function γ′ is a completion of a partially defined observation function γ̃ (and write γ̃ ⊑ γ′) if all of the following conditions are met:

  1. There exists a set O_new of additional observations and the observation function γ′ maps the states only to the set O ∪ O_new of old observations O and newly added observations O_new, i.e., the observations are defined for all states.

  2. The function γ′ agrees on assigned observations with γ̃, i.e., for all states s and observations o ∈ O, we have γ̃(s)(o) > 0 iff γ′(s)(o) > 0.

Intuitively, given a POMDP with a reachability objective and a partially defined observation function γ̃, Problem 1 asks whether there exists a completion γ′ using at most k additional observations such that in the resulting POMDP there exists an almost-sure winning policy using at most m memory elements. More formally, we study:

Problem 1

Given a POMDP P with a reachability objective Reach(T), and two integer parameters k and m, decide whether there exists a completion γ′ of γ̃ using a set O_new of additional observations and an almost-sure winning policy σ = (M, σ_a, σ_u, m0) for the objective Reach(T) in the POMDP with observation function γ′, with |O_new| ≤ k and |M| ≤ m.

Example 1

Consider the POMDP depicted in Figure 1. There are three states corresponding to the position of the agent on the grid. The agent starts in the leftmost grid cell and tries to move to the rightmost grid cell, where a treasure is hidden. There are three deterministic actions available to the agent: move-left, move-right, and grab-treasure. When the action grab-treasure is played in the rightmost cell the agent wins; if it is played in any other cell the agent loses. The remaining two movement actions move the agent in the corresponding directions; if a wall is hit, the agent loses.

Figure 1: Grid POMDP. The agent starts in the leftmost cell (marked +) and the treasure is in the rightmost cell (marked g).
  • In the setting where m = 3 and k = 0, the problem is satisfiable by a policy that plays actions in the following sequence: move-right, move-right, and grab-treasure.

  • In the setting where m = 2 and k = 1, the problem is satisfiable by an observation function that assigns the rightmost grid cell an observation different from the two remaining grid cells. The policy plays action move-right in the first memory element until the observation corresponding to the rightmost cell is observed. After that it switches to the second memory element, where it plays action grab-treasure.

  • In the setting where m = 2 and k = 0, the problem is not satisfiable, i.e., there is no two-memory almost-sure winning policy if all the states have the same observation.
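The grid of Example 1 is small enough to re-create as a toy simulation; the cell indices and the win/lose encoding below are illustrative, not part of the paper's formalization.

```python
def step(cell, action):
    """Cells are 0 (leftmost), 1, 2 (rightmost, treasure); returns a cell, 'win', or 'lose'."""
    if action == "grab-treasure":
        return "win" if cell == 2 else "lose"
    if action == "move-right":
        return cell + 1 if cell < 2 else "lose"   # hitting the wall loses
    if action == "move-left":
        return cell - 1 if cell > 0 else "lose"

def run(plan, start=0):
    """Play a fixed sequence of actions from the start cell."""
    cell = start
    for action in plan:
        cell = step(cell, action)
        if cell in ("win", "lose"):
            return cell
    return cell
```

The fixed sequence move-right, move-right, grab-treasure corresponds to the three-memory policy of the first bullet.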

4 Complexity and SAT Encoding

In this section we consider properties of almost-sure winning policies, the complexity of Problem 1, and its encoding to SAT instances.

Complexity

Theorem 1

Deciding Problem 1 given constant parameters k and m is NP-complete.

Main ideas. We remark that Theorem 1 holds even if the parameters k and m are polynomial in the size of the POMDP.

  • Inclusion in NP. Note that for polynomial k and m, a guess of the observation completion and the policy (if they exist) is of polynomial size. Thus we have polynomial-sized witnesses. Given a policy and an observation function, we obtain a Markov chain, on which qualitative analysis takes polynomial time using standard discrete graph algorithms [19, 18, 17]. Hence inclusion in NP follows.

  • NP-hardness. An NP-hardness result was established for a similar problem, namely, for memoryless policies in fully specified two-player games with partial observation, in [20, Lemma 1]. The reduction constructs a game that is a DAG (directed acyclic graph), and replacing the adversarial player with a uniform distribution over choices shows that Problem 1 is NP-hard even with m = 1 (memoryless policies) and k = 0 (fully specified observation function).

SAT Encoding

In this section we present a SAT encoding for Problem 1, which generalizes the special case of a fully specified observation function studied in [14].

Standard Results. We now present two basic lemmas. The following lemma presents a standard result for qualitative analysis of POMDPs, and it basically follows from the fact that in a Markov chain for qualitative analysis, the exact probability distributions are not important, and the supports of the distributions completely characterize almost-sure winning.

Lemma 1

Given an almost-sure winning policy σ = (M, σ_a, σ_u, m0) for a reachability objective Reach(T), the policy σ′ = (M, σ′_a, σ′_u, m0), where the action selection function is defined as σ′_a(m) = U(Supp(σ_a(m))) for all m ∈ M, and the memory update function is defined as σ′_u(m, o, a) = U(Supp(σ_u(m, o, a))) for all m ∈ M, o ∈ O, and a ∈ A, is also an almost-sure winning policy for Reach(T).

Given a policy σ = (M, σ_a, σ_u, m0), a POMDP P, and two state-memory pairs (s, m) and (s′, m′), we define a predicate Path_ℓ((s, m), (s′, m′)) of length ℓ, which holds if there exists a sequence (s_0, m_0), …, (s_j, m_j) with j ≤ ℓ such that (s_0, m_0) = (s, m), (s_j, m_j) = (s′, m′), and for all 0 ≤ i < j there exists an action a and an observation o such that σ_a(m_i)(a) > 0, δ(s_i, a)(s_{i+1}) > 0, γ′(s_{i+1})(o) > 0, and σ_u(m_i, o, a)(m_{i+1}) > 0. The following lemma states that almost-sure winning policies are characterized by paths of bounded length to the goal state.

Lemma 2

A policy σ is almost-sure winning in a POMDP P iff for every reachable state-memory pair (s, m) the predicate Path_ℓ((s, m), (goal, m′)) holds for some memory element m′ and some ℓ ≤ |S| · |M|.

Lemma 2 allows us to characterize state-memory pairs that are almost-sure winning by encoding the predicate into a Boolean formula Φ that, for a sufficiently large parameter ℓ_max, e.g., ℓ_max = |S| · |M|, is satisfiable if and only if there exists a completion of the observation function using no more than k additional observations and an almost-sure winning policy with no more than m memory elements, i.e., the associated instance of Problem 1 is true. We define the set O_new of additional observations, and denote by O′ = O ⊎ O_new the disjoint union of the old observations in O and the newly added observations in O_new. We describe the CNF formula Φ by defining all of its Boolean variables, followed by the clausal constraints over those variables.

Boolean Variables. We first introduce the variables.

  • We begin by encoding the action selection function σ_a of the policy σ. We introduce a Boolean variable Act(m, a) for each memory element m and action a to represent that action a is played with positive probability in memory element m, i.e., that σ_a(m)(a) > 0 (see Lemma 1).

  • Next, we encode the memory update function σ_u. We introduce a Boolean variable Upd(m, m′, o, a) for each pair of memory elements m, m′, observation o, and action a. If such a variable is assigned True, it indicates that, if the current memory element is m, the current observation is o, and action a is played, then it is possible that the new memory element is m′, i.e., σ_u(m, o, a)(m′) > 0 (see Lemma 1).

  • We encode the completion γ′ of the partially defined observation function γ̃. We introduce a variable Obs(s, o) for every state s and observation o ∈ O′. The intuitive meaning is that the observation function completion γ′ assigns to state s the observation o with positive probability.

  • Boolean variables Reach(s, m) for each state s and memory element m indicate which (state, memory-element) pairs are reachable by the policy.

  • The variables Path(s, m, ℓ) for all s ∈ S, m ∈ M, and 0 ≤ ℓ ≤ ℓ_max correspond to the proposition that there is a path of length at most ℓ from (s, m) to the goal state that is compatible with the policy.

Logical Constraints. We introduce the following clause for each to ensure that at least one action is chosen with positive probability for each memory state (see Lemma 1):

To ensure that the memory update function is well-defined, we introduce the following clause for each , and (see Lemma 1):

To ensure that every state has at least one observation in the support of the observation function, we introduce the following clause for every state :

For every state and every , we enforce the consistency by adding the clause:

For every state , with observations fully defined, i.e., , for every additional observation we add the following clause:

The following clauses ensure that the variables will be assigned True for all pairs that are reachable using the policy:

Such a clause is defined for each pair of memory-states, each pair of states, each observation and each action such that .

Therefore, the fact that the initial state and initial memory element is reachable is enforced by adding the single clause

We introduce the following unit clause for each and , which says that the goal state with any memory element is reachable from the goal state and that memory element using a path of length at most :

Next, we define the following binary clause for each and so that, if the pair of a state and a memory element is reachable, then the existence of a path from to the goal state is enforced (see Lemma 2):

Finally, we use the following constraints to define the value of variables for all , , and in terms of the chosen policy (see Lemma 1 and definition of predicate , and . We use the standard Tseitin encoding to translate this formula to clauses. The conjunction of all clauses defined above forms the CNF formula .
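The bookkeeping behind such an encoding, mapping named propositions to DIMACS-style integer variables and collecting clauses, can be sketched as follows. The class and helper names are illustrative, not the paper's implementation.

```python
from itertools import count

class Encoder:
    def __init__(self):
        self._ids = count(1)      # DIMACS variables are positive integers
        self._index = {}
        self.clauses = []         # each clause is a list of signed literals

    def var(self, *key):
        """Map a named proposition, e.g. ('act', m, a), to a stable variable id."""
        if key not in self._index:
            self._index[key] = next(self._ids)
        return self._index[key]

def encode_action_selection(enc, memories, actions):
    # At least one action per memory element: OR over a of act(m, a).
    for m in memories:
        enc.clauses.append([enc.var("act", m, a) for a in actions])

def encode_observation_totality(enc, states, observations):
    # Every state has at least one observation: OR over o of obs(s, o).
    for s in states:
        enc.clauses.append([enc.var("obs", s, o) for o in observations])
```

The remaining constraint families are added in the same style, and the clause list can be written out in DIMACS CNF format for any off-the-shelf SAT solver.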

Theorem 2

The formula Φ for ℓ_max = |S| · |M| is satisfiable iff there exists a completion of the observation function using no more than k additional observations and an almost-sure winning policy using no more than m memory elements.

Proof [Proof sketch.] Satisfiable formula ⇒ completion and a policy: If the formula is satisfiable, the SAT solver outputs a satisfying valuation ν of the variables. The Boolean variables Obs(s, o) that are True according to ν encode the completion of the observation function, the variables Act(m, a) encode the action selection function σ_a, and the variables Upd(m, m′, o, a) encode the memory update function σ_u. The fact that the encoded policy is almost-sure winning follows from the clauses and Lemmas 1 and 2.

Completion and a policy ⇒ satisfiable formula: Given a completion of the observation function and an almost-sure winning policy, we show how to construct a satisfying valuation for the formula Φ. The completion of the observation function gives the valuation for the Obs variables, the action selection function for the Act variables, and the memory update function for the Upd variables. The valuation for the Reach and Path variables is obtained by constructing the resulting Markov chain and examining which state-memory pairs are reachable and the shortest paths to the goal state.

Partial Specification with Constraints

In the previous section we presented a SAT encoding for POMDPs with partially specified observation functions. In this section we discuss additional constraints that might be desirable; our encoding can be easily extended to handle them.

Non-distinguishable states. In many scenarios it might be the case that there are states that cannot be distinguished by any available sensors, i.e., the observations assigned to these states must necessarily be the same. This can be enforced by adding, for any pair s, s′ of non-distinguishable states and all observations o ∈ O′, the two clauses ¬Obs(s, o) ∨ Obs(s′, o) and Obs(s, o) ∨ ¬Obs(s′, o).

Distinguishable states. In some scenarios it might be the case that two states must not have the same observation. This can be enforced by adding, for any such pair of states s, s′ and all observations o ∈ O′, the clause ¬Obs(s, o) ∨ ¬Obs(s′, o).

Dependencies among observations. Various dependencies among observations can be expressed. For example, the requirement that in a state s, whenever an observation o1 is received with positive probability, also observation o2 is received with positive probability, can be expressed by the clause ¬Obs(s, o1) ∨ Obs(s, o2).

Adding sensor variables. Let P be a POMDP with the set of observations O and observation function γ. By adding a new sensor r that reports values from a finite set V, the new set of observations in the modified POMDP P′ is O × V, with observation function γ′. This corresponds to increasing the observation dimensionality, rather than increasing the cardinality. Our approach allows us to synthesize the observations in P′ as follows:

  • We set the observations of all states to be undefined.

  • We add constraints to the resulting formula as follows: in P′ an observation (o, v) for some v ∈ V is received with positive probability in state s, i.e., γ′(s)((o, v)) > 0, if and only if the observation o is received in state s in the original POMDP P with positive probability, i.e., γ(s)(o) > 0.

    1. For every state s and observation o with γ(s)(o) > 0 we add the following clause: ⋁_{v ∈ V} Obs(s, (o, v)).

    2. For every state s and observation o with γ(s)(o) = 0 we add, for every v ∈ V, the following constraint: ¬Obs(s, (o, v)).

Deterministic observation function. To require a deterministic observation function, for every state s and every pair of distinct observations o, o′ ∈ O′ we introduce the following clause: ¬Obs(s, o) ∨ ¬Obs(s, o′).

Remark 1

Deterministic observation functions are a special case of probabilistic observation functions. Therefore, the number of observations in the deterministic case is an upper bound for the probabilistic case. However, a probabilistic observation function might require fewer observations.
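The determinism constraint amounts to pairwise at-most-one clauses over each state's observation variables. A minimal sketch, where `obs_var` is an assumed mapping from the proposition "state s has observation o" to a positive variable id:

```python
from itertools import combinations

def at_most_one_observation(states, observations, obs_var):
    """For every state, forbid any two observations from both holding."""
    clauses = []
    for s in states:
        for o1, o2 in combinations(observations, 2):
            # Negative literals: not both obs(s, o1) and obs(s, o2).
            clauses.append([-obs_var(s, o1), -obs_var(s, o2)])
    return clauses
```

The pairwise encoding adds O(|O′|²) clauses per state; for large observation sets, commutative at-most-one encodings with auxiliary variables are a standard alternative.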

5 Experimental Results

In this section we present experimental results and evaluate our approach on several POMDP examples published in the literature. We have implemented the encoding presented in Section 4 in Python, and use the MiniSAT solver [25] on an Intel(R) Xeon(R) 3.50GHz CPU.

Remark 2

In our experimental results we consider the synthesis of deterministic observation functions. As mentioned in Remark 1, deterministic observation functions provide an upper bound on the number of observations required by probabilistic observation functions. Thus synthesizing deterministic observation functions with few observations is the more challenging problem, which we consider in order to illustrate the effectiveness of our approach.

We first present our results on a small, simple example to illustrate how various selections of the memory bound m and the additional-observation bound k affect the computed policies, and discuss the possible trade-offs between the memory and observation budgets in Problem 1.

Name       # States   m   k   Time (s)
Escape2          19   5   5      0.18
Escape3          84   5   5      1.22
Escape4         259   5   5      5.69
Escape5         628   5   5     19.31
Escape6        1299   5   5     52.65
Escape7        2404   5   5    131.77
Escape8        4099   5   5    280.19
Escape9        6564   5   5    674.42
Escape10      10003   5   5   1519.48
Table 1: Escape instances.
Name       # States   m   k   Time (s)
Hallway1         38   2   2      0.22
Hallway1         38   3   2      0.55
Hallway2        190   3   2      5.95
Hallway2        190   3   3      5.28
Hallway2        190   4   2     20.82
Hallway3        226   3   2      6.53
Hallway3        226   3   3      7.33
Hallway3        226   4   2     28.98
Table 2: Hallway instances.
Name          # States   m   k   Time (s)
RockSample4        351   2   2      2.43
RockSample5        909   2   2     18.14
RockSample6       2187   2   2     95.28
RockSample6       2187   2   3    165.87
RockSample6       2187   3   2    519.21
RockSample7       5049   2   2    565.49
RockSample7       5049   3   2    565.43
RockSample7       5049   3   3   5196.40
Table 3: RockSample instances.

Deterministic Hallway

We consider a simplification of the well-known Hallway problem [33], where an agent navigates a rectangular grid (see Figure 2a). There are four movement actions available to the agent, one per direction. For simplicity, all movement on the grid is deterministic (probabilistic movement is considered in the Hallway problem later in the scalability evaluation). There are multiple initial states (depicted as + in Figure 2a) and the agent starts in any of them with uniform probability. The objective of the agent is to reach any of the goal states (depicted as g in Figure 2a). Whenever the agent hits a wall or enters a trap state (depicted as x in Figure 2a), an absorbing state is reached, from which it is no longer possible to reach the goal states. We consider that there are no observations defined in the POMDP, i.e., for all states s we have γ̃(s)(⊥) = 1.

Figure 2: (a) Deterministic Hallway POMDP (+ marks the initial states, g the goal states, and x the trap state). (b) Synthesized γ′ for Hallway with m = 4 and k = 2. (c) Synthesized γ′ for Hallway with m = 3 and k = 3.

4 memory elements and 2 observations. In the setting where m = 4 and k = 2, the SAT solver reports that there exists a completion γ′ of γ̃ and an almost-sure winning policy σ. We depict the synthesized observation function in Figure 2b, where the red color corresponds to the newly synthesized observation o1 and the green color to the newly synthesized observation o2. The synthesized policy uses four memory elements. The computed policy initially updates its memory element depending on whether the first observation is o1 (red) or o2 (green), i.e., the information whether the agent starts in the left or the right start state is stored in the memory element. Then the downward movement action is played until the bottom row is reached (this is detected by changed observations, from o1 to o2 in the left part, and from o2 to o1 in the right part). Finally, in case the agent is in the left part, the memory element is updated once more and the goal state is reached by the corresponding movement action; similarly, in the right part, the memory element is updated and the goal state is reached by the corresponding movement action.

3 memory elements and 3 observations. In the setting where m = 3 and k = 3, the SAT solver reports that there exists a completion γ′ of γ̃ and an almost-sure winning policy σ. This allows us to reduce the number of memory elements needed, provided we add one more observation to the POMDP. We depict the synthesized observation function in Figure 2c, where the red color corresponds to the new synthesized observation o1, the green color to the new observation o2, and the blue color to the new observation o3. The synthesized policy uses three memory elements. The computed policy starts with the initial memory element and plays movement actions until either observation o1 or o3 is received. In case o1 (red) is observed, the policy switches to the second memory element and reaches the goal state with the corresponding movement action. In case o3 (blue) is observed, the policy switches to the third memory element and reaches the goal state with the corresponding movement action.

2 memory elements and 2 observations. In the setting where m = 2 and k = 2, the SAT solver reports that there does not exist a completion of γ̃ that would allow for a two-memory almost-sure winning policy. This follows from the fact that at least three memory elements are necessary to cover the movement actions the agent must play, i.e., with the restriction m = 2, there are no memory elements left to store additional information. As the agent needs to avoid hitting the walls, a randomized action selection function cannot be used. It follows that there can be at most one memory element in which the downward movement action is played, and with only two observations and one memory element for that action, it is not possible to detect that the agent is already in the bottom row.

Scalability Evaluation

In this part we demonstrate the scalability of our approach on three well-known POMDP examples of varying sizes. Our results show that in all cases the observations considered in these examples from the literature are unnecessarily refined and significantly less precise observations suffice even without making the policies more complicated.

Escape POMDPs. The problem is originally based on a case study published in [44], where the goal is to compute a policy to control a robot in an uncertain environment. A robot navigates on a square grid. There is an agent moving over the grid, and the robot must forever avoid being captured by the agent. The robot has four actions: move north, south, east, or west. These actions have deterministic effects, i.e., they always succeed. The original POMDP instance contains a substantially larger number of distinct observations.

Figure 3: Memory (m) vs. observation (k) trade-off for Escape2.

The memory and observation trade-offs for the smallest instance, Escape2, are depicted in Figure 3: an almost-sure winning policy already exists for small values of m and k, and increasing the memory bound m allows the number of observations k to be decreased further, and vice versa. We illustrate the scalability results in Table 1, where we report the number of states, the parameters m and k, and the running time of the SAT solver. In all the cases, with m = 5 and k = 5 there exists an almost-sure winning policy, and the sizes of the instances go up to 10003 states.

Hallway POMDPs. The Hallway POMDP instances are inspired by the Hallway problem introduced in [33] and used later in [43, 42, 9, 12]. In the Hallway POMDPs, a robot navigates a rectangular grid. The grid has barriers through which the robot cannot move, as well as trap locations that destroy the robot. The robot must reach a specified goal location. The robot has three actions: move forward, turn left, and turn right. The actions may all fail with positive probability, in which case the robot's state remains unchanged. The state is therefore comprised of the robot's location in the grid and its orientation. Initially, the robot is randomly positioned among multiple start locations. The original POMDP instances contain a substantially larger number of distinct observations. The results are reported in Table 2, where we consider three different Hallway instances and vary the parameter m for the number of memory elements and the parameter k for the number of additional observations. For every entry we report the number of states, the parameters m and k, and the running time of the SAT solver. The results show that in all cases two observations are sufficient. In the smallest instance, memory size 2 is sufficient; for the larger instances, the memory size needs to be increased to 3.

RockSample POMDPs. We consider a variant of the RockSample problem introduced in [42] and used later in [9, 12]. The RockSample instances model rover science exploration. Only some of the rocks have scientific value, and we call these rocks “good”. Whenever a bad rock is sampled, the rover is destroyed and a losing absorbing state is reached. If a rock is sampled for the second time, then with a fixed probability the action has no effect; with the remaining probability the sample is destroyed and the rock needs to be sampled once more. An instance of the RockSample problem is parametrized by the grid size and the number of rocks placed on the grid. The goal of the rover is to obtain two samples of good rocks. The results are presented in Table 3. The original POMDP instances distinguish a number of different observations. The results show that, as the sizes of the POMDP instances increase, either increasing the memory size or increasing the number of additional observations suffices to obtain an almost-sure winning policy. The largest instance yields a SAT formula with a correspondingly large number of clauses.
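The sampling rule of this RockSample variant can be summarized in a short sketch. The outcome labels and the concrete no-effect probability are illustrative assumptions; the paper fixes a specific probability that is not reproduced here.

```python
# Hedged sketch of the RockSample sampling rule described above.
# P_NO_EFFECT is an assumed value, not the paper's.

P_NO_EFFECT = 0.5

def sample(rock_is_good, already_sampled):
    """Return (outcome, probability) pairs for a sample action."""
    if not rock_is_good:
        # Sampling a bad rock destroys the rover: losing absorbing state.
        return [("destroyed", 1.0)]
    if already_sampled:
        # Second sample: no effect with fixed probability, otherwise the
        # sample is destroyed and the rock must be sampled again.
        return [("no_effect", P_NO_EFFECT),
                ("resample_needed", 1.0 - P_NO_EFFECT)]
    return [("sampled", 1.0)]

print(sample(rock_is_good=True, already_sampled=True))
```

The rover's reachability objective (obtaining two good samples) is then expressed over sequences of these outcomes.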

6 Conclusion

In this work we consider POMDPs with partially specified observations and the problem of synthesizing additional observations along with small-memory almost-sure winning policies. Interesting directions for future work are to consider (a) other aspects of partial specifications (such as transitions), and (b) other objectives, such as discounted-sum.

References

  • [1] A. Albore, H. Palacios, and H. Geffner. A translation-based approach to contingent planning. In IJCAI, pages 1623–1628, 2009.
  • [2] C. Amato, D.S. Bernstein, and S. Zilberstein. Optimizing fixed-size stochastic controllers for POMDPs and decentralized POMDPs. AAMAS, 21(3):293–320, 2010.
  • [3] C. Baier, M. Größer, and N. Bertrand. Probabilistic omega-automata. J. ACM, 59(1), 2012.
  • [4] P. Bertoli, A. Cimatti, and M. Pistore. Strong cyclic planning under partial observability. ICAPS, 141:580, 2006.
  • [5] D.P. Bertsekas. Dynamic Programming and Optimal Control. Athena Scientific, 1995. Volumes I and II.
  • [6] A. Biere. Lingeling, plingeling and treengeling entering the SAT competition 2013. In SAT Comp., 2013.
  • [7] A. Biere, A. Cimatti, E.M. Clarke, M. Fujita, and Y. Zhu. Symbolic model-checking using SAT procedures instead of BDDs. In DAC, pages 317–320, 1999.
  • [8] P. Billingsley, editor. Probability and Measure. Wiley-Interscience, 1995.
  • [9] B. Bonet and H. Geffner. Solving POMDPs: RTDP-Bel vs. point-based algorithms. In IJCAI, pages 1641–1646, 2009.
  • [10] A. Censi. A Mathematical Theory of Co-Design. arXiv preprint arXiv:1512.08055, 2015.
  • [11] P. Cerný, K. Chatterjee, T. A. Henzinger, A. Radhakrishna, and R. Singh. Quantitative synthesis for concurrent programs. In Proc. of CAV, LNCS 6806, pages 243–259. Springer, 2011.
  • [12] K. Chatterjee, M. Chmelik, R. Gupta, and A. Kanodia. Qualitative Analysis of POMDPs with Temporal Logic Specifications for Robotics Applications. ICRA, 2015.
  • [13] K. Chatterjee, M. Chmelik, R. Gupta, and A. Kanodia. Optimal Cost Almost-sure Reachability in POMDPs. In AI, 2016.
  • [14] K. Chatterjee, M. Chmelik, and J. Davies. A Symbolic SAT-based Algorithm for Almost-sure Reachability with Small Strategies in POMDPs. CoRR, abs/1511.08456 (AAAI 2016), 2015.
  • [15] K. Chatterjee, L. Doyen, and T. A. Henzinger. Qualitative analysis of partially-observable Markov decision processes. In MFCS, pages 258–269, 2010.
  • [16] K. Chatterjee, L. Doyen, T.A. Henzinger, and J.F. Raskin. Algorithms for omega-regular games with imperfect information. In CSL’06, pages 287–302. LNCS 4207, Springer, 2006.
  • [17] K. Chatterjee and M. Henzinger. Faster and dynamic algorithms for maximal end-component decomposition and related graph problems in probabilistic verification. In SODA. ACM-SIAM, 2011.
  • [18] K. Chatterjee and M. Henzinger. Efficient and dynamic algorithms for alternating Büchi games and maximal end-component decomposition. J. ACM, 61(3):15, 2014.
  • [19] K. Chatterjee, M. Jurdziński, and T.A. Henzinger. Simple stochastic parity games. In CSL’03, LNCS 2803, pages 100–113. Springer, 2003.
  • [20] K. Chatterjee, A. Kößler, and U. Schmid. Automated analysis of real-time scheduling using graph games. In HSCC’13, pages 163–172, 2013.
  • [21] A. Cimatti, M. Pistore, M. Roveri, and P. Traverso. Weak, strong, and strong cyclic planning via symbolic model checking. Artificial Intelligence, 147(1):35–84, 2003.
  • [22] K. Culik and J. Kari. Digital images and formal languages. Handbook of formal languages, pages 599–616, 1997.
  • [23] R. Durbin, S. Eddy, A. Krogh, and G. Mitchison. Biological sequence analysis: probabilistic models of proteins and nucleic acids. Cambridge Univ. Press, 1998.
  • [24] R. Durrett. Probability: Theory and Examples (Second Edition). Duxbury Press, 1996.
  • [25] N. Eén and N. Sörensson. An extensible SAT-solver. In Theory and Applications of Satisfiability Testing, pages 502–518, 2003.
  • [26] J. Filar and K. Vrieze. Competitive Markov Decision Processes. Springer-Verlag, 1997.
  • [27] H. Howard. Dynamic Programming and Markov Processes. MIT Press, 1960.
  • [28] L. P. Kaelbling, M. L. Littman, and A. R. Cassandra. Planning and acting in partially observable stochastic domains. Artif. Intell., 101(1):99–134, 1998.
  • [29] L. P. Kaelbling, M. L. Littman, and A. W. Moore. Reinforcement learning: A survey. JAIR, 4:237–285, 1996.
  • [30] A. Kolobov, Mausam, and D.S. Weld. A theory of goal-oriented MDPs with dead ends. In UAI, pages 438–447, 2012.
  • [31] A. Kolobov, Mausam, D.S. Weld, and H. Geffner. Heuristic search for generalized stochastic shortest path MDPs. In ICAPS, 2011.
  • [32] M. L. Littman. Algorithms for Sequential Decision Making. PhD thesis, Brown University, 1996.
  • [33] M. L. Littman, A. R. Cassandra, and L. P. Kaelbling. Learning policies for partially observable environments: Scaling up. In ICML, pages 362–370, 1995.
  • [34] O. Madani, S. Hanks, and A. Condon. On the undecidability of probabilistic planning and related stochastic optimization problems. Artif. Intell., 147(1-2):5–34, 2003.
  • [35] S. Maliah, R. Brafman, E. Karpas, and G. Shani. Partially observable online contingent planning using landmark heuristics. In ICAPS, 2014.
  • [36] A. Mehta, J. DelPreto, and D. Rus. Integrated codesign of printable robots. Journal of Mechanisms and Robotics, 7(2):021015, 2015.
  • [37] M. Mohri. Finite-state transducers in language and speech processing. Comp. Linguistics, 23(2):269–311, 1997.
  • [38] C. H. Papadimitriou and J. N. Tsitsiklis. The complexity of Markov decision processes. Mathematics of Operations Research, 12:441–450, 1987.
  • [39] A. Paz. Introduction to probabilistic automata (Computer science and applied mathematics). Academic Press, 1971.
  • [40] M. L. Puterman. Markov Decision Processes. John Wiley and Sons, 1994.
  • [41] J. Rintanen. Planning with SAT, admissible heuristics and A*. In IJCAI, pages 2015–2020, 2011.
  • [42] T. Smith and R. Simmons. Heuristic search value iteration for POMDPs. In UAI, pages 520–527. AUAI Press, 2004.
  • [43] M.T.J. Spaan. A point-based POMDP algorithm for robot planning. In ICRA, volume 3, pages 2399–2404. IEEE, 2004.
  • [44] M. Svorenova, M. Chmelik, K. Leahy, H. F. Eniser, K. Chatterjee, I. Cerna, and C. Belta. Temporal Logic Motion Planning using POMDPs with Parity Objectives. In HSCC, 2015.