I Introduction
Partially observable Markov decision processes (POMDPs) are a world model commonly used in artificial intelligence
kaelbling98 ; pineau03 ; russell03 ; spaan05 ; smith04 . POMDPs model an agent acting in a world of discrete states. The world is always in exactly one state, but the agent is not told this state. Instead, it can take actions and receive observations about the world. The actions an agent takes are nondeterministic; before taking an action, the agent knows only the probability distribution of its next state given the current state. Similarly, an observation does not give the agent direct knowledge of the current world state, but the agent knows the probability of receiving a given observation in each possible state. The agent is rewarded for the actual, unknown world state at each time step, but, although it knows the reward model, it is not told the reward it received. POMDPs are often used to model robots, because robot sensors and actuators give them a very limited understanding of their environment.

As we will discuss further in Section II, an agent can maximize future expected reward in a POMDP by maintaining a probability distribution, known as a belief state, over the world's current state. By carefully updating this belief state after every action and observation, the agent can ensure that its belief state reflects the correct probability that the world is in each possible state. The agent can make decisions using only its belief about the state without ever needing to reason more directly about the actual world state.
In this paper, we introduce and study “quantum observable Markov decision processes” (QOMDPs). A QOMDP is similar in spirit to a POMDP, but allows the belief state to be a quantum state (superposition or mixed state) rather than a simple probability distribution. We represent the action and observation process jointly as a superoperator. POMDPs are then just the special case of QOMDPs where the quantum state is always diagonal in some fixed basis.
Although QOMDPs are the quantum analogue of POMDPs, they have different computability properties. Our main result in this paper is that there exists a decision problem (namely, goal-state reachability) that is computable for POMDPs but uncomputable for QOMDPs.
One motivation for studying QOMDPs is simply that they are the natural quantum generalizations of POMDPs, which are central objects of study in AI. Moreover, as we show here, QOMDPs have different computability properties than POMDPs, so the generalization is not an empty one. Beyond this conceptual motivation, though, QOMDPs might also find applications in quantum control and quantum fault-tolerance. For example, the general problem of controlling a noisy quantum system, given a discrete "library" of noisy gates and measurements, in order to manipulate the system to a desired end state, can be formulated as a QOMDP. Indeed, the very fact that POMDPs have turned out to be such a useful abstraction for modeling classical robots suggests that QOMDPs would likewise be useful for modeling control systems that operate at the quantum scale. At any rate, this seems like sufficient reason to investigate the complexity and computability properties of QOMDPs, yet we know of no previous work in that direction. This paper represents a first step.
Let us mention that soon after an earlier version of this paper was submitted here and posted on arXiv barry , we were provided a manuscript by another group engaged in simultaneous work, Ying and Ying ying . They considered quantum Markov decision processes (MDPs), and proved undecidability results for them that are very closely related to ours. In particular, these authors show that the finite-horizon reachability problem for quantum MDPs is undecidable, and they also do so via a reduction from the matrix mortality problem. Ying and Ying also prove hardness and uncomputability results for the infinite-horizon case (depending on the precise reachability criterion considered). On the other hand, they give an algorithm that decides, given a quantum MDP and an invariant subspace, whether or not there exists a policy that reaches that subspace with probability 1 regardless of the initial state; and they prove several other results about invariant subspaces in MDPs. These results nicely extend and complement ours as well as previous work by the same group ying01 .
One possible advantage of the present work is that, rather than considering (fully observable) MDPs, we consider POMDPs. The latter seem to us a more natural starting point than MDPs for a quantum treatment, because there is never "full observability" in quantum mechanics. Many results, including the undecidability results mentioned above, can be translated between the MDP and POMDP settings by the simple expedient of considering 'memoryful' MDP policies: that is, policies that remember the initial state, as well as all actions performed so far and all measurement outcomes obtained. Such knowledge is tantamount to knowing the system's current quantum state. However, because we consider POMDPs, whose policies by definition can take actions that depend on the current state, we never even need to deal with the issue of memory. A second advantage of this work is that we explicitly compare the quantum against the classical case (something not done in ying ), showing why the same problem is undecidable in the former case but decidable in the latter.
II Partially Observable Markov Decision Processes (POMDPs)
For completeness, in this section we give an overview of Markov decision processes and POMDPs.
II.1 Fully Observable Case
We begin by defining fully observable Markov decision processes (MDPs). This will facilitate our discussion of POMDPs because POMDPs can be reduced to continuous-state MDPs. For more details, see Russell and Norvig, Chapter 17 russell03 .
A Markov decision process (MDP) is a model of an agent acting in an uncertain but observable world. An MDP is a tuple ⟨S, A, T, R, γ⟩ consisting of a set of states S, a set of actions A, a state transition function T(s, a, s′) giving the probability that taking action a in state s results in state s′, a reward function R(s, a) giving the reward of taking action a in state s, and a discount factor γ that discounts the importance of reward gained later in time. At each time step, the world is in exactly one, known state, and the agent chooses to take a single action, which transitions the world to a new state according to T. The objective is for the agent to act in such a way as to maximize future expected reward.
The solution to an MDP is a policy. A policy π is a function mapping states at time t to actions. The value of a policy π at state s over horizon h is the future expected reward of acting according to π for h time steps:

(1) V_h^π(s) = R(s, π_h(s)) + γ Σ_{s′ ∈ S} T(s, π_h(s), s′) V_{h−1}^π(s′), with V_0^π(s) = 0.
The solution to an MDP of horizon h is the optimal policy π* that maximizes future expected reward over horizon h. The associated decision problem is the policy existence problem:

Definition 1 (Policy Existence Problem): The policy existence problem is to decide, given a decision process D, a starting state s_0, horizon h, and value V, whether there is a policy of horizon h that achieves value at least V for s_0 in D.
For MDPs, we will evaluate the infinite horizon case. In this case, we drop the time argument from the policy since it does not matter: the optimal policy at time infinity is the same as the optimal policy at time infinity minus one. The optimal policy over an infinite horizon is the one inducing the value function

(2) V(s) = max_{a ∈ A} [ R(s, a) + γ Σ_{s′ ∈ S} T(s, a, s′) V(s′) ].

Equation 2 is called the Bellman equation, and it has a unique solution V russell03 . Note that V(s) is finite if γ < 1. When the input size is polynomial in |S| and |A|, finding an optimal policy for an MDP can be done in polynomial time russell03 .
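Equation 2 can be solved numerically by fixed-point iteration (value iteration). The sketch below is a minimal illustration, not code from the paper; the array conventions T[a, s, s′] and R[s, a] are our own assumed layout:

```python
import numpy as np

def value_iteration(T, R, gamma, tol=1e-10):
    """Iterate V(s) <- max_a [ R(s,a) + gamma * sum_s' T[a,s,s'] V(s') ]
    until convergence; returns the value function and a greedy policy."""
    n_actions, n_states, _ = T.shape
    V = np.zeros(n_states)
    while True:
        # Q[s, a] = R[s, a] + gamma * expected value of the next state
        Q = R + gamma * np.einsum('ast,t->sa', T, V)
        V_new = Q.max(axis=1)
        if np.max(np.abs(V_new - V)) < tol:
            return V_new, Q.argmax(axis=1)
        V = V_new
```

On a two-state toy MDP where action 0 stays put, action 1 swaps states, and reward 1 is earned only in state 1, this converges to V = (1, 2) for γ = 1/2, matching the Bellman fixed point computed by hand.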
A derivative of the MDP of interest to us is the goal MDP. A goal MDP is a tuple ⟨S, A, T, g⟩ where S, A, and T are as before and g ∈ S is an absorbing goal state, so T(g, a, g) = 1 for all a ∈ A. The objective in a goal MDP is to find the policy that reaches the goal with the highest probability. The associated decision problem is the Goal-State Reachability Problem:

Definition 2 (Goal-State Reachability Problem for Decision Processes): The goal-state reachability problem is to decide, given a goal decision process D and starting state s_0, whether there exists a policy that can reach the goal state from s_0 in a finite number of steps with probability 1.
When solving goal decision processes, we never need to consider timedependent policies because nothing changes with the passing of time. Therefore, when analyzing the goalstate reachability problem, we will only consider stationary policies that depend solely upon the current state.
II.2 Partially Observable Case
A partially observable Markov decision process (POMDP) generalizes an MDP to the case where the world is not fully observable. We follow the work of Kaelbling et al. kaelbling98 in explaining POMDPs.
In a partially observable world, the agent does not know the state of the world but receives information about it in the form of observations. Formally, a POMDP is a tuple ⟨S, A, Ω, T, R, O, b_0, γ⟩ where S is a set of states, A is a set of actions, Ω is a set of observations, T(s, a, s′) is the probability of transitioning to state s′ given that action a was taken in state s, R(s, a) is the reward for taking action a in state s, O(a, s′, o) is the probability of making observation o given that action a was taken and ended in state s′, b_0 is a probability distribution over possible initial states, and γ is the discount factor.
In a POMDP the world state is "hidden": the agent does not know the world state, but the dynamics of the world are governed by the actual underlying state. At each time step, the agent chooses an action, the world transitions to a new state according to its current, hidden state and T, and the agent receives an observation according to the world state after the transition and O. As with MDPs, the goal is to maximize future expected reward.
POMDPs induce a belief MDP. A belief state b is a probability distribution over possible world states. For s ∈ S, b(s) is the probability that the world is in state s. Since b is a probability distribution, b(s) ≥ 0 for all s and Σ_{s ∈ S} b(s) = 1. If the agent has belief state b, takes action a, and receives observation o, the agent's new belief state is

(3) b′(s′) = O(a, s′, o) Σ_{s ∈ S} T(s, a, s′) b(s) / Pr(o | a, b).

This is the belief update equation. The normalizer Pr(o | a, b) is independent of s′ and usually just computed afterwards as the factor that causes b′ to sum to 1. We define the matrix τ_{a,o} by

(4) [τ_{a,o}]_{s′,s} = O(a, s′, o) T(s, a, s′).

The belief update for seeing observation o after taking action a is then

(5) b′_{a,o} = τ_{a,o} b / |τ_{a,o} b|_1,

where |·|_1 is the L_1 norm. The probability of transitioning from belief state b to belief state b′ when taking action a is

(6) Pr(b′ | b, a) = Σ_{o ∈ Ω : b′_{a,o} = b′} |τ_{a,o} b|_1,

where |τ_{a,o} b|_1 = Pr(o | a, b).
The expected reward of taking action a in belief state b is

(7) R(b, a) = Σ_{s ∈ S} b(s) R(s, a).
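Equations 3 through 7 can be exercised directly in a few lines. The sketch below is ours, not the paper's; the index orders T[a, s, s′] and O[a, s′, o] are assumed conventions chosen for illustration:

```python
import numpy as np

def tau(T, O, a, o):
    """Eq. 4: [tau_{a,o}]_{s',s} = O(a, s', o) * T(s, a, s')."""
    return O[a, :, o][:, None] * T[a].T

def belief_update(b, T, O, a, o):
    """Eq. 5: b' = tau_{a,o} b / |tau_{a,o} b|_1.
    The normalizer is exactly Pr(o | a, b) from Eq. 3."""
    v = tau(T, O, a, o) @ b
    p = float(v.sum())
    return (v / p if p > 0 else v), p

def expected_reward(b, R, a):
    """Eq. 7: expected immediate reward of action a under belief b, with R[s, a]."""
    return float(b @ R[:, a])
```

A quick sanity check: summing the returned Pr(o | a, b) over all observations o should give 1, since some observation always occurs.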
Now the agent always knows its belief state, so the belief space is fully observable. This means we can define the belief MDP ⟨B, A, Pr(b′ | b, a), R(b, a), γ⟩ where B is the set of all possible belief states. The optimal solution to this MDP is also the optimal solution to the POMDP. The problem is that the state space of the belief MDP is continuous, and all known algorithms for solving MDPs optimally in polynomial time are polynomial in the size of the state space. It was shown in 1987 that the policy existence problem for POMDPs is PSPACE-hard papadimitriou87 . If the horizon is polynomial in the size of the input, the policy existence problem is in PSPACE kaelbling98 . The policy existence problem for POMDPs in the infinite horizon case, however, is undecidable madani99 .
A goal POMDP is a tuple ⟨S, A, Ω, T, O, g, b_0⟩ where S, A, Ω, T, and O are defined as before, but instead of a reward function we assume that g ∈ S is a goal state. This state is absorbing, so we are promised that T(g, a, g) = 1 for all a ∈ A. Moreover, the agent receives an observation o_g telling it that it has reached the goal, so O(a, g, o_g) = 1 for all a ∈ A. This observation is only received in the goal state, so O(a, s′, o_g) = 0 for all s′ ≠ g and all a ∈ A. The solution to a goal POMDP is a policy that reaches the goal state with the highest possible probability starting from b_0.
We will show that, because the goal is absorbing and known, the observable belief space corresponding to a goal POMDP is a goal MDP whose goal state is b_g, the belief state in which the agent knows it is in g with probability 1. We show that this state is absorbing. Firstly, the probability of observing o_g after taking action a in b_g is

Pr(o_g | a, b_g) = Σ_{s′ ∈ S} O(a, s′, o_g) Σ_{s ∈ S} T(s, a, s′) b_g(s) = O(a, g, o_g) T(g, a, g) = 1.

Therefore, if the agent has belief b_g, regardless of the action taken, the agent sees observation o_g. Assume the agent takes action a and sees observation o_g. The next belief state is

b′(s′) = O(a, s′, o_g) Σ_{s ∈ S} T(s, a, s′) b_g(s) = O(a, s′, o_g) T(g, a, s′),

which is 1 for s′ = g and 0 otherwise. Therefore, regardless of the action taken, the next belief state is b_g, so this is a goal MDP.
III Quantum Observable Markov Decision Processes (QOMDPs)
A quantum observable Markov decision process (QOMDP) generalizes a POMDP by using quantum states rather than belief states. In a QOMDP, an agent can apply a set of possible operations to a d-dimensional quantum system. The operations each have a fixed number of possible outcomes. At each time step, the agent receives an observation corresponding to the outcome of the previous operation and can choose another operation to apply. The reward the agent receives is the expected value of some operator in the system's current quantum state.
III.1 QOMDP Formulation
A QOMDP uses superoperators to express both actions and observations. A quantum superoperator O acting on states of dimension d is defined by a set of Kraus matrices.^1 A set {K_1, ..., K_κ} of d × d matrices is a set of Kraus matrices if and only if

(8) Σ_{i=1}^{κ} K_i† K_i = I.

^1 Strictly speaking, the operator acts on a product space of which the first factor has dimension d. In order to create quantum states of dimension d probabilistically, the superoperator entangles the possible next states with a measurement register and then measures that register. Thus the operator actually acts on the higher-dimensional product space, but for the purposes of this discussion, we can treat it as an operator that probabilistically maps states of dimension d to states of dimension d.
If O operates on a density matrix ρ, there are κ possible next states for the system. Specifically, the i-th possible next state is

(9) ρ_i = K_i ρ K_i† / tr(K_i ρ K_i†),

which occurs with probability

(10) p_i = tr(K_i ρ K_i†).

The superoperator returns observation i if the i-th Kraus matrix was applied.
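Equations 9 and 10 amount to a few lines of linear algebra. A minimal sketch (the function name and example are our own, not from the paper):

```python
import numpy as np

def apply_superoperator(rho, kraus, i):
    """Outcome i of a superoperator with Kraus matrices `kraus`:
    probability p_i = tr(K_i rho K_i^dag)   (Eq. 10)
    next state  K_i rho K_i^dag / p_i       (Eq. 9)."""
    K = kraus[i]
    unnorm = K @ rho @ K.conj().T
    p = float(np.real(np.trace(unnorm)))
    return (unnorm / p if p > 1e-12 else unnorm), p

# Example: computational-basis measurement applied to |+><+|.
P0 = np.array([[1., 0.], [0., 0.]], dtype=complex)
P1 = np.array([[0., 0.], [0., 1.]], dtype=complex)
plus = 0.5 * np.ones((2, 2), dtype=complex)   # density matrix of |+>
state0, p0 = apply_superoperator(plus, [P0, P1], 0)
```

Here outcome 0 occurs with probability 1/2 and collapses the state to |0⟩⟨0|, as Equations 9 and 10 predict.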
We can now define the quantum observable Markov decision process (QOMDP).

Definition 3 (QOMDP): A QOMDP is a tuple ⟨H, Ω, A, R, γ, ρ_0⟩ where

H is a Hilbert space of dimension d. We allow pure and mixed quantum states, so we represent states in H as density matrices.

Ω = {o_1, ..., o_κ} is a set of possible observations.

A = {A_1, ..., A_{|A|}} is a set of superoperators. Each superoperator A_a has κ Kraus matrices {K_1^a, ..., K_κ^a}. Note that each superoperator returns the same set of possible observations; if this is not true in reality, some of the Kraus matrices may be the all-zeroes matrix. The return of o_i indicates the application of the i-th Kraus matrix, so taking action a in state ρ returns observation o_i with probability

(11) Pr(o_i | ρ, a) = tr(K_i^a ρ K_i^{a†}).

If o_i is observed after taking action a in state ρ, the next state is

(12) N(ρ, a, o_i) = K_i^a ρ K_i^{a†} / tr(K_i^a ρ K_i^{a†}).
R = {R_1, ..., R_{|A|}} is a set of reward operators. The reward associated with taking action a in state ρ is the expected value of operator R_a on ρ,

(13) R(ρ, a) = tr(R_a ρ).
γ is a discount factor.

ρ_0 ∈ H is the starting state.
A QOMDP is fully observable in the same sense that the belief-state MDP for a POMDP is fully observable. Just as the agent in a POMDP always knows its belief state, the agent in a QOMDP always knows the current quantum superposition or mixed state of the system. In a POMDP, the agent can update its belief state when it takes an action and receives an observation using equation 5. Similarly, in a QOMDP, the agent can keep track of the quantum state using equation 12 each time it takes an action and receives an observation. Note that a QOMDP is much more analogous to the belief-state MDP of a POMDP than to the POMDP itself. In a POMDP, the system is always in one actual underlying world state that is simply unknown to the agent; in a QOMDP, the system can be in a superposition state for which no underlying "real" state exists.
As with MDPs, a policy for a QOMDP is a function mapping states at time t to actions. The value of the policy π over horizon h starting from state ρ is defined recursively. Let π_h be the policy at time h. Then

(14) V_h^π(ρ) = R(ρ, π_h(ρ)) + γ Σ_{i=1}^{κ} Pr(o_i | ρ, π_h(ρ)) V_{h−1}^π(N(ρ, π_h(ρ), o_i)),

where Pr(o_i | ρ, a), N(ρ, a, o_i), and R(ρ, a) are defined by equations 11, 12, and 13 respectively, and V_0^π(ρ) = 0. The Bellman equation (equation 2) still holds using these definitions.
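The recursion in Equation 14 can be evaluated directly by expanding the tree of measurement outcomes. The sketch below is our own illustration (exact but exponential in the horizon, so only suitable for small examples); the argument conventions are assumptions:

```python
import numpy as np

def qomdp_value(rho, policy, actions, rewards, gamma, h):
    """Finite-horizon value (Eq. 14) of following `policy` for h steps.
    `policy` maps a density matrix to an action index; `actions[a]` is the
    list of Kraus matrices of superoperator a; `rewards[a]` is its reward
    operator.  Recurses over every observation branch with nonzero probability."""
    if h == 0:
        return 0.0
    a = policy(rho)
    value = float(np.real(np.trace(rewards[a] @ rho)))     # Eq. 13
    for K in actions[a]:
        unnorm = K @ rho @ K.conj().T
        p = float(np.real(np.trace(unnorm)))               # Eq. 11
        if p > 1e-12:
            # Eq. 12: renormalized post-observation state
            value += gamma * p * qomdp_value(unnorm / p, policy, actions,
                                             rewards, gamma, h - 1)
    return value
```

As a sanity check, with a single identity Kraus matrix and the identity as reward operator, the value over horizon h is the geometric sum 1 + γ + ... + γ^{h−1}.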
A goal QOMDP is a tuple ⟨H, Ω, A, ρ_0, ρ_g⟩ where H, Ω, A, and ρ_0 are as defined above. The goal state ρ_g must be absorbing, so that for all actions a ∈ A and all Kraus matrices K_i^a of A_a, if tr(K_i^a ρ_g K_i^{a†}) > 0 then

K_i^a ρ_g K_i^{a†} / tr(K_i^a ρ_g K_i^{a†}) = ρ_g.

As with goal MDPs and POMDPs, the objective for a goal QOMDP is to maximize the probability of reaching the goal state.
III.2 QOMDP Policy Existence Complexity
As we can always simulate classical evolution with a quantum system, the definition of QOMDPs contains POMDPs. Therefore we immediately find that the policy existence problem for QOMDPs in the infinite horizon case is undecidable. We also find that the polynomial horizon case is PSPACE-hard. We can, in fact, prove that the polynomial horizon case is in PSPACE.
Theorem 1: The policy existence problem (Definition 1) for QOMDPs with a polynomial horizon is in PSPACE.
Proof: Papadimitriou and Tsitsiklis papadimitriou87 showed that polynomial horizon POMDPs are in PSPACE, and the proof still holds for QOMDPs with the appropriate substitutions for the probability of an observation given a quantum state and action [Eq. 11], the next state [Eq. 12], and the reward [Eq. 13], all of which can clearly be computed in polynomial space when the horizon is polynomial.
IV A Computability Separation in Goal-State Reachability
Although the policy existence problem has the same complexity for QOMDPs and POMDPs, we can show that the goal-state reachability problem (Definition 2) is decidable for goal POMDPs but undecidable for goal QOMDPs.
IV.1 Undecidability of Goal-State Reachability for QOMDPs
We will show that the goalstate reachability problem is undecidable for QOMDPs by showing that we can reduce the quantum measurement occurrence problem proposed by Eisert et al. eisert12 to it.
Definition 4 (Quantum Measurement Occurrence Problem): The quantum measurement occurrence problem (QMOP) is to decide, given a quantum superoperator described by Kraus operators {K_1, ..., K_κ}, whether there is some finite sequence i_1, ..., i_n such that K_{i_n} ⋯ K_{i_1} = 0.
The setting for this problem is shown in Figure 1. We assume that the system starts in state ρ_0. This state is fed into O. We then take the output of O acting on ρ_0, feed it again into O, and iterate. QMOP is equivalent to asking whether there is some finite sequence of observations that can never occur, even if ρ_0 is full rank. We will reduce from the version of the problem given in Definition 4, but will use the language of measurement occurrence to provide intuition.
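Although QMOP is undecidable in general, "yes" instances can be confirmed by brute force over short sequences, which is a useful way to build intuition about the problem. The helper below is our own illustrative sketch: a bounded search can confirm that a vanishing product exists, but can never rule one out.

```python
import numpy as np
from itertools import product

def qmop_search(kraus, max_len):
    """Look for a finite sequence i1,...,in (n <= max_len) with
    K_in ... K_i1 = 0, i.e. an observation sequence that can never occur.
    Returns the first such sequence found, or None if none exists up to
    the cutoff (which proves nothing, since QMOP is undecidable)."""
    for n in range(1, max_len + 1):
        for seq in product(range(len(kraus)), repeat=n):
            M = np.eye(kraus[0].shape[0], dtype=complex)
            for i in seq:
                M = kraus[i] @ M            # build K_in ... K_i1 left to right
            if np.allclose(M, 0):
                return seq
    return None
```

For the two-outcome measurement {|0⟩⟨0|, |1⟩⟨1|}, the sequence "outcome 0 then outcome 1" has a vanishing product, so the search succeeds at length 2.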
Theorem 2 (Undecidability of QMOP): The quantum measurement occurrence problem is undecidable.
Proof: This can be shown using a reduction from the matrix mortality problem. For the full proof see Eisert et al. eisert12 .
We first describe a method for creating a goal QOMDP from an instance of QMOP. The main ideas behind the choices we make here are shown in Figure 2.
Definition 5 (QMOP Goal QOMDP): Given an instance of QMOP with superoperator O and Kraus matrices {K_1, ..., K_κ} of dimension d, we create a goal QOMDP G = ⟨H, Ω, A, ρ_0, ρ_g⟩ as follows:

H is a (d+1)-dimensional Hilbert space.

Ω = {o_1, ..., o_{d+2}} is a set of d + 2 possible observations. Observations o_1 through o_{d+1} correspond to AtGoal while o_{d+2} is NotAtGoal.

A = {A_1, ..., A_κ} is a set of superoperators, each with d + 2 Kraus matrices, each of dimension (d+1) × (d+1). We set

(15) A_{d+2}^a = K̄_a,

the a-th Kraus matrix from the QMOP superoperator with the (d+1)-st column and row all zeros. Additionally, let

(16) M_a = I_{d+1} − K̄_a† K̄_a

(17) = Σ_{i ≠ a} K̄_i† K̄_i + e_{d+1} e_{d+1}†,

where the second line uses Σ_{i=1}^{κ} K̄_i† K̄_i = I_{d+1} − e_{d+1} e_{d+1}†, which follows from equation 8 and the zero padding. Now each K̄_i† K̄_i is Hermitian and the sum of Hermitian matrices is Hermitian, so M_a is Hermitian. Moreover, each term in equation 17 is positive semidefinite, and positive semidefinite matrices are closed under positive addition, so M_a is positive semidefinite as well. Let an orthonormal eigendecomposition of M_a be

(18) M_a = Σ_{j=1}^{d+1} λ_j v_j v_j†.

Since M_a is a positive semidefinite Hermitian matrix, each λ_j is nonnegative and real, so √λ_j is also real. We let A_j^a for j ∈ {1, ..., d+1} be the matrix in which the first d rows are all 0s and the bottom row is √λ_j v_j†:

A_j^a = √λ_j e_{d+1} v_j†.

(Note that if λ_j = 0 then A_j^a is the all-zero matrix, but it is cleaner to allow each action to have the same number of Kraus matrices.)

ρ_0 is the maximally mixed state on the first d dimensions, ρ_0 = (1/d) Σ_{i=1}^{d} e_i e_i†.

ρ_g is the goal state e_{d+1} e_{d+1}†.
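The construction in Definition 5 can be carried out numerically, which also serves as a sanity check that each action satisfies the Kraus condition (equation 8). The sketch below is our own; the function name and the ordering of the returned matrices (the d+1 AtGoal matrices first, K̄_a last) are illustrative choices:

```python
import numpy as np

def build_goal_qomdp_actions(kraus):
    """From QMOP Kraus matrices K_1..K_k of dimension d, build for each
    action a the d+2 Kraus matrices of the goal QOMDP on a (d+1)-dim space:
    d+1 matrices sqrt(lambda_j) e_{d+1} v_j^dag that dump all remaining
    probability mass into the goal dimension, plus the padded Kbar_a."""
    d = kraus[0].shape[0]
    actions = []
    for K in kraus:
        Kbar = np.zeros((d + 1, d + 1), dtype=complex)
        Kbar[:d, :d] = K                              # Eq. 15: zero row/column pad
        M = np.eye(d + 1) - Kbar.conj().T @ Kbar      # Eq. 16: Hermitian, PSD
        lam, vecs = np.linalg.eigh(M)                 # Eq. 18: eigendecomposition
        mats = []
        for j in range(d + 1):
            A = np.zeros((d + 1, d + 1), dtype=complex)
            # bottom row sqrt(lambda_j) v_j^dag; clip tiny negative eigenvalues
            A[d, :] = np.sqrt(max(float(lam[j]), 0.0)) * vecs[:, j].conj()
            mats.append(A)
        mats.append(Kbar)                             # NotAtGoal outcome
        actions.append(mats)
    return actions
```

By construction, Σ_j A_j^{a†} A_j^a = K̄_a† K̄_a + M_a = I, and the goal state e_{d+1} e_{d+1}† is annihilated by K̄_a, which the test below checks on a small example.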
The intuition behind the definition of A is shown in Figure 2. Although each action actually has d + 2 possible outcomes, we will show that d + 1 of those outcomes (every one except A_{d+2}^a = K̄_a) always transition the system to the goal state. Therefore action A_a really only provides two possibilities:

Transition to goal state.

Evolve according to K̄_a.
Our proof will proceed as follows: Consider choosing some sequence of actions a_1, ..., a_n. The probability that the system has transitioned to the goal state after these actions is the same as the probability that it did not evolve according to K̄_{a_1} first, then K̄_{a_2}, and so on. Therefore, the system transitions to the goal state with probability 1 if and only if it is impossible for it to evolve according to K̄_{a_1} first, then K̄_{a_2}, and so on. Thus in the original problem, it must have been impossible to see the observation sequence a_1, ..., a_n. In other words, the agent can reach the goal state with probability 1 if and only if there is some sequence of observations in the QMOP instance that can never occur. Therefore we can use goal-state reachability in QOMDPs to solve QMOP, giving us that goal-state reachability for QOMDPs must be undecidable.
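The quantity at the heart of this argument, the probability of a given observation sequence under repeated application of the superoperator, is easy to compute: the sequence i_1, ..., i_n occurs with probability tr(K_{i_n}⋯K_{i_1} ρ_0 K_{i_1}†⋯K_{i_n}†). A small sketch (our own helper, not code from the paper):

```python
import numpy as np

def observation_sequence_probability(kraus, seq, rho0):
    """Probability of observing i1,...,in when the superoperator is applied
    repeatedly: tr(M rho0 M^dag) with M = K_in ... K_i1.  The reduction turns
    'this probability is 0' into 'the goal is reached with probability 1'."""
    M = np.eye(rho0.shape[0], dtype=complex)
    for i in seq:
        M = kraus[i] @ M
    return float(np.real(np.trace(M @ rho0 @ M.conj().T)))
```

For the basis-measurement example {|0⟩⟨0|, |1⟩⟨1|} starting from the maximally mixed state, the sequence (0, 1) has probability 0 because the product of the two projectors vanishes, while (0, 0) has probability 1/2.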
We now formalize the sketch we just gave. Before we can do anything else, we must show that G is in fact a goal QOMDP. We start by showing that ρ_g is absorbing, in two lemmas. In the first, we prove that each A_j^a with j ≤ d + 1 transitions all density matrices to the goal state. In the second, we show that ρ_g has zero probability of evolving according to K̄_a.
Lemma 3: Let O with Kraus matrices {K_1, ..., K_κ} of dimension d be the superoperator from an instance of QMOP and let G be the corresponding goal QOMDP. For any density matrix ρ, if A_j^a with j ≤ d + 1 is the j-th Kraus matrix of the action A_a of G and tr(A_j^a ρ A_j^{a†}) > 0, then A_j^a ρ A_j^{a†} / tr(A_j^a ρ A_j^{a†}) = ρ_g.

Proof: Consider

(19) A_j^a ρ A_j^{a†} = (√λ_j e_{d+1} v_j†) ρ (√λ_j v_j e_{d+1}†)

(20) = λ_j (v_j† ρ v_j) e_{d+1} e_{d+1}†,

(21) tr(A_j^a ρ A_j^{a†}) = λ_j v_j† ρ v_j,

where v_j† ρ v_j is a scalar, so only the lower right element of this matrix is nonzero. Thus dividing by the trace gives

(22) A_j^a ρ A_j^{a†} / tr(A_j^a ρ A_j^{a†}) = e_{d+1} e_{d+1}† = ρ_g.
Lemma 4: Let O be the superoperator from an instance of QMOP and let G be the corresponding goal QOMDP. Then ρ_g is absorbing.

Proof: By Lemma 3, we know that for j ≤ d + 1, the outcome A_j^a maps ρ_g to ρ_g whenever it occurs. Here we show that K̄_a ρ_g K̄_a† = 0, so that the probability of applying A_{d+2}^a = K̄_a is 0. We have:

(23) K̄_a ρ_g K̄_a† = K̄_a e_{d+1} e_{d+1}† K̄_a†

(24) = (K̄_a e_{d+1})(K̄_a e_{d+1})†

(25) = 0,

since the (d+1)-st column of K̄_a is all zeros by construction. Therefore, ρ_g is absorbing.
Now we are ready to show that the construction is a goal QOMDP.
Theorem 5: Let O be the superoperator from an instance of QMOP with Kraus matrices {K_1, ..., K_κ} of dimension d. Then G = ⟨H, Ω, A, ρ_0, ρ_g⟩ is a goal QOMDP.

Proof: We showed in Lemma 4 that ρ_g is absorbing, so all that remains to show is that the actions are superoperators. Let A_j^a be the j-th Kraus matrix of action A_a. If j ≤ d + 1 then

(26) A_j^{a†} A_j^a = (√λ_j e_{d+1} v_j†)† (√λ_j e_{d+1} v_j†)

(27) = λ_j v_j (e_{d+1}† e_{d+1}) v_j†

(28) = λ_j v_j v_j†,

where we have used that (√λ_j)* = √λ_j because √λ_j is real. Thus for j ≤ d + 1,

(29) Σ_{j=1}^{d+1} A_j^{a†} A_j^a = Σ_{j=1}^{d+1} λ_j v_j v_j† = M_a.

Now

(30) Σ_{j=1}^{d+2} A_j^{a†} A_j^a = K̄_a† K̄_a + M_a

(31) = K̄_a† K̄_a + (I_{d+1} − K̄_a† K̄_a)

(32) = I_{d+1}.

Therefore {A_1^a, ..., A_{d+2}^a} is a set of Kraus matrices.
Now we want to show that the probability of not reaching the goal state after taking actions a_1, ..., a_n is the same as the probability of observing the sequence a_1, ..., a_n in the QMOP instance. However, before we can do that, we must take a short detour to show that the fact that the goal-state reachability problem is defined for state-dependent policies does not give it any advantage. Technically, a policy for a QOMDP is not time-dependent but state-dependent. The QMOP problem is essentially time-dependent: we want to know about a specific sequence of observations over time. A QOMDP policy, however, is state-dependent: the choice of action depends not upon the number of time steps, but upon the current state. When reducing a QMOP instance to a QOMDP, we need to ensure that the observations received in the QOMDP are dependent on time in the same way that they are in the QMOP instance. We will be able to do this because we have designed the QOMDP to which we reduce a QMOP instance such that after t time steps there is at most one possible non-goal state for the system. The existence of such a state, and the exact state that is reachable, depend upon the policy chosen, but regardless of the policy, there will be at most one. This fact, which we will prove in the following lemma, allows us to consider the policy for these QOMDPs as time-dependent: the action the time-dependent policy chooses at time step t is the action the state-dependent policy chooses for the only non-goal state the system could possibly reach at time t.
Lemma 6: Let O with Kraus matrices {K_1, ..., K_κ} of dimension d be the superoperator from an instance of QMOP and let G be the corresponding goal QOMDP. Let π be any policy for G. For every t, there is at most one state σ_t ≠ ρ_g reachable with nonzero probability after t steps of following π.

Proof: We proceed by induction on t.

Base Case (t = 1): After 1 time step, the agent has taken a single action, π(ρ_0). Lemma 3 gives us that there is only a single possible state besides ρ_g after the application of this action.

Induction Step: Let ρ_t be the state on the t-th time step and let ρ_{t+1} be the state on the (t+1)-st time step. Assume that there are only two possible choices for ρ_t: ρ_g and some σ_t. If ρ_t = ρ_g, then ρ_{t+1} = ρ_g regardless of π, because ρ_g is absorbing. If ρ_t = σ_t, the agent takes action π(σ_t). By Lemma 3 there is only a single possible state besides ρ_g after the application of π(σ_t).
Thus in a goal QOMDP created from a QMOP instance, the state-dependent policy can be considered a "sequence of actions" by looking at the actions it will apply to each possible non-goal state in order.
Definition 6 (Policy Path): Let O with Kraus matrices {K_1, ..., K_κ} of dimension d be the superoperator from a QMOP instance and let G be the corresponding goal QOMDP. For any policy π, let σ_t be the non-goal state with nonzero probability after t time steps of following π, if it exists; otherwise let σ_t = ρ_g. Choose a_{t+1} = π(σ_t), with σ_0 = ρ_0. The sequence a_1, a_2, ... is the policy path for policy π. By Lemma 6, this sequence is unique, so it is well-defined.
We have one more technical problem to address before we can look at how states evolve under policies in a goal QOMDP. When we created the goal QOMDP, we added a dimension to the Hilbert space so that we could have a defined goal state. We need to show that we can consider only the upper left d × d blocks of matrices when looking at evolution probabilities.
Lemma 7: Let O with Kraus matrices {K_1, ..., K_κ} of dimension d be the superoperator from a QMOP instance and let G be the corresponding goal QOMDP. Let ρ be any (d+1) × (d+1) matrix and let ρ̃ be the upper left d × d matrix in which the (d+1)-st column and row of ρ have been removed. Then for any action A_a, the upper left d × d block of K̄_a ρ K̄_a† is K_a ρ̃ K_a†, and in particular tr(K̄_a ρ K̄_a†) = tr(K_a ρ̃ K_a†).

Proof: We consider the multiplication elementwise:

(33) [K̄_a ρ K̄_a†]_{xy} = Σ_{m=1}^{d} Σ_{n=1}^{d} [K̄_a]_{xm} ρ_{mn} [K̄_a†]_{ny}

(34) = Σ_{m=1}^{d} Σ_{n=1}^{d} [K_a]_{xm} ρ̃_{mn} [K_a†]_{ny},

where we have used that the (d+1)-st column of K̄_a is 0 to limit the sum. Additionally, if x = d + 1 or y = d + 1, the sum is 0 because the (d+1)-st row of K̄_a is 0. Assume that x ≤ d and y ≤ d. Then

(35) [K̄_a ρ K̄_a†]_{xy} = [K_a ρ̃ K_a†]_{xy}.

Thus

(36) tr(K̄_a ρ K̄_a†) = tr(K_a ρ̃ K_a†).
We are now ready to show that any path that does not terminate in the goal state in the goal QOMDP corresponds to some possible path through the superoperator in the QMOP instance.
Lemma 8: Let O with Kraus matrices {K_1, ..., K_κ} of dimension d be the superoperator from a QMOP instance and let G be the corresponding goal QOMDP. Let π be any policy for G and let a_1, a_2, ... be the policy path for π. Assume σ_n ≠ ρ_g. Then σ_n is the state reached by evolving according to K̄_{a_1}, ..., K̄_{a_n} in order, and the probability of this event equals the probability of observing the sequence a_1, ..., a_n in the QMOP instance.

Proof: We proceed by induction on n.