1 Introduction
Reinforcement Learning (RL) is an effective approach to the problem of sequential decision-making under uncertainty. RL agents learn how to maximize long-term reward using the experience obtained by direct interaction with a stochastic environment (Sutton and Barto, 1998). Since the environment is initially unknown, the agent has to balance exploring the environment to estimate its structure against exploiting its estimates to compute a policy that maximizes the long-term reward. As a result, designing an RL algorithm requires three different elements: 1) an estimator for the environment's structure, 2) a planning algorithm to compute the optimal policy of the estimated environment (LaValle, 2006), and 3) a strategy to trade off exploration and exploitation so as to minimize the regret, i.e., the difference between the performance of the exact optimal policy and the rewards accumulated by the agent over time.
Most of the RL literature assumes that the environment can be modeled as a Markov decision process (MDP), with a Markovian state evolution that is fully observed. A number of exploration–exploitation strategies have been shown to have strong performance guarantees for MDPs, either in terms of regret or sample complexity (see Sect. 1.2 for a review). However, the assumption of full observability of the state evolution is often violated in practice, and the agent may only receive noisy observations of the true state of the environment (e.g., noisy sensors in robotics). In this case, it is more appropriate to use the partially observable MDP (POMDP) model (Sondik, 1971).
Many challenges arise in designing RL algorithms for POMDPs. Unlike in MDPs, the estimation problem (element 1) involves identifying the parameters of a latent variable model (LVM). In an MDP the agent directly observes (stochastic) state transitions, and the estimation of the generative model is straightforward via empirical estimators. In a POMDP, on the other hand, the transition and reward models must be inferred from noisy observations, and the Markovian state evolution is hidden. The planning problem (element 2), i.e., computing the optimal policy for a POMDP with known parameters, is PSPACE-complete (Papadimitriou and Tsitsiklis, 1987), and it requires solving an augmented MDP built on a continuous belief space (i.e., a distribution over the hidden state of the POMDP). Finally, integrating estimation and planning in an exploration–exploitation strategy (element 3) with guarantees is nontrivial, and no no-regret strategies are currently known (see Sect. 1.2). To handle these challenges, we build on the results of the previous paper by Azizzadenesheli et al. (2016b) on RL in POMDPs.
1.1 Summary of Results
The main contributions of this paper are as follows. We propose a new RL algorithm for POMDPs that incorporates spectral parameter estimation within an exploration–exploitation framework. We then apply this algorithm to a grid-world Atari-style game and compare its performance with the state-of-the-art Deep Q-Network (DQN) of Mnih et al. (2013). We show that when the underlying model is not an MDP, model-based algorithms learn a wrong model representation, while model-free algorithms learn a Q-function over the observation set, which no longer satisfies the Markov property. Furthermore, because the process over observations is non-Markovian, the current observation is not a sufficient statistic for the policy and some form of memory is required. Consider a game with one green apple and one red apple. At the beginning of the game, the emulator reveals a flag indicating which apple has positive reward and which one has negative reward. In this case, an MDP-based learner forgets the flag and suffers linear regret.
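The flag example above can be sketched in a few lines of Python (a hypothetical toy game written for illustration, not the paper's actual benchmark): a policy with one bit of memory earns the maximum score, while any memoryless observation-to-action mapping can only guess the good color, losing a constant fraction of reward per step, i.e., linear regret.

```python
import random

def flag_game_episode(policy, horizon=100):
    """One episode of the flag game: a flag shown only at t = 0 says which
    apple colour ('green' or 'red') yields +1; the other yields -1."""
    good = random.choice(["green", "red"])      # revealed once, at t = 0
    total = 0
    for t in range(horizon):
        obs = good if t == 0 else "no_flag"     # flag visible only at t = 0
        total += 1 if policy(obs) == good else -1
    return total

class MemoryPolicy:
    """One bit of memory: remember the flag, then always pick that colour."""
    def __init__(self):
        self.flag = "green"                     # arbitrary default
    def __call__(self, obs):
        if obs in ("green", "red"):
            self.flag = obs
        return self.flag

memoryless = lambda obs: "green"                # observation-only policy

random.seed(0)
mem_scores = [flag_game_episode(MemoryPolicy()) for _ in range(200)]
nomem_scores = [flag_game_episode(memoryless) for _ in range(200)]
```

The memory policy scores +1 on every step of every episode, while the memoryless policy wins or loses entire episodes depending on the random flag, so its average per-step reward stays near zero.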
In this paper, the estimation of the POMDP is carried out via spectral methods, which involve the decomposition of certain moment tensors computed from data. This learning algorithm is interleaved with the optimization of the planning policy using an exploration–exploitation strategy inspired by the UCRL method for MDPs (Jaksch et al., 2010). The resulting algorithm, called SM-UCRL (Spectral Method for Upper-Confidence Reinforcement Learning), runs through epochs of variable length, where the agent follows a fixed policy until enough data are collected and then updates the current policy according to the estimates of the POMDP parameters and their accuracy. We derive a regret bound with respect to the best memoryless (stochastic) policy for the given POMDP. Indeed, for a general POMDP, the optimal policy need not be memoryless. However, finding the optimal policy is uncomputable for infinite-horizon regret minimization (Madani, 1998), while memoryless policies have shown good performance in practice (see the section on related work). Moreover, for the class of so-called contextual MDPs, a special class of POMDPs, the optimal policy is also memoryless (Krishnamurthy et al., 2016).
1.2 Related Work
While RL in MDPs has been widely studied in different settings (Kearns and Singh, 2002; Brafman and Tennenholtz, 2003; Jaksch et al., 2010), the design of effective exploration–exploitation strategies in POMDPs is still relatively unexplored. Ross et al. (2007) and Poupart and Vlassis (2008) propose to integrate the problem of estimating the belief state into a model-based Bayesian RL approach, where a distribution over possible MDPs is updated over time. An alternative to model-based approaches is to adapt model-free algorithms, such as Q-learning, to the case of POMDPs. Perkins (2002) proposes a Monte-Carlo approach to action-value estimation and shows convergence to locally optimal memoryless policies. Another approach to solving POMDPs is to use policy search methods, which avoid estimating value functions and directly optimize performance by searching in a given policy space, which usually contains memoryless policies (see, e.g., Ng and Jordan (2000)). Besides its practical success in offline problems, policy search has been successfully integrated with efficient exploration–exploitation techniques and shown to achieve small regret (Gheshlaghi Azar et al., 2013). Nonetheless, the performance of such methods is severely constrained by the choice of the policy space, which may not contain policies with good performance.
Matrix decomposition methods have been previously used in the more general setting of predictive state representations (PSRs) (Boots et al., 2011) to reconstruct the structure of the dynamical system. Despite the generality of PSRs, the proposed model relies on strong assumptions on the dynamics of the system and does not come with theoretical guarantees on its performance. Recently, Hamilton et al. (2014) introduced the compressed PSR (CPSR) method to reduce the computational cost of PSRs by exploiting advances in dimensionality reduction, incremental matrix decomposition, and compressed sensing. In this work, we take these ideas further by considering more powerful tensor decomposition techniques.
In the last few decades, latent variable models have become popular for problems with partially observable variables. Traditional methods such as Expectation-Maximization (EM) and variational methods have been used to learn the hidden structure of the model, but they usually come with no consistency guarantees, are computationally expensive, and mostly converge to local optima which can be arbitrarily bad. To overcome these drawbacks, spectral methods have been used for consistent estimation of a wide class of LVMs (Anandkumar et al., 2012, 2014); the theoretical guarantees and computational complexity of the robust tensor power method are studied in Song et al. (2013) and Wang et al. (2015). Today, spectral and tensor decomposition methods are well known as a credible alternative to EM and variational methods for inferring the latent structure of a model. These methods have been shown to be efficient in learning Gaussian mixture models, topic models, latent Dirichlet allocation, hidden Markov models, etc.
2 Preliminaries
A POMDP $M$ is a tuple $\langle \mathcal{X}, \mathcal{A}, \mathcal{Y}, \mathcal{R}, f_T, f_R, f_O \rangle$, where $\mathcal{X}$ is a finite state space with cardinality $|\mathcal{X}| = X$, $\mathcal{A}$ is a finite action space with cardinality $|\mathcal{A}| = A$, $\mathcal{Y}$ is a finite observation space with cardinality $|\mathcal{Y}| = Y$ (the vector representation of an observation is w.r.t. one-hot encoding, i.e., indicator vectors $\vec{e}_1, \ldots, \vec{e}_Y \in \mathbb{R}^Y$), and $\mathcal{R}$ is a finite reward space with cardinality $|\mathcal{R}| = R$ and largest reward $r_{\max}$. Finally, $f_T(x'|x,a)$ denotes the transition density, so that $f_T(x'|x,a)$ is the probability of a transition to $x'$ given the state–action pair $(x,a)$, and $\bar{r}(x,a)$ is the mean reward at state $x$ under action $a$. Furthermore, $f_O(y|x)$ is the observation density, so that $f_O(\vec{e}_n|x)$ is the probability of receiving the observation in $\mathcal{Y}$ corresponding to the indicator vector $\vec{e}_n$ given the state $x$. Whenever convenient, we use tensor forms for the density functions, such that $T \in \mathbb{R}^{X \times X \times A}$ with $[T]_{i,j,l} = f_T(x_j|x_i,a_l)$ and $O \in \mathbb{R}^{Y \times X}$ with $[O]_{n,i} = f_O(\vec{e}_n|x_i)$. We also denote by $T_{:,j,l}$ the fiber (vector) in $\mathbb{R}^X$ obtained by fixing the arrival state $x_j$ and action $a_l$, and by $T_{:,:,l} \in \mathbb{R}^{X \times X}$ the transition matrix between states when using action $a_l$. The graphical model associated to the POMDP is illustrated in Fig. 1.
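As a concrete illustration of these tensor forms, the parameters can be stored and sanity-checked in NumPy as follows (the sizes and random entries are illustrative only, not values from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
X, A, Y = 3, 2, 4   # illustrative cardinalities

# Transition tensor T[i, j, l] = f_T(x' = j | x = i, a = l):
T = rng.random((X, X, A))
T /= T.sum(axis=1, keepdims=True)       # each fiber T[i, :, l] is a density
# Observation matrix O[n, i] = f_O(e_n | x = i):
O = rng.random((Y, X))
O /= O.sum(axis=0, keepdims=True)       # each column is a density over Y
# Mean rewards rbar[i, l] = mean reward in state i under action l:
rbar = rng.random((X, A))

# T[:, :, l] is the transition matrix under action l; its rows sum to one.
assert np.allclose(T.sum(axis=1), 1.0)
assert np.allclose(O.sum(axis=0), 1.0)
```

The normalization axes encode exactly the density constraints stated above: fixing a departure state and an action gives a distribution over arrival states, and fixing a state gives a distribution over observations.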
A policy is a stochastic mapping from observations to actions, and for any policy $\pi$ we denote by $f_\pi(a|y)$ its density function. We denote by $\mathcal{P}$ the set of all stochastic memoryless policies. Acting according to a policy $\pi$ in a POMDP $M$ defines a Markov chain characterized by a transition density
$$f_{T,\pi}(x'|x) = \sum_{a,y} f_\pi(a|y)\, f_O(y|x)\, f_T(x'|x,a),$$
and a stationary distribution $\omega_\pi$ over states such that $\omega_\pi(x) = \sum_{x'} f_{T,\pi}(x|x')\, \omega_\pi(x')$. The expected average reward of a policy $\pi$ is
$$\eta(\pi; M) = \sum_{x} \omega_\pi(x)\, \bar{r}_\pi(x),$$
where $\bar{r}_\pi(x)$ is the expected reward of executing policy $\pi$ in state $x$, defined as
$$\bar{r}_\pi(x) = \sum_{a,y} f_\pi(a|y)\, f_O(y|x)\, \bar{r}(x,a).$$
The best stochastic memoryless policy in $\mathcal{P}$ is $\pi^+ = \arg\max_{\pi \in \mathcal{P}} \eta(\pi; M)$ and we denote by $\eta^+$ its average reward.^{1}

^{1} We use $\pi^+$ rather than $\pi^*$ to recall the fact that we restrict the attention to $\mathcal{P}$, and the actual optimal policy for a POMDP in general should be constructed on the belief-MDP.
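The induced Markov chain, its stationary distribution, and the average reward can be computed directly from these definitions. Below is a minimal NumPy sketch on randomly generated parameters (sizes and entries are illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)
X, A, Y = 3, 2, 4
T = rng.random((X, X, A)); T /= T.sum(axis=1, keepdims=True)   # f_T(x'|x,a)
O = rng.random((Y, X));    O /= O.sum(axis=0, keepdims=True)   # f_O(y|x)
rbar = rng.random((X, A))                                      # mean rewards
pi = rng.random((A, Y));   pi /= pi.sum(axis=0, keepdims=True) # f_pi(a|y)

# Induced chain: f_{T,pi}(x'|x) = sum_{a,y} f_T(x'|x,a) f_pi(a|y) f_O(y|x)
P = np.einsum('ijl,ly,yi->ij', T, pi, O)
assert np.allclose(P.sum(axis=1), 1.0)       # rows are densities

# Stationary distribution: left eigenvector of P for eigenvalue 1
w, v = np.linalg.eig(P.T)
omega = np.real(v[:, np.argmax(np.real(w))])
omega /= omega.sum()
assert np.allclose(omega @ P, omega)

# Average reward eta(pi; M) = sum_x omega(x) * rbar_pi(x)
r_pi = np.einsum('il,ly,yi->i', rbar, pi, O)  # expected reward per state
eta = float(omega @ r_pi)
```

The two `einsum` calls are literal transcriptions of the sums over actions and observations in the displayed formulas.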
3 Learning the Parameters of the POMDP
In this section we introduce a novel spectral method to estimate the POMDP parameters $f_T$, $f_R$, and $f_O$. A stochastic policy $\pi$ is used to generate a trajectory of $N$ steps. As in the case of HMMs, the key element in applying spectral methods is to construct a multi-view model for the hidden states. Despite the similarity, the spectral method developed for HMMs by Anandkumar et al. (2014) cannot be directly employed here. In fact, in HMMs the state transition and the observations only depend on the current state. In POMDPs, on the other hand, the probability of a transition to state $x_{t+1}$ depends not only on $x_t$, but also on the action $a_t$. Since the action is chosen according to a memoryless policy based on the current observation, this creates an indirect dependency of $x_{t+1}$ on the observation $y_t$, which makes the model more intricate.
3.1 The multi-view model
We estimate the POMDP parameters for each action separately. Let $t \in [N]$ be a step at which $a_t = l$; we construct three views $\nu_{1,t}$, $\nu_{2,t}$, and $\nu_{3,t}$ which all contain observable elements. As can be seen in Fig. 1, all three views provide some information about the hidden state $x_t$ (e.g., the observation $y_t$ triggers the action $a_t$, which influences the transition to $x_{t+1}$). A careful analysis of the graph of dependencies shows that, conditionally on the hidden state and the action, all the views are independent. For instance, let us consider $\nu_{2,t}$ and $\nu_{3,t}$. These two random variables are clearly dependent, since $y_t$ influences the action $a_t$, which triggers a transition to $x_{t+1}$ that emits the observation $y_{t+1}$. Nonetheless, it is sufficient to further condition on the action $a_t$ to break the dependency and make $\nu_{2,t}$ and $\nu_{3,t}$ independent. Similar arguments hold for all the other elements in the views, which can be used to recover the latent variable $x_t$. More formally, we encode the triple $(a_{t-1}, y_{t-1}, r_{t-1})$ into a vector $\nu_{1,t} \in \{0,1\}^{A \cdot Y \cdot R}$, so that $[\nu_{1,t}]_s = 1$ whenever $a_{t-1} = l'$, $y_{t-1} = \vec{e}_n$, and $r_{t-1} = \vec{e}_m$, for a suitable mapping between the index $s$ and the indices $(l', n, m)$ of the action, observation, and reward. Similarly, we proceed for $\nu_{2,t}$ and $\nu_{3,t}$. We introduce the three view matrices $V_1^{(l)}$, $V_2^{(l)}$, and $V_3^{(l)}$ associated with action $l$, defined as the conditional expectations of the views given the hidden state $x_t$ and $a_t = l$. Finally, we concatenate the rewards at times $t$ and $t+1$ to the end of the vectors $\nu_{2,t}$ and $\nu_{3,t}$, which adds one extra row at the bottom of $V_2^{(l)}$ and $V_3^{(l)}$. By simple manipulations, one can efficiently extract the model parameters out of $V_1^{(l)}$, $V_2^{(l)}$, and $V_3^{(l)}$ for $l = 1, \ldots, A$.
Empirical estimates of POMDP parameters.
In practice, the view matrices $V_2^{(l)}$ and $V_3^{(l)}$ are not available and need to be estimated from samples. Given a trajectory of $N$ steps obtained by executing policy $\pi$, let $\mathcal{T}(l)$ be the set of steps when action $l$ is played; we collect all the triples $(\nu_{1,t}, \nu_{2,t}, \nu_{3,t})$ for any $t \in \mathcal{T}(l)$ and construct the corresponding views. Then we symmetrize the views and, given the resulting second and third moments, apply the spectral tensor decomposition method to recover empirical estimates $\widehat{V}_2^{(l)}$ and $\widehat{V}_3^{(l)}$ of the second and third view matrices. Thereafter, a simple manipulation yields the model parameters. The overall method is summarized in Alg. 1. The empirical estimates of the POMDP parameters enjoy the following guarantee.
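The symmetrization step follows the standard multi-view recipe of Anandkumar et al. (2014). The sketch below uses synthetic stand-in views (random vectors rather than the encoded trajectory triples) and only forms the symmetrized moments that a tensor power method would then decompose; it is an illustration of the moment construction, not the full Alg. 1:

```python
import numpy as np

rng = np.random.default_rng(2)
d, n = 5, 1000
# Stand-in views; in SM-UCRL these are the encoded trajectory triples.
v1 = rng.random((n, d)); v2 = rng.random((n, d)); v3 = rng.random((n, d))

# Empirical cross-moments K_ab ~ E[v_a (x) v_b]:
K12 = v1.T @ v2 / n
K21 = K12.T
K32 = v3.T @ v2 / n
K31 = v3.T @ v1 / n

# Standard symmetrization: rotate views 1 and 2 so that all three views
# share the conditional means of view 3.
v1s = (K32 @ np.linalg.pinv(K12) @ v1.T).T
v2s = (K31 @ np.linalg.pinv(K21) @ v2.T).T

# Symmetrized second and third moments fed to tensor decomposition:
M2 = v1s.T @ v2s / n
M3 = np.einsum('ni,nj,nk->ijk', v1s, v2s, v3) / n
```

On real multi-view data, whitening `M3` with `M2` and running the robust tensor power method recovers the columns of the view matrices up to permutation.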
Theorem 1 (Learning Parameters)
Let $\widehat{f}_T$, $\widehat{f}_R$, and $\widehat{f}_O$ be the estimated POMDP models using a trajectory of $N$ steps, and let $N(l)$ be the number of samples collected for action $l$. Then we have

$$\big\|\widehat{f}_O(\cdot|x) - f_O(\cdot|x)\big\|_2 \le B_O := C_O \sqrt{\frac{\log(1/\delta)}{N(l)}}, \qquad (1)$$

$$\big\|\widehat{f}_T(\cdot|x,a) - f_T(\cdot|x,a)\big\|_2 \le B_T := C_T \sqrt{\frac{\log(1/\delta)}{N(l)}}, \qquad \big|\widehat{r}(x,a) - \bar{r}(x,a)\big| \le B_R := C_R \sqrt{\frac{\log(1/\delta)}{N(l)}}, \qquad (2)$$

with probability $1-\delta$ (w.r.t. the randomness in the transitions, observations, and policy), where $C_O$, $C_T$, and $C_R$ are constants depending on the dimensions of the POMDP.
Remark 1 (consistency and dimensionality).
All the previous errors decrease at a rate $O(1/\sqrt{N(l)})$, showing the consistency of the spectral method: if all the actions are repeatedly tried over time, the estimates converge to the true parameters of the POMDP. This is in contrast with EM-based methods, which typically get stuck in local maxima and return biased estimators, thus preventing the derivation of confidence intervals.
4 Spectral UCRL
The most interesting aspect of the estimation process illustrated in the previous section is that it can be applied when samples are collected using any policy in the set $\mathcal{P}$. As a result, it can be integrated into any exploration–exploitation strategy where the policy changes over time in an attempt to minimize the regret.
The algorithm.
The SM-UCRL algorithm, illustrated in Alg. 2, is the result of integrating the spectral method into a structure similar to UCRL (Jaksch et al., 2010), designed to optimize the exploration–exploitation trade-off. The learning process is split into epochs of increasing length. At the beginning of each epoch (the first epoch is used to initialize the variables), an estimated POMDP is computed using the spectral method of Alg. 1.
Given the estimated POMDP and the result of Thm. 1, we construct the set $\mathcal{M}$ of admissible POMDPs whose transition, reward, and observation models belong to the confidence intervals, and compute the optimal policy with respect to the optimistic model:

$$\widetilde{\pi} = \arg\max_{\pi \in \mathcal{P}} \; \max_{M \in \mathcal{M}} \; \eta(\pi; M). \qquad (3)$$

The choice of the optimistic POMDP guarantees that the agent explores more often the actions corresponding to large confidence intervals, thus contributing to improving the estimates over time. After being computed, the optimistic policy $\widetilde{\pi}$ is executed until the number of samples for at least one action is doubled. This stopping criterion avoids switching policies too often, and it guarantees that when an epoch is terminated, enough samples have been collected to compute a new (better) policy. This process is repeated over epochs, and we expect the optimistic policy to get progressively closer to the best policy $\pi^+$ as the estimates of the POMDP become more and more accurate.
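The epoch schedule just described can be sketched as follows. This is a skeleton only: `update_policy` stands in for the optimistic planner of Eq. 3 and `act` for executing the current policy, both of which are placeholders here:

```python
import random

def sm_ucrl_epochs(num_actions, total_steps, act, update_policy):
    """Skeleton of the UCRL-style epoch schedule: keep a fixed policy
    until the sample count of some action doubles, then re-estimate."""
    counts = [0] * num_actions            # N(l): samples per action
    epoch_start = [1] * num_actions       # counts at the last policy update
    policy = update_policy(counts)        # initial (optimistic) policy
    epochs = 0
    for t in range(total_steps):
        l = act(policy)                   # action chosen by current policy
        counts[l] += 1
        if counts[l] >= 2 * epoch_start[l]:      # doubling criterion
            epochs += 1
            epoch_start = [max(c, 1) for c in counts]
            policy = update_policy(counts)       # new optimistic policy
    return epochs

random.seed(0)
n_epochs = sm_ucrl_epochs(
    num_actions=2, total_steps=1000,
    act=lambda p: random.randrange(2),           # placeholder policy
    update_policy=lambda counts: None)           # placeholder planner
```

Because each epoch requires some action count to double, the number of policy switches grows only logarithmically with the horizon, which is exactly why the criterion "avoids switching policies too often."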
Regret analysis.
We now study the regret of SM-UCRL w.r.t. the best policy in $\mathcal{P}$. Given a horizon of $N$ steps, the regret is defined as

$$\mathrm{Reg}_N = N\,\eta^+ - \sum_{t=1}^{N} r_t, \qquad (4)$$

where $r_t$ is the random reward obtained at time $t$ over the states traversed by the policies performed over epochs on the actual POMDP. The complexity of learning in a POMDP is partially determined by its diameter, defined as

$$D := \max_{x,x' \in \mathcal{X},\, a,a' \in \mathcal{A}} \; \min_{\pi \in \mathcal{P}} \; \mathbb{E}\big[\tau(x',a' \mid x,a;\pi)\big], \qquad (5)$$

which corresponds to the expected passing time from a state $x$ to a state $x'$, starting with action $a$ and terminating with action $a'$, following the most effective memoryless policy. For the regret, we have the following theorem.
Theorem 2 (Regret Bound)
Consider a POMDP with $X$ states, $A$ actions, $Y$ observations, and $R$ rewards, characterized by a diameter $D$. If SM-UCRL is run over $N$ steps using the confidence intervals of Thm. 1 in the construction of the plausible POMDPs $\mathcal{M}$, then with probability $1-\delta$ the total regret satisfies

$$\mathrm{Reg}_N \le C_1\, r_{\max}\, D\, X^{3/2} \sqrt{A\,Y\,R\,N}, \qquad (6)$$

where $C_1$ is a numerical constant.
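The regret of Eq. 4 is a direct translation into code once the per-step rewards and the best average reward are available; the numbers below are an invented illustration:

```python
def empirical_regret(rewards, eta_plus):
    """Regret over N steps w.r.t. the best memoryless average reward:
    Reg_N = N * eta^+ - sum_t r_t  (Eq. 4)."""
    return len(rewards) * eta_plus - sum(rewards)

# Example: an agent earning 0.4 per step on average, against a best
# memoryless policy whose average reward is eta^+ = 0.5, accumulates
# regret growing linearly at rate 0.1 per step.
reg = empirical_regret([0.4] * 100, eta_plus=0.5)
```

A no-regret algorithm is one for which this quantity grows sublinearly in the horizon, as in the $\sqrt{N}$ rate of Thm. 2.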
5 Experiments
In this section, we show how the SM-UCRL algorithm outperforms other well-known methods in both a synthetic environment and a simple computer game.
5.1 Synthetic Environment
In this subsection, we illustrate the performance of our method on a simple synthetic environment which follows a POMDP structure. We find that the spectral learning method quickly learns the model parameters (Fig. 2). Estimating the transition tensor takes more effort than estimating the observation matrix and the reward matrix, because the transition tensor is estimated given the estimated observation matrix, which compounds the error. For planning, given the POMDP model parameters, we use an alternating maximization method to find the memoryless policy. This method iteratively alternates between updates of the policy and of the stationary distribution, and converges to a stationary point of the optimization problem. We find that, in practice, it converges to a reasonably good solution (Azizzadenesheli et al. (2016a) show that the planning problem is NP-hard in general). The resulting regret curves are shown in Fig. 2. We compare against the following algorithms: (1) a baseline random policy which simply selects random actions without looking at the observed data, (2) UCRL-MDP (Auer et al., 2009), which attempts to fit an MDP model to the observed data and runs the UCRL policy, and (3) Q-learning (Watkins and Dayan, 1992), a model-free method that updates its policy based on the Q-function. We find that our method converges much faster and, in addition, converges to a much better (stochastic) policy. Note that the MDP-based methods, UCRL-MDP and Q-learning, perform very poorly — even worse than the random policy — and are far from the SM-UCRL policy. This is due to model misspecification and to their having to deal with a larger state space.
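The alternating scheme described above can be sketched as follows. This is a simplified stand-in rather than the paper's exact procedure: the policy update here is a damped greedy step under the state belief induced by the stationary distribution, an assumption made for the sake of a runnable example:

```python
import numpy as np

def alternating_maximization(T, O, rbar, iters=50):
    """Heuristic planner sketch: alternate between (i) the stationary
    distribution of the chain induced by the current memoryless policy
    and (ii) a greedy policy update under that distribution."""
    X, _, A = T.shape
    Y = O.shape[0]
    pi = np.full((A, Y), 1.0 / A)           # uniform stochastic policy
    for _ in range(iters):
        # (i) stationary distribution of the induced Markov chain
        P = np.einsum('ijl,ly,yi->ij', T, pi, O)
        w, v = np.linalg.eig(P.T)
        omega = np.real(v[:, np.argmax(np.real(w))])
        omega /= omega.sum()
        # (ii) greedy action per observation under the induced belief
        belief = O * omega                  # (Y, X), unnormalized P(x, y)
        q = belief @ rbar                   # (Y, A): value of a given y
        greedy = np.eye(A)[q.argmax(axis=1)].T
        pi = 0.9 * pi + 0.1 * greedy        # damped update for stability
    return pi, omega

rng = np.random.default_rng(3)
X, A, Y = 4, 2, 3
T = rng.random((X, X, A)); T /= T.sum(axis=1, keepdims=True)
O = rng.random((Y, X));    O /= O.sum(axis=0, keepdims=True)
rbar = rng.random((X, A))
pi, omega = alternating_maximization(T, O, rbar)
```

The damping keeps the policy stochastic throughout, matching the restriction to stochastic memoryless policies; like the paper's method, this iteration reaches a stationary point rather than a guaranteed global optimum.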
5.2 Simple Atari Game
In the following, we provide empirical results for SM-UCRL on a simple computer game. In this game (Fig. 3), the environment is a grid world with five sweet (green) apples and five poisonous (red) apples. The environment spreads the apples uniformly at random. In addition, each apple lasts for a uniformly distributed number of time steps and then disappears, unless it is eaten by the agent first. We study two settings: one with action set (N, W, S, E) and one with action set (N, NW, W, SW, S, SE, E, NE). At each time step the agent chooses an action and deterministically moves one step in that direction. If there is a sweet apple at the new location, the agent's score goes up by one; it goes down by one if the apple is poisonous. At each time step, the agent only partially observes the environment: in Fig. 3(a) a single box above the agent is visible to her, while in Fig. 3(b) the three boxes above her are observable. The randomness in the rewarding process and the partial observability introduce hidden structure and make the environment a POMDP rather than an MDP.
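A toy version of this environment can be written in a few lines. The grid size, apple placement, and the omission of apple lifetimes are simplifying assumptions for illustration; only the single-box observation of Fig. 3(a) is modeled:

```python
import random

class AppleGrid:
    """Toy grid world: green apples give +1, red give -1; the agent moves
    deterministically and observes only the box directly above it."""
    MOVES = {"N": (-1, 0), "S": (1, 0), "E": (0, 1), "W": (0, -1)}

    def __init__(self, size=8, n_apples=5, seed=0):
        self.rng = random.Random(seed)
        self.size = size
        self.pos = (size // 2, size // 2)
        cells = [(r, c) for r in range(size) for c in range(size)]
        spots = self.rng.sample(cells, 2 * n_apples)   # distinct cells
        self.apples = {p: +1 for p in spots[:n_apples]}        # green
        self.apples.update({p: -1 for p in spots[n_apples:]})  # red

    def observe(self):
        above = (self.pos[0] - 1, self.pos[1])
        if not 0 <= above[0] < self.size:
            return "wall"
        return {+1: "green", -1: "red"}.get(self.apples.get(above), "nothing")

    def step(self, action):
        dr, dc = self.MOVES[action]
        r = min(max(self.pos[0] + dr, 0), self.size - 1)
        c = min(max(self.pos[1] + dc, 0), self.size - 1)
        self.pos = (r, c)
        return self.apples.pop(self.pos, 0)   # eat the apple, if any

env = AppleGrid()
rng = random.Random(1)
total = sum(env.step(rng.choice("NSEW")) for _ in range(20))
```

The four observation values (`wall`, `green`, `red`, `nothing`) match the cardinality-4 observation set used in the single-box experiments below.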
In the single-box setting, the observation set has cardinality 4: (wall, sweet apple, poisonous apple, nothing). Since the true number of hidden states is not known, we run SM-UCRL with X = 3 (we also add a minimal level of stochasticity to the policy whenever it suggests a deterministic mapping to an action). In addition, we apply DQN (Deep Q-Network; Mnih et al., 2013) with three hidden layers and 10 hidden units with hyperbolic tangent activation functions in each hidden layer. For training, we use the RMSProp method, which has been shown to be robust and stable. Fig. 4 shows the performance of both SM-UCRL and DQN for the action sets (N, W, S, E) and (N, NW, W, SW, S, SE, E, NE). We show that SM-UCRL not only captures the environment's behavior faster but also reaches a better long-term average reward. We run DQN several times and report the average performance (Fig. 4). DQN sometimes gets trapped in local minima, resulting in poor runs that degrade its average performance.
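The function approximator just described — three tanh hidden layers of width 10 with a linear Q-value head — can be sketched in NumPy. The initialization scale is an arbitrary choice and the RMSProp training loop is omitted; this shows only the forward pass:

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp(sizes):
    """Initialize weight/bias pairs for a small MLP with the layer sizes
    used here: input -> three hidden layers -> one Q-value per action."""
    return [(rng.standard_normal((m, n)) * 0.1, np.zeros(n))
            for m, n in zip(sizes[:-1], sizes[1:])]

def q_values(params, obs):
    """Forward pass: tanh on hidden layers, linear output layer."""
    h = obs
    for i, (W, b) in enumerate(params):
        h = h @ W + b
        if i < len(params) - 1:
            h = np.tanh(h)          # hyperbolic tangent activations
    return h                        # one Q-value per action

# 4 one-hot observations -> 4 actions (N, W, S, E), hidden width 10
params = mlp([4, 10, 10, 10, 4])
q = q_values(params, np.eye(4)[0])  # Q-values for observation "wall"
```

The greedy DQN action for an observation is then simply `q.argmax()`.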
In the second setting, when three boxes are observable (Fig. 3(b)), the observation set has cardinality 64 (four possible observations for each of the three boxes). We run SM-UCRL with X = 8 and apply it to this environment. Again, SM-UCRL outperforms DQN with the same architecture except for 30 hidden units in each hidden layer. During the experiments, we observed that SM-UCRL does not need to estimate the model parameters very accurately to reach a reasonable policy: it comes up with a stochastic and reasonably good policy almost from the beginning. We also observed that the learned policy balances moving upward against downward, and rightward against leftward, in order to keep the agent away from the walls, which helps the agent collect more reward while moving around the area far from the walls. (Video: \(https://newport.eecs.uci.edu/anandkumar/pubs/SMvsDQN.flv\))
6 Conclusion
We introduced a novel RL algorithm for POMDPs which relies on a spectral method to consistently identify the parameters of the POMDP and an optimistic approach for the solution of the exploration–exploitation problem. For the resulting algorithm we derive confidence intervals on the parameters and a minimax optimal bound for the regret.
This work opens several interesting directions for future development. The POMDP is a special case of the predictive state representation (PSR) model (Littman et al., 2001), which allows representing more sophisticated dynamical systems. Given the spectral method developed in this paper, a natural extension is to apply it to the more general PSR model and integrate it with an exploration–exploitation algorithm to achieve bounded regret. Since POMDPs are more suitable than MDPs for many real-world applications, further experimental analysis is also an interesting direction.
References

 Anandkumar et al. (2014) Anandkumar, A., Ge, R., Hsu, D., Kakade, S. M., and Telgarsky, M. (2014). Tensor decompositions for learning latent variable models. The Journal of Machine Learning Research, 15(1):2773–2832.
 Anandkumar et al. (2012) Anandkumar, A., Hsu, D., and Kakade, S. M. (2012). A method of moments for mixture models and hidden Markov models. arXiv preprint arXiv:1203.0683.
 Auer et al. (2009) Auer, P., Jaksch, T., and Ortner, R. (2009). Near-optimal regret bounds for reinforcement learning. In Advances in Neural Information Processing Systems, pages 89–96.
 Azizzadenesheli et al. (2016a) Azizzadenesheli, K., Lazaric, A., and Anandkumar, A. (2016a). Open problem: Approximate planning of pomdps in the class of memoryless policies. arXiv preprint arXiv:1608.04996.
 Azizzadenesheli et al. (2016b) Azizzadenesheli, K., Lazaric, A., and Anandkumar, A. (2016b). Reinforcement learning of pomdps using spectral methods. arXiv preprint arXiv:1602.07764.
 Boots et al. (2011) Boots, B., Siddiqi, S. M., and Gordon, G. J. (2011). Closing the learningplanning loop with predictive state representations. The International Journal of Robotics Research, 30(7):954–966.
 Brafman and Tennenholtz (2003) Brafman, R. I. and Tennenholtz, M. (2003). R-max: a general polynomial time algorithm for near-optimal reinforcement learning. The Journal of Machine Learning Research, 3:213–231.
 Gheshlaghi Azar et al. (2013) Gheshlaghi Azar, M., Lazaric, A., and Brunskill, E. (2013). Regret bounds for reinforcement learning with policy advice. In Proceedings of the European Conference on Machine Learning (ECML'13).
 Hamilton et al. (2014) Hamilton, W., Fard, M. M., and Pineau, J. (2014). Efficient learning and planning with compressed predictive states. The Journal of Machine Learning Research, 15(1):3395–3439.
 Jaksch et al. (2010) Jaksch, T., Ortner, R., and Auer, P. (2010). Nearoptimal regret bounds for reinforcement learning. J. Mach. Learn. Res., 11:1563–1600.
 Kearns and Singh (2002) Kearns, M. and Singh, S. (2002). Near-optimal reinforcement learning in polynomial time. Machine Learning, 49(2-3):209–232.
 Krishnamurthy et al. (2016) Krishnamurthy, A., Agarwal, A., and Langford, J. (2016). Contextual-MDPs for PAC reinforcement learning with rich observations. arXiv preprint arXiv:1602.02722v1.
 LaValle (2006) LaValle, S. M. (2006). Planning algorithms. Cambridge university press.
 Littman et al. (2001) Littman, M. L., Sutton, R. S., and Singh, S. (2001). Predictive representations of state. In In Advances In Neural Information Processing Systems 14, pages 1555–1561. MIT Press.
 Madani (1998) Madani, O. (1998). On the computability of infinite-horizon partially observable Markov decision processes. In AAAI-98 Fall Symposium on Planning with POMDPs, Orlando, FL.
 Mnih et al. (2013) Mnih, V., Kavukcuoglu, K., Silver, D., Graves, A., Antonoglou, I., Wierstra, D., and Riedmiller, M. (2013). Playing Atari with deep reinforcement learning. arXiv preprint arXiv:1312.5602.

 Ng and Jordan (2000) Ng, A. Y. and Jordan, M. (2000). PEGASUS: A policy search method for large MDPs and POMDPs. In Proceedings of the Sixteenth Conference on Uncertainty in Artificial Intelligence, UAI'00, pages 406–415, San Francisco, CA, USA. Morgan Kaufmann Publishers Inc.
 Papadimitriou and Tsitsiklis (1987) Papadimitriou, C. and Tsitsiklis, J. N. (1987). The complexity of Markov decision processes. Math. Oper. Res., 12(3):441–450.
 Perkins (2002) Perkins, T. J. (2002). Reinforcement learning for POMDPs based on action values and stochastic optimization. In Proceedings of the Eighteenth National Conference on Artificial Intelligence and Fourteenth Conference on Innovative Applications of Artificial Intelligence (AAAI/IAAI 2002), pages 199–204. AAAI Press.
 Poupart and Vlassis (2008) Poupart, P. and Vlassis, N. (2008). Modelbased bayesian reinforcement learning in partially observable domains. In International Symposium on Artificial Intelligence and Mathematics (ISAIM).
 Ross et al. (2007) Ross, S., Chaibdraa, B., and Pineau, J. (2007). Bayesadaptive pomdps. In Advances in neural information processing systems, pages 1225–1232.
 Sondik (1971) Sondik, E. J. (1971). The optimal control of partially observable Markov processes. PhD thesis, Stanford University.
 Song et al. (2013) Song, L., Anandkumar, A., Dai, B., and Xie, B. (2013). Nonparametric estimation of multiview latent variable models. arXiv preprint arXiv:1311.3287.
 Sutton and Barto (1998) Sutton, R. S. and Barto, A. G. (1998). Introduction to reinforcement learning. MIT Press.
 Wang et al. (2015) Wang, Y., Tung, H.Y., Smola, A. J., and Anandkumar, A. (2015). Fast and guaranteed tensor decomposition via sketching. In Advances in Neural Information Processing Systems, pages 991–999.
 Watkins and Dayan (1992) Watkins, C. J. and Dayan, P. (1992). Q-learning. Machine Learning, 8(3-4):279–292.