In most reinforcement learning algorithms, the state and action spaces are assumed to be finite. However, many real-world applications have continuous state or action spaces. Though the case of continuous state spaces has been considered before, e.g., by Ortner and Ryabko  and Lakshmanan et al. , there is no tractable algorithm whose regret has been analyzed. In this work we consider a simpler finite-horizon setting and give a tractable algorithm with a near-optimal regret bound.
Various assumptions on the reward and transition functions have been considered before, such as deterministic transitions in  or transition functions that are linear in state and action , ,  and . Kakade et al.  considered the more general setting of PAC learning in RL with metric state spaces. Another result is by Osband and Van Roy , who derived bounds on expected regret when the reward and transition functions belong to a class of functions characterized by parameters such as the eluder dimension and the Kolmogorov dimension. In this paper we make the most general assumption, namely that the reward function and the transition function are Hölder continuous, as in the papers  and . We derive our bound on the discretization error without any further assumptions. By assuming unbiased sampling, we also derive an upper bound on the regret which is
when both transition probabilities and reward function are Lipschitz continuous.
In recent work, Azar et al.  proposed a new algorithm, Upper Confidence Bound Value Iteration (UCBVI). This is different from the UCRL algorithm of Jaksch et al. , upon which the algorithms of Ortner and Ryabko  were based. For episodic problems, the UCBVI algorithm is known to have better regret bounds. We extend the UCBVI algorithm to continuous state space problems using the state-space aggregation used in . It should also be noted that neither of the algorithms in  and  is tractable. Though we consider an easier episodic setting, our algorithm is tractable. We show that the discretization error (from discretizing the state space into intervals) is bounded above by  under the assumption that rewards and transitions are Hölder continuous.
For the infinite-horizon setting, in the one-dimensional case with Lipschitz rewards and transition functions, the bound by Ortner and Ryabko is of order  in . If the transition functions are in addition smooth, a regret bound of  is shown in .
The best known lower bound for Lipschitz rewards and general transition functions is by Ortner and Ryabko , which is of order . We improve upon this by giving a lower bound of , matching the upper bound for algorithms that discretize the state space. We have also implemented the algorithm and give experimental results in one- and two-dimensional settings. The empirical performance matches the theoretical results.
II Terminology and Problem Definition
We define a Markov Decision Process (MDP) by a state space , an action space , a reward function , and a transition probability . We consider a continuous state space and finite-action MDP. For simplicity we derive results for a one-dimensional state space; they can be easily generalized to higher-dimensional state spaces. The random rewards given state  and action  are bounded in  with mean . The probability of going to state  given state  and action  is given by the transition probability . We make the following assumptions, as in , . These guarantee that rewards and transitions are close for nearby states.
There are such that for any two states and all actions a,
There are such that for any two states and all actions a,
For simplicity we assume  are the same in both assumptions.
We consider the finite-horizon setting where the agent interacts with the environment in  steps per episode. We denote by  the set . The policy during an episode is expressed as a mapping . Let  denote the state at step  of episode , and let  denote the policy for episode . The value function of each state  in episode  from step , following a policy , is defined by
The optimal value function is defined as  for all  and . The performance of the algorithm is measured by the regret incurred over all the episodes, given by
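The displayed formulas here are elided in the extracted text; a standard finite-horizon reconstruction (the notation below, with horizon H, K episodes, and initial state s_{k,1} in episode k, is assumed by us and is not necessarily the paper's) would read:

```latex
V^{\pi}_{h}(s) = \mathbb{E}\left[ \sum_{h'=h}^{H} r\big(s_{h'}, \pi(s_{h'}, h')\big) \,\middle|\, s_h = s \right],
\qquad
\mathrm{Regret}(K) = \sum_{k=1}^{K} \left( V^{*}_{1}(s_{k,1}) - V^{\pi_k}_{1}(s_{k,1}) \right).
```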
We discretize the continuous state space into intervals of length , as in . Let  denote the interval containing the state . We define aggregate rewards and aggregate transition probabilities with respect to  as
Here  can be interpreted as the mean reward in the interval . The algorithm treats each interval as a single state, and the aggregate policy is . The aggregated value function following this policy is . At any state  and step , the mapping between the policies is given by
For the discretized MDP, the transitions are between intervals, so we define  as the average of  in
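The aggregation above can be sketched in code. This is an illustrative implementation under our own assumptions (a state space [0, 1], equal-length intervals, and empirical averages as the aggregate estimates); the names and shapes are ours, not the paper's notation.

```python
import numpy as np

class Discretization:
    """Discretize [0, 1] into n intervals and maintain empirical
    aggregate rewards and interval-to-interval transition counts."""

    def __init__(self, n_intervals, n_actions):
        self.n = n_intervals
        self.counts = np.zeros((n_intervals, n_actions))                      # visit counts N(I, a)
        self.reward_sums = np.zeros((n_intervals, n_actions))                 # summed observed rewards
        self.trans_counts = np.zeros((n_intervals, n_actions, n_intervals))   # counts N(I, a, J)

    def interval_of(self, s):
        """Map a continuous state s in [0, 1] to its interval index."""
        return min(int(s * self.n), self.n - 1)

    def update(self, s, a, r, s_next):
        """Record one observed transition (s, a, r, s')."""
        i, j = self.interval_of(s), self.interval_of(s_next)
        self.counts[i, a] += 1
        self.reward_sums[i, a] += r
        self.trans_counts[i, a, j] += 1

    def estimates(self):
        """Empirical aggregate rewards and transition probabilities."""
        n_visits = np.maximum(self.counts, 1)
        r_hat = self.reward_sums / n_visits
        p_hat = self.trans_counts / n_visits[:, :, None]
        return r_hat, p_hat
```

The aggregate estimates are simply per-interval empirical means, so each interval behaves as a single state of a finite MDP.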
In UCRL-based algorithms , confidence sets are built around the rewards and transition probabilities. Here, instead, we build the confidence set around the optimal value function of the discretized MDP, as in Azar et al. . The algorithm proceeds similarly to UCBVI in . The only difference is that here we use aggregated rewards, transition probabilities, and value functions. The bonus (Algorithm 2), which is used in calculating the Q-values, is built from the empirical variance of the estimated next-state values. This relies on the Bernstein-Freedman concentration inequality for building the confidence sets.
In UCRL-based algorithms , we need to find the optimistic MDP and an optimal policy for that MDP. This step is not tractable when the state space is continuous, as we need knowledge of the bias span (Section 3 of ). Though this step is not needed for finite-horizon problems, to the best of our knowledge the UCCRL algorithm is not computationally tractable. We note that in our algorithm, which is based on UCBVI, this step is not needed. Briefly, the algorithm consists of the following parts.
An initialization part, which includes discretizing the continuous MDP based on the input parameters. This step is executed only once.
The next part consists of an iterative loop of three steps (the number of iterations equals the number of episodes ):
Estimating the transition probabilities based on the history up to that iteration.
Finding the Q-values using a modified Bellman operator (which includes the bonus) according to the current transition-probability estimates. This part can be viewed as a simple dynamic programming (DP) problem for finding the Q-values.
Executing the current (discrete) policy, i.e., acting greedily with respect to the Q-values found, and recording the feedback given by the environment.
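The loop above can be sketched as follows. This is a hedged sketch, not the paper's exact Algorithm 2: the environment interface `env_step`, the simple Hoeffding-style bonus (a stand-in for the variance-based Bernstein bonus described below), and all names are our assumptions.

```python
import numpy as np

def run_episode(env_step, s0, H, r_hat, p_hat, counts, delta=0.1):
    """One episode: backward-induction DP with an exploration bonus,
    then greedy execution of the resulting policy.

    r_hat: (n, A) aggregate reward estimates; p_hat: (n, A, n) aggregate
    transition estimates; counts: (n, A) visit counts per (interval, action)."""
    n, A = r_hat.shape
    V = np.zeros((H + 1, n))        # V[H] = 0 terminal values
    Q = np.zeros((H, n, A))
    for h in range(H - 1, -1, -1):
        # Hoeffding-style bonus as a stand-in for the variance-based bonus.
        bonus = H * np.sqrt(np.log(1.0 / delta) / np.maximum(counts, 1))
        Q[h] = r_hat + bonus + p_hat @ V[h + 1]   # modified Bellman backup
        V[h] = np.minimum(Q[h].max(axis=1), H)    # clip: values cannot exceed H
    # Execute the greedy (optimistic) policy and record the feedback.
    trajectory, s = [], s0
    for h in range(H):
        a = int(np.argmax(Q[h, s]))
        r, s_next = env_step(s, a)
        trajectory.append((s, a, r, s_next))
        s = s_next
    return trajectory
```

After each episode, the recorded trajectory would be fed back into the aggregate estimators before the next DP pass.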
This algorithm follows the heuristic principle known as optimism in the face of uncertainty. Algorithms in this paradigm employ optimism to guide exploration. In simple words, the algorithm assigns a higher bonus to the action (given a state/interval) about which it is most uncertain. This encourages the exploration of that action the next time the same state is visited. As the number of times an action is chosen increases, the algorithm becomes more and more certain about it. Given a (state/interval, action) pair, the bonus in the algorithm depends on the variance of the value function over the next possible states. The larger the variance, the larger the uncertainty of taking that action.
Let us understand this using a simple example. Consider a discretized MDP with 4 possible actions. Given a particular interval , let the Q-values and the corresponding bonuses found by the algorithm be equal to those given in Figure 1. Without the bonus, the best action would be , because  has the highest Q-value in . But when the bonus is considered, action  is the most likely to be chosen. According to the bonus, the decreasing order of uncertainty over the actions is . The combined effect of the Q-value and the bonus leads to the exploration of action . Due to this exploration of action , the uncertainty about choosing  reduces.
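The effect described above can be demonstrated in a few lines. The numbers here are made up for illustration and are not those of Figure 1.

```python
import numpy as np

# Toy illustration: the greedy action changes once the exploration bonus,
# which grows with uncertainty, is added to the Q-values.
q_values = np.array([0.9, 0.7, 0.6, 0.5])   # action 0 looks best by Q alone
bonuses  = np.array([0.05, 0.4, 0.1, 0.2])  # action 1 is the most uncertain

greedy = int(np.argmax(q_values))                 # exploitation only: action 0
optimistic = int(np.argmax(q_values + bonuses))   # with bonus: action 1
```

Here exploitation alone would repeatedly pick action 0, while the optimistic rule explores action 1 and thereby reduces its uncertainty.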
The regret analysis of UCBVI-CRL is straightforward. The regret of the continuous MDP up to K episodes is defined in equation (4). Now add and subtract the terms  to this to get,
Here  is the regret of the discretized MDP and  is the error due to discretizing the state space.
Using the discretization technique mentioned above, is bounded above by .
Let us split the terms in into two parts as
Using the property
For any action , is bounded by
The first inequality is obtained by replacing the integral with a lower Riemann sum, dividing  into n sub-intervals, each of length , with  being the infimum in the sub-interval. The second inequality follows since the function is continuous w.r.t.  and the infimum is attained as a minimum. The last inequality follows from Assumption (1) and the length of .
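Schematically, the chain of inequalities has the following shape, writing f(s) = r(s, a) for a fixed action, I for an interval of length 1/n, and using Hölder continuity with constant L and exponent α (these symbols are our assumptions, standing in for the elided notation):

```latex
\left| \frac{1}{|I|} \int_{I} f(s)\, ds \;-\; f(x) \right|
\;\le\; \sup_{s, s' \in I} \left| f(s) - f(s') \right|
\;\le\; L \, |I|^{\alpha}
\;=\; L \, n^{-\alpha}, \qquad x \in I .
```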
For any action , is bounded by
From (9), replacing the integral with a lower Riemann sum over n sub-intervals and using the fact that a continuous function attains its infimum as a minimum on a closed interval.
Now we have,
Hence  can be bounded by bounding the difference of the rewards inside the expectation, which is similar to bounding . For any action  we have,
Now is bounded by
V Additional Result: Upper Bound on Regret
We derived a tighter upper bound on the regret for the continuous case (when using the proposed algorithm) when the transition estimates found by the algorithm are unbiased estimates of (9). The theorem below states this.
We set . It can be seen that when , setting , the first term dominates, while the third term dominates when . Thus the optimal regret of  is obtained by setting . This is better than the regret bound of order  for the same  in . In the Lipschitz case, i.e., when , we have regret of order .
The regret bound in  is also of order  for the infinite-horizon problem in the Lipschitz case. But there is an additional parameter  which depends on the smoothness of the transition function, and only in the asymptotic case when  is the regret of  attained.
VI Lower Bounds
We have the following theorem giving the lower bound.
For any algorithm using state-space discretization and any natural numbers  and , the expected regret after  timesteps for the MDP with state space , actions  defined above is
We note that any algorithm using state-space discretization also incurs regret by choosing a wrong action due to the discretization error. Hence we have
Regret in the discretized MDP
Regret due to the discretization error
Let the algorithm discretize the state space into  intervals. Let us assume that there are two sub-intervals in each interval and that the optimal action in each sub-interval is different. Assume for simplicity that all intervals are of equal size; otherwise, divide each interval into sub-intervals of half the size of the smallest interval and assume a constant reward over the remaining length of each interval.
The mean rewards for the actions are as shown in Figure 2, i.e., they are fixed for a state. It can be seen that it is not possible to determine the optimal action for both these sub-intervals simultaneously. The regret incurred due to discretization is of order , as the rewards are Lipschitz continuous. Since this regret is incurred on average for at least half of the states visited, the total regret is of order . Balancing this against the regret in the discretized MDP by taking , we get a regret of order . ∎
We note that algorithms like value iteration and non-stationary value iteration (Chapter 3 of ), which do not use state-space discretization, cannot be used here, as the transition probability function is not known.
It is straightforward to show a regret bound of  for the infinite-horizon setting, where  is the span of the optimal bias function.
We have performed our experiments on one-dimensional and two-dimensional state spaces. For the one-dimensional case we implemented the algorithm on a simple problem with  being the state space. In this problem, the reward functions for the actions are  and  respectively. So the optimal policy is, for a state in interval , the one which takes action  if  is odd and  if  is even. The reward was scaled to the range  to meet the requirements of . From the current state , after taking an action  such that , the next state is sampled uniformly from the state space .
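A minimal environment matching this description might look as follows. The exact reward functions are elided above, so the rewards r0(s) = s and r1(s) = 1 - s used here are hypothetical stand-ins of our own choosing; only the uniform next-state sampling follows the text.

```python
import random

def env_step(s, a, rng=random):
    """One transition of a toy 1-D environment on state space [0, 1]
    with two actions. Rewards are illustrative placeholders, NOT the
    paper's reward functions."""
    reward = s if a == 0 else 1.0 - s
    s_next = rng.uniform(0.0, 1.0)   # next state sampled uniformly from [0, 1]
    return reward, s_next
```

Because the next state is drawn uniformly regardless of the action, the transition kernel is trivially Lipschitz, so such an environment satisfies the paper's assumptions.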
We can see from Figure 6 that the empirical regret converges faster for a larger number of intervals, for the same horizon length. The convergence can be better understood by comparing the points where the empirical regret approaches . The very first point in every plot where the empirical regret is zero is indicated by a dashed vertical line showing the episode number. The results match the intuition that, for the same problem, having more states improves the performance.
By keeping  constant and varying , it can be seen from the plots that the regret approaches zero more slowly as  increases. It can also be seen that the regret for a larger horizon length at a particular episode was higher than the regret at the same episode for a smaller horizon length. Thus the regret increases with the horizon length, as expected.
Next we conducted experiments on a two-dimensional problem with  and a Lipschitz continuous reward function for each action. The reward functions were  and  for actions  and  respectively. The analysis of this case is similar to the one-dimensional setting. The algorithm shows a similar trend for different values of  for the same number of intervals (see Figure 5), i.e., the regret approaches zero more slowly for higher values of , as in the one-dimensional setting.
We considered the finite-horizon continuous reinforcement learning problem and gave an algorithm based on UCBVI for it. With the only assumption being that the reward function and transition probabilities are Lipschitz continuous, we showed that the upper bound on the discretization error is . We also showed a matching lower bound under the assumption that the algorithm discretizes the state space of the MDP. In the future we would like to show a similar bound without this assumption. Also, with the additional assumption that the sampling is unbiased, we proved that the regret is of order  when using our algorithm.
We have also given some experimental results to validate our propositions. In the future we would like to extend the algorithm to the infinite-horizon case. This seems to be difficult, as the UCBVI algorithm needs the horizon length as input. We also want to improve the dependence of the regret on the size of the action space , the Lipschitz constant , and the horizon length .
-  R. Ortner and D. Ryabko, “Online regret bounds for undiscounted continuous reinforcement learning,” in Advances in Neural Information Processing Systems (NIPS), vol. 25, pp. 1763–1772, 2012.
-  K. Lakshmanan, R. Ortner, and D. Ryabko, “Improved regret bounds for undiscounted continuous reinforcement learning,” in Proceedings of the 32nd International Conference on Machine Learning (ICML), vol. 37, pp. 524–532, 2015.
-  A. Bernstein and N. Shimkin, “Adaptive-resolution reinforcement learning with polynomial exploration in deterministic domains,” Machine Learning, vol. 81, no. 3, pp. 359–397, 2010.
-  A. L. Strehl and M. L. Littman, “Online linear regression and its application to model-based reinforcement learning,” in Advances in Neural Information Processing Systems (NIPS), vol. 20, pp. 1417–1424, 2008.
-  E. Brunskill, B. R. Leffler, L. Li, M. L. Littman, and N. Roy, “Provably efficient learning with typed parametric models,” Journal of Machine Learning Research, vol. 10, pp. 1955–1988, 2009.
-  Y. Abbasi-Yadkori and C. Szepesvári, “Regret bounds for the adaptive control of linear quadratic systems,” in Proceedings of the 24th Annual Conference on Learning Theory (COLT), JMLR Proceedings Track, vol. 19, pp. 1–26, 2011.
-  M. Ibrahmi, A. Javanmard, and B. V. Roy, “Efficient reinforcement learning for high dimensional linear quadratic systems,” in Advances in Neural Information Processing Systems (NIPS), vol. 25, pp. 2645–2653, 2012.
-  S. Kakade, M. J. Kearns, and J. Langford, “Exploration in metric state spaces,” in Proceedings of 20th International Conference on Machine Learning ICML, pp. 306–312, 2003.
-  I. Osband and B. V. Roy, “Model-based reinforcement learning and the eluder dimension,” preprint, 2014.
-  M. G. Azar, I. Osband, and R. Munos, “Minimax regret bounds for reinforcement learning,” arXiv preprint arXiv:1703.05449, 2017.
-  T. Jaksch, R. Ortner, and P. Auer, “Near-optimal regret bounds for reinforcement learning,” Journal of Machine Learning Research, vol. 11, pp. 1563–1600, 2010.
-  O. Hernández-Lerma and J. B. Lasserre, Discrete-time Markov control processes: basic optimality criteria, vol. 30. Springer Science & Business Media, 2012.
-  O. Hernández-Lerma, Adaptive Markov control processes, vol. 79 of Applied Mathematical Sciences. Springer, 1989.