In reinforcement learning (RL) an agent seeks to learn a high-reward policy for selecting actions in a stochastic world, without prior knowledge of the world's dynamics model or reward function. In this paper we consider the case when the agent is provided with an input set of potential policies, and its objective is to perform as close as possible to the (unknown) best policy in the set. This scenario could arise when the general domain involves a finite set of types of RL tasks (such as different user models), each with a known best policy, and the agent now faces one of the task types but does not know which one. Note that this situation can occur in discrete state and action spaces as well as in continuous ones: for example, a robot may be traversing one of a finite set of terrain types, but its sensors do not allow it to identify the terrain type prior to acting. Another example is when the agent is provided with a set of policies defined by domain experts, such as stock market trading strategies. Since the agent has no prior information about which policy might perform best in its current environment, this remains a challenging RL problem.
Prior research has considered the related case when an agent is provided with a fixed set of input (transition and reward) models, and the current domain is an (initially unknown) member of this set [5, 4, 2]. This actually provides the agent with more information than the scenario we consider (given a model we can extract a policy, but the reverse is not generally true), but more significantly, we find substantial theoretical and computational advantages from taking a model-free approach. Our work is also closely related to the idea of policy reuse, where an agent tries to leverage prior policies it found for past tasks to improve performance on a new task; however, despite encouraging empirical performance, this work does not provide any formal guarantees. Most similar to our work is Talvitie and Singh's AtEase algorithm, which also learns to select among an input set of policies; however, in addition to algorithmic differences, we provide a much more rigorous theoretical analysis that holds for a more general setting.
We contribute a reinforcement learning with policy advice (RLPA) algorithm. RLPA is a model-free algorithm that, given an input set of policies, takes an optimism-under-uncertainty approach of adaptively selecting the policy that may have the highest reward for the current task. We prove that the regret of our algorithm relative to the (unknown) best-in-the-set policy scales with the square root of the time horizon, linearly with the size of the provided policy set, and is independent of the size of the state and action space. The computational complexity of our algorithm is also independent of the number of states and actions. This suggests our approach may have significant benefits in large domains over alternative approaches that typically scale with the size of the state and action space, and our preliminary simulation experiments provide empirical support of this impact.
A Markov decision process (MDP) $M$ is defined as a tuple $\langle S, A, P, R \rangle$, where $S$ is the set of states, $A$ is the set of actions, $P$ is the transition kernel mapping each state-action pair to a distribution over states, and $R$ is the stochastic reward function mapping state-action pairs to a distribution over rewards bounded in the interval $[0, 1]$. (The extension to larger bounded regions is trivial and just introduces an additional multiplier in the resulting regret bounds.) A policy $\pi$ is a mapping from states to actions. Two states $s$ and $s'$ communicate with each other under policy $\pi$ if the probability of transitioning between $s$ and $s'$ under $\pi$ is greater than zero. A state $s$ is recurrent under policy $\pi$ if the probability of reentering $s$ under $\pi$ is 1. A recurrent class is a set of recurrent states that all communicate with each other and with no other states. Finally, a Markov process is unichain if its transition matrix consists of a single recurrent class with (possibly) some transient states [12, Chap. 8].
We define the performance of $\pi$ in a state $s$ as its expected average reward
$$\mu^\pi(s) = \lim_{T \to \infty} \frac{1}{T}\, \mathbb{E}\Big[\sum_{t=1}^{T} r_t\Big],$$
where $T$ is the number of time steps and the expectation is taken over the stochastic transitions and rewards. If $\pi$ induces a unichain Markov process on $M$, then $\mu^\pi(s)$ is constant over all the states $s$ (and we simply write $\mu^\pi$), and we can define the bias function $\lambda^\pi$ such that
$$\lambda^\pi(s) = \mathbb{E}\big[r(s, \pi(s))\big] - \mu^\pi + \sum_{s'} P(s' \mid s, \pi(s))\, \lambda^\pi(s').$$
Its corresponding span is $sp(\lambda^\pi) = \max_s \lambda^\pi(s) - \min_s \lambda^\pi(s)$. The bias $\lambda^\pi(s)$ can be seen as the total difference between the reward of state $s$ and the average reward.
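To make these definitions concrete, the following is a small numerical sketch that computes the average reward and the span of the bias for the Markov reward process induced by a fixed policy. The two-state chain and its rewards are hypothetical, chosen purely for illustration; the bias is pinned down by fixing $\lambda(s_0) = 0$, since it is only defined up to an additive constant.

```python
import numpy as np

# Hypothetical 2-state Markov reward process induced by a fixed policy:
# transition matrix P and expected per-state rewards r (illustrative values).
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])
r = np.array([0.0, 1.0])

# Stationary distribution: left eigenvector of P for eigenvalue 1.
evals, evecs = np.linalg.eig(P.T)
stat = np.real(evecs[:, np.argmax(np.real(evals))])
stat = stat / stat.sum()

# Average reward mu = sum_s stat(s) r(s); constant over states (unichain).
mu = float(stat @ r)

# Bias lambda solves (I - P) lam = r - mu, up to an additive constant;
# pin it down by replacing the first equation with lam[0] = 0.
A = np.eye(2) - P
A[0] = [1.0, 0.0]
b = r - mu
b[0] = 0.0
lam = np.linalg.solve(A, b)
span = lam.max() - lam.min()
print(mu, span)  # average reward and span of the bias
```

For this chain the stationary distribution is $(2/3, 1/3)$, so the average reward is $1/3$ and the span evaluates to $10/3$.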
In reinforcement learning an agent does not know the transition and/or reward model in advance. Its goal is typically to find a policy that maximizes its obtained reward. In this paper, we consider reinforcement learning in an MDP $M$ where the learning algorithm is provided with an input set of $m$ deterministic policies $\Pi = \{\pi_1, \ldots, \pi_m\}$. Such an input set of policies could arise in multiple situations, including: the policies may represent near-optimal policies for a set of MDPs which may be related to the current MDP; the policies may be the result of different approximation schemes (e.g., approximate policy iteration with different approximation spaces); or they may be provided by advisors. Our objective is to perform almost as well as the best policy in the input set $\Pi$ on the new MDP $M$ (with unknown transition and/or reward model).
Our results require the following mild assumption:
There exists a policy $\pi^+ \in \Pi$ which induces a unichain Markov process on the MDP $M$, such that its average reward $\mu^+ \ge \mu^\pi(s)$ for any state $s$ and any policy $\pi \in \Pi$. We also assume that $sp(\lambda^{\pi^+}) \le H$, where $H$ is a finite constant. (One can easily prove that such an upper bound always exists for any unichain Markov reward process; see [12, Chap. 8].)
This assumption trivially holds when the optimal policy $\pi^*$ is in the set $\Pi$. Also, in those cases where all the policies in $\Pi$ induce unichain Markov processes, the existence of $\pi^+$ is guaranteed. Note that Assumption 1 is in general weaker than assuming the MDP $M$ is ergodic or unichain, which would require that the induced Markov chains under all policies be recurrent or unichain, respectively: we only require that the best policy in the input set induce a unichain Markov process.
A popular measure of the performance of a reinforcement learning algorithm over $T$ steps is its regret relative to executing the optimal policy of $M$. We evaluate the regret relative to the best policy in the input set $\Pi$,
$$\Delta(s, T) = T\mu^+ - \sum_{t=1}^{T} r_t,$$
where $r_t$ is the reward received at step $t$ when starting from state $s$, and $\mu^+ = \mu^{\pi^+}$ is the average reward of the best policy in $\Pi$. We note that this definition of regret differs from the standard definition by an (approximation) error $T(\mu^* - \mu^+)$ due to the possible sub-optimality of the policies in $\Pi$ relative to the optimal policy for MDP $M$. Further discussion of this definition is provided in Sec. 8.
In this section we introduce the Reinforcement Learning with Policy Advice (RLPA) algorithm (Alg. 1). Intuitively, the algorithm seeks to identify and use the policy in the input set $\Pi$ that yields the highest average reward on the current MDP $M$. As the average reward $\mu^\pi$ of each $\pi \in \Pi$ on $M$ is initially unknown, the algorithm proceeds by estimating these quantities by executing the different $\pi$ on the current MDP. More concretely, RLPA executes a series of trials, and within each trial a series of episodes. Within each trial the algorithm selects the policies in $\Pi$ with the objective of effectively balancing the exploration of all the policies in $\Pi$ against the exploitation of the most promising ones. Our procedure for doing this falls within the popular class of “optimism in the face of uncertainty” methods. At the start of each episode, we define an upper bound on the possible average reward of each policy (Line 8): this bound is computed as a combination of the average reward $\hat\mu^\pi$ observed so far for this policy, the number of time steps $n^\pi$ this policy has been executed, and $\hat H$, which represents a guess of the span of the best policy, $sp(\lambda^{\pi^+})$. We then select the policy with the maximum upper bound (Line 9) to run for this episode. Unlike in multi-armed bandit settings, where a selected arm is pulled for only one step, here the selected MDP policy is run for up to $n^\pi$ steps, i.e., until its total number of execution steps is at most doubled.
If $\hat H \ge sp(\lambda^{\pi^+})$, then the confidence bounds computed (Line 8) are valid confidence intervals for the true best policy; however, they may fail to hold for another policy $\pi$ whose span exceeds $\hat H$. Therefore, we cut off execution of an episode when these confidence bounds fail to hold (the condition specified on Line 12), since the policy may not be an optimal one for the current MDP. (See Sec. 4.1 for further discussion on the necessity of the condition on Line 12.) In this case, we can eliminate the current policy from the set of policies considered in this trial (see Line 20). After an episode terminates, the parameters of the current policy (the number of steps $n^\pi$ and average reward $\hat\mu^\pi$) are updated, new upper bounds on the policies are computed, and the next episode proceeds. As the average reward estimates converge, the better policies are chosen more often.
Note that since we do not know $sp(\lambda^{\pi^+})$ in advance, we must estimate it online: otherwise, if $\hat H$ is not a valid upper bound for the span (see Assumption 1), a trial might eliminate the best policy $\pi^+$, thus incurring a significant regret. We address this by successively doubling the amount of time each trial is run, and defining a guess $\hat H$ that is an increasing function of the current trial length. See Sec. 4.1 for a more detailed discussion on the choice of $\hat H$. This procedure guarantees the algorithm will eventually find an upper bound on the span and perform trials with very small regret with high probability. Finally, RLPA is an anytime algorithm, since it does not need to know the time horizon $T$ in advance.
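For intuition, the trial/episode structure described above can be sketched as follows. This is a simplified, hypothetical rendering of the loop, not the precise Alg. 1: each policy's execution is abstracted as drawing stochastic reward samples, the names (`rlpa_sketch`, `policy_rewards`, `H_hat`) and the confidence-bound constants are placeholders, and the consistency/elimination test of Line 12 is omitted for brevity.

```python
import math, random

def rlpa_sketch(policy_rewards, T, seed=0):
    """Simplified RLPA-style loop. policy_rewards maps each policy index to
    a function returning one stochastic reward sample (abstracting one step
    of running that policy in the MDP)."""
    rng = random.Random(seed)
    m = len(policy_rewards)
    t = 0
    counts = [0] * m        # lifetime step counts per policy
    sums = [0.0] * m        # lifetime reward sums per policy
    trial_len = 4
    while t < T:
        active = list(range(m))              # policies considered this trial
        H_hat = math.log(trial_len + 1)      # growing span guess (placeholder)
        trial_end = min(t + trial_len, T)
        while t < trial_end and active:
            # optimistic policy: empirical mean plus an exploration bonus
            def ucb(i):
                n = max(counts[i], 1)
                return sums[i] / n + H_hat * math.sqrt(math.log(T) / n)
            i = max(active, key=ucb)
            # run an episode until this policy's step count doubles
            target = max(counts[i], 1) * 2
            while counts[i] < target and t < trial_end:
                sums[i] += policy_rewards[i](rng)
                counts[i] += 1
                t += 1
        trial_len *= 2                       # trials double in length
    return counts, [sums[i] / max(counts[i], 1) for i in range(m)]
```

Running the sketch with one clearly better policy shows the intended behavior: the high-reward policy accumulates the vast majority of the steps, and its empirical average converges to its true average reward.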
4 Regret Analysis
In this section we derive a regret analysis of RLPA and we compare its performance to existing RL regret minimization algorithms. We first derive preliminary results used in the proofs of the two main theorems.
We begin by proving a general high-probability bound on the difference between the average reward $\mu^\pi$ of a policy $\pi$ and its empirical estimate $\hat\mu^\pi$ (throughout this discussion we mean the average reward of a policy on a new MDP $M$). Let $K^\pi$ be the number of episodes $\pi$ has been run, each of them of length $v_k$ ($k = 1, \ldots, K^\pi$). The empirical average is defined as
$$\hat\mu^\pi = \frac{1}{n^\pi} \sum_{k=1}^{K^\pi} \sum_{t=1}^{v_k} r_{k,t},$$
where $r_{k,t}$ is a random sample of the reward observed by taking the action suggested by $\pi$, and $n^\pi = \sum_k v_k$ is the total count of samples. Notice that in each episode $k$, the first state does not necessarily correspond to the next state of the last step of the previous episode.
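In code, this episodic empirical average is simply the total reward divided by the total sample count across all of a policy's episodes. A minimal sketch (the per-episode reward lists are assumed inputs; the function name is hypothetical):

```python
def empirical_average(episodes):
    """episodes: list of per-episode reward lists gathered while running one
    policy; returns the overall empirical average over all collected samples."""
    total = sum(sum(ep) for ep in episodes)   # sum over episodes and steps
    n = sum(len(ep) for ep in episodes)       # total sample count n^pi
    return total / n if n else 0.0
```

For example, two episodes with rewards `[1.0, 0.0]` and `[1.0, 1.0]` yield an empirical average of `0.75`.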
Let $P^\pi$ be the state-transition kernel under policy $\pi$ (i.e., for finite state and action spaces, $P^\pi$ is the $|S| \times |S|$ matrix whose $(s, s')$-th entry is $P(s' \mid s, \pi(s))$). Then we have
where the second line follows from Eq. 2. Let . Then we have
where we bounded the telescoping sequence
The sequences of random variables above, as well as their sums, are martingale difference sequences. Therefore we can apply Azuma’s inequality and obtain the bound
with probability , where in the first inequality we bounded the error terms , each of which is bounded in , and , bounded in . The other side of the inequality follows exactly the same steps. ∎
In the algorithm $sp(\lambda^{\pi^+})$ is not known, and at each trial the confidence bounds are built using the guess $\hat H$ on the span, which is an increasing function of the trial length. For the algorithm to perform well, it must not discard the best policy $\pi^+$ (Line 20). The following lemma guarantees that after a certain number of steps, with high probability the policy $\pi^+$ is not discarded in any trial.
For any trial started after step $\bar\tau$ (defined in the proof below), the probability of policy $\pi^+$ being excluded from $\Pi$ at any time is less than $\delta$.
Let $\bar k$ be the first trial such that the guess $\hat H$ is at least $sp(\lambda^{\pi^+})$. The corresponding step $\bar\tau$ is at most the sum of the lengths of all the trials before $\bar k$. After $\bar\tau$, the conditions in Lem. 1 (with Assumption 1) are satisfied for $\pi^+$. Therefore the confidence intervals hold with probability at least $1 - \delta$ and we have for $\pi^+$
where $n^{\pi^+}$ is the number of steps for which policy $\pi^+$ has been selected so far. Using a similar argument as in the proof of Lem. 1, we can derive
with probability at least $1 - \delta$. Bringing together these two conditions, and applying the union bound, we have that the condition on Line 12 holds with probability at least $1 - \delta$, and thus $\pi^+$ is never discarded. More precisely, Algo. 1 uses slightly larger confidence intervals, which guarantees that $\pi^+$ is discarded with probability at most $\delta$. ∎
We also need the upper-bound values computed on Line 9 to be valid upper confidence bounds on the average reward of the best policy $\pi^+$.
For any trial started after step $\bar\tau$, the upper-bound value of $\pi^+$ is a valid upper bound on $\mu^+$ with high probability.
Finally, we bound the total number of episodes a policy could be selected.
After $T$ steps of Algo. 1, let $K^\pi$ be the total number of episodes in which $\pi$ has been selected and $n^\pi$ the corresponding total number of samples; then $K^\pi = O(\log T)$
with high probability.
Let $n_k^\pi$ be the total number of samples at the beginning of episode $k$. In each trial of Algo. 1, an episode is terminated either when the number of samples is doubled, or when the consistency condition (last condition on Line 12) is violated and the policy is discarded, or when the trial itself terminates. We denote by $K'$ the total number of episodes truncated before the number of samples is doubled. Since an episode terminates before the number of samples is doubled only when either the trial terminates or the policy is discarded, in each trial this can happen at most once per policy; thus we can bound $K'$ by the number of trials. A trial can terminate either because its maximum length is reached or because all the policies are discarded (Line 6). From Lem. 2, we have that after $\bar\tau$, $\pi^+$ is never discarded w.h.p., so a trial only terminates when its maximum length is reached. Since trial lengths are successively doubled, it follows that the number of trials is bounded by $O(\log T)$. Combining the doubling episodes with the truncated ones implies the statement of the lemma. ∎
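A quick numerical check of the doubling argument, under the simplifying assumption that every episode runs until the policy's sample count exactly doubles (starting from one sample): the number of episodes then grows only logarithmically with the total number of samples, matching the flavor of the lemma.

```python
import math

def count_episodes(total_steps):
    """Count episodes when each episode runs until the policy's sample
    count doubles, starting from a single sample."""
    n, episodes = 1, 0
    while n < total_steps:
        n *= 2          # episode ends when the sample count doubles
        episodes += 1
    return episodes

# Episodes grow only logarithmically with the horizon.
for T in (10**3, 10**6, 10**9):
    assert count_episodes(T) <= math.ceil(math.log2(T))
```

For instance, a horizon of one million steps yields only 20 episodes for a given policy.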
Notice that if we plug this result into the statement of Lem. 1, the second term converges to zero faster than the first term (which decreases as $O(1/\sqrt{n^\pi})$), thus in principle it would be possible to use alternative episode stopping criteria. While this would not significantly affect the convergence rate of $\hat\mu^\pi$, it may worsen the global regret performance in Thm. 4.1.
4.1 Gap-Independent Bound
We are now ready to derive the first regret bound for RLPA.
We begin by bounding the regret from executing each policy $\pi$. We consider the $k$-th episode in which policy $\pi$ has been selected (i.e., $\pi$ is the optimistic policy) and we study its corresponding total regret $\Delta_k$. We denote by $n_k^\pi$ the number of steps of policy $\pi$ at the beginning of episode $k$ and by $v_k$ the number of steps in episode $k$. Also, at time step $T$, let $K^\pi$, $n^\pi$, and $\hat\mu^\pi$ denote the latest values of the total number of episodes, the total number of samples, and the empirical average reward for each policy $\pi$. Let $E$ be the event under which $\pi^+$ is never removed from the set of policies $\Pi$, and the upper bound of the optimistic policy is always as large as the true average reward $\mu^+$ of the best policy. On the event $E$, $\Delta_k$ can be bounded as
where in (1) we rely on the fact that $\pi$ is only executed when it is the optimistic policy, and its upper bound is optimistic with respect to $\mu^+$ according to Lem. 3; (2) immediately follows from the stopping condition at Line 12 and the definition of $\Delta_k$; and (3) follows from the condition on doubling the samples (Line 12), which guarantees $v_k \le n_k^\pi$.
We now bound the total regret by summing over all the policies.
where in (1) we use the Cauchy-Schwarz inequality, and (2) follows from $n^\pi \le T$, Lem. 4, and the logarithmic bound on $K^\pi$.
Since $T$ is an unknown time horizon, we need to provide a bound which holds with high probability uniformly over all possible values of $T$. Thus we need to deal with the case when $E$ does not hold. Based on Lem. 1, we can prove that the total regret of the episodes in which the best policy is discarded is bounded with probability at least $1 - \delta$. Due to space limitations, we omit the details, but we can then prove the final result by combining the regret in both cases (when $E$ holds or does not hold) and taking a union bound over all possible values of $T$. ∎
A significant advantage of RLPA over generic RL algorithms (such as UCRL2) is that the regret of RLPA is independent of the size of the state and action spaces; in contrast, the regret of UCRL2 scales as $DS\sqrt{AT}$, with $S$ the number of states, $A$ the number of actions, and $D$ the diameter of the MDP. This advantage is obtained by exploiting the prior information that $\Pi$ contains good policies, which allows the algorithm to focus on testing their performance to identify the best, instead of building an estimate of the current MDP over the whole state-action space as in UCRL2. It is also informative to compare this result to other methods using some form of prior knowledge. In one line of prior work the objective is to learn the optimal policy along with a state representation which satisfies the Markov property: the algorithm receives as input a set of possible state-representation models and, under the assumption that one of them is Markovian, is shown to have sub-linear regret. Nonetheless, that algorithm inherits the regret of UCRL itself and still displays a dependency on states and actions. Elsewhere, the Parameter Elimination (PEL) algorithm is provided with a set of MDPs. The algorithm is analyzed in the PAC-MDP framework and, under the assumption that the true model actually belongs to the set of MDPs, is shown to have a performance which does not depend on the size of the state-action space and depends only on the number of MDPs $m$. (Notice that PAC bounds are always squared w.r.t. regret bounds, thus the dependency on $m$ must be adjusted accordingly when compared to a regret bound.) In our setting, although no model is provided and no assumption on the optimality of $\pi^+$ is made, RLPA achieves the same dependency on $m$.
The span of a policy is known to be a critical parameter determining how well and how fast the average reward of a policy can be estimated from samples. In Thm. 4.1 we show that only the span of the best policy affects the performance of RLPA, even when other policies have much larger spans. Although this result may seem surprising (the algorithm estimates the average reward for all the policies), it follows from the use of the third condition on Line 12, where an episode is terminated, and a policy discarded, whenever the empirical estimates are not consistent with the guessed confidence interval. Consider the case when $\hat H \ge sp(\lambda^{\pi^+})$ but $\hat H < sp(\lambda^\pi)$ for a policy $\pi$ which is selected as the optimistic policy. Since the confidence intervals built for $\pi$ are not correct (see Lem. 1), $\pi$ could be selected for a long while before a different policy is chosen. On the other hand, the condition on the consistency of the observed rewards discards $\pi$ (with high probability), thus increasing the chances of the best policy $\pi^+$ (whose confidence intervals are correct) being selected. We also note that the span guess $\hat H$ appears as a constant in the regret, and this suggests choosing it to grow slowly with the trial length, so that the leading constant of the bound remains close to $sp(\lambda^{\pi^+})$ (up to constants and logarithmic terms).
4.2 Gap-Dependent Bound
Similar to prior work, we can derive an alternative bound for RLPA where the dependency on $T$ becomes logarithmic and the gap between the average rewards of the best and second-best policies appears. We first need to introduce two assumptions.
Assumption 2 (Average Reward)
Each policy $\pi \in \Pi$ induces on the MDP $M$ a single recurrent class with (possibly) some additional transient states, i.e., $\mu^\pi(s) = \mu^\pi$ for all states $s$. This implies that the average reward of every policy in $\Pi$ is state-independent.
Assumption 3 (Minimum Gap)
Define the gap between the average reward of the best policy and the average reward of any other policy as $\Gamma(s, \pi) = \mu^+ - \mu^\pi(s)$ for all states $s$ and policies $\pi \ne \pi^+$. We then assume that $\Gamma(s, \pi)$ is uniformly bounded from below by a positive constant $\Gamma_{\min} > 0$.
Theorem 4.2 (Gap Dependent Bounds)
(sketch) Unlike the proof of Thm. 4.1, here we need more refined control on the number of steps of each policy as a function of the gaps $\Gamma(s, \pi)$. We first notice that Assumption 2 allows us to define $\Gamma(\pi) = \mu^+ - \mu^\pi$ for any state and any policy. We consider the high-probability event (see Lem. 2) in which no trial run after $\bar\tau$ steps discards policy $\pi^+$. We focus on the episode at time step $t$ in which an optimistic policy $\pi$ is selected for the $k$-th time, and we denote by $n_k^\pi$ the number of steps of $\pi$ before episode $k$ and by $v_k$ the number of steps during episode $k$. The cumulative reward during episode $k$ is obtained as the sum of the previous cumulative reward and the rewards received since the beginning of the episode. Let $E$ be the event under which $\pi^+$ is never removed from the set of policies $\Pi$, and the upper bound of the optimistic policy is always as large as the true average reward $\mu^+$ of the best policy. On event $E$ we have
with probability $1 - \delta$. Inequality (1) is enforced by the episode stopping condition on Line 12 and the definition of the bound, (2) is guaranteed by Lem. 3, (3) relies on the definition of the gap and Assumption 3, while (4) is a direct application of Lem. 1. Rearranging the terms, and applying Lem. 4, we obtain
By solving the inequality w.r.t. $n^\pi$ we obtain,
with high probability, that on the event $E$, after $T$ steps, RLPA acted according to any fixed suboptimal policy $\pi$ for no more than the resulting number of steps. The rest of the proof follows similar steps as in Thm. 4.1 to bound the regret of all the suboptimal policies in high probability. The expected regret of $\pi^+$ is then bounded using standard arguments to move from high-probability to expectation bounds. ∎
Note that although the bound in Thm. 4.1 is stated in high probability, it is easy to turn it into a bound in expectation with almost identical dependencies on the main characteristics of the problem, and to compare it to the bound of Thm. 4.2. The major difference is that the bound in Eq. 5 shows a dependency on the inverse gaps instead of $\sqrt{T}$. This suggests that whenever there is a big margin between the best policy and the other policies in $\Pi$, the algorithm is able to accordingly reduce the number of times suboptimal policies are selected, thus achieving a better dependency on $T$. On the other hand, the bound also shows that whenever the policies in $\Pi$ are very similar, it might take the algorithm a long time to find the best policy, although the regret can never be larger than the bound shown in Thm. 4.1.
We also note that while Assumption 3 is needed to allow the algorithm to “discard” suboptimal policies after only a logarithmic number of steps, Assumption 2 is more technical and can be relaxed. It is possible to instead only require that each policy have a bounded span, $sp(\lambda^\pi) \le H$, which is a milder condition than requiring a constant average reward over states (i.e., $\mu^\pi(s) = \mu^\pi$).
5 Computational Complexity
As shown in Algo. 1, RLPA runs over multiple trials and episodes in which policies are selected and run. The largest computational cost in RLPA is incurred at the start of each episode, when the upper-bound values are computed for all the policies currently active in $\Pi$ and the most optimistic one is selected: an $O(m)$ operation. By Lem. 4, the total number of episodes is only logarithmic in $T$ per policy, so the overall computational complexity of RLPA is of order $m^2 \log T$. Note there is no explicit dependence on the size of the state and action space. In contrast, UCRL2 has a similar number of episodes, but requires solving extended value iteration to compute the optimistic MDP policy. Extended value iteration requires $O(S^2 A)$ computation per iteration: if $\iota$ is the number of iterations required to complete extended value iteration, the resulting cost is $O(S^2 A \iota)$ per episode. Therefore UCRL2, like many generic RL approaches, suffers a computational complexity that scales quadratically with the number of states, in contrast to RLPA, which depends linearly on the number of input policies and is independent of the size of the state and action space.
In this section we provide some preliminary empirical evidence of the benefit of our proposed approach. We compare our approach with two other baselines. As mentioned previously, UCRL2 is a well-known algorithm for generic RL problems that enjoys strong theoretical guarantees in the form of high-probability regret bounds with the optimal $\sqrt{T}$ rate. Unlike our approach, UCRL2 does not make use of any policy advice, and its regret scales with the number of states and actions. To provide a fairer comparison, we also introduce a natural variant of UCRL2, Upper Confidence with Models (UCWM), which takes as input a set of MDP models assumed to contain the actual model $M$. Like UCRL2, UCWM computes confidence intervals over the task's model parameters, but then selects the optimistic policy among the optimal policies for the subset of models consistent with the confidence intervals. This may result in a significantly tighter upper bound on the optimal value function compared to UCRL2, and may also accelerate the learning process. If the set of possible models shrinks to one, then UCWM seamlessly transitions to following the optimal policy for the identified model. UCWM requires as input a set of MDP models, whereas our RLPA approach requires only input policies.
We consider a square grid world with four actions for every state: up, down, right and left. A good action succeeds with high probability and goes in one of the other directions with probability 0.05 (unless that would cause it to go into a wall), while a bad action stays in the same place with high probability and otherwise goes in one of the other directions. We construct four variants of this grid world, $M_1, \ldots, M_4$: in each model a different pair of the four actions is good, and all other actions in that MDP are bad actions. The reward is the same in all MDPs for all states except the four corners, whose rewards are: 0.7 (upper left), 0.8 (upper right), 0.9 (lower left) and 0.99 (lower right). UCWM receives as input the four MDP models and RLPA receives as input the optimal policies of the four models.
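As a rough illustration, the transition kernel of one grid-world variant can be assembled as below. This is a hypothetical reconstruction: the exact success probability of a good action, the stay probability of a bad action, and the good-action pair per model are assumptions for illustration, not necessarily the precise values used in the experiments.

```python
import itertools

def make_grid_mdp(side, good_actions, p_slip=0.05):
    """Hypothetical grid-world variant: states are (row, col) pairs; a good
    action moves as intended w.p. 1 - 3*p_slip and slips to each of the other
    directions w.p. p_slip; a bad action stays put instead of moving. Walls
    keep the agent in place. Returns {(state, action): {next_state: prob}}."""
    moves = {'up': (-1, 0), 'down': (1, 0), 'right': (0, 1), 'left': (0, -1)}
    def clip(r, c):
        return (min(max(r, 0), side - 1), min(max(c, 0), side - 1))
    P = {}
    for r, c in itertools.product(range(side), repeat=2):
        for a in moves:
            dist = {}
            if a in good_actions:
                outcomes = [(a, 1 - 3 * p_slip)] + \
                           [(b, p_slip) for b in moves if b != a]
            else:  # bad action: mostly stays in place (assumed behavior)
                outcomes = [(None, 1 - 3 * p_slip)] + \
                           [(b, p_slip) for b in moves if b != a]
            for direction, p in outcomes:
                if direction is None:
                    nxt = (r, c)
                else:
                    dr, dc = moves[direction]
                    nxt = clip(r + dr, c + dc)
                dist[nxt] = dist.get(nxt, 0.0) + p
            P[((r, c), a)] = dist
    return P

# Corner rewards as in the experiment; all other states share a base reward.
corner_reward = {(0, 0): 0.7, (0, 4): 0.8, (4, 0): 0.9, (4, 4): 0.99}
```

A quick sanity check is that every `(state, action)` entry defines a proper probability distribution (sums to one), regardless of walls.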
We evaluate the performance of each algorithm in terms of the per-step regret (see Eq. 3). Each run lasts a fixed number of steps and we average the performance over multiple runs. The agent is randomly placed at one of the states of the grid at the beginning of each run. The true MDP model is one of the four variants, but its identity is not known by the agent. Notice that in this case the optimal policy belongs to the input set, thus $\mu^+ = \mu^*$ and the regret compares to the optimal average reward. For RLPA we set the span guess as described in Sec. 4.1 (see that section for the rationale behind this choice). We construct grid worlds of various sizes and compare the resulting performance of the three algorithms.
Fig. 1 shows the per-step regret of the algorithms as a function of the number of states. As predicted by the theoretical bounds, the per-step regret of UCRL2 significantly increases as the number of states increases, whereas the average regret of RLPA is essentially independent of the state space size. (The RLPA regret bounds depend on the bias of the optimal policy, which may indirectly be a function of the structure and size of the domain.) Although UCWM has a lower regret than RLPA for a small number of states, it quickly loses its advantage as the number of states grows. UCRL2's per-step regret plateaus after a small number of states, since it is effectively reaching the maximum possible regret given the available time horizon.
To demonstrate the performance of each approach on a single task, Fig. 2(a) shows how the per-step regret changes with different time horizons for a fixed-size grid world. RLPA demonstrates a superior regret throughout the run, with a decrease that is faster than both UCRL2 and UCWM. The slight periodic increases in RLPA's regret occur when a new trial is started and all policies are again considered. We also note that the slow rate of decrease for all three algorithms is due to confidence intervals dimensioned according to the theoretical results, which are often over-conservative since they are designed to hold in worst-case scenarios. Finally, Fig. 2(b) shows the average running time of one trial of each algorithm as a function of the number of states. As expected, RLPA's running time is independent of the size of the state space, whereas the running time of the other algorithms increases with it.
Though the domain is simple, these empirical results support our earlier analysis, demonstrating that RLPA exhibits regret and computational performance essentially independent of the size of the domain state space. This is a significant advantage over UCRL2, as we might expect, because RLPA can efficiently leverage input policy advice. Interestingly, we also obtain a significant improvement over the more competitive baseline UCWM.
7 Related Work
The setting we consider relates to the multi-armed bandit literature, where an agent seeks to optimize its reward by uncovering the arm with the best expected reward. More specifically, our setting relates to restless and rested bandits, where each arm's reward distribution is generated by an (unknown) Markov chain that either transitions at every step, or only when the arm is pulled, respectively. Unlike either restless or rested bandits, in our case each “arm” is itself an MDP policy, under which different actions may be chosen. However, the most significant distinction may be that in our setting there is a shared underlying state that couples the rewards obtained across the policies (the selected action depends on both the policy/arm selected and the state), in contrast to rested and restless bandits, where the Markov chains of the arms evolve independently.
Prior research has demonstrated a significant improvement in learning in a discrete state and action RL task whose Markov decision process model parameters are constrained to lie in a finite set. In this case, the objective of maximizing the expected sum of rewards can be framed as planning in a finite-state partially observable Markov decision process (POMDP): if the parameter set is not too large, off-the-shelf POMDP planners can be used to yield significant performance improvements over state-of-the-art RL approaches. Other work on this setting has proved that the sample complexity of learning to act well scales independently of the size of the state and action space, and linearly with the size of the parameter set. These approaches focus on leveraging information about the model space in the context of Bayesian RL or PAC-style RL, in contrast to our model-free approach that focuses on regret.
There also exists a wealth of literature on learning with expert advice; the majority of this work lies in supervised learning. Prior work by Diuk et al. leverages a set of experts where each expert predicts a probabilistic concept (such as a state transition) to provide particularly efficient KWIK RL. In contrast, our approach leverages input policies rather than models. Probabilistic policy reuse also adaptively selects among a prior set of provided policies, but may additionally choose to create and follow a new policy. The authors present promising empirical results but provide no theoretical guarantees. We discuss this interesting direction further in the future-work section.
The most closely related work is by Talvitie and Singh, who also consider identifying the best policy from a set of input policies. Their approach is a special case of a more general framework for leveraging experts in sequential decision-making environments where the outcomes can depend on the full history of states and actions; however, this more general setting provides bounds in terms of an abstract quantity, whereas Talvitie and Singh provide bounds in terms of the mixing times of an MDP. There are several similarities between our algorithm and theirs, though in contrast we take an optimism-under-uncertainty approach, leveraging confidence bounds over the potential average reward of each policy in the current task. However, the bound provided in their paper is not a regret bound and no precise expression for it is stated, rendering a careful comparison of the theoretical bounds infeasible. In contrast, we provide a much more rigorous theoretical analysis, and do so for a more general setting (for example, our results do not require the MDP to be ergodic). Their algorithm also involves several parameters whose values must be correctly set for the bounds to hold, but precise expressions for these parameters were not provided, making an empirical comparison difficult.
8 Future Work and Conclusion
In defining RLPA we preferred a simple algorithm that allowed us to provide a rigorous theoretical analysis. Nonetheless, we expect the current version of the algorithm can be improved along multiple dimensions. The most immediate possibility is to perform off-policy learning across the policies: whenever reward information is received for a particular state and action, it could be used to update the average reward estimate of every policy that would have suggested the same action in that state. As has been shown in other settings, we expect this could improve the empirical performance of RLPA. However, the implications for the theoretical results are less clear. Indeed, updating the estimate of a policy whenever a "compatible" reward is observed would correspond to a significant increase in the number of episodes (see Eq. 4). As a result, the convergence rate of the average reward estimate might get worse, and could potentially degrade to the point where it no longer converges to the actual average reward (see Lem. 1). We intend to investigate this further in the future.
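The off-policy sharing idea above can be sketched as follows. This is an illustrative toy implementation, not the authors' algorithm: the class and function names (`PolicyStats`, `share_reward`) and the example policies are invented for exposition, and only the bookkeeping of average-reward estimates is shown.

```python
# Sketch (assumption, not the paper's algorithm): whenever a reward is
# observed for (state, action), every input policy that would have chosen
# the same action in that state updates its own average-reward estimate.

class PolicyStats:
    """Running average-reward estimate for one input policy."""
    def __init__(self, policy):
        self.policy = policy          # callable mapping state -> action
        self.total_reward = 0.0
        self.num_samples = 0

    @property
    def avg_reward(self):
        return self.total_reward / self.num_samples if self.num_samples else 0.0


def share_reward(stats_list, state, action, reward):
    """Update every policy whose action in `state` matches the observed one."""
    for stats in stats_list:
        if stats.policy(state) == action:
            stats.total_reward += reward
            stats.num_samples += 1


# Example with two hypothetical policies over integer states.
pi_even = lambda s: 'left' if s % 2 == 0 else 'right'
pi_all_left = lambda s: 'left'

stats = [PolicyStats(pi_even), PolicyStats(pi_all_left)]
share_reward(stats, state=4, action='left', reward=1.0)   # both policies agree at s=4
share_reward(stats, state=3, action='right', reward=0.5)  # only pi_even agrees

print([s.num_samples for s in stats])   # -> [2, 1]
print(round(stats[0].avg_reward, 2))    # -> 0.75
```

As the discussion above notes, the catch is that each shared sample inflates the effective number of episodes behind an estimate, which is precisely what complicates the convergence analysis.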
Another very interesting direction for future work is to extend RLPA to leverage policy advice when useful, while still maintaining generic RL guarantees if the input policy space is a poor fit to the current problem. More concretely, if the best policy in the input set is not the actual optimal policy of the MDP, RLPA currently suffers an additional linear regret relative to the optimal policy. Over a long horizon, if the best input policy is highly suboptimal, the total regret of RLPA may therefore be worse than that of UCRL, which always eventually learns the optimal policy. This raises the question of whether it is possible to design an algorithm that enjoys both the small regret-to-best of RLPA, when the policy set is small and contains a nearly optimal policy, and the regret-to-optimal guarantees of UCRL.
To conclude, we have presented RLPA, a new RL algorithm that leverages an input set of policies. We prove that the regret of RLPA relative to the best policy in the set scales sub-linearly with the time horizon, and that both this regret and the computational complexity of RLPA are independent of the size of the state and action spaces. This suggests that RLPA may offer significant advantages in large domains where some good prior policies are available.
-  Peter L. Bartlett and Ambuj Tewari. Regal: A regularization based algorithm for reinforcement learning in weakly communicating mdps. In UAI, pages 35–42, 2009.
-  E. Brunskill. Bayes-optimal reinforcement learning for discrete uncertainty domains. In AAMAS, 2012. Extended abstract.
-  Nicolò Cesa-Bianchi, Yoav Freund, David Haussler, David P. Helmbold, Robert E. Schapire, and Manfred K. Warmuth. How to use expert advice. Journal of the ACM, 44(3):427–485, 1997.
-  Carlos Diuk, Lihong Li, and Bethany R. Leffler. The adaptive k-meteorologists problem and its application to structure learning and feature selection in reinforcement learning. In ICML, 2009.
-  Kirill Dyagilev, Shie Mannor, and Nahum Shimkin. Efficient reinforcement learning in parameterized models: Discrete parameter case. In European Workshop on Reinforcement Learning, 2008.
-  Fernando Fernández and Manuela M. Veloso. Probabilistic policy reuse in a reinforcement learning agent. In AAMAS, pages 720–727, 2006.
-  T. Jaksch, R. Ortner, and P. Auer. Near-optimal regret bounds for reinforcement learning. Journal of Machine Learning Research, 11:1563–1600, 2010.
-  O. Maillard, P. Nguyen, R. Ortner, and D. Ryabko. Optimal regret bounds for selecting the state representation in reinforcement learning. In ICML, JMLR W&CP 28(1), pages 543–551, Atlanta, USA, 2013.
-  R. Ortner, D. Ryabko, P. Auer, and R. Munos. Regret bounds for restless markov bandits. In ALT, pages 214–228, 2012.
-  P. Poupart, N. Vlassis, J. Hoey, and K. Regan. An analytic solution to discrete bayesian reinforcement learning. In ICML, 2006.
-  D. Pucci de Farias and N. Megiddo. Exploration-exploitation tradeoffs for experts algorithms in reactive environments. In Advances in Neural Information Processing Systems 17, pages 409–416, 2004.
-  Martin L. Puterman. Markov Decision Processes: Discrete Stochastic Dynamic Programming. John Wiley & Sons, Inc., New York, NY, USA, 1st edition, 1994.
-  R. S. Sutton and A. G. Barto. Reinforcement Learning: An Introduction. MIT Press, Cambridge, Massachusetts, 1998.
-  E. Talvitie and S. Singh. An experts algorithm for transfer learning. In IJCAI, 2007.
-  C. Tekin and M. Liu. Online learning of rested and restless bandits. IEEE Transactions on Information Theory, 58(8):5588–5611, 2012.