An Optimal Online Method of Selecting Source Policies for Reinforcement Learning

by Siyuan Li, et al.

Transfer learning significantly accelerates the reinforcement learning process by exploiting relevant knowledge from previous experiences. The problem of optimally selecting source policies during the learning process is of great importance yet challenging. There has been little theoretical analysis of this problem. In this paper, we develop an optimal online method to select source policies for reinforcement learning. This method formulates online source policy selection as a multi-armed bandit problem and augments Q-learning with policy reuse. We provide theoretical guarantees of the optimal selection process and convergence to the optimal policy. In addition, we conduct experiments on a grid-based robot navigation domain to demonstrate its efficiency and robustness by comparing to the state-of-the-art transfer learning method.





Reinforcement learning (RL) [Sutton and Barto1998] is a widely used framework for learning an optimal control or decision-making policy. However, RL has a high sample complexity, since an RL agent gathers data through repeated interactions with its environment. Transferring past knowledge to a target task can greatly accelerate reinforcement learning. The first step of transfer in RL is to select useful knowledge for the reuse process. If an irrelevant source task is chosen, learning performance can be worse than learning from scratch, a phenomenon called negative transfer [Pan and Yang2010]. This problem is challenging because, in practical situations, the environment is largely unknown and an agent has no prior knowledge of which source task is useful. The agent therefore has to perform online source task selection.

Transfer learning has long been recognized as an important direction in RL [Taylor and Stone2009]. Some works leverage source task knowledge without automatically identifying related source tasks [Parisotto, Ba, and Salakhutdinov2015, Barreto et al.2016, Gupta et al.2017]; these methods may suffer from negative transfer. Others require humans to define the relationships between tasks and identify relevant source tasks [Torrey et al.2005, Taylor, Stone, and Liu2007, Ammar and Taylor2011, Ammar et al.2015]. In the more general setting, an agent has to perform source task selection by itself, but selecting an appropriate source task usually demands considerable extra knowledge about the domain. For example, [Perkins, Precup, and others1999, Lazaric, Restelli, and Bonarini2008, Nguyen, Silander, and Leong2012] need some prior experience in the target task, and [Ammar et al.2014, Song et al.2016] assume a well-estimated or known model, which is not always available in practice. Policy Reuse Q-learning (PRQL) [Fernández and Veloso2013] requires no prior knowledge of the target task or the MDP environment, but it may converge to a suboptimal policy. To address these limitations, we propose an optimal method that selects and reuses source policies online, automatically and without extra prior knowledge.

Our contributions are as follows. First, we formulate the source policy selection problem as a Multi-armed Bandit (MAB) problem in which different source policies are regarded as arms, enabling optimal online source policy selection. Second, we augment Q-learning with policy reuse while maintaining the same theoretical guarantee of convergence as traditional Q-learning. Finally, our experiments on a grid-based robot navigation domain verify that our approach (i) achieves the optimal source policy selection process; (ii) transfers useful knowledge to the target task and significantly speeds up the learning process; and (iii) achieves virtually the same empirical performance as traditional Q-learning in situations where no source knowledge is useful.

In the remainder of this paper, we start by reviewing related work. Then, background knowledge on RL and problem formulation is described. After that, we present our approach and theoretical results followed by empirical results comparing our approach with the state-of-the-art algorithm. Finally, we conclude and outline directions for future work.

Related Work

As transfer in RL has received much attention recently, we now discuss in greater detail the relationship between our algorithm and others. [Talvitie and Singh2007] treated previously learned policies as experts and mediated among them intelligently. Because this method depends on the mixing time of the experts, it is not as effective as standard algorithms in episodic domains; in contrast, our approach works well for episodic tasks. PRQL [Fernández and Veloso2013] selects source tasks from a library probabilistically using a softmax method. However, because it stops exploring soon after the greedy policy's reward exceeds the reuse reward, it does not guarantee convergence to the optimal policy. [Sinapov et al.2015] learned the transferability between source-target task pairs using meta-data. This method has low efficiency, because generating a large data set for every source-target pair is expensive. Our approach instead adapts an MAB method to select source policies and achieves the optimal selection process. [Rosman, Hawasly, and Ramamoorthy2016] proposed a Bayesian method for policy reuse from a policy library, but it mainly addresses short-lived sequential policy selection and therefore does not learn a full policy. [Rusu et al.2016] leveraged prior knowledge via lateral connections to previously learned features in neural networks. Although they showed positive transfer even in orthogonal or adversarial tasks, their algorithm has no theoretical foundation.

Some related works focus on multi-task learning (MTL), which is closely related to transfer learning. MTL assumes that all MDPs are drawn from the same distribution and learns several tasks in parallel [Ramakrishnan, Zhang, and Shah2017]. In contrast, we make no assumption about the distribution over MDPs and concentrate on the transfer learning problem. In earlier MTL work, [Wilson et al.2007] represented the distribution of MDPs with a hierarchical Bayesian model; the continuously updated distribution served as a prior for rapid learning in new environments, but, as Wilson et al. note, their algorithm is not computationally efficient. In more recent MTL work, [Brunskill and Li2013] proposed a two-phase learning technique to reduce the sample complexity of RL, and [Fachantidis et al.2015] determined the most similar source tasks based on compliance, which can be interpreted as a kind of distance metric between tasks.

Preliminaries and Problem Formulation

This section briefly reviews the RL background and describes the problem formulation. RL is a dominant framework for solving control and decision-making problems by mapping situations to actions. The learning environment of RL is an MDP defined as a tuple ⟨S, A, T, R⟩, where S denotes a discrete state space. At time step t, an agent in state s performs an action a from a discrete action space A. Based on the transition function T, the agent moves to the next state s' and receives a reward r according to the reward function R. An agent begins to interact with its environment from a start state sampled from an initial belief and keeps taking actions until it reaches a final state or an absorbing state. A policy π directs the agent which action to take in a given state. The agent's goal is to learn an optimal policy π* that maximizes the expected cumulative reward after training, where the discount factor γ reduces the impact of future rewards on the learned policy. Q-learning is a model-free RL method that can find an optimal policy for any finite MDP. A Q-learning agent learns the expected utility Q(s, a) of each action in every state by performing the following value-iteration update at each step:

Q(s, a) ← (1 − α) Q(s, a) + α [r + γ max_{a'} Q(s', a')],

where α is a learning rate.
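The tabular update above can be sketched as follows; the dictionary-based Q-table and the parameter values are illustrative assumptions, not the paper's implementation.

```python
def q_update(Q, s, a, r, s_next, actions, alpha=0.05, gamma=0.95):
    """One value-iteration step:
    Q(s,a) <- (1 - alpha) * Q(s,a) + alpha * (r + gamma * max_a' Q(s',a'))."""
    best_next = max(Q.get((s_next, a2), 0.0) for a2 in actions)
    Q[(s, a)] = (1 - alpha) * Q.get((s, a), 0.0) + alpha * (r + gamma * best_next)
    return Q
```

Unvisited state-action pairs default to a value of 0, which matches the usual zero-initialized Q-table.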

Given a source policy library L = {π₁, …, πₙ}, where πᵢ denotes the optimal policy for the i-th source task in one domain, our goal is to reuse the source policies in the library optimally to solve a target task. We assume that tasks are episodic with a maximum of H steps per episode, and we take the average reward over K episodes as the metric of a learning algorithm:

W = (1/K) Σ_{k=0}^{K} Σ_{h=0}^{H} γ^h r_{k,h},

where r_{k,h} is the reward at time step h of episode k. The convergence speed and the value of W indicate the learning performance. Our approach applies transfer learning to Q-learning, so it is an off-policy learning method. Since the exploration strategy greatly affects the average reward during learning, we evaluate by following a fully greedy strategy after each learning episode and thus obtain a learning curve of average evaluation reward.


The rewards of reusing source policies are stochastic in RL, so there is a dilemma between exploiting the policy that yields high current rewards and exploring policies that may produce higher future rewards. We adapt an MAB method for this problem. Our approach performs online source policy selection by evaluating the utility of each source policy while learning the target task. The exploration process is guided by the intelligently selected policy in Q-learning, which is an off-policy learner. In the situation where no source knowledge is useful, the ε-greedy strategy in our approach plays the major role and maintains the same learning performance as traditional Q-learning. In this section, we first present the optimal online source policy selection method, then introduce how to reuse source knowledge through Q-learning, and finally provide a theoretical optimality analysis of our algorithm.

Figure 1: Flow chart of our algorithm

Figure 1 provides an overview of our algorithm. First, we execute the ε-greedy exploration strategy with a probability of 1 − ψ, and select a source policy from the library using an MAB method with a probability of ψ, where ψ decreases over time. That is to say, at the beginning of learning we mostly exploit source knowledge; as learning proceeds, past knowledge becomes less useful, so we exploit the ε-greedy strategy to continue learning. To exploit a past policy, our algorithm combines the past policy with the random policy. Once the policy for the current episode is determined, the algorithm executes that policy and updates the Q-function. Therefore, how to optimally select a source policy and how to reuse the selected policy are the keys to transfer learning.

Source Policy Selection

An agent has no prior knowledge of which source policy is useful for the current target task before trying them. It has to decide which source policy to reuse in the next episode based on previous rewards, so as to obtain a larger cumulative reward. Different source policies can be regarded as arms with stochastic rewards in an MAB. Source policy selection and MAB are, in essence, both exploitation-versus-exploration tradeoffs.

The pseudo code of our source policy selection (ψ-selection) method is shown in Algorithm 1. To solve the target task, this method chooses the ε-greedy policy with a probability of 1 − ψ and chooses a policy from the source policy library using UCB1 with a probability of ψ (Lines 3-5). When a past policy is chosen, we execute the combining method of Algorithm 2 on the chosen past policy in that episode (Lines 6-7). For UCB1 we keep the average gain of each policy and the number of times it has been selected in the previous episodes (Lines 11-16). ψ controls the reuse degree, and Algorithm 1 is executed for K episodes.

UCB is a simple and efficient family of algorithms that achieves the optimal logarithmic regret for the MAB problem [Lai and Robbins1985]. An MAB defines a collection of independent arms, arm i having expected reward μᵢ. An agent sequentially selects arms so as to minimize the cumulative regret, i.e., the difference between the expected reward of the selected arms and the maximum reward expectation μ*. UCB1, a member of the UCB family, maintains the number of times nᵢ that arm i has been selected in the first n steps and its empirical expected reward x̄ᵢ. Each arm is played once initially, and at every time step afterwards UCB1 selects the arm maximizing

x̄ᵢ + √(2 ln n / nᵢ).
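The UCB1 index above can be sketched in a few lines; the list-based bookkeeping of means and counts is an assumption for illustration.

```python
import math

def ucb1_select(means, counts):
    """Return the index of the arm maximizing the UCB1 score
    mean_i + sqrt(2 * ln(n) / n_i), assuming every arm was played once."""
    n = sum(counts)  # total number of plays so far
    scores = [m + math.sqrt(2 * math.log(n) / c) for m, c in zip(means, counts)]
    return max(range(len(means)), key=scores.__getitem__)
```

With equal empirical means, the less-played arm gets the larger exploration bonus and is selected.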

3:for  to  do
4:     Choose a policy :
5:      With a probability of ,
6:      With a probability of :
7:     if  then
8:         -reuse
9:     else
10:         Execute policy
11:     end if
12:     if  then
13:         ,
14:         ;
15:     else
16:         ;
17:     end if
18:end for
Algorithm 1 ψ-selection.
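The selection step that the pseudo code above performs each episode can be sketched as follows. The names, the use of ψ for the reuse probability, and the initial play-each-arm-once handling are assumptions based on the description, not the paper's exact code.

```python
import math
import random

def select_policy(psi, means, counts, n_policies):
    """With probability 1 - psi, return -1 (the agent's own epsilon-greedy
    policy); with probability psi, pick a source policy by UCB1."""
    if random.random() >= psi:
        return -1  # fall back to the epsilon-greedy learner
    untried = [i for i in range(n_policies) if counts[i] == 0]
    if untried:
        return untried[0]  # play every arm once before applying UCB1
    n = sum(counts)
    return max(range(n_policies),
               key=lambda i: means[i] + math.sqrt(2 * math.log(n) / counts[i]))
```

After the chosen reuse episode finishes, the caller would update the chosen policy's running mean and count, as Lines 11-16 of Algorithm 1 do.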

Source Policy Reuse

To take full advantage of the selected policy, the random policy is indispensable for interacting with unexplored states. Without random actions, past policies would lead an agent to their original goals rather than to the goal of the target task. Exploiting a useful source policy can thus be regarded as directed exploration. Therefore, our policy reuse strategy, shown in Algorithm 2, combines the source policy with the random policy probabilistically. At each time step, we take an action based on the selected past policy with a certain probability, and take a random action otherwise (Lines 3-4).

1:Set initial state randomly.
2:for  to  do
3:     With a probability of
4:     With a probability of
5:     Receive next state and reward
6:     Update :
7:     Set
8:end for
10:return and
Algorithm 2 ().
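A reuse episode in the spirit of Algorithm 2 can be sketched as below. The environment interface `env_step`, the mixing probability `p`, and the parameter values are assumptions for illustration; the Q-update follows the standard tabular rule from the Preliminaries.

```python
import random

def reuse_episode(env_step, s0, source_policy, actions, Q, p=0.9,
                  alpha=0.05, gamma=0.95, horizon=100):
    """Follow the source policy with probability p, otherwise act randomly;
    update Q after every step and return the episode reward and Q."""
    s, total = s0, 0.0
    for _ in range(horizon):
        a = source_policy(s) if random.random() < p else random.choice(actions)
        s_next, r, done = env_step(s, a)
        best = max(Q.get((s_next, b), 0.0) for b in actions)
        Q[(s, a)] = (1 - alpha) * Q.get((s, a), 0.0) + alpha * (r + gamma * best)
        total += r
        s = s_next
        if done:
            break
    return total, Q
```

Note that no greedy action is mixed in, which keeps the expected reuse reward of each source policy fixed, as the discussion below requires.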

Algorithm 2 shares some ideas with PRQL: both take actions probabilistically within an episode. However, we mix no greedy actions into reuse episodes, so the expected value of the reuse reward stays fixed. In addition, the source policies are uncorrelated, so the stochastic assumption of MAB is satisfied. Instead, we choose the ε-greedy strategy with a probability of 1 − ψ outside reuse episodes. The ε-greedy strategy is crucial when there is no useful source policy in the library. As ψ decreases over time, our algorithm reuses source policies less and converges to the ε-greedy strategy.

Theoretical Analysis

We present two theoretical results that provide the foundation of our approach below.

Theorem 1.

Given a source policy library, if UCB1 selects source policies according to the reward of each reuse episode in Algorithm 2, the expected regret is logarithmically bounded.


Proof. Because there are no greedy actions in reuse episodes, the source policies are uncorrelated. In addition, for each policy, its reward in every episode is an independent sample from the same distribution with a fixed expectation, so the stochastic assumption of MAB is satisfied. UCB achieves a logarithmic regret bound asymptotically [Auer, Cesa-Bianchi, and Fischer2002], which matches the lower bound proved by Lai and Robbins in their classical paper [Lai and Robbins1985], so it is an optimal allocation strategy when there is no preliminary knowledge about the reward distributions. As a result, this method of selecting source policies from a library is theoretically optimal. ∎

Theorem 2.

The Q-function of Algorithm 1 converges to the optimal Q-function for the target task, as in traditional Q-learning.


Proof. Since ψ, which controls the reuse rate, decreases with time, Algorithm 1 executes the ε-greedy policy more and more frequently. The probability of taking the greedy action under the ε-greedy policy is less than 1, so Algorithm 1 keeps exploring for infinitely many episodes. Q-learning with a proper learning rate converges to the optimal Q-function for any finite MDP [Melo2001]. The probability of executing random actions in Algorithm 1 never equals 0, so all state-action pairs are visited infinitely often. As a result, Algorithm 1 converges to the optimal Q-function, like traditional Q-learning. ∎

Both our learning method and our selection method are optimal; thus, our approach is theoretically an optimal online strategy for selecting source policies.

Empirical Results

To demonstrate that our algorithm is empirically sound and robust, we conduct experiments in a grid-based robot navigation domain with multiple rooms and compare the results with the state-of-the-art algorithm PRQL [Fernández and Veloso2013].

Experimental Settings

Our navigation domain was used in the PRQL paper, and some recent transfer learning works also conduct experiments in similar grid-world domains [Lehnert, Tellex, and Littman2017, Laroche and Barlier2017]. The map of our navigation domain is composed of states that denote free positions, the goal area, and walls, with each state plotted as a cell. An agent's position is represented by continuous two-dimensional coordinates, and we take the integer part of the coordinates to determine the agent's discrete state. The agent can take four actions: up, down, left, and right. Each action moves the agent's position in the corresponding direction with a step size of 1. To make the problem more practical, we design a stochastic MDP by adding a uniformly distributed random variable to each coordinate after an action. When an agent moves into a wall state, the wall keeps the agent in its previous state. When an agent reaches the goal area, it obtains a reward of 1 and the episode ends; arriving at any other state generates no reward. An agent has no high-level view of the map and observes only its current state.
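The transition dynamics described above can be sketched as follows; the map layout, the noise range, and the wall handling are illustrative assumptions rather than the paper's exact parameters.

```python
import random

MOVES = {"up": (0, 1), "down": (0, -1), "left": (-1, 0), "right": (1, 0)}

def step(pos, action, walls, goal, noise=0.2):
    """Move one unit in the chosen direction plus uniform noise on each axis.
    The discrete state is the integer part of the continuous coordinates;
    a move into a wall leaves the agent where it was."""
    dx, dy = MOVES[action]
    x = pos[0] + dx + random.uniform(-noise, noise)
    y = pos[1] + dy + random.uniform(-noise, noise)
    cell = (int(x), int(y))
    if cell in walls:
        return pos, 0.0, False       # wall keeps the agent in place
    reached = cell == goal
    return (x, y), (1.0 if reached else 0.0), reached
```

Setting `noise=0` recovers the deterministic grid world; the default nonzero noise makes the MDP stochastic as described.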

Figure 2: Target tasks and source task library in the map

In Figure 2, the marked cells represent the goals of the two target tasks, and the numbers denote the goals of the source task library. The source policies in the library are optimal for their respective tasks. One source task is clearly the most similar to the first target task, because their goals are in the same room; the other tasks are dissimilar. This problem has a large number of states, and the initial belief is a uniform distribution. Therefore, learning from scratch is rather slow, and transfer learning can significantly accelerate the learning process.

In this experiment, the number of episodes K is set to 4000, which is enough for our approach to learn a policy with high reward, and to prevent the agent from entering a dead loop, the maximum episode length H is set to 100. Because the goals of different tasks differ, the agent takes actions according to the past policy with a larger probability at the beginning of an episode and takes more random actions afterwards. In addition, ψ decreases over time, so our approach converges to the ε-greedy strategy in the end. To keep the parameters simple, the ε of the ε-greedy policy is set to 0.1 (with probability 0.9 the agent follows the greedy strategy, and with probability 0.1 it acts randomly). We conduct these experiments with PRQL and Q-learning for comparison. Q-learning uses ε-greedy with the same ε as our approach, and the parameters of PRQL are consistent with those in [Fernández and Veloso2013].

UCB1-tuned tunes the upper confidence bound according to the variance of the arm rewards, selecting the arm that maximizes

x̄ᵢ + √((ln n / nᵢ) · min{1/4, Vᵢ(nᵢ)}),

where Vᵢ(nᵢ) is an upper confidence bound on the variance of arm i and 1/4 is an upper bound on the variance of a Bernoulli random variable. UCB1-tuned outperforms UCB1 in a multitude of experiments. Although it has not been proved theoretically optimal, UCB-V, another algorithm that also considers the variance of arm rewards, has been proved optimal in theory [Audibert, Munos, and Szepesvári2009]. As the variance and expectation of the reward in this experiment are much smaller than those of a Bernoulli variable, we set this variance upper bound to 0.0049 for Algorithm 1. A larger bound leads to a higher exploration rate, so it is more suitable for circumstances with larger reward variance.
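A sketch of the UCB1-tuned index is given below. The variance term follows the standard UCB1-tuned construction (empirical variance plus an exploration correction); the bookkeeping names and the default Bernoulli bound of 1/4 are assumptions, and the paper's domain would use `var_bound=0.0049`.

```python
import math

def ucb1_tuned_select(means, sq_means, counts, var_bound=0.25):
    """Pick the arm maximizing mean_i + sqrt((ln n / n_i) * min(var_bound, V_i)),
    where V_i is the empirical variance plus sqrt(2 ln n / n_i)."""
    n = sum(counts)

    def index(i):
        var = sq_means[i] - means[i] ** 2 + math.sqrt(2 * math.log(n) / counts[i])
        return means[i] + math.sqrt((math.log(n) / counts[i]) * min(var_bound, var))

    return max(range(len(means)), key=index)
```

Shrinking `var_bound` uniformly shrinks the exploration bonus, which is why a small bound suits this low-variance navigation reward.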

In the next section, we compare the empirical performance of our algorithm against PRQL in particular.

Experimental Results

To show that our approach achieves optimal source policy selection, we first present the learning curve obtained by evaluating after each episode, together with the frequency with which each source policy is selected. Next, we compare the expected reward among PRQL, our approach, and traditional Q-learning. Then we set the reward functions of target tasks randomly and repeat the experiment to demonstrate the robustness of our approach. Afterwards, we conduct an experiment showing that our method is equally applicable when no similar task exists in the source task library. Finally, we present a scenario in which PRQL does not converge to the greedy policy, whereas our approach still converges to the ε-greedy strategy.

Figure 3: Performance comparison of PRQL, our algorithm and traditional Q-learning on target task

Figure 3 shows the learning curves of PRQL, our approach, and traditional Q-learning on the first target task. We compare the average reward obtained by following a fully greedy policy after each episode, since all three learning methods are off-policy. Each learning process in Figure 3 was executed 10 times; the average values are shown, and error bars represent standard deviations.

In Figure 3, the x axis is the number of episodes and the y axis is the average evaluation reward. Three observations follow. First, our algorithm needs less time than PRQL to reach a given threshold of average reward: its average reward exceeds 0.3 within only 500 episodes and exceeds 0.35 in the end. Second, the asymptotic performance of our approach is better than PRQL's; although the gap is not overwhelming, it is comparable to the convergence value of the cumulative reward, since the reward in this experiment is sparse and not exceedingly large. Third, the reward of Q-learning increases much more slowly than that of our approach, so knowledge transfer in our approach is highly efficient and no negative transfer occurs. In addition, the standard deviations of our approach are the smallest among the three curves, showing that its performance is very stable across the 10 executions.

(a) our approach
(b) PRQL
Figure 4: Frequency of selecting each source policy

Since our approach selects source policies deterministically, we cannot compare the probability of selecting each source policy. Therefore, Figure 4 shows the frequency with which each source policy is selected by our approach and by PRQL.

We can see that our approach almost stops selecting irrelevant tasks after 500 episodes, whereas the frequency with which PRQL chooses dissimilar tasks is still around 0.1 at 500 episodes. Our method thus takes less time to detect which source task is suitable to transfer from. The curve of the most similar policy drops slowly in our method because we keep using the past policy to explore the environment, and our reuse method differs from PRQL's: to satisfy the stochastic assumption of MAB, we separate greedy actions from the reuse process. As frequency differs from probability, we still select the best source policy with high probability when ψ is large.

Since the initial belief in this experiment is a uniform distribution, the expected reward can be computed by averaging over start states. Figure 5 shows the expected reward of our approach, PRQL, and Q-learning.

Figure 5: Expected reward of PRQL, our approach and Q-learning

In Figure 5, the curve of our approach rises fastest, showing that our agent reaches the goal more often in the same amount of time. We therefore obtain more rewards during learning, and these rewards are "backed up" to other states. The expected reward converges slowly, since only reaching the goal state generates a reward; we therefore omit the convergent part, which makes the curves appear polynomial.

To demonstrate the robustness of our algorithm, we randomly select the goals of target tasks, ensuring that each goal lies in the same room as the goal of one source task in Figure 2, so a similar source task is available to transfer from. (The situation where no source knowledge is useful is discussed in a later experiment.) We choose 9 different goals for this experiment, shown with numbers in Figure 6.

Figure 6: Randomly selected target tasks
Figure 7: Average reward of our algorithm, PRQL and Q-learning to solve randomly selected target tasks

Figure 7 shows the average evaluation reward of our algorithm, PRQL, and Q-learning when solving the tasks denoted in Figure 6. The reward of our algorithm increases faster than PRQL's in all cases. Moreover, the convergence value of the reward of our algorithm is sometimes slightly larger. PRQL converges to the greedy policy and stops exploring soon after the reward of the greedy policy exceeds that of all source policies, so it may converge to a suboptimal policy in the end. Our approach, in contrast, keeps exploring, so every state-action pair is visited infinitely often and our algorithm is guaranteed to converge to the optimal policy.

To show that our method also applies when there is no similar task in the source task library, we repeat the above experiment on the second target task in Figure 2 using the same source task library. The goal of this task is in a totally different room from the goals of the source tasks, so all the source policies in the library are useless for it. The performance comparison of PRQL, our approach, and Q-learning is shown in Figure 8.

Figure 8: Performance comparison of PRQL, our algorithm and traditional Q-learning on target task

As shown in Figure 8, during the first 1000 episodes the three curves have similar growth trends, since all the algorithms explore in their own way at the beginning. Afterwards, the curve of PRQL starts to flatten, because it has converged to the greedy policy and no random actions are taken. In contrast, our approach performs almost identically to Q-learning, and their curves keep rising, since both methods continue exploring with the ε-greedy strategy. The ε-greedy strategy of our approach thus plays a paramount role during learning.

We set the position marked in Figure 6 as the goal of a new task to present a case in which PRQL does not converge to the greedy policy. Figure 9 shows the probability with which PRQL executes each policy when solving this task. Its goal lies directly on the way to the goal of one source task, so the two tasks are especially similar.

Figure 9: Probability of selecting each policy by PRQL to solve

PRQL ends up reusing the most similar source task rather than the greedy policy in 2 of the 10 runs. When a source task is especially similar to the target task, the reward of reusing the most similar policy may exceed the reward of the greedy policy, so PRQL converges to the most relevant source task. In contrast, our algorithm controls the exploration rate by tuning ψ: we choose the ε-greedy strategy with a probability of 1 − ψ, and as ψ decreases over time, our algorithm invariably converges to the ε-greedy policy no matter how similar the target task is to the source tasks.

Summary and Future Directions

This work focuses on transfer learning in RL. We develop an optimal online method of selecting source policies, formulating online source policy selection as an MAB problem. In contrast to previous works, this work provides firm theoretical grounds for achieving the optimal source policy selection process. In addition, we augment Q-learning with policy reuse while maintaining the same theoretical guarantee of convergence as traditional Q-learning. Furthermore, we present empirical validation that our algorithm outperforms the state-of-the-art transfer learning method and promotes successful transfer in practice.

These promising results suggest several directions for future research. One is to combine inter-task mappings between source and target tasks with policy reuse, so that we can handle the more general circumstance of different state and action spaces. Second, we intend to formulate the source task selection problem in an MDP setting and select source tasks based on the current state, so that the moment of transfer is determined automatically. Finally, it is important to extend the proposed algorithm to deep RL and test it on benchmark problems.


  • [Ammar and Taylor2011] Ammar, H. B., and Taylor, M. E. 2011. Reinforcement learning transfer via common subspaces. In International Workshop on Adaptive and Learning Agents, 21–36. Springer.
  • [Ammar et al.2014] Ammar, H. B.; Eaton, E.; Taylor, M. E.; Mocanu, D. C.; Driessens, K.; Weiss, G.; and Tuyls, K. 2014. An automated measure of mdp similarity for transfer in reinforcement learning.
  • [Ammar et al.2015] Ammar, H. B.; Eaton, E.; Ruvolo, P.; and Taylor, M. E. 2015. Unsupervised cross-domain transfer in policy gradient reinforcement learning via manifold alignment. In Proc. of AAAI.
  • [Audibert, Munos, and Szepesvári2009] Audibert, J.-Y.; Munos, R.; and Szepesvári, C. 2009. Exploration–exploitation tradeoff using variance estimates in multi-armed bandits. Theoretical Computer Science 410(19):1876–1902.
  • [Auer, Cesa-Bianchi, and Fischer2002] Auer, P.; Cesa-Bianchi, N.; and Fischer, P. 2002. Finite-time analysis of the multiarmed bandit problem. Machine learning 47(2-3):235–256.
  • [Barreto et al.2016] Barreto, A.; Munos, R.; Schaul, T.; and Silver, D. 2016. Successor features for transfer in reinforcement learning. arXiv preprint arXiv:1606.05312.
  • [Brunskill and Li2013] Brunskill, E., and Li, L. 2013. Sample complexity of multi-task reinforcement learning. arXiv preprint arXiv:1309.6821.
  • [Fachantidis et al.2015] Fachantidis, A.; Partalas, I.; Taylor, M. E.; and Vlahavas, I. 2015. Transfer learning with probabilistic mapping selection. Adaptive Behavior 23(1):3–19.
  • [Fernández and Veloso2013] Fernández, F., and Veloso, M. 2013. Learning domain structure through probabilistic policy reuse in reinforcement learning. Progress in Artificial Intelligence.

  • [Gupta et al.2017] Gupta, A.; Devin, C.; Liu, Y.; Abbeel, P.; and Levine, S. 2017. Learning invariant feature spaces to transfer skills with reinforcement learning. arXiv preprint arXiv:1703.02949.
  • [Lai and Robbins1985] Lai, T. L., and Robbins, H. 1985. Asymptotically efficient adaptive allocation rules. Advances in Applied Mathematics 6(1):4–22.
  • [Laroche and Barlier2017] Laroche, R., and Barlier, M. 2017. Transfer reinforcement learning with shared dynamics. In AAAI, 2147–2153.
  • [Lazaric, Restelli, and Bonarini2008] Lazaric, A.; Restelli, M.; and Bonarini, A. 2008. Transfer of samples in batch reinforcement learning. In Proceedings of the 25th international conference on Machine learning, 544–551. ACM.
  • [Lehnert, Tellex, and Littman2017] Lehnert, L.; Tellex, S.; and Littman, M. L. 2017. Advantages and limitations of using successor features for transfer in reinforcement learning. arXiv preprint arXiv:1708.00102.
  • [Melo2001] Melo, F. S. 2001. Convergence of q-learning: A simple proof. Institute Of Systems and Robotics, Tech. Rep 1–4.
  • [Nguyen, Silander, and Leong2012] Nguyen, T.; Silander, T.; and Leong, T. Y. 2012. Transferring expectations in model-based reinforcement learning. In Advances in Neural Information Processing Systems, 2555–2563.
  • [Pan and Yang2010] Pan, S. J., and Yang, Q. 2010. A survey on transfer learning. IEEE Transactions on knowledge and data engineering 22(10):1345–1359.
  • [Parisotto, Ba, and Salakhutdinov2015] Parisotto, E.; Ba, J. L.; and Salakhutdinov, R. 2015. Actor-mimic: Deep multitask and transfer reinforcement learning. arXiv preprint arXiv:1511.06342.
  • [Perkins, Precup, and others1999] Perkins, T. J.; Precup, D.; et al. 1999. Using options for knowledge transfer in reinforcement learning. University of Massachusetts, Amherst, MA, USA, Tech. Rep.
  • [Ramakrishnan, Zhang, and Shah2017] Ramakrishnan, R.; Zhang, C.; and Shah, J. 2017. Perturbation training for human-robot teams. Journal of Artificial Intelligence Research 59:495–541.
  • [Rosman, Hawasly, and Ramamoorthy2016] Rosman, B.; Hawasly, M.; and Ramamoorthy, S. 2016. Bayesian policy reuse. Machine Learning 104(1):99–127.
  • [Rusu et al.2016] Rusu, A. A.; Rabinowitz, N. C.; Desjardins, G.; Soyer, H.; Kirkpatrick, J.; Kavukcuoglu, K.; Pascanu, R.; and Hadsell, R. 2016. Progressive neural networks. arXiv preprint arXiv:1606.04671.
  • [Sinapov et al.2015] Sinapov, J.; Narvekar, S.; Leonetti, M.; and Stone, P. 2015. Learning inter-task transferability in the absence of target task samples. In Proceedings of the 2015 International Conference on Autonomous Agents and Multiagent Systems, 725–733. International Foundation for Autonomous Agents and Multiagent Systems.
  • [Song et al.2016] Song, J.; Gao, Y.; Wang, H.; and An, B. 2016. Measuring the distance between finite markov decision processes. In Proceedings of the 2016 International Conference on Autonomous Agents & Multiagent Systems, 468–476. International Foundation for Autonomous Agents and Multiagent Systems.
  • [Sutton and Barto1998] Sutton, R. S., and Barto, A. G. 1998. Reinforcement learning: An introduction, volume 1. MIT press Cambridge.
  • [Talvitie and Singh2007] Talvitie, E., and Singh, S. P. 2007. An experts algorithm for transfer learning. In IJCAI, 1065–1070.
  • [Taylor and Stone2009] Taylor, M. E., and Stone, P. 2009. Transfer learning for reinforcement learning domains: A survey. Journal of Machine Learning Research 10(Jul):1633–1685.
  • [Taylor, Stone, and Liu2007] Taylor, M. E.; Stone, P.; and Liu, Y. 2007. Transfer learning via inter-task mappings for temporal difference learning. Journal of Machine Learning Research 8(Sep):2125–2167.
  • [Torrey et al.2005] Torrey, L.; Walker, T.; Shavlik, J.; and Maclin, R. 2005. Using advice to transfer knowledge acquired in one reinforcement learning task to another. In ECML, 412–424. Springer.
  • [Wilson et al.2007] Wilson, A.; Fern, A.; Ray, S.; and Tadepalli, P. 2007. Multi-task reinforcement learning: a hierarchical bayesian approach. In Proceedings of the 24th international conference on Machine learning, 1015–1022. ACM.