
Self-Organizing Maps as a Storage and Transfer Mechanism in Reinforcement Learning

The idea of reusing information from previously learned tasks (source tasks) for the learning of new tasks (target tasks) has the potential to significantly improve the sample efficiency of reinforcement learning agents. In this work, we describe an approach to concisely store and represent learned task knowledge, and reuse it by allowing it to guide the exploration of an agent while it learns new tasks. In order to do so, we use a measure of similarity that is defined directly in the space of parameterized representations of the value functions. This similarity measure is also used as a basis for a variant of the growing self-organizing map algorithm, which is simultaneously used to enable the storage of previously acquired task knowledge in an adaptive and scalable manner. We empirically validate our approach in a simulated navigation environment and discuss possible extensions to this approach along with potential applications where it could be particularly useful.



1. Introduction

The use of off-policy algorithms Geist et al. (2014) in reinforcement learning (RL) Sutton and Barto (2011) has enabled the learning of multiple tasks in parallel. This is particularly useful for agents operating in the real world, where a number of tasks are likely to be encountered and may need to be learned Sutton et al. (2011); White et al. (2012); Karimpanal and Wilhelm (2017). Ideally, as an agent learns more and more tasks through its interactions with the environment, it should be able to efficiently store and extract meaningful information that could accelerate its learning on new, possibly related tasks. This area of research, which aims to address the issue of effectively reusing previously accumulated knowledge, is referred to as transfer learning Taylor and Stone (2009).

Formally, transfer learning is an approach to improve learning performance on a new 'target' task, using accumulated knowledge from a set of 'source' tasks. Here, each task is a Markov Decision Process (MDP) Puterman (1994) ⟨S, A, T, R⟩, where S is the state space, A is the action space, T is the transition function, and R is the reward function. In this work, we address the relatively simple case where tasks vary only in the reward function R, while S, A and T remain fixed across the tasks. For knowledge transfer to be effective, source tasks need to be selected appropriately. Reusing knowledge from an inappropriately selected source task could lead to negative transfer Lazaric (2012); Taylor and Stone (2009), which is detrimental to the learning of the target task. In order to avoid such problems and ensure beneficial knowledge transfer, a number of MDP similarity metrics Ferns et al. (2004); Carroll and Seppi (2005) have been proposed. However, it has been shown that the utility of a particular MDP similarity metric depends on the type of transfer mechanism used Carroll and Seppi (2005). In addition, these transfer mechanisms are generally not designed to handle situations involving a large number of source tasks. This could be limiting for both embodied and virtual agents operating in the real world. For such an agent, the value functions pertaining to hundreds or thousands of tasks may be learned over a period of time. Some of these tasks may be very similar to each other, which could result in considerable redundancy in the stored value function information. From a continual learning perspective, a suitable mechanism may be needed to enable the storage of such information in a scalable manner. In the approach described here, the knowledge of a task is assumed to be contained in the value function (Q-function) associated with it. We assume that these value functions are represented using parameter weights, which are learned from the agent's interactions with its environment. We define a cosine similarity metric within this value function weight (parameter) space, and use this as a basis for maintaining a scalable knowledge base, while simultaneously using it to perform knowledge transfer across tasks.

The proposed mechanism enables the storage of value function weight vectors using a variant of the growing self-organizing map (GSOM) Alahakoon et al. (2000). The inputs to this GSOM algorithm consist of the value function weights of new tasks, along with any representative value function weights extracted from previously learned tasks. The resulting map would ideally correspond to value function weights representative of previously acquired task knowledge, topologically arranged in accordance with their relation to each other. As the agent interacts with its environment and learns the value function weights corresponding to new tasks, this new information is incorporated into the SOM, which evolves by growing to a suitable size in order to sufficiently represent all of the agent's gathered knowledge. Each element/node of the resulting map is a variant of the input value function weights (knowledge of previously learned tasks). These variants are treated as solutions to arbitrary source tasks, each of which is related to some degree to one of the previously learned tasks. The aim of storing knowledge in this manner is not to retain the exact value function information corresponding to all the previously learned tasks, but to maintain a compressed and scalable knowledge base that can approximate the value function weights of the previously learned tasks.

While learning a new target task, this knowledge base is used to identify the most relevant source task, based on the same similarity metric. The value function associated with this task is then greedily exploited to provide the agent with action advice to guide it towards achieving the target task. Due to random initialization, the agent's initial estimates of the value function weights corresponding to the target task are poor. However, as it gathers more experience through its interactions with the environment, these estimates improve, which consequently improves its estimates of the similarities between the target and source tasks. As a result, the agent becomes more likely to receive relevant action advice from a closely related source task. This action advice can be adopted, for instance, on an ε-greedy basis, essentially substituting the agent's exploration strategy. In this way, the knowledge of source tasks is used merely to guide the agent's exploratory behavior, thereby minimizing the risk of negative transfer which could otherwise have occurred, especially if value functions or representations were directly transferred between the tasks.
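As a rough sketch of this exploration scheme, the advised ε-greedy rule can be written as follows. The function names and the value-function interface here are illustrative assumptions, not the paper's implementation:

```python
import random

def advised_epsilon_greedy(q_target, q_source, state, actions, epsilon):
    """Choose an action for the target task, substituting random
    exploration with greedy advice from the selected source task.

    q_target, q_source: callables (state, action) -> estimated value.
    The names and signatures are illustrative, not the authors' code.
    """
    if random.random() < epsilon:
        # Exploratory step: follow the source task's greedy action
        # instead of a uniformly random one.
        return max(actions, key=lambda a: q_source(state, a))
    # Otherwise exploit the target task's current estimates.
    return max(actions, key=lambda a: q_target(state, a))
```

With epsilon set to zero this reduces to pure exploitation of the target task; with epsilon set to one, every action follows the source task's advice.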

Apart from maintaining an adaptive knowledge base of value function weights related to previously learned tasks, the proposed approach aims to leverage this knowledge base to make informed exploration decisions, which could lead to faster learning of target tasks. This could be especially useful in real-world scenarios where factors such as learning speed and sample efficiency are critical, and where several new tasks may need to be learned continuously, as and when they are encountered. The overall structure of the proposed methodology is depicted in Figure 1.

Figure 1. The overall structure of the proposed approach

2. Related Work

The sample efficiency of RL algorithms is one of the most critical aspects that determines the feasibility of its deployment in real-world applications. Transfer learning is one of the mechanisms through which this can be addressed. Consequently, numerous techniques have been proposed Lazaric (2012); Taylor and Stone (2009); Zhan and Taylor (2015) to efficiently reuse the knowledge of learned tasks. A number of these Carroll and Seppi (2005); Ammar et al. (2014); Song et al. (2016) rely on a measure of similarity between MDPs in order to choose an appropriate source task to transfer from. However, this can be problematic, as no such universal metric exists Carroll and Seppi (2005), and some of the useful ones may be computationally expensive Ammar et al. (2014). Here, the similarity metric used is computationally inexpensive, and the degree of similarity between two tasks is based solely on the value function weights associated with them. Also, in the approach described here, once an appropriate source task is identified, its value functions are used solely to extract action advice, which is used to guide the exploration of the agent. Similar approaches to transfer learning using action advice exist Torrey and Taylor (2013); Zhan and Taylor (2015); Zimmer et al. (2014), where a teacher-student framework for RL is adopted. The transfer mechanism described here is similar in principle, but is inherently tied to the SOM-based approach for maintaining the knowledge of learned tasks.

Other clustering approaches Thrun and O’Sullivan (1998); Liu et al. (2012); Carroll and Seppi (2005) have also been applied to achieve transfer learning in RL. In one of the earliest notable approaches to transfer learning, Thrun et al. Thrun and O’Sullivan (1998) described a methodology for transfer learning by clustering learning tasks using a nearest neighbor clustering approach. Although similar approaches can be used, the SOM-based approach described here preserves the topological properties of the input space, due to which similar behaviors are placed closer to one another. This could give us a rough idea of the type of behavior to be expected, given some new, arbitrary value function weights.

Perhaps the most closely related work is the 'Actor-Mimic' Parisotto et al. (2015) approach, which also performs transfer using action advice. In this approach, useful behaviors of a set of expert networks are compressed into a single multi-task network, which is then used to provide action advice in an ε-greedy manner. The authors also report the problem of dramatically varying ranges of the value function across different tasks, which they resolve using a Boltzmann distribution function. In the present work, the use of the cosine similarity metric resolves this issue and ensures that the similarity measure between tasks is bounded.

In the context of continual learning Ring (1994), Ring et al. Ring et al. (2011) described a modular approach to assimilate the knowledge of complex tasks using a training process that closely resembles SOM. In this approach, a complex task is decomposed into a number of simple modules, such that modules close to each other correspond to similar agent behaviors. Teng et al. Teng et al. (2015) also proposed a SOM-based approach to integrate domain knowledge and RL, with the aim of developing agents that can continuously expand their knowledge in real time. These ideas of knowledge assimilation are also reflected in the present work. However, our approach also aims to reuse this knowledge to aid the learning of other related tasks.

3. Methodology

In this work, we present an approach that enables the reuse of knowledge from previously learned tasks to aid the learning of a new task. Our approach consists of two fundamental mechanisms: (a) the accumulation of learned value function weights into a knowledge base in a scalable manner, and (b) the use of this knowledge base to guide the agent during the learning of the target task. The basis for these mechanisms is the task similarity metric we propose here: we consider two tasks to be similar based on the cosine similarity between their corresponding learned value function weight vectors. The cosine similarity between two non-zero weight vectors w1 and w2 is given by:

cos(w1, w2) = (w1 · w2) / (‖w1‖ ‖w2‖)    (1)
The key idea is that two tasks are more likely to be similar to each other if they have similar feature weightings. Such a similarity metric has certain advantages, such as boundedness and the ability to handle weight vectors with largely different magnitudes: even in the case of highly similar or dissimilar tasks, the cosine similarity remains in the range [-1, 1]. During the construction of the scalable knowledge base, this similarity metric is used as a basis for training the self-organizing map. Once this map is constructed, the cosine similarity is again used as a basis for selecting an appropriate source task weight vector to guide the exploratory behavior of the agent. We now describe these mechanisms in detail.
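This metric is straightforward to compute; a minimal Python implementation of the standard cosine similarity (not the authors' code) might look like:

```python
import math

def cosine_similarity(w1, w2):
    """Cosine similarity between two non-zero weight vectors.

    Bounded in [-1, 1] regardless of the vectors' magnitudes,
    which is what makes it suitable for comparing value function
    weights with largely different scales.
    """
    dot = sum(a * b for a, b in zip(w1, w2))
    norm1 = math.sqrt(sum(a * a for a in w1))
    norm2 = math.sqrt(sum(b * b for b in w2))
    return dot / (norm1 * norm2)
```

Note that scaling a weight vector by any positive constant leaves its similarity to other vectors unchanged.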

3.1. Knowledge Storage Using Self-Organizing Map

A self-organizing map (SOM) Kohonen (1998) is a type of unsupervised neural network used to produce a low-dimensional representation of its high-dimensional training samples. Typically, a SOM is represented as a two- or three-dimensional grid of nodes. Each node of the SOM is initialized to a randomly generated weight vector of the same dimensions as the input vector. During SOM training, an input is presented to the network, and the node that is most similar to this input is selected as the 'winner'. The winning node is then updated towards the input vector under consideration. Other nodes in the neighborhood are also influenced in a similar manner, but as a function of their topological distances to the winner. The final layout of a SOM is such that adjacent nodes have a greater degree of similarity to each other in comparison to nodes that are far apart. In this way, the SOM extracts the latent structure of the input space.
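A minimal sketch of one such training step, assuming a 2-D grid of nodes, a Gaussian neighborhood, and a pluggable similarity function (names and structure are illustrative, not the paper's implementation):

```python
import math

def som_step(nodes, positions, x, lr, sigma, similarity):
    """One SOM training step: find the winner by similarity, then pull
    every node towards the input, weighted by a Gaussian of its grid
    distance to the winner.

    nodes: list of weight vectors; positions: matching grid coordinates.
    Illustrative sketch, not the authors' code.
    """
    winner = max(range(len(nodes)), key=lambda j: similarity(nodes[j], x))
    wx, wy = positions[winner]
    for j, w in enumerate(nodes):
        gx, gy = positions[j]
        d2 = (gx - wx) ** 2 + (gy - wy) ** 2      # squared grid distance
        h = math.exp(-d2 / (2 * sigma ** 2))      # neighborhood influence
        nodes[j] = [wj + lr * h * (xi - wj) for wj, xi in zip(w, x)]
    return winner
```

In practice both the learning rate `lr` and the neighborhood width `sigma` decay over the course of training, so that the map first organizes globally and then fine-tunes locally.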

For our purposes, the knowledge of an RL task is assumed to be contained in its parameterized representation of the value function (Q-function), obtained using linear function approximation Sutton and Barto (2011). A naïve approach to storing knowledge associated with multiple tasks is to explicitly store their value function parameters/weights. Apart from the scalability issue associated with such an approach, a high degree of redundancy in the learned knowledge may arise if several of these tasks are very similar or nearly identical to each other. A more generalized approach to knowledge storage is to store the characteristic features of the weight vectors associated with the learned tasks. The ability of the SOM to extract these features in an unsupervised manner makes it an attractive choice for the knowledge storage mechanism proposed here.
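Under linear function approximation, a Q-value is simply a dot product between a per-action weight vector and the state feature vector. A minimal sketch (the dictionary-of-actions layout is an assumption for illustration, not the paper's implementation):

```python
def q_value(weights, features, action):
    """Linear value function approximation: Q(s, a) is the dot product
    of the action's weight vector with the state feature vector.

    weights: mapping from action to a weight vector of the same length
    as `features`. Sketch under the paper's linear-FA assumption.
    """
    return sum(w * f for w, f in zip(weights[action], features))
```

The concatenation of all per-action weight vectors is what would be stored in, and compared by, the SOM-based knowledge base.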

In our approach, the inputs to the SOM are learned value function weights of previously learned tasks (input tasks). The hypothesis is that after training, the weight vectors associated with each node in the SOM have varying degrees of similarity to the input vectors, and hence, correspond to value function weights of tasks which may be related to the input tasks to varying degrees. Hence, each node in the SOM could be assumed to contain the value function information corresponding to a source task, and the weight vector associated with an appropriately selected SOM node could serve as source value function weights which could be used to guide the exploration of the agent while learning a target task.

In a continual learning scenario, a number of tasks with largely varying degrees of similarity (as per the similarity metric defined in Equation (1)) to each other may be encountered. A SOM containing only a few nodes may not be able to represent the knowledge of these tasks to a sufficient level of accuracy. Hence, the size of the SOM may need to adapt dynamically as and when new task knowledge is learned. We address this problem by allowing the number of nodes in the SOM to change, using a mechanism similar to that used in the GSOM algorithm. For a SOM containing N nodes, each node j is associated with an error E_j, such that for a particular input vector x, if node b (with a corresponding weight vector w_b) is the winner, the error is updated as:

E_b ← E_b + (1 − cos(x, w_b))    (2)

where cos(·, ·) denotes the cosine similarity. The term (1 − cos(x, w_b)) in Equation (2) is proportional to the squared Euclidean distance between the normalized versions of the vectors x and w_b. Hence, the error update equation (Equation (2)) is equivalent to that used in Alahakoon et al. (2000). Once all the input vectors have been presented to the SOM, the total error of the network is computed as E_total = Σ_j E_j. This total error is computed for each iteration of the SOM. In subsequent iterations, if the increase in the total error exceeds a certain threshold GT, new nodes are spawned at the boundaries of the SOM. The weight vectors of these nodes are initialized to the mean of their neighbors, and are subsequently modified by the SOM training process. Such a mechanism enables the SOM to grow in size and representation capacity, thereby allowing a low network error to be achieved.
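The error-accumulation and growth test can be sketched as follows. This is a simplified illustration; the exact growth criterion of the GSOM algorithm in Alahakoon et al. (2000) differs in its details:

```python
def update_error_and_grow(errors, winner, sim, prev_total, threshold):
    """Accumulate the winner's error as (1 - cosine similarity), then
    report whether the rise in total error warrants spawning new nodes.

    errors: per-node error list, mutated in place.
    sim: cosine similarity between the input and the winning node.
    Simplified sketch of the growth test, not the exact GSOM criterion.
    """
    errors[winner] += 1.0 - sim          # larger mismatch -> larger error
    total = sum(errors)
    grow = (total - prev_total) > threshold
    return total, grow
```

A poorly matched winner (low similarity) drives the total error up quickly, triggering growth exactly when the current map can no longer represent the inputs well.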

The nature of the described SOM algorithm is such that all the input vectors are needed during the training. However, for applications such as robotics, where the agent may have limited on-board memory, this may not be feasible. Thousands of tasks may be encountered during its lifetime, and the value function weights of all these tasks would need to be explicitly stored in order to train the SOM. Ideally, we would like the knowledge contained in the SOM to adapt in an online manner, to include relevant information from new tasks as and when they are learned. We achieve this online adaptation by making modifications to the training mechanism of the GSOM algorithm. Specifically, when a new task is learned, we update the SOM by presenting the newly learned weights, together with the weight vectors associated with the nodes of the SOM as inputs to the GSOM algorithm. The resulting SOM is then utilized for transfer. In summary, the weights of the SOM are recycled as inputs while updating the knowledge base using the GSOM algorithm. This can be observed in the overall structure of the proposed approach, shown in Figure 1. The implicit assumption here is that the weights associated with the SOM nodes sufficiently represent the knowledge of the previously learned tasks. This approach of updating the SOM knowledge base allows new knowledge to be adaptively incorporated into the SOM, while obviating the need to explicitly store the value function weights of all previously learned tasks. The overall storage mechanism is summarized in Algorithm 1.
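The recycling step itself is simple; as a sketch (hypothetical helper name, not the authors' code):

```python
def build_gsom_inputs(som_node_weights, new_task_weights):
    """Form the next round of GSOM inputs by recycling the current SOM
    node weights together with the newly learned task's weight vector,
    so past tasks need not be stored explicitly.

    Illustrative sketch; copies the vectors so later SOM updates do not
    alias the stored inputs.
    """
    return [list(w) for w in som_node_weights] + [list(new_task_weights)]
```

Each retraining round therefore sees only the compressed summary of past tasks plus the one new weight vector, keeping memory use proportional to the SOM size rather than the number of tasks learned.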

Input:
   W: a set of value function weight vectors corresponding to learned tasks (the input vectors to the GSOM algorithm)
   N: initial number of nodes in the SOM
   σ₀: initial value of the neighborhood function; τ_σ: time constant controlling the neighborhood function
   α₀: initial value of the SOM learning rate; τ_α: time constant controlling the learning rate
   Initial weight vectors associated with the nodes in the SOM
   E: error vector, initialized to the zero vector of length N
   Initial value of the average error
   GT: growth threshold parameter
   T: number of SOM iterations
for t = 1, …, T do
   Randomly pick an input vector x from W
   Select the winning node b with the highest cosine similarity to x
   Update the winner's error as in Equation (2)
   for each node j in the SOM do
      Compute the topological distance between nodes j and b
      Update the weight vector of node j towards x, weighted by the neighborhood function and the learning rate
   end for
   Compute the total error of the network
   if the increase in the total error exceeds GT then
      Spawn new nodes at the boundaries of the SOM
      Expand the error vector E, with the values of new nodes initialized to the mean of the previous error vector
      Update N as per the number of new nodes added
   end if
end for
Algorithm 1 Knowledge storage using self-organizing maps

3.2. The Transfer Mechanism

Once the knowledge of previously learned tasks has been assimilated into a SOM, it is reused to aid the learning of a target task. The weight vector associated with each SOM node is treated as the value function weight vector corresponding to an arbitrary source task. Among these source value function weight vectors, the one that is most similar (as per Equation (1)) to the target value function weight vector is chosen for transfer. That is,

w_source = argmax_{w ∈ W_SOM} cos(w, θ_target)

where W_SOM is the set of SOM node weight vectors and θ_target is the agent's current estimate of the target task's value function weights.

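The selection step can be sketched as follows (illustrative names; any bounded similarity measure could be plugged in):

```python
def select_source(som_node_weights, target_weights, similarity):
    """Pick the SOM node whose weight vector is most similar to the
    current target-task weights; its weights serve as the source
    value function used to generate action advice.

    Illustrative sketch, not the authors' code.
    """
    return max(som_node_weights,
               key=lambda w: similarity(w, target_weights))
```

Because the target weights are re-estimated continually, this selection is repeated as learning progresses, so the advising source task can change as the target estimates improve.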
In order to actually perform the transfer, the selected source task weights could be used to directly modify the value function weights of the target task; however, to minimize the risk of negative transfer, we instead greedily exploit the selected source value function to generate action advice, as described earlier.

In order to evaluate the described knowledge storage and reuse mechanisms, we allow the agent to explore and learn multiple tasks in the simulated environment shown in Figure 2. The environment is continuous, and the agent is assumed to be able to sense its horizontal and vertical coordinates, which constitute its state. The states are represented as a feature vector with a fixed number of elements per state dimension. While navigating through the environment, the agent chooses from a set of discrete actions: moving forwards, backwards, sideways, diagonally upwards or downwards to either side, or staying in place. The velocity associated with these movements is set to 6 units/s, and new actions are executed every 200 ms.

As the agent executes actions in its environment, it autonomously identifies tasks using an adaptive clustering approach similar to that described in Karimpanal and Wilhelm (2017). The clustering is performed on an additional feature vector (the environment feature vector), which contains elements describing the presence or absence of specific environment features. For instance, these features could represent the presence or absence of a source of light, sound or other signals from the environment that the agent is capable of sensing. In the simulations described here, the environment feature vector contains elements corresponding to arbitrary environment stimuli distributed at different locations in the environment. As the agent interacts with its environment, clustering is performed on this vector in an adaptive manner, which helps identify unique configurations that may be of interest to the agent. The mean of each discovered cluster is treated as the environment feature vector associated with the goal state of a distinct navigation task. In our simulations, the agent eventually discovers several such tasks, the corresponding goal locations of which are indicated by the colored regions in Figure 2. The value function corresponding to each of these tasks is learned using an off-policy algorithm with eligibility traces Sutton and Barto (2011). The reward structure is such that the agent obtains a positive reward when it is in the goal state, a penalty for bumping into an obstacle, and a living penalty for every other non-goal state. In each episode, the agent starts from a random state and executes actions in the environment until it reaches the associated navigation target region (goal state), at which point a positive reward is obtained and the episode terminates. For each learning task, the full feature vector is used, and the learning rate, discount factor and trace decay parameter are held fixed, as are the other hyperparameters described in Algorithm 1.
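The reward structure described above can be sketched as follows; the numeric values below are placeholders, since the paper's specific settings are not reproduced in this copy:

```python
def navigation_reward(at_goal, hit_obstacle,
                      r_goal=10.0, r_obstacle=-1.0, r_living=-0.1):
    """Reward structure of the navigation tasks: a positive reward at
    the goal, a penalty for bumping into an obstacle, and a living
    penalty in every other non-goal state.

    The numeric defaults are placeholder values, not the paper's
    settings.
    """
    if at_goal:
        return r_goal
    if hit_obstacle:
        return r_obstacle
    return r_living
```

Since the tasks differ only in where the goal region lies, each task induces a different reward function over the shared state and action spaces, which is precisely the transfer setting assumed in Section 1.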

Figure 2. The simulated continuous environment, with the navigation goal states of the different tasks indicated by the different colored circles.

Once a new navigation task is identified and its value function weight vector is learned, we incorporate this new knowledge into the SOM knowledge base. To do this, the value function weight vector associated with this task, along with the weight vectors associated with the SOM, are presented as input vectors to Algorithm 1. For instance, if the weight vectors of the SOM nodes are {u_1, …, u_m} and θ_new denotes the newly learned weight vector, then the subsequent input vectors to Algorithm 1 are {u_1, …, u_m, θ_new}. By presenting the inputs to the GSOM algorithm in this manner, the resulting SOM approximates and integrates previously learned task knowledge and the knowledge of newly learned tasks.

Figure 3. A visual depiction of a SOM resulting from the simulations. The color of each node is derived from the most similar task in Figure 2, and its intensity is proportional to the value of this similarity metric (indicated over each SOM node).

Figure 3 shows a sample SOM, learned by the agent over a number of learning episodes. The color of each SOM node in Figure 3 corresponds to the task in Figure 2 that has the maximum cosine similarity between its value function weights and the weight vector associated with the SOM node. Further, the brightness of this color is in proportion to the value of this cosine similarity. In Figure 3, these values are overlaid and displayed on top of each node. The different colors and associated cosine similarity values of each SOM node in Figure 3 suggest that the SOM stores knowledge of a variety of related tasks in a structured manner.

It can also be seen from Figure 3 that nodes corresponding to the knowledge of closely related tasks are clustered together, while those related to distinct tasks are stored in separate clusters. In addition, the allocation of SOM nodes reflects the differences between the tasks themselves: groups of related tasks share pools of nodes, rather than each task receiving its own fixed share. This demonstrates that the allocation of nodes is done according to the characteristics of the tasks, and not merely according to their number. When a number of similar tasks are learned, simply storing their value function weights would result in significant redundancies; such redundancies are avoided by the SOM-based approach described here.

Figure 4. A sample plot of the learning improvements brought about by SOM-based exploration. The solid lines represent the mean of the average return across the learning runs, whereas the shaded region marks the standard deviation associated with this data.

Although the SOM does not necessarily retain the exact value functions of previously learned tasks, it can be used to guide the exploration of an agent while learning a new task. This is especially true if the new task is closely related to one of the previously learned tasks. Figure 4 depicts this phenomenon for one of the target tasks, with higher returns being achieved at a significantly faster rate using the SOM-based exploration strategy described in Section 3.2. In both exploration strategies, exploratory actions are executed with the same probability, but SOM-based exploration achieves better performance, as knowledge of related tasks from previous experiences allows the agent to take more informed exploratory actions.

Figure 5. Cosine similarity between a target task and its most similar source task as the agent interacts with its environment

This is also supported by Figure 5, which shows the evolution of the cosine similarity between the value function weights of the target task and the most similar weight vector in the SOM as the agent interacts with its environment. With a greater number of agent-environment interactions, the estimates of the agent’s target task weight vector improves, and it receives more relevant advice from the SOM. This trend is probably responsible for the learning improvements seen in Figure 4.

Figure 6. Comparison of the average returns accumulated for different tasks using the SOM-based and ε-greedy exploration strategies

Figure 6 shows the average return per episode for the different tasks and different values of ε, using the two exploration strategies. The values plotted are averaged over multiple runs. The return is computed after each episode by allowing the agent to greedily exploit the value functions, starting from randomly chosen points in the environment, for a fixed number of steps. As observed in Figure 6, SOM-based exploration consistently results in higher average returns for the related tasks, while its performance on the unrelated tasks is generally comparable to that of the ε-greedy approach. Although the first task learned is related to some of the subsequent tasks, it does not benefit from the use of previous knowledge, as no knowledge base exists when it is learned; hence, the transfer advantage is not observed for it.

In addition to the improvements described, the SOM-based approach to conducting knowledge transfer also offers advantages in terms of the scalability of knowledge storage. This is depicted in Figure 7, which shows the number of nodes needed to store the knowledge of the learned tasks, for different values of the GSOM growth threshold parameter. It is clear that as the number of learned tasks increases, the number of nodes required per task decreases, making the SOM-based knowledge storage approach more viable.

Figure 7. The number of SOM nodes used to store the knowledge of the learned tasks, for different values of the growth threshold

The simulations demonstrate that using a SOM knowledge base to guide the agent's exploratory actions helps achieve faster learning when the target tasks are related to the previously learned tasks. Moreover, the nature of the transfer algorithm is such that even in the case where the source tasks are unrelated to the target task, the learning performance does not exhibit drastic drops, unlike the case where value functions of source tasks are directly used to initialize or modify the value function of a target task. Another advantage of the approach proposed here is that it can be easily applied to different representation schemes (tabular representations, neural networks, etc.), as long as the same action space and representation are used for the target and source tasks. In addition, with regards to the storage of knowledge of learned tasks, we demonstrated that the SOM-based approach offers a scalable alternative to explicitly storing the value function weights of all the learned tasks.

Despite these advantages, several issues remain to be addressed. The most fundamental limitation of this approach is that it is applicable only to situations where tasks differ solely in their reward functions. This may prohibit its use in many practical applications. Moreover, the approach as described executes any action advice it is provided with. The decision to execute the advised actions could be carried out in a more selective manner, perhaps based on the cosine similarity between the target task and the advising node of the SOM. Apart from this and the several other possible variants of this approach, ways to automate the selection of the threshold parameters, establish theoretical bounds on the learning performance, and quantify the efficiency of the knowledge storage mechanism are potential directions for future research.

4. Conclusions

We described an approach to efficiently store and reuse the knowledge of learned tasks using self-organizing maps. We applied this approach to an agent in a simulated multi-task navigation environment, and compared its performance to that of an ε-greedy approach for different values of the exploration parameter ε. Results from the simulations reveal that a modified exploration strategy that exploits the knowledge of previously learned tasks improves the agent's learning performance on related target tasks. Overall, our results indicate that the approach proposed here transfers knowledge across tasks relatively safely, while simultaneously storing relevant task knowledge in a scalable manner. Such an approach could prove to be useful for agents that operate using the reinforcement learning framework, especially for real-world applications such as autonomous robots, where scalable knowledge storage and sample efficiency are critical factors.


This work is partially supported by a President’s Graduate Fellowship (T.G.K., Ministry of Education, Singapore).


  • Alahakoon et al. (2000) Damminda Alahakoon, Saman K Halgamuge, and Bala Srinivasan. 2000. Dynamic self-organizing maps with controlled growth for knowledge discovery. IEEE Transactions on Neural Networks 11, 3 (2000), 601–614.
  • Ammar et al. (2014) Haitham Bou Ammar, Eric Eaton, Matthew E Taylor, Decebal Constantin Mocanu, Kurt Driessens, Gerhard Weiss, and Karl Tuyls. 2014. An automated measure of MDP similarity for transfer in reinforcement learning. In Workshops at the Twenty-Eighth AAAI Conference on Artificial Intelligence.
  • Carroll and Seppi (2005) James L Carroll and Kevin Seppi. 2005. Task similarity measures for transfer in reinforcement learning task libraries. In Proceedings of the 2005 IEEE International Joint Conference on Neural Networks (IJCNN’05), Vol. 2. IEEE, 803–808.
  • Ferns et al. (2004) Norm Ferns, Prakash Panangaden, and Doina Precup. 2004. Metrics for finite Markov decision processes. In Proceedings of the 20th conference on Uncertainty in artificial intelligence. AUAI Press, 162–169.
  • Geist et al. (2014) Matthieu Geist, Bruno Scherrer, et al. 2014. Off-policy learning with eligibility traces: a survey. Journal of Machine Learning Research 15, 1 (2014), 289–333.
  • Karimpanal and Wilhelm (2017) Thommen George Karimpanal and Erik Wilhelm. 2017. Identification and off-policy learning of multiple objectives using adaptive clustering. Neurocomputing 263 (2017), 39 – 47. Multiobjective Reinforcement Learning: Theory and Applications.
  • Kohonen (1998) Teuvo Kohonen. 1998. The self-organizing map. Neurocomputing 21, 1 (1998), 1–6.
  • Lazaric (2012) Alessandro Lazaric. 2012. Transfer in Reinforcement Learning: A Framework and a Survey. Springer Berlin Heidelberg, Berlin, Heidelberg, 143–173.
  • Liu et al. (2012) Miao Liu, Girish Chowdhary, Jonathan P How, and Lawrence Carin. 2012. Transfer learning for reinforcement learning with dependent Dirichlet process and Gaussian process. NIPS, Lake Tahoe, NV, December (2012).
  • Parisotto et al. (2015) Emilio Parisotto, Jimmy Lei Ba, and Ruslan Salakhutdinov. 2015. Actor-mimic: Deep multitask and transfer reinforcement learning. arXiv preprint arXiv:1511.06342 (2015).
  • Puterman (1994) Martin L. Puterman. 1994. Markov Decision Processes: Discrete Stochastic Dynamic Programming (1st ed.). John Wiley & Sons, Inc., New York, NY, USA.
  • Ring et al. (2011) Mark Ring, Tom Schaul, and Juergen Schmidhuber. 2011. The two-dimensional organization of behavior. In 2011 IEEE International Conference on Development and Learning (ICDL), Vol. 2. IEEE, 1–8.
  • Ring (1994) Mark Bishop Ring. 1994. Continual learning in reinforcement environments. Ph.D. Dissertation. University of Texas at Austin, Austin, Texas.
  • Song et al. (2016) Jinhua Song, Yang Gao, Hao Wang, and Bo An. 2016. Measuring the distance between finite Markov decision processes. In Proceedings of the 2016 International Conference on Autonomous Agents & Multiagent Systems. International Foundation for Autonomous Agents and Multiagent Systems, 468–476.
  • Sutton and Barto (2011) Richard S Sutton and Andrew G Barto. 2011. Reinforcement learning: An introduction. (2011).
  • Sutton et al. (2011) Richard S Sutton, Joseph Modayil, Michael Delp, Thomas Degris, Patrick M Pilarski, Adam White, and Doina Precup. 2011. Horde: A scalable real-time architecture for learning knowledge from unsupervised sensorimotor interaction. In The 10th International Conference on Autonomous Agents and Multiagent Systems-Volume 2. International Foundation for Autonomous Agents and Multiagent Systems, 761–768.
  • Taylor and Stone (2009) Matthew E Taylor and Peter Stone. 2009. Transfer learning for reinforcement learning domains: A survey. Journal of Machine Learning Research 10, Jul (2009), 1633–1685.
  • Teng et al. (2015) Teck-Hou Teng, Ah-Hwee Tan, and Jacek M Zurada. 2015. Self-organizing neural networks integrating domain knowledge and reinforcement learning. IEEE Transactions on Neural Networks and Learning Systems 26, 5 (2015), 889–902.
  • Thrun and O’Sullivan (1998) Sebastian Thrun and Joseph O’Sullivan. 1998. Clustering learning tasks and the selective cross-task transfer of knowledge. In Learning to learn. Springer, 235–257.
  • Torrey and Taylor (2013) Lisa Torrey and Matthew Taylor. 2013. Teaching on a budget: Agents advising agents in reinforcement learning. In Proceedings of the 2013 international conference on Autonomous agents and multi-agent systems. International Foundation for Autonomous Agents and Multiagent Systems, 1053–1060.
  • White et al. (2012) Adam White, Joseph Modayil, and Richard S Sutton. 2012. Scaling life-long off-policy learning. In 2012 IEEE International Conference on Development and Learning and Epigenetic Robotics (ICDL). IEEE, 1–6.
  • Zhan and Taylor (2015) Yusen Zhan and Matthew E Taylor. 2015. Online transfer learning in reinforcement learning domains. arXiv preprint arXiv:1507.00436 (2015).
  • Zimmer et al. (2014) Matthieu Zimmer, Paolo Viappiani, and Paul Weng. 2014. Teacher-student framework: a reinforcement learning approach. In AAMAS Workshop Autonomous Robots and Multirobot Systems.