I Introduction
Autonomous navigation is one of the core problems in mobile robotics. It can roughly be characterized as the ability of a robot to get from its current position to a designated goal location based solely on the input it receives from its onboard sensors. A popular approach to this problem relies on the successful combination of a series of different algorithms for the problems of simultaneous localization and mapping (SLAM), localization in a given map, as well as path planning and control, all of which often depend on additional information given to the agent. Although the problems of SLAM, localization, path planning and control are individually well understood [1, 2, 3], and a lot of progress has been made on learning control [4], they have mainly been treated as separable problems within robotics, and some often require human assistance at setup time. For example, the majority of SLAM solutions are implemented as passive procedures relying on special exploration strategies or a human controlling the robot for sensory data acquisition. In addition, they typically require an expert to check whether the obtained map is accurate enough for path planning and localization.
Our goal in this paper is to take first steps towards a solution for navigation tasks without explicit localization, mapping and path planning procedures. To achieve this we adopt a reinforcement learning (RL) perspective, building on recent successes of deep RL algorithms for solving challenging control tasks [5, 6, 7, 8, 9]. For such an RL algorithm to be useful for robot navigation, we desire that it can quickly adapt to new situations (e.g., changing navigation goals and environments) while still preserving the solutions to earlier problems: a prerequisite that is not fulfilled by current state-of-the-art RL-based methods. To this end, we employ successor representation learning, which has recently also been combined with deep nets [10, 7]. As we show in this paper, this formulation can be extended to handle sequential task transfer naturally, with minimal additional computational cost; its ability to retain a compact representation of the Q-functions of all encountered tasks enables it to cope with the limited memory and processing capabilities of robotic platforms.

We validate our approach and its fast transfer learning capabilities in both simulated and real-world experiments, on both visual and depth inputs, where the agent must navigate different maze-like environments. We compare it to several baselines, such as a conventional planner (assuming perfect localization), a supervised imitation learner (assuming perfect localization during training only), and transfer with DQN. In addition, we validate that deep convolutional neural networks (CNNs) can be used to imitate conventional planners in our considered domain.
II Relations to Existing Work
Our work is related to a growing body of literature on deep reinforcement learning. We here highlight the most apparent connections to recent trends, with a focus on value-based RL (which we use as a basis). A more detailed review of the concepts we build upon is given in Sec. III.
As mentioned, a growing amount of success has been reported for value-based RL in combination with deep neural networks. This idea was arguably popularized by the Deep Q-Networks (DQN) approach [5], followed by a large body of work deriving extended variants (e.g., recent adaptations to continuous control [6, 9] and improvements stabilizing its performance [11, 12, 13]).
While DQN-inspired RL algorithms have been shown to be surprisingly effective, they also come with some caveats that complicate transfer to novel tasks (one of the key attributes we are interested in). More precisely, although a neural network trained via Q-learning on a specific task is expected to learn features that are informative about both i) the dynamics induced by the policy of the agent in a given environment (we refer to this as the policy dynamics in the following text) and ii) the association of rewards to states, these two sources of information cannot be assumed to be clearly separated within the network. As a consequence, while fine-tuning a learned Q-network on a related task might be possible, it is not immediately clear how the aforementioned knowledge could be transferred in a way that keeps the policy on the original task intact. One attempt at clearly separating reward attribution for different tasks while learning a shared representation is the idea of learning a general (or universal) value function [14] over many (sub-)tasks, which has recently also been combined with DQN-type methods [15]. Our method can be interpreted as a special parametrization of a general value function architecture that facilitates fast task transfer.
Task transfer is one of the long-standing problems in RL. Historically, most work in this direction relied on simple task models and explicitly known relations between tasks or known dynamics [16, 17, 18]. More recently, there have been several attempts at combining task transfer with deep RL [19, 20, 21, 22, 23, 24]. E.g., Parisotto et al. [19] and Rusu et al. [20] performed multi-task learning (transferring useful features between different ATARI games) by fine-tuning a DQN network (trained on a single ATARI game) on multiple “related” games. More directly related to our work, Rusu et al. [21] developed the Progressive Networks approach, which trains an RL agent to progressively solve a set of tasks, allowing it to reuse the feature representations learned on tasks it has already mastered. Their derivation has the advantage that performance on all considered tasks is preserved, but it requires an ever-growing set of learned representations.
In contrast to this, our approach to task transfer aims to more directly tie the learned representations between tasks. To achieve this, we build on the idea of successor representation learning for RL, first proposed by Dayan [25] and recently combined with deep neural networks [10, 7]. This line of work makes the observation that Q-learning can be partitioned into two sub-tasks: 1) learning features from which the reward can be predicted reliably and 2) estimating how these features evolve over time. While it was previously noted how such a partitioning can be exploited to speed up learning for cases where the reward changes scale or meaning [10, 7], we here show how this view can be extended to allow general – fast – transfer across tasks, including changes to the environment, the reward function and thus also the optimal policy. We also note that the objective we use for learning descriptive features involves training a deep autoencoder. Learning state representations for RL via autoencoders has been considered several times in the literature [26, 27, 28]. Among these, utilizing the priors on learned representations for robotics from Jonschkowski et al. [27] could potentially further improve our model.
III Background
In this section we review the reinforcement learning concepts upon which we build our approach.
III-A Reinforcement Learning
We formalize the navigation task as a Markov Decision Process (MDP). In an MDP an agent interacts with the environment through a sequence of observations, actions and reward signals. In each time step $t$ of the decision process the agent first receives an observation $x_t$ from the environment (in our case an image of its surroundings). Together with a history of recent observations – with history length $h$ – this informs the agent about the true state $s_t$ of the environment; in the following we always define $s_t = (x_{t-h+1}, \dots, x_t)$. The agent then selects an action $a_t$ according to a policy $\pi(s_t)$ (we restrict the presentation to deterministic policies with discrete actions to simplify notation; a generalization can easily be obtained) and transits to the next state following the dynamics of the environment: $s_{t+1} \sim p(s_{t+1} \mid s_t, a_t)$, receiving a reward $r_t$ and obtaining a new observation $x_{t+1}$. The agent’s goal is to maximize the cumulative expected future reward $\mathbb{E}[\sum_t \gamma^t r_t]$ (with discount factor $\gamma \in [0, 1)$). This quantity assigns an expected value to each state–action pair. The action-value function (referred to as the Q-value function) for executing action $a$ in state $s$ under a policy $\pi$ can thus be defined as:
$Q^{\pi}(s, a) = \mathbb{E}\Big[\sum_{t=0}^{\infty} \gamma^{t} r_t \,\Big|\, s_0 = s, a_0 = a\Big]$  (1)
where the expectation is taken over the policy dynamics: the transition dynamics under policy $\pi$. Importantly, the Q-function can be computed using the Bellman equation
$Q^{\pi}(s, a) = \mathbb{E}_{s'}\big[r(s, a) + \gamma\, Q^{\pi}(s', \pi(s'))\big]$  (2)
which allows for recursive estimation procedures such as Q-learning and SARSA [29]. Furthermore, assuming the Q-function for a given policy is known, we can find an improved policy by acting greedily with respect to it in each state: $\pi'(s) = \arg\max_{a} Q^{\pi}(s, a)$.
When combined with powerful function approximators such as deep neural networks these principles form the basis of many recent successes in RL for control.
III-B Successor Feature Reinforcement Learning
While directly learning the Q-value function from Eq. (1) with function approximation is possible, it results in a black-box approximator, which makes knowledge transfer between tasks challenging (we refer to Sec. II for a discussion). We therefore base our algorithm on a reformulation of the RL problem called successor representation learning, first introduced by Dayan [25] and recently combined with deep neural networks [10, 7], which we first review here and then extend to naturally handle task transfer.
To start, we assume that the reward function can be approximately represented as a linear combination of learned features $\phi(s)$ (in our case features extracted from a neural network) with parameters $\theta_\phi$ and a reward weight vector $\omega$ as $r(s, a) \approx \phi(s)^{T}\omega$. Using this assumption we can rewrite Eq. (1) as

$Q^{\pi}(s, a) = \mathbb{E}\Big[\sum_{t=0}^{\infty} \gamma^{t} \phi(s_t) \,\Big|\, s_0 = s, a_0 = a\Big]^{T} \omega = \psi^{\pi}(s, a)^{T}\omega,$  (3)
where, in line with [10], we refer to $\psi^{\pi}(s, a)$ as the successor features. Consequently we will refer to the whole reinforcement learning algorithm as successor feature reinforcement learning (SF-RL). As a special case we will assume that the features themselves are representative of the state (i.e., we can reconstruct the state from $\phi(s)$ alone), which allows us to explicitly turn $\psi^{\pi}$ into a function of $\phi(s)$. In the following we omit the dependency on the parameters $\theta_\phi$ and simply write $\phi(s)$ to avoid cluttering notation.
Interestingly, these successor features can again be computed via a Bellman equation in which the reward function is replaced with $\phi(s)$; that is, we have:
$\psi^{\pi}(s, a) = \phi(s) + \gamma\, \mathbb{E}_{s'}\big[\psi^{\pi}(s', \pi(s'))\big]$  (4)
We can thus learn approximate successor features using a deep Q-learning-like procedure [10, 7]. Effectively, this reformulation separates the learning of the Q-function into two problems: 1) estimating the expectation of descriptive features under the current policy dynamics and 2) estimating the reward obtainable in a given state.
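The decomposition above can be checked numerically: in a small MDP whose reward is exactly linear in the features, successor features obtained from the recursion in Eq. (4) recover the same values as direct policy evaluation. The sketch below uses a state-value variant for brevity; all quantities are random toy stand-ins, not the paper's model:

```python
import numpy as np

# Toy 3-state, 2-action MDP with a fixed deterministic policy.
rng = np.random.default_rng(1)
nS, nA, d, gamma = 3, 2, 4, 0.9
P = rng.dirichlet(np.ones(nS), size=(nS, nA))  # P[s, a] = next-state distribution
phi = rng.normal(size=(nS, d))                 # state features phi(s)
w = rng.normal(size=d)                         # reward weights: r(s) = phi(s) @ w
pi = rng.integers(nA, size=nS)                 # deterministic policy

# Transition matrix under the policy: P_pi[s] = P[s, pi(s)]
P_pi = P[np.arange(nS), pi]

# Successor features solve psi = phi + gamma * P_pi @ psi (state version of Eq. 4)
psi = np.linalg.solve(np.eye(nS) - gamma * P_pi, phi)

# Direct policy evaluation: V = (I - gamma * P_pi)^{-1} r
r = phi @ w
V_direct = np.linalg.solve(np.eye(nS) - gamma * P_pi, r)

# Eq. (3): the value is linear in the successor features
assert np.allclose(psi @ w, V_direct)
```

Since the expectation is linear, pushing the weights $\omega$ outside the discounted sum is exact, which is what the final assertion verifies.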
To show how learning with successor feature RL works, let us consider the case where we are only interested in recovering the Q-function of the optimal policy $\pi^{*}$. In this case we can simultaneously learn the parameters $\theta_\phi$ of the feature mapping $\phi$ (a convolutional neural network), the reward weights $\omega$ and an approximate successor feature mapping $\psi$ (a fully connected network with parameters $\theta_\psi$) by alternating stochastic gradient descent steps on two objective functions:

$\mathcal{L}(\theta_\psi) = \mathbb{E}_{(s,a,s') \sim \mathcal{D}_t}\Big[\big(\phi(s) + \gamma\, \psi(s', a^{*}; \theta_\psi^{-}) - \psi(s, a; \theta_\psi)\big)^{2}\Big],$  (5)

$\mathcal{L}(\theta_\phi, \theta_d, \omega) = \mathbb{E}_{(s,r) \sim \mathcal{D}_r}\Big[\big(r - \phi(s; \theta_\phi)^{T}\omega\big)^{2} + \big(s - d(\phi(s; \theta_\phi); \theta_d)\big)^{2}\Big],$  (6)

where $\mathcal{D}_t$ and $\mathcal{D}_r$ denote collected experience data for transitions and rewards, respectively, $a^{*} = \arg\max_{a'} \psi(s', a'; \theta_\psi^{-})^{T}\omega$ – computed by inserting the approximate successor features into Eq. (3) – and where $\theta_\psi^{-}$ denotes the parameters of the current target successor feature approximation. To ensure stable learning these are occasionally copied from $\theta_\psi$ (a full discussion of the intricacies of this approach is out of the scope of this paper; we refer to [5] and [7] for details); we replace the target successor feature parameters at a fixed interval of training steps.
The objective function from Eq. (5) corresponds to learning the successor features via online Q-learning (with $\phi(s)$ taking the role of the rewards). The objective from Eq. (6) corresponds to learning the reward weights and the CNN feature mapping, and consists of two parts: the first part ensures that the reward is accurately regressed; the second part ensures that the features are representative of the state by enforcing that an inverse mapping from $\phi(s)$ back to $s$ exists through a third convolutional network, a decoder $d$, whose parameters $\theta_d$ are also learned. After learning, actions can be chosen greedily by inserting the approximated successor features into Eq. (3).
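As an illustration of the two objectives, the following sketch computes both losses for a single transition, with plain linear maps standing in for the paper's CNN feature extractor, successor-feature network and decoder. All shapes and values here are illustrative assumptions, not the paper's architecture:

```python
import numpy as np

# Linear stand-ins for the networks of Sec. III-B (a sketch, not the paper's model).
rng = np.random.default_rng(2)
obs_dim, d, nA, gamma = 6, 4, 3, 0.99
Phi = rng.normal(size=(d, obs_dim))      # feature map: phi(s) = Phi @ s
Dec = rng.normal(size=(obs_dim, d))      # decoder: reconstructs s from phi(s)
Psi = rng.normal(size=(nA, d, d))        # successor features: psi(s, a) = Psi[a] @ phi(s)
Psi_target = Psi.copy()                  # periodically-copied target parameters
w = rng.normal(size=d)                   # reward weights

# One synthetic transition (s, a, r, s')
s, s2 = rng.normal(size=obs_dim), rng.normal(size=obs_dim)
a, r = 1, 0.5

phi_s, phi_s2 = Phi @ s, Phi @ s2
# Greedy next action a* = argmax_a' psi(s', a')^T w, used in the Eq. (5) target
a_star = int(np.argmax(Psi @ phi_s2 @ w))
td_target = phi_s + gamma * Psi_target[a_star] @ phi_s2
loss_psi = np.sum((td_target - Psi[a] @ phi_s) ** 2)              # Eq. (5)
loss_phi = (r - phi_s @ w) ** 2 + np.sum((s - Dec @ phi_s) ** 2)  # Eq. (6)
assert np.isfinite(loss_psi) and np.isfinite(loss_phi)
```

In the actual algorithm these two losses are minimized on mini-batches by alternating gradient steps, with the target parameters refreshed at a fixed interval.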
IV Transferring Successor Features to New Goals and Tasks
As described above, the successor representation naturally decouples task-specific reward estimation from the estimation of the expected occurrence of the features under the specific policy dynamics. This makes successor-feature-based RL a natural choice when aiming to transfer knowledge between related tasks. To see this, let us first define two notions of knowledge transfer. In both cases we assume that learning occurs in different stages, during each of which the agent can interact with a distinct task $\mathcal{T}_k$. The aim for the agent is to solve all tasks at the end of training, using minimal interaction time for each task. From the perspective of reinforcement learning this setup corresponds to a sequence of RL problems with shared structure. Knowledge transfer between tasks can then occur in two different scenarios:
The first, and simplest, notion of knowledge transfer occurs if all tasks share the same environment and transition dynamics and differ only in their reward functions. In a navigation task this is equivalent to finding paths to different goal positions within one single maze.
The second, and more general, notion of knowledge transfer occurs if all tasks use different environments (and potentially different reward functions) which share some similarities within their state space. In a navigation task this includes changing the maze structure or robot dynamics between different tasks.
We can observe that successor feature RL lends itself well to transfer learning in scenarios of the first kind: if the features $\phi$ are expressive enough to ensure that the rewards for all tasks can be linearly predicted from them, then for all tasks following the first (i.e., for $k > 1$) one only has to learn a new reward weight vector $\omega^{k}$ (keeping both the learned $\phi$ and $\psi$ fixed), although care has to be taken if the expectation of the features under the different policy changes (in which case the successor features would have to be adapted as well). Learning $\omega^{k}$ for $k > 1$ then boils down to solving a simple regression problem (i.e., minimizing Eq. (6) wrt. $\omega^{k}$) and requires only the storage of an additional weight vector per task. This idea has recently been explored [10, 7]; Kulkarni et al. [7] showed large learning speedups for a special case of this setting in which they changed the scale of the final reward. We here argue that successor feature RL can easily be extended to transfer learning of the second kind with minimal additional memory and computational requirements.
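For transfer of the first kind, adapting to a new reward while keeping $\phi$ and $\psi$ fixed is nothing more than a regression for the new weight vector. A minimal sketch, with synthetic features standing in for a trained feature extractor:

```python
import numpy as np

# Transfer of the first kind: same environment, new reward. With phi and psi
# frozen, adapting to a new task reduces to regressing a new weight vector
# omega from (phi(s), r(s)) pairs -- ordinary least squares in this sketch.
rng = np.random.default_rng(3)
d, n = 4, 200
phi = rng.normal(size=(n, d))            # features of visited states (stand-in)
w_new = np.array([1.0, -2.0, 0.5, 0.0])  # unknown reward weights of the new task
r = phi @ w_new                          # rewards observed on the new task

w_hat, *_ = np.linalg.lstsq(phi, r, rcond=None)
assert np.allclose(w_hat, w_new)

# Q-values for the new task come for free from the old successor features:
psi = rng.normal(size=(5, d))            # psi(s, a) for 5 state-action pairs
q_new = psi @ w_hat                      # Eq. (3) with the refit weights
assert np.allclose(q_new, psi @ w_new)
```

Only the weight vector is stored per additional task; everything else is reused.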
Specifically, to derive a learning algorithm that works for both transfer scenarios, let us first define the action-value function for task $k$ using the successor feature notation as $Q^{\pi_k}(s, a) = \psi^{k}(s, a)^{T}\omega^{k}$, where we use the superscript $k$ to refer to task-specific features $\phi^{k}$ and policies $\pi_k$ respectively, and where we again omit parameter dependencies for notational brevity. Additionally, let us assume that there exists a linear relation between the task features, that is, for all $i < k$ there exists a mapping $B_i$ such that $\phi^{i}(s) \approx B_i \phi^{k}(s)$. We note that such a linear dependency between features does not imply a linear dependency between the observations (since $\phi$ is a nonlinear function implemented by a neural network), and hence this assumption is not very restrictive. Then – again using the fact that the expectation is a linear operator – we obtain for $i < k$:
$Q^{\pi_i}(s, a) = \big(B_i\, \psi^{\pi_i}_{\phi^k}(s, a)\big)^{T} \omega^{i}$  (7)
$\phantom{Q^{\pi_i}(s, a)} = \psi^{i}(s, a)^{T} \omega^{i},$  (8)

where $\psi^{\pi_i}_{\phi^k}(s, a)$ denotes the expected discounted sum of the current task features $\phi^{k}$ under the old task policy $\pi_i$.
These equivalences give us a straightforward way to transfer knowledge to new tasks while keeping the solutions found for old tasks intact (as long as we have access to all task feature mappings and policies $\pi_i$):
- In addition, train all $B_i$ with $i < k$ to preserve the relation $\phi^{i}(s) \approx B_i \phi^{k}(s)$.
- To obtain successor features for the previous tasks, estimate the expectation of the features for the current task under the old task policies to obtain $\psi^{\pi_i}_{\phi^k}$, so that Eq. (7) can be computed during evaluation. Note that this means we have to estimate the expectation of the current task features under all old task dynamics and policies (in principle, the expectations for all tasks need to be evaluated with samples from these tasks; in our case, however, we found that the shared structure between tasks was large enough to allow estimating all expectations based on samples from the current task only). Since we expect significant overlap between tasks in our experiments, this can be implemented memory-efficiently by using one single neural network with multiple output layers to implement all task-specific successor features. Alternatively, if the successor feature networks are small, one can simply preserve the old task successor feature networks and use Eq. (8) for selecting actions on old tasks.
When – as in Sec. IIIB – we are only interested in finding the optimal policy for each task these steps correspond to alternating stochastic gradient descent steps on two objective functions analogous to Eqs. (5),(6), under the model architecture depicted in Fig. 2. More precisely, we write and obtain the following objectives for task :
$\mathcal{L}(\theta_{\psi^k}) = \mathbb{E}_{(s,a,s') \sim \mathcal{D}_t}\Big[\big(\phi^{k}(s) + \gamma\, \psi^{k}(s', a^{*}; \theta_{\psi^k}^{-}) - \psi^{k}(s, a; \theta_{\psi^k})\big)^{2}\Big],$  (9)

$\mathcal{L}(\theta_{\phi^k}, \theta_d, \omega^{k}, B) = \mathbb{E}_{(s,r) \sim \mathcal{D}_r}\Big[\big(r - \phi^{k}(s)^{T}\omega^{k}\big)^{2} + \big(s - d(\phi^{k}(s))\big)^{2} + \sum_{i<k}\big(\phi^{i}(s) - B_i\, \phi^{k}(s)\big)^{2}\Big],$  (10)
where $a^{*}$ is the current greedy best action for task $k$; in cases where we are willing to store the old successor feature networks, Eq. (9) only needs to be optimized with respect to the current task parameters, dropping all other terms (in practice there is no noticeable performance difference). Several interesting details can be noted about this formulation. First, if we assume that all task-specific successor features are implemented using one neural network with multiple output layers – or if the successor feature networks are small – then the overhead for learning $k$ tasks is small (we only have to store the additional weight matrices $B_i$ plus one additional reward weight vector per task); this is in contrast to other recently proposed, successful transfer learning approaches for RL such as [21]. Second, the regression of the old task features via the transformation matrices $B_i$ forces the CNN that outputs $\phi^{k}$ to represent the features for all tasks (this may be seen as a special case of the distillation technique [30]). As such we expect this approach to work well when tasks have shared structure; if they have no shared structure one would have to increase the number of parameters (and thus possibly the dimensionality of $\phi$).
To gain some intuition for why the above model should work, we here give a – hypothetical – example. Let us assume the set of extracted features to be the relative distances to a set of objects from the current position of the agent. Then the successor features would estimate the discounted sum of those relative distances under the current policy dynamics. When transferring to a new environment, the spatial relationship of the objects could, for example, change, and the features $\phi$ would need to adapt accordingly. But since we assume the two environments to share structure (e.g., they contain the same objects), filters in the early layers of the network computing $\phi$ could be largely reused (or transferred). The adapted features (e.g., the relative distances from the current pose to the changed object positions) would now differ from those of the previous environments; this change in scale could be directly captured by a linear mapping $B_i$. The successor features $\psi$ would also need to be adapted, but due to the shared structure between environments and their similarity in the successor features we would expect adapting them to be fast. Similarly, the reward mapping $\omega$ can either be relearned quickly or transferred directly (e.g., if we assume that the reward penalizes proximity to objects).
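The core mechanics of transfer of the second kind – fitting the mappings $B_i$ and evaluating old-task Q-values through Eq. (7) – reduce to linear algebra once the features are given. A toy sketch with synthetic features (all quantities here are illustrative assumptions):

```python
import numpy as np

# Transfer of the second kind: old-task features are assumed linearly
# predictable from current-task features, phi_i(s) ~= B_i @ phi_k(s).
rng = np.random.default_rng(4)
d, n = 4, 300
phi_k = rng.normal(size=(n, d))   # current-task features on visited states
B_true = rng.normal(size=(d, d))  # ground-truth linear relation (toy)
phi_i = phi_k @ B_true.T          # corresponding old-task features

# Fit B_i by least squares on samples from the current task
B_hat, *_ = np.linalg.lstsq(phi_k, phi_i, rcond=None)
B_hat = B_hat.T
assert np.allclose(B_hat, B_true)

# Old-task Q-values from current-task successor features (Eq. 7):
# Q_i(s, a) = (B_i @ psi_k^{pi_i}(s, a))^T omega_i
w_i = rng.normal(size=d)          # old-task reward weights
psi_k = rng.normal(size=d)        # current-task SFs under the old policy (stand-in)
q_i = (B_hat @ psi_k) @ w_i
assert np.isclose(q_i, (B_true @ psi_k) @ w_i)
```

In the full algorithm the regression of $B_i$ is simply an extra quadratic term in the feature-learning objective, so it adds only one small matrix per previous task.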
V Simulated Experiments
V-A Experimental Setup
We first test our algorithm using a simulation of different maze-like 3D environments. The environment contains cubic objects and a target for the agent to reach (rendered as a green sphere, cf. Fig. 4). To simplify the problem we model the legal actions as four discrete choices: {stand still, turn left, turn right, go straight} (we note that in simulation the agent still moves in a continuous manner). The agent is a simulated Pioneer 3-DX robot moving under a differential drive model (with Gaussian control noise, so the robot observes the environment from a continuous range of viewing positions and angles).
The agent obtains a small negative reward for each step it takes, a negative reward for colliding with obstacles, and a positive reward for reaching the goal; this reward structure forces time-optimal behavior. Each episode starts with the agent in a random location and ends when it reaches the goal (unless noted otherwise).
In each time step the agent receives as an observation a frame captured from the forward-facing camera (as shown in Fig. 4), rescaled to a fixed pixel resolution. The state in each time step is then given by the 4 most recently obtained observations. Top-down views of the four different mazes we consider are shown in Fig. 5.
For training the model (Fig. 2) we employed stochastic gradient descent with the ADAM optimizer [31]. We performed a coarse grid search over the optimizer hyperparameters for each learning algorithm, choosing the learning rate separately for the supervised learner and the reinforcement learners as well as for visual and depth inputs, and used the same mini-batch size across all considered approaches. Training was performed alongside exploration in the environment (one batch is considered every 4 steps).
V-B Baseline Methods: Supervised Learning & DQN
As a baseline for our experiments, we train a CNN by supervised learning to directly predict the actions computed by an A* planner from the same visual input that the SF-RL model receives. The network structure is the same as the CNN from the SF-RL model and differs only in that its 512 output units are fed into a final softmax layer. As an additional baseline we also compare to the DQN approach [5]. To ensure a fair comparison we evaluate DQN both when learning from scratch and in a transfer learning situation in which we fine-tune the DQN model trained on the base task; such a fine-tuning approach is known to perform better than simply transferring with fixed features [32, 21] (for completeness we also conduct transfer learning experiments with fixed features for DQN). The training data for the supervised learner is generated beforehand and consists of labeled samples. Generating these samples requires full localization, while evaluating the learned network does not. As such, this setup can be thought of as the best-case scenario for training a CNN to imitate a planner in this domain.
To ensure a fair comparison between the different methods in the following plots, we scale the number of steps taken by the supervised learner so that its number of updates matches that of the SF-RL model and of DQN (the two reinforcement learners start learning only after an initial set of exploration steps and make an update every 4 steps after that).
V-C Visual Navigation in 3D Mazes
For the first experiment we trained our deep successor feature reinforcement learner (SF-RL), DQN and the supervised learner on the base map: Map1 (Fig. 4(a)). To compare the algorithms we perform a testing phase every 10,000 steps, consisting of evaluating the performance of the current policy for 5,000 testing steps.
V-C1 Base Environment
We first train on Map1 from scratch. We observe that the supervised learning and reinforcement learning (DQN and SF-RL) models converge to performance comparable to the optimal planner. We also observe that the supervised learner converges significantly faster in this experiment. This is to be expected since, right from the beginning of training, it has access to optimal paths – as computed via A* – for starting positions covering the whole environment. In contrast, the reinforcement learners have to gradually build up a dataset of experience and can only make use of the sparsely distributed reward signal to evaluate the actions taken.
[Fig. 5 caption: Performance with one standard deviation obtained by using the true system model, the supervised learner, as well as DQN and SF-RL when learning from scratch and with task transfer from Map1 (5(a)) and Map3 (5(b)).]

V-C2 Transfer to a Different Environment
We then perform a transfer learning experiment (using the trained models from above) on a changed environment, Map2, where more walls are added (Fig. 4(b)). In Fig. 5(a) we show a performance comparison between the supervised learner (Supervised), DQN learning from scratch (DQN) and using task transfer (with fixed CNN layers: DQN-FixFeature, and by fine-tuning the whole network: DQN-Finetune), and SF-RL from scratch (SF-RL) and using task transfer (SF-RL-Transfer).
We observe that SF-RL-Transfer converges to performance comparable to the optimal policy much faster than training from scratch. Furthermore, in Fig. 5(a) the learning speed of SF-RL-Transfer is even comparable to that of Supervised, which learns directly from perfectly labeled actions. We observe that when training from scratch, DQN is slightly faster than SF-RL (we attribute this to the fact that SF-RL optimizes a more complicated loss function including, e.g., an autoencoder loss). In the transfer learning setting, SF-RL-Transfer is comparable to DQN-Finetune and converges faster than DQN-FixFeature. It is important to realize that our method preserves the ability to solve the old task after this transfer occurred, which DQN-Finetune is not capable of. To verify this preservation of the old policies we re-evaluated DQN-Finetune and SF-RL-Transfer on all tasks and summarize the results in Tab. I (DQN-FixFeature keeps the network for the initial task completely unchanged, so it is unnecessary to evaluate its performance again). We note that our agent is still able to perform well on the old task, while the DQN agent deviated significantly from the optimal policy (it is still able to solve most of the episodes in this case via a “random walk”). We also want to emphasize that, in contrast to DQN-FixFeature, SF-RL-Transfer can continuously adapt its features to new tasks while keeping a mapping to all previous task features. Additionally, DQN-FixFeature has to perform the same transfer procedure for all kinds of transfer scenarios due to its black-box nature. With the more structured representation of SF-RL-Transfer, in contrast, we only need to retrain the successor feature network (keeping the reward mapping fixed) when only the dynamics change; and if the dynamics of the environment stay fixed or close to the already observed dynamics, SF-RL-Transfer can adapt quickly by changing only the reward weights, possibly in combination with the mappings $B_i$.
V-D More Complicated Transfer Scenarios
We then experiment with a more complicated transfer scenario: transferring a base controller from Map3 (Fig. 4(c)) to Map4 (Fig. 4(d)). As can be seen from the visualization, the objects change significantly from Map3 to Map4. Also, the goal location moves from the center of an open area to a “hidden” corner. The results for this experiment are depicted in Fig. 5(b), revealing a similar trend as for the simpler mazes. A re-evaluation of the DQN-Finetune and SF-RL-Transfer agents is shown in Tab. I. We note that the DQN-Finetune agent loses the policy for Map3 after being transferred to Map4, as the locations of the target and objects changed dramatically, while our agent is still able to solve the old task after the transfer.
Furthermore, note that while the transfer from Map1 to Map2 moves from a simpler to a more complicated environment, Map4 is “simpler” than Map3.
Pre-train on / Transfer to | Success ratio | Reward | Steps
Testing on Map1
baseline | – | 0.814 ± 0.070 | 5.640 ± 1.747
DQN-Finetune
  Map1 / – | 50/50 | 0.791 ± 0.114 | 6.220 ± 2.845
  Map1 / Map2 | 48/50 | 0.398 ± 1.755 | 15.000 ± 38.800
SF-RL-Transfer
  Map1 / – | 50/50 | 0.765 ± 0.243 | 6.410 ± 3.915
  Map1 / Map2 | 50/50 | 0.733 ± 0.235 | 6.796 ± 2.999
Testing on Map3
baseline | – | 0.635 ± 0.138 | 10.120 ± 3.438
DQN-Finetune
  Map3 / – | 50/50 | 0.566 ± 0.178 | 11.84 ± 4.442
  Map3 / Map4 | 4/50 | −18.335 ± 5.703 | 460.46 ± 135.450
SF-RL-Transfer
  Map3 / – | 50/50 | 0.489 ± 0.348 | 13.460 ± 5.936
  Map3 / Map4 | 50/50 | 0.444 ± 0.416 | 13.780 ± 8.707

TABLE I: Final testing statistics for all considered environments, each evaluated from 50 random starting positions. The maximum number of steps per episode was 200 for Map1&2 and 500 for Map3&4.

V-E Analysis of Learned Representation
As an additional test, we analyzed the representation learned by the SF-RL approach. Specifically, since the reward is defined on the pose of the agent and optimal path finding clearly depends on the agent being able to localize itself, we analyzed whether $\phi(s)$ encodes the robot pose. To answer this, we extracted features for all states along collected optimal trajectories and regressed the ground-truth poses of the robot (obtained from our simulator) using a neural network with two hidden layers (128 units each). Fig. 4 shows the results of this experiment, overlaying the ground-truth poses with the poses predicted by our regressor on a held-out example. From this we conclude that the relevant pose information is indeed encoded, that the agent is able to localize itself, and that this information can reliably be retrieved post hoc (i.e., after training).
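Such a post-hoc probe is straightforward to set up. The paper uses a two-hidden-layer network; the simplified sketch below instead fits a linear ridge regressor, on synthetic stand-in features rather than data from the experiments (all quantities here are illustrative):

```python
import numpy as np

# Post-hoc pose probe (sketch): regress (x, y, heading) from features phi(s).
rng = np.random.default_rng(5)
n, d = 500, 64
poses = rng.uniform(-1, 1, size=(n, 3))          # ground-truth (x, y, theta)
W_gen = rng.normal(size=(3, d))
# Stand-in for phi(s): a noisy linear function of the pose
features = poses @ W_gen + 0.01 * rng.normal(size=(n, d))

# Ridge regression: W = (X^T X + lam * I)^{-1} X^T Y
lam = 1e-3
W = np.linalg.solve(features.T @ features + lam * np.eye(d), features.T @ poses)
pred = features @ W
err = np.abs(pred - poses).mean()
assert err < 0.05   # pose is recoverable from the features
```

If the probe's error is small on held-out trajectories, the features demonstrably encode the pose, which is the conclusion drawn above.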
VI Real-World Experiments
In order to show the applicability of our method to more realistic scenarios, we conducted additional experiments using a real robot. We start by swapping the RGB camera input for a simulated depth sensor in simulation and then perform a transfer learning experiment to a different, real, environment from which we collect real depth images.
VI-A Rendered Depth Experiments
To obtain a scenario closer to a real-world scene we might encounter, we built a maze-like environment, Map5 (Fig. 7), in our robot simulator that includes realistic walls and object models. In this setting the robot has to navigate to the target (a traffic cone in the center) while avoiding collisions with objects and walls. We then simulate the robot within this environment, providing rendered depth images from a simulated Kinect camera as the input modality (as opposed to the artificial RGB images used before).

VI-B Real-World Transfer Experiments
We then move to a real robot experiment in which the robot explores the maze depicted in Map6 (Fig. 1) (note that the positions of the objects and the target are changed from the simulated environment Map5 in Fig. 7). We collect real depth images in the actual maze world using the onboard Kinect sensor of a Robotino. To avoid training for long periods of time in the real environment we pre-recorded images at all locations that the robot can explore (taking 100 images per position and direction, with randomly perturbed robot poses to model noise).
The results of training from scratch in this real environment, as well as with transfer from the simulated environment, are depicted in Fig. 8 (the agent starts to learn after a different number of initial exploration steps than in the previous experiments). Similar to the previous experiments, we see a large speedup when transferring knowledge, even though the simulated depth images contain none of the characteristic noise patterns present in the real-world Kinect data. We note that the agent achieves satisfactory performance at around 60,000 iterations, which corresponds to approximately 8 hours of real experience (assuming data is collected at a rate of 2 Hz).
After training with the pre-recorded images, the robot was tested in real-world environments. A video of the real experiments in two changed environments, Map6 & Map7 (Map7 is not discussed here due to space constraints), can be found at: https://youtu.be/WcCcdkhgjdY.
VII Conclusion
We presented a method for solving robot navigation tasks from raw sensory data, based on an extension of the theory behind successor feature reinforcement learning. Our algorithm naturally transfers knowledge between related tasks and, in our experiments, yields substantial speedups over deep reinforcement learning from scratch. Despite these encouraging results, there are several opportunities for future work, including testing our approach in more complicated scenarios and extending it to more naturally handle partial observability.
References
 [1] S. Thrun, W. Burgard, and D. Fox, Probabilistic Robotics. MIT Press, 2005.
 [2] S. M. LaValle, Planning Algorithms. Cambridge University Press, 2006.
 [3] J.-C. Latombe, Robot Motion Planning. Kluwer, 1991.
 [4] J. Kober, J. A. Bagnell, and J. Peters, “Reinforcement learning in robotics: A survey,” The International Journal of Robotics Research, vol. 32, no. 11, pp. 1238–1274, 2013.
 [5] V. Mnih, K. Kavukcuoglu, D. Silver, A. A. Rusu, J. Veness, M. G. Bellemare, A. Graves, M. Riedmiller, A. K. Fidjeland, G. Ostrovski, et al., “Human-level control through deep reinforcement learning,” Nature, vol. 518, no. 7540, 2015.
 [6] T. P. Lillicrap, J. J. Hunt, A. Pritzel, N. Heess, T. Erez, Y. Tassa, D. Silver, and D. Wierstra, “Continuous control with deep reinforcement learning,” in Proc. of the International Conference on Learning Representations (ICLR), 2016.
 [7] T. D. Kulkarni, A. Saeedi, S. Gautam, and S. J. Gershman, “Deep successor reinforcement learning,” arXiv preprint arXiv:1606.02396, 2016.
 [8] S. Levine, C. Finn, T. Darrell, and P. Abbeel, “End-to-end training of deep visuomotor policies,” Journal of Machine Learning Research (JMLR), 2016.
 [9] J. Schulman, S. Levine, P. Abbeel, M. Jordan, and P. Moritz, “Trust region policy optimization,” in Proc. of the 32nd International Conference on Machine Learning (ICML), 2015.
 [10] A. Barreto, R. Munos, T. Schaul, and D. Silver, “Successor features for transfer in reinforcement learning,” arXiv preprint arXiv:1606.05312, 2016.
 [11] H. van Hasselt, A. Guez, and D. Silver, “Deep reinforcement learning with double Q-learning,” in Proc. of the Thirtieth AAAI Conference on Artificial Intelligence (AAAI), 2016.
 [12] Z. Wang, T. Schaul, M. Hessel, H. van Hasselt, M. Lanctot, and N. de Freitas, “Dueling network architectures for deep reinforcement learning,” in Proc. of the 33rd International Conference on Machine Learning (ICML), 2016.
 [13] T. Schaul, J. Quan, I. Antonoglou, and D. Silver, “Prioritized experience replay,” in Proc. of the International Conference on Learning Representations (ICLR), 2016.
 [14] R. S. Sutton, J. Modayil, M. Delp, T. Degris, P. M. Pilarski, A. White, and D. Precup, “Horde: a scalable real-time architecture for learning knowledge from unsupervised sensorimotor interaction,” in Proc. of the 10th International Conference on Autonomous Agents and Multiagent Systems (AAMAS), 2011.
 [15] T. Schaul, D. Horgan, K. Gregor, and D. Silver, “Universal value function approximators,” in Proc. of the 32nd International Conference on Machine Learning (ICML), 2015.
 [16] M. Ring, “Continual learning in reinforcement environments,” PhD thesis, Oldenbourg Verlag, 1995.
 [17] M. E. Taylor and P. Stone, “An introduction to intertask transfer for reinforcement learning,” AI Magazine, vol. 32, no. 1, 2011.
 [18] A. Wilson, A. Fern, S. Ray, and P. Tadepalli, “Multi-task reinforcement learning: a hierarchical Bayesian approach,” in Proc. of the 24th International Conference on Machine Learning (ICML), 2007.
 [19] E. Parisotto, L. J. Ba, and R. Salakhutdinov, “Actor-mimic: Deep multitask and transfer reinforcement learning,” in Proc. of the International Conference on Learning Representations (ICLR), 2016.
 [20] A. A. Rusu, S. G. Colmenarejo, C. Gulcehre, G. Desjardins, J. Kirkpatrick, R. Pascanu, V. Mnih, K. Kavukcuoglu, and R. Hadsell, “Policy distillation,” in Proc. of the International Conference on Learning Representations (ICLR), 2016.
 [21] A. A. Rusu, N. C. Rabinowitz, G. Desjardins, H. Soyer, J. Kirkpatrick, K. Kavukcuoglu, R. Pascanu, and R. Hadsell, “Progressive neural networks,” arXiv preprint arXiv:1606.04671, 2016.
 [22] L. Tai and M. Liu, “Towards cognitive exploration through deep reinforcement learning for mobile robots,” arXiv preprint arXiv:1610.01733, 2016.
 [23] Y. Zhu, R. Mottaghi, E. Kolve, J. J. Lim, A. Gupta, L. Fei-Fei, and A. Farhadi, “Target-driven visual navigation in indoor scenes using deep reinforcement learning,” arXiv preprint arXiv:1609.05143, 2016.
 [24] L. Tai and M. Liu, “Deep-learning in mobile robotics - from perception to control systems: A survey on why and why not,” arXiv preprint arXiv:1612.07139, 2016.
 [25] P. Dayan, “Improving generalization for temporal difference learning: The successor representation,” Neural Computation, vol. 5, no. 4, 1993.
 [26] M. Riedmiller, S. Lange, and A. Voigtlaender, “Autonomous reinforcement learning on raw visual input data in a real world application,” in IJCNN, 2012.
 [27] R. Jonschkowski and O. Brock, “Learning state representations with robotic priors,” Autonomous Robots, vol. 39, no. 3, 2015.
 [28] C. Finn, X. Y. Tan, Y. Duan, T. Darrell, S. Levine, and P. Abbeel, “Deep spatial autoencoders for visuomotor learning,” in Proc. of the IEEE Int. Conf. on Robotics and Automation (ICRA), 2016.
 [29] R. S. Sutton and A. G. Barto, Reinforcement Learning: An Introduction. MIT Press, 1998.
 [30] G. Hinton, O. Vinyals, and J. Dean, “Distilling the knowledge in a neural network,” arXiv preprint arXiv:1503.02531, 2015.
 [31] D. Kingma and J. Ba, “Adam: A method for stochastic optimization,” in Proc. of the International Conference on Learning Representations (ICLR), 2015.
 [32] J. Yosinski, J. Clune, Y. Bengio, and H. Lipson, “How transferable are features in deep neural networks?” in Advances in Neural Information Processing Systems, 2014, pp. 3320–3328.