Complementary learning systems (McClelland et al., 1995, CLS) combine two mechanisms for learning: one fast-learning and highly adaptive but poor at generalising, the other slow at learning and consequently better at generalising across many examples. The need for two systems reflects the typical trade-off between the sample efficiency and the computational complexity of a learning algorithm. We argue that the majority of contemporary deep reinforcement learning systems fall into the latter category: slow, gradient-based updates combined with incremental updates from Bellman backups result in systems that are good at generalising, as evidenced by many successes (Mnih et al., 2015; Silver et al., 2016; Moravčík et al., 2017), but take many steps in an environment to achieve this feat.
RL methods are often categorised as either model-free or model-based (Sutton and Barto, 1998). In practice, model-free methods are typically fast at acting time but computationally expensive to update from experience, whilst model-based methods can be quick to update but expensive to act with (as on-the-fly planning is required). Recently there has been interest in incorporating episodic memory-like components into reinforcement learning algorithms (Blundell et al., 2016a; Pritzel et al., 2017), potentially providing increases in flexibility and learning speed, driven by motivations from the neuroscience literature on episodic control (Dayan and Daw, 2008; Gershman and Daw, 2017). Episodic control uses episodic memory in lieu of a learnt model of the environment, aiming for a different computational trade-off to that of model-free and model-based approaches.
We will be interested in a hybrid approach, motivated by the observations of CLS (McClelland et al., 1995), where we will build an agent with two systems: one slow and general (model-free) and the other fast and adaptive (episodic control-like). Similar to previous proposals for agents, the fast, adaptive subsystem of our agent uses episodic memories to remember and later mimic previously experienced rewarding sequences of states and actions. This can be seen as a memory-based form of planning (Silver et al., 2008), in which related experiences are recalled to inform decisions. Planning in this context can be thought of as the re-evaluation of past experience using current knowledge to improve model-free value estimates.
Critical to many approaches to deep reinforcement learning is the replay buffer (Mnih et al., 2015; Espeholt et al., 2018). The replay buffer stores previously seen tuples of experience: state, action, reward, and next state. These stored experience tuples are then used to train a value function approximator using gradient descent. Typically one step of gradient descent on data from the replay buffer is taken per action in the environment, as (with the exception of Barth-Maron et al. (2018)) a greater reliance on replay data leads to unstable performance. Consequently, the replay buffer may frequently contain information that could significantly improve the policy of an agent but is never fully integrated into its decision making. We posit that this happens for three reasons: (i) the gradient updates to the value function are slow and global, owing to noisy gradients and the stability of the learning dynamics; (ii) the replay buffer is of limited size and experience tuples are regularly removed, limiting the opportunity for gradient descent to learn from them; (iii) training from individual experience tuples neglects the trajectory nature of an agent's experience: one tuple occurs after another, and so information about the value of the next state should be quickly integrated into the value of the current state.
In this work we explore a method of allowing deep reinforcement learning agents to simultaneously: (i) learn the parameters of the value function approximation slowly, and (ii) adapt the value function quickly and locally within an episode. Adaptation of the value function is achieved by planning over previously experienced trajectories (sequences of temporally adjacent tuples) that are grounded in estimates from the value function approximation. This process provides a complementary way of estimating the value function.
Interestingly our approach requires very little modification of existing replay-based deep reinforcement learning agents: in addition to storing the current state and next state (which are typically large: full inputs to the network), we propose to also store trajectory information (pointers to successor tuples) and one layer of current hidden activations (typically much smaller than the state). Using this information our method adapts the value function prediction using memory-based rollouts of previous experience based on the hidden representation. The adjustment to the value function is not stored after it is used to take an action (thus it is ephemeral). We call our method Ephemeral Value Adjustment (EVA).
The action-value function of a policy $\pi$ is defined as $Q^\pi(s, a) = \mathbb{E}_\pi\left[\sum_{t=0}^{\infty} \gamma^t r_t\right]$ (Sutton and Barto, 1998), where $s_0 = s$ and $a_0 = a$ are the initial state and action respectively, $\gamma \in [0, 1)$ is a discount factor, and the expectation denotes that $\pi$ is followed thereafter. Similarly, the value function under the policy $\pi$ at state $s$ is given by $V^\pi(s) = \mathbb{E}_\pi\left[\sum_{t=0}^{\infty} \gamma^t r_t \mid s_0 = s\right]$ and is simply the expected return for following policy $\pi$ starting at state $s$.
In value-based model-free reinforcement learning methods, the action-value function is represented using a function approximator. Deep Q-Network agents (Mnih et al., 2015, DQN) use Q-learning (Watkins and Dayan, 1992) to learn an action-value function $Q(s, a; \theta)$ to rank which action $a$ is best to take in each state $s$ at step $t$. $Q(s, a; \theta)$ is parameterised by a convolutional neural network (CNN), with parameters collectively denoted by $\theta$, that takes a 2D pixel representation of the state $s$ as input and outputs a vector containing the value of each action at that state. The agent executes an $\epsilon$-greedy policy to trade off exploration and exploitation: with probability $\epsilon$ the agent picks an action uniformly at random, otherwise it picks the action with the highest estimated value, $\arg\max_a Q(s, a; \theta)$.
When the agent observes a transition, DQN stores the tuple $(s, a, r, s')$ in a replay buffer, the contents of which are used for training. This neural network is trained by minimising the squared error between the network's output and the Q-learning target $y = r + \gamma \max_{a'} Q(s', a'; \tilde{\theta})$, for a subset of transitions sampled at random from the replay buffer. The target network $Q(s, a; \tilde{\theta})$ is an older version of the value network that is updated periodically. It was shown by Mnih et al. (2015) that both the use of a target network and sampling uncorrelated transitions from the replay buffer are critical for stable training.
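The target computation just described can be sketched as follows for a batch of transitions; this is an illustrative sketch (the function name and array layout are assumptions), with the target network's outputs at the next states supplied as an array and the bootstrap term zeroed on terminal transitions:

```python
import numpy as np

def dqn_targets(rewards, next_q, discount, terminal):
    """Q-learning targets y = r + gamma * max_a' Q(s', a'; theta~).

    next_q[i] holds the target network's action values at the i-th next
    state; terminal[i] is 1.0 if that transition ended the episode, in
    which case the bootstrap term is dropped."""
    bootstrap = next_q.max(axis=1) * (1.0 - terminal)
    return rewards + discount * bootstrap
```

The squared error between these targets and the online network's predictions is then minimised by gradient descent, holding the targets fixed.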
3 Ephemeral Value Adjustments
Ephemeral value adjustments are a way to augment an arbitrary value-based off-policy agent. This is accomplished through a trace computation algorithm, which rapidly produces value estimates by combining previously encountered trajectories with parametric estimates. Our agent consists of three components: a standard parametric reinforcement learner with its replay buffer augmented to maintain trajectory information, a trace computation algorithm that periodically plans over subsets of data in the replay buffer, and a small value buffer which stores the value estimates resulting from the planning process. The overall policy of EVA is dictated by the action-value function

$$Q_{\text{EVA}}(s, a) = (1 - \lambda)\, Q_\theta(s, a) + \lambda\, Q_{\text{NP}}(s, a), \qquad (1)$$
where $Q_\theta$ is the value estimate from the parametric model, $Q_{\text{NP}}$ is the value estimate from the trace computation algorithm (non-parametric), and $\lambda \in [0, 1]$ is a mixing hyper-parameter. Figure 1 (Right) shows a block diagram of the method. The parametric component of EVA consists of the standard DQN-style architecture, $Q_\theta$, a feedforward convolutional neural network: several convolution layers followed by two linear layers that ultimately produce action-value function estimates. Training is done exactly as in DQN, briefly reviewed in Section 2 and fully described in (Mnih et al., 2015).
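The blending step can be sketched as below. This assumes, consistent with the ablations reported later (where annealing the mixing weight to zero recovers the baseline agent), that the mixing weight multiplies the non-parametric estimate; the function names are illustrative, not from the paper:

```python
import numpy as np

def eva_q(q_theta, q_np, lam):
    """Blend parametric and trace-based action values (cf. Equation 1).

    lam = 0 ignores the non-parametric estimate, recovering the purely
    parametric (DQN-style) values; lam = 1 acts on trace values alone."""
    return (1.0 - lam) * np.asarray(q_theta) + lam * np.asarray(q_np)

def eva_act(q_theta, q_np, lam):
    """Greedy action under the blended value estimate."""
    return int(np.argmax(eva_q(q_theta, q_np, lam)))
```

The adjustment is applied only at decision time and is never written back into the parametric model, which is what makes it ephemeral.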
3.1 Trajectory selection and planning
The penultimate layer of the DQN network is used to embed the currently observed state (pixels) into a lower-dimensional space. Note that similarity in this space has been optimised for action-value estimation by the parametric model. Periodically (every 20 steps in all the reported experiments), the k nearest neighbours in the global buffer are queried from the current state embedding (on the basis of their distance). Using the stored trajectory information, the 50 subsequent steps are also retrieved for each neighbour. Each of these k trajectories is passed to a trace computation algorithm (described below), and all of the resulting Q-values are stored into the value buffer alongside their embeddings. Figure 1 (Left) shows a diagram of this procedure. The non-parametric nature of this process means that while these estimates are less reliant on the accuracy of the parametric model, they are more relevant locally. This local buffer is meant to cache the results of the trace computation for states that are likely to be near the current state.
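A minimal sketch of this retrieval step, assuming a flat buffer of embeddings with an illustrative `successor` pointer array encoding the trajectory structure (the exact buffer layout in the full agent may differ):

```python
import numpy as np

def nearest_trajectories(query_h, embeddings, successor, k=5, rollout=50):
    """Return index lists for up-to-`rollout`-step trajectories starting at
    the k stored embeddings closest (L2) to the query embedding.

    successor[i] is the buffer index of the transition that followed i in
    its trajectory, or -1 if the trajectory ends at i (assumed layout)."""
    dists = np.linalg.norm(embeddings - query_h, axis=1)
    starts = np.argsort(dists)[:k]
    trajectories = []
    for i in starts:
        traj, j = [], int(i)
        while j != -1 and len(traj) < rollout:
            traj.append(j)
            j = int(successor[j])
        trajectories.append(traj)
    return trajectories
```

Each returned index list is then handed to the trace computation algorithm, and the resulting Q-values are cached in the value buffer keyed by embedding.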
3.2 Computing value estimates on memory traces
By having the replay buffer maintain trajectory information, values can be propagated through time to produce trajectory-centric value estimates. Figure 1 (Right) shows how the value buffer is used to derive the action-value estimate. There are several methods for estimating this value function; we shall describe the n-step, trajectory-centric planning (TCP), and kernel-based RL (KBRL) trace computation algorithms. N-step estimates for trajectories from the replay buffer are calculated as follows:

$$V_{\text{NP}}(s_t) = \begin{cases} \max_a Q_\theta(s_t, a) & \text{if } t = T, \\ r_t + \gamma\, V_{\text{NP}}(s_{t+1}) & \text{otherwise,} \end{cases} \qquad (2)$$
where $T$ is the length of the trajectory and $s_t$, $r_t$ are the states and rewards of the trajectory. These estimates utilise information in the replay buffer that might not be consolidated into the parametric model, and thus should be complementary to the purely parametric estimates. While this process will serve as a useful baseline, the n-step return just evaluates the policy defined by the sampled trajectory; only the initial parametric bootstrap involves an estimate of the optimal value function. Ideally, the values at all time-steps should estimate the optimal value function,

$$V_{\text{NP}}(s_t) \approx V^*(s_t) = \max_a Q^*(s_t, a). \qquad (3)$$
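The n-step backup just described can be sketched as a backward recursion that bootstraps from the parametric estimate at the final state of the trajectory (function name illustrative):

```python
import numpy as np

def n_step_values(rewards, discount, bootstrap):
    """Backward n-step returns along one stored trajectory.

    V[T] = bootstrap (max_a Q_theta at the final state),
    V[t] = r_t + discount * V[t+1] for t < T."""
    T = len(rewards)
    v = np.empty(T + 1)
    v[T] = bootstrap
    for t in range(T - 1, -1, -1):
        v[t] = rewards[t] + discount * v[t + 1]
    return v
```

Note that only the single bootstrap value involves the parametric model; everything else follows the behaviour recorded in the trajectory.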
Thus another way to estimate $V_{\text{NP}}$ is to apply the Bellman policy improvement operator at each time step, as shown in (3). While (2) could be applied recursively, traversing the trajectory backwards, this improvement operator requires knowing the value of the counter-factual actions. We propose using the parametric model for these off-trajectory value estimates, constructing the complete set of action-conditional value estimates; we call this trajectory-centric planning (TCP):

$$Q_{\text{NP}}(s_t, a) = \begin{cases} r_t + \gamma\, V_{\text{NP}}(s_{t+1}) & \text{if } a = a_t, \\ Q_\theta(s_t, a) & \text{otherwise.} \end{cases} \qquad (4)$$
This allows for the same recursive application as before:

$$V_{\text{NP}}(s_t) = \max_a Q_{\text{NP}}(s_t, a). \qquad (5)$$
The trajectory-centric estimates for the k nearest neighbours are then averaged with the parametric estimate on the basis of the hyper-parameter $\lambda$, as shown in Algorithm 1 and represented graphically in Figure 1 (Left). Refer to the supplementary material for a detailed algorithm.
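The TCP recursion of Equations 4 and 5 can be sketched as below, with the parametric Q-values supplied for the counter-factual actions; the array layout and function name are assumptions:

```python
import numpy as np

def tcp_values(q_param, actions, rewards, discount, bootstrap):
    """Trajectory-centric planning along one stored trajectory.

    q_param[t]  : parametric Q-values at step t (counter-factual actions),
    actions[t]  : the action actually taken at step t,
    rewards[t]  : the reward received at step t,
    bootstrap   : max_a Q_theta at the state after the final transition.
    Returns Q_NP[t, a] for every step, computed backwards with a Bellman
    improvement (max over actions) at each step."""
    T = len(rewards)
    q_np = np.array(q_param, dtype=float, copy=True)
    v_next = bootstrap
    for t in range(T - 1, -1, -1):
        q_np[t, actions[t]] = rewards[t] + discount * v_next  # on-trajectory backup
        v_next = q_np[t].max()  # improvement over on- and off-trajectory actions
    return q_np
```

Unlike the plain n-step return, the max at each step lets the backup abandon the recorded behaviour wherever the parametric model thinks a counter-factual action is better.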
3.3 From trajectory-centric to kernel-based planning
The above method may seem ad hoc: why trust the on-trajectory samples completely and only utilise the parametric estimates for the counter-factual actions? Why not analyse the trajectories together, rather than treating them independently? To address these concerns, we propose a generalisation of the trajectory-centric method which extends kernel-based reinforcement learning (KBRL) (Ormoneit and Sen, 2002). KBRL is a non-parametric approach to planning with strong theoretical guarantees: convergence to a global optimum, assuming that the underlying MDP dynamics are Lipschitz continuous and that the kernel is appropriately shrunk as a function of the amount of data.
For each action $a$, KBRL stores the experience tuples $(s_i^a, r_i^a, s_i'^a)$ observed for that action. Since the set of resultant states $\{s_i'^a\}$ is finite (equal in size to the number of stored transitions), and these states have known transitions, we can perform value iteration to obtain value estimates for all resultant states (the values of the origin states are not needed, as the Bellman equation only evaluates states after a transition). We can obtain an approximate version of the Bellman equation by using a kernel to compare all resultant states to all origin states, as shown in Equation 6. We define a similarity kernel on states (in fact, on embeddings of the current state, as described above), $\kappa(s, s_j)$, typically a Gaussian kernel. The action-value function of KBRL is then estimated using:

$$Q_{\text{KBRL}}(s, a) = \sum_i \kappa(s, s_i^a)\left[ r_i^a + \gamma \max_b Q_{\text{KBRL}}(s_i'^a, b) \right]. \qquad (6)$$
In effect, the stored 'origin' states ($s_i^a$) transition to some 'resultant' state ($s_i'^a$) and yield the stored reward ($r_i^a$). By using the similarity kernel $\kappa$, we can map each resultant state to a distribution over the origin states. This makes the state transitions run from resultant states to resultant states, rather than from arbitrary states to resultant states, meaning that all transitions only involve states that have been previously encountered.
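A toy sketch of KBRL-style value iteration over the stored resultant states, assuming one-dimensional state embeddings and a row-normalised Gaussian kernel; the per-action list layout and function name are illustrative:

```python
import numpy as np

def kbrl_values(origins, rewards, resultants, discount, bandwidth, iters=60):
    """Kernel-based value iteration over all stored resultant states.

    origins[a], rewards[a], resultants[a] hold the transitions stored for
    action a (1-D states here for simplicity). Each sweep maps every
    resultant state to a normalised similarity distribution over action a's
    origin states, backs up reward plus discounted resultant value, then
    takes the max over actions."""
    all_res = np.concatenate(resultants)  # states whose values we iterate
    v = np.zeros(len(all_res))
    for _ in range(iters):
        q, offset = [], 0
        for a in range(len(origins)):
            d = all_res[:, None] - origins[a][None, :]
            w = np.exp(-(d ** 2) / (2.0 * bandwidth ** 2))
            w /= w.sum(axis=1, keepdims=True)  # normalised kernel weights
            v_res = v[offset:offset + len(resultants[a])]
            q.append(w @ (rewards[a] + discount * v_res))
            offset += len(resultants[a])
        v = np.max(np.stack(q), axis=0)  # Bellman optimality over actions
    return v
```

Because every resultant state is compared against every origin state on each sweep, the cost grows quadratically in the number of stored transitions, which is the expense discussed below.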
In the context of trajectory-centric planning, KBRL can be seen as an alternative way of dealing with counter-factual actions: estimate their effects using nearby transitions. Additionally, KBRL is not constrained to dealing with individual trajectories, since it treats all transitions independently.
We propose to add an absorbing pseudo-state $\hat{s}$ to KBRL's model whose similarity to the other states is fixed, that is, $\kappa(s, \hat{s}) = C$ for some constant $C > 0$ and for all states $s$. Using this definition we can make KBRL softly blend similarity-based and parametric counter-factual action evaluation. This is accomplished by setting the pseudo-state's value to be equal to the parametric value function evaluated at the state under comparison: $V(\hat{s}) = \max_a Q_\theta(s, a)$ when $s$ is being evaluated. Thus, by setting $C$ appropriately, we can guarantee that the parametric estimates will dominate when data density is low. Note that this is in addition to the blending of value functions described in Equation 1.
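A sketch of a single kernel-weighted backup with the absorbing pseudo-state folded in; the names and the normalisation convention are assumptions:

```python
import numpy as np

def blended_backup(sims, backups, c, parametric_value):
    """One kernel backup augmented with an absorbing pseudo-state.

    sims    : unnormalised similarities kappa(s, s_i^a) to stored origins,
    backups : r_i^a + gamma * V(s_i'^a) for those transitions,
    c       : the fixed pseudo-state similarity C,
    parametric_value : max_a Q_theta at the state being evaluated.
    When the data density (sum of sims) is low relative to c, the
    parametric estimate dominates the result."""
    total = sims.sum() + c
    return (sims @ backups + c * parametric_value) / total
```

With a large `c` (or sparse nearby data) the backup falls back to the parametric model; with dense, similar data the stored transitions dominate.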
KBRL can be made numerically identical to trajectory-centric planning by shrinking the kernel bandwidth (i.e., the length scale of the Gaussian kernel) and the pseudo-state similarity, modulo the fact that KBRL would still be able to find 'shortcuts' between or within trajectories owing to its exhaustive similarity comparisons between states. With the appropriate values, this results in value estimates being dominated by exact matches (on-trajectory) and by parametric estimates when none are found. This reduction is of interest because KBRL is significantly more expensive than trajectory-centric planning: KBRL's computational complexity is $O(N^2 |A|)$ whereas trajectory-centric planning has a complexity of $O(N |A|)$, where $N$ is the number of stored transitions and $|A|$ is the cardinality of the action space. We can thus think of this parametrically augmented version of KBRL as the theoretical foundation for trajectory-centric planning. In practice, we use the TCP trace computation algorithm (Equations 4 and 5) unless otherwise noted.
4 Related work
There has been much recent work on using memory-augmented neural networks as function approximators for RL agents: using LSTMs (Bakker et al., 2003; Hausknecht and Stone, 2015), or more sophisticated architectures (Graves et al., 2016; Oh et al., 2016; Wayne et al., 2018). However, the motivation behind these works is to obtain a better state representation in partially observable or non-Markovian environments, in which feed-forward models would not be appropriate. The focus of this work is on data efficiency, which is improved in a representation-agnostic manner.
The main use of long-term episodic memory in deep RL is the replay buffer introduced by DQN. While it is central to stable training, it also significantly improves the data efficiency of the method compared with online counterparts that achieve stable training by running several actors (Mnih et al., 2016). The replay frequency is a hyper-parameter that has been carefully tuned in DQN: learning cannot be sped up by increasing the frequency of replay without harming end performance, since the network would overfit to the contents of the replay buffer, hurting its ability to learn a better policy. An alternative approach is prioritised experience replay (Schaul et al., 2015), which changes the data distribution used during training by biasing it toward transitions with high temporal difference error. These works use the replay buffer at training time only. Our approach aims at leveraging the replay buffer at decision time and is thus complementary to prioritisation, as it impacts the behaviour policy but not how the replay buffer is sampled from (see the supplementary material for a preliminary comparison).
Using previous experience at decision time is closely related to non-parametric approaches for Q-function approximation (Santamaría et al., 1997; Munos and Moore, 1998; Gabel and Riedmiller, 2005). Our work is particularly related to techniques following the ideas of episodic control. Blundell et al. (2016b, MFEC) recently used local regression for Q-function estimation using the mean of the k nearest neighbours searched over random projections of the pixel inputs. Pritzel et al. (2017) extended this line of work with NEC, using the reward signal to learn an embedding space in which to perform the local regression. These works showed dramatic improvements in data efficiency, especially in early stages of training. This work differs from these approaches in that, rather than using memory for local regression, memory is used as a form of local planning, which is made possible by exploiting the trajectory structure of the memories in the replay buffer. Furthermore, the memory requirements of NEC are significantly larger than those of EVA. NEC uses a large memory buffer per action in addition to a replay buffer. Our work only adds a small overhead over the standard DQN replay buffer and queries a single replay buffer once every several acting steps (20 in our experiments) during training. In addition, NEC and MFEC fundamentally change the structure of the model, whereas EVA is strictly supplemental. More recent works have looked at including NEC-style architectures to aid the learning of a parametric model (Nishio and Yamane, 2018; Jain and Lindsey, 2018), sharing the memory requirements of NEC.
The memory-based planning aspect of our approach also has precedent in the literature. Brea (2017) explicitly compares a local regression approach (NEC) to prioritised sweeping and finds that the latter is preferable, but fails to show scalable results. Savinov et al. (2018) build a memory-based graph and plan over it, but rely on a fixed exploration policy. Xiao et al. (2018) combine MCTS planning with NEC, but rely on a built-in model of the environment.
In the context of supervised learning, several works have looked at using non-parametric approaches to improve the performance of neural network models. Kaiser et al. (2016) introduced a differentiable layer of key-value pairs that can be plugged into a neural network to help it remember rare events. Works in the context of language modelling have augmented prediction with attention over recent examples to account for the distributional shift between training and testing settings, such as the neural cache (Grave et al., 2016) and pointer sentinel networks (Merity et al., 2016). The work by Sprechmann et al. (2018) is also motivated by the CLS framework. However, they use an episodic memory to improve a parametric model in the context of supervised learning and do not consider reinforcement learning.
5.1 A simple example
We begin the experimental section by showing how EVA works on a simple "gridworld" environment implemented with the pycolab game engine (Stepleton, 2017). The task is to collect a given number of coins in the minimum number of steps possible, and can be thought of as a very simple variant of the travelling salesman problem. At the beginning of each episode, the agent and the coins are placed at random locations of a grid; see the supplementary material for a screen-shot. The agent can take four possible actions (left, right, up, down) and receives a positive reward when collecting a coin and a small negative reward at every step. If the agent takes an action that would move it into a wall, it stays at its current position. We also restrict the maximum length of an episode. We use an agent featuring a two-layer convolutional neural network, followed by a fully connected layer producing a 64-dimensional embedding which is then used for the look-ups in the replay buffer. The input is an RGB image of the maze. Results are reported in Figure 5.
Evaluation of a single episode
We use the same pre-trained network (with its corresponding replay buffer) and run a single episode with and without EVA; see Figure 5 (Left). We can see that, by leveraging the trajectories in the replay buffer, EVA immediately boosts performance over the baseline. Note that the weights of the network are exactly the same in both cases. The benefits saturate beyond a certain value of $\lambda$, which suggests that the policy of the non-parametric component alone is unable to generalise properly.
Evaluation of the full EVA algorithm
Figure 5 (Center, Left) shows the performance of EVA on full episodes using one and two coins, evaluating different values of the mixing parameter $\lambda$; $\lambda = 0$ corresponds to the standard DQN baseline. We show the hyper-parameters that lead to the highest end performance of the baseline DQN. We can see that EVA provides a significant boost in data efficiency. In the single-coin case, it requires slightly more than half of the data to obtain final performance, and higher values of $\lambda$ are better. This is likely because there are relatively few unique states, so all states are likely to be in the replay buffer. In the two-coin setting, however, the number of possible states is significantly larger than the replay buffer size. Here again, performance saturates beyond a certain value of $\lambda$.
5.2 EVA and Atari games
In order to validate whether EVA leads to gains in complex domains we evaluated our approach on the Arcade Learning Environment (ALE; Bellemare et al., 2013). We used a set of 55 Atari games; please see the supplementary material for details. The hyper-parameters were tuned using a subset of 5 games (Pong, H.E.R.O., Frostbite, Ms Pacman and Qbert). The hyper-parameters shared between the baseline and EVA (e.g. the learning rate) were chosen to maximise the performance of the baseline ($\lambda = 0$) on a training run on the selected subset of games. The influence of these hyper-parameters on EVA and the baseline is highly correlated. Performance saturates beyond a certain value of $\lambda$, as in the simple example. We chose the lowest trace-computation frequency that would not harm performance (every 20 steps), the rollout length was set to 50, and the number of neighbours used for estimating $Q_{\text{NP}}$ was set to 5. We observed that performance decreases as the number of neighbours increases. See the supplementary material for details on all hyper-parameters used.
We compared the absolute performance of agents according to the human-normalised score, as in Mnih et al. (2015). Figure 7 summarises the obtained results, where we ran three random seeds for the $\lambda = 0$ baseline (which is our version of DQN) and for EVA. In order to obtain uncertainty estimates, we report the mean and standard deviation per time step of the curves obtained by randomly selecting one random seed per game (that is, one out of three possible seeds for each of the 55 games). For reference, we also included the original DQN results from (Mnih et al., 2015). EVA is able to improve the learning speed as well as the final performance level using exactly the same architecture and learning parameters as our baseline, achieving the end performance of the baseline in significantly fewer frames.
Effect of trace computation
To understand how EVA helps performance, we compare three different versions of the trace computation at the core of the EVA approach. The standard (trajectory-centric) trace computation can be simplified by removing the parametric evaluations of counter-factual actions; this ablation results in the n-step trace computation (as shown in Equation 2). Since the standard trace computation can be seen as a special case of parametrically augmented KBRL, we also consider that trace computation. Due to its increased computational cost, these experiments are run for a reduced number of frames. For parametrically augmented KBRL, a Gaussian similarity kernel is used with a fixed bandwidth parameter and a fixed parametric (pseudo-state) similarity $C$.
EVA is significantly worse than the baseline with the n-step trace computation. This can be seen as evidence for the importance of the parametric evaluation of counter-factual actions. Without this additional computation, EVA's policy is too dependent on the quality of the policy expressed in the trajectories, a negative feedback loop that results in divergence on several games. Interestingly, the standard trace computation is as good as, if not better than, the much more costly KBRL method. While KBRL is capable of merging the data from different trajectories into a global plan, it does not give on-trajectory information a privileged status without an extremely small bandwidth (to achieve this privileged status, the minimum off-trajectory similarity must be known, and this typically results in bandwidths so small as to be numerically unstable). In near-deterministic environments like Atari, this privileged status is appropriate and acts as a strong prior, as can be seen in the lower variance of this method.
EVA relies on TCP at decision time. However, one would expect that after training, the parametric model would be able to consolidate the information available in the episodic memory and be capable of acting without relying on the planning process. We verified that annealing the value of $\lambda$ to zero over two million steps leads to no degradation in performance in our Atari experiments. Note that when $\lambda = 0$ our agent reduces to the standard DQN agent.
Despite only changing the value function underlying the behaviour policy, EVA improves the overall rate of learning. This is due to two factors. The first is that the adjusted policy should be closer to the optimal policy by better exploiting the information in the replay data. The second is that this improved policy should fill the replay buffer with more useful data. This means that the ephemeral adjustments indirectly impact the parametric value function by changing the distribution of data that it is trained on.
During the training process, as the agent explores the environment, knowledge about value functions is extracted gradually from the interactions with the environment. Since the value function drives the data acquisition process, the ability to quickly incorporate highly rewarding experiences could significantly boost the sample efficiency of the learning process.
The authors would like to thank Melissa Tan, Paul Komarek, Volodymyr Mnih, Alistair Muldal, Adrià Badia, Hado van Hasselt, Yotam Doron, Ian Osband, Daan Wierstra, Demis Hassabis, Dharshan Kumaran, Siddhant Jayakumar, Razvan Pascanu, and Oriol Vinyals. Finally, we thank the anonymous reviewers for their comments and suggestions to improve the paper.
- McClelland et al.  James L McClelland, Bruce L McNaughton, and Randall C O’Reilly. Why there are complementary learning systems in the hippocampus and neocortex: insights from the successes and failures of connectionist models of learning and memory. Psychological review, 102(3):419, 1995.
- Mnih et al.  Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei A Rusu, Joel Veness, Marc G Bellemare, Alex Graves, Martin Riedmiller, Andreas K Fidjeland, Georg Ostrovski, et al. Human-level control through deep reinforcement learning. Nature, 518(7540):529, 2015.
- Silver et al.  David Silver, Aja Huang, Chris J Maddison, Arthur Guez, Laurent Sifre, George Van Den Driessche, Julian Schrittwieser, Ioannis Antonoglou, Veda Panneershelvam, Marc Lanctot, et al. Mastering the game of go with deep neural networks and tree search. nature, 529(7587):484–489, 2016.
- Moravčík et al.  Matej Moravčík, Martin Schmid, Neil Burch, Viliam Lisý, Dustin Morrill, Nolan Bard, Trevor Davis, Kevin Waugh, Michael Johanson, and Michael Bowling. Deepstack: Expert-level artificial intelligence in heads-up no-limit poker. Science, 356(6337):508–513, 2017.
- Sutton and Barto  Richard S Sutton and Andrew G Barto. Reinforcement learning: An introduction. MIT press, 1998.
- Blundell et al. [2016a] Charles Blundell, Benigno Uria, Alexander Pritzel, Yazhe Li, Avraham Ruderman, Joel Z Leibo, Jack Rae, Daan Wierstra, and Demis Hassabis. Model-free episodic control. arXiv preprint arXiv:1606.04460, 2016a.
- Pritzel et al.  Alexander Pritzel, Benigno Uria, Sriram Srinivasan, Adrià Puigdomènech, Oriol Vinyals, Demis Hassabis, Daan Wierstra, and Charles Blundell. Neural episodic control. ICML, 2017.
- Dayan and Daw  Peter Dayan and Nathaniel D Daw. Decision theory, reinforcement learning, and the brain. Cognitive, Affective, & Behavioral Neuroscience, 8(4):429–453, 2008.
- Gershman and Daw  Samuel J Gershman and Nathaniel D Daw. Reinforcement learning and episodic memory in humans and animals: an integrative framework. Annual review of psychology, 68:101–128, 2017.
- Silver et al.  David Silver, Richard S Sutton, and Martin Müller. Sample-based learning and search with permanent and transient memories. In Proceedings of the 25th International Conference on Machine Learning, pages 968–975. ACM, 2008.
- Espeholt et al.  Lasse Espeholt, Hubert Soyer, Remi Munos, Karen Simonyan, Volodymir Mnih, Tom Ward, Yotam Doron, Vlad Firoiu, Tim Harley, Iain Dunning, et al. Impala: Scalable distributed deep-rl with importance weighted actor-learner architectures. arXiv preprint arXiv:1802.01561, 2018.
- Barth-Maron et al.  Gabriel Barth-Maron, Matthew W Hoffman, David Budden, Will Dabney, Dan Horgan, Alistair Muldal, Nicolas Heess, and Timothy Lillicrap. Distributed distributional deterministic policy gradients. arXiv preprint arXiv:1804.08617, 2018.
- Watkins and Dayan  Christopher JCH Watkins and Peter Dayan. Q-learning. Machine learning, 8(3-4):279–292, 1992.
- Ormoneit and Sen  Dirk Ormoneit and Śaunak Sen. Kernel-based reinforcement learning. Machine learning, 49(2-3):161–178, 2002.
- Bakker et al.  Bram Bakker, Viktor Zhumatiy, Gabriel Gruener, and Jürgen Schmidhuber. A robot that reinforcement-learns to identify and memorize important previous observations. In Intelligent Robots and Systems, 2003.(IROS 2003). Proceedings. 2003 IEEE/RSJ International Conference on, volume 1, pages 430–435. IEEE, 2003.
- Hausknecht and Stone  Matthew Hausknecht and Peter Stone. Deep recurrent q-learning for partially observable mdps. arXiv preprint arXiv:1507.06527, 2015.
- Graves et al.  Alex Graves, Greg Wayne, Malcolm Reynolds, Tim Harley, Ivo Danihelka, Agnieszka Grabska-Barwińska, Sergio Gómez Colmenarejo, Edward Grefenstette, Tiago Ramalho, John Agapiou, et al. Hybrid computing using a neural network with dynamic external memory. Nature, 538(7626):471–476, 2016.
- Oh et al.  Junhyuk Oh, Valliappa Chockalingam, Honglak Lee, et al. Control of memory, active perception, and action in minecraft. In Proceedings of The 33rd International Conference on Machine Learning, pages 2790–2799, 2016.
- Wayne et al.  Greg Wayne, Chia-Chun Hung, David Amos, Mehdi Mirza, Arun Ahuja, Agnieszka Grabska-Barwinska, Jack Rae, Piotr Mirowski, Joel Z Leibo, Adam Santoro, et al. Unsupervised predictive memory in a goal-directed agent. arXiv preprint arXiv:1803.10760, 2018.
- Mnih et al.  Volodymyr Mnih, Adria Puigdomenech Badia, Mehdi Mirza, Alex Graves, Timothy P Lillicrap, Tim Harley, David Silver, and Koray Kavukcuoglu. Asynchronous methods for deep reinforcement learning. In International Conference on Machine Learning, 2016.
- Schaul et al.  Tom Schaul, John Quan, Ioannis Antonoglou, and David Silver. Prioritized experience replay. CoRR, abs/1511.05952, 2015.
- Santamaría et al.  Juan C Santamaría, Richard S Sutton, and Ashwin Ram. Experiments with reinforcement learning in problems with continuous state and action spaces. Adaptive behavior, 6(2):163–217, 1997.
- Munos and Moore  Remi Munos and Andrew W Moore. Barycentric interpolators for continuous space and time reinforcement learning. In NIPS, pages 1024–1030, 1998.
- Gabel and Riedmiller  Thomas Gabel and Martin Riedmiller. Cbr for state value function approximation in reinforcement learning. In International Conference on Case-Based Reasoning, pages 206–221. Springer, 2005.
- Blundell et al. [2016b] Charles Blundell, Benigno Uria, Alexander Pritzel, Yazhe Li, Avraham Ruderman, Joel Z Leibo, Jack Rae, Daan Wierstra, and Demis Hassabis. Model-free episodic control. arXiv preprint arXiv:1606.04460, 2016b.
- Nishio and Yamane  Daichi Nishio and Satoshi Yamane. Faster deep q-learning using neural episodic control. arXiv preprint arXiv:1801.01968, 2018.
- Jain and Lindsey  Mika Sarkin Jain and Jack Lindsey. Semiparametric reinforcement learning. ICLR 2018 Workshop, 2018.
- Brea  Johanni Brea. Is prioritized sweeping the better episodic control? arXiv preprint arXiv:1711.06677, 2017.
- Savinov et al.  Nikolay Savinov, Alexey Dosovitskiy, and Vladlen Koltun. Semi-parametric topological memory for navigation. arXiv preprint arXiv:1803.00653, 2018.
- Xiao et al.  Chenjun Xiao, Jincheng Mei, and Martin Müller. Memory-augmented monte carlo tree search. AAAI, 2018.
- Kaiser et al.  Lukasz Kaiser, Ofir Nachum, Aurko Roy, and Samy Bengio. Learning to remember rare events. 2016.
- Grave et al.  Edouard Grave, Armand Joulin, and Nicolas Usunier. Improving neural language models with a continuous cache. arXiv preprint arXiv:1612.04426, 2016.
- Merity et al.  Stephen Merity, Caiming Xiong, James Bradbury, and Richard Socher. Pointer sentinel mixture models. arXiv preprint arXiv:1609.07843, 2016.
- Sprechmann et al.  Pablo Sprechmann, Siddhant M Jayakumar, Jack W Rae, Alexander Pritzel, Adrià Puigdomènech Badia, Benigno Uria, Oriol Vinyals, Demis Hassabis, Razvan Pascanu, and Charles Blundell. Memory-based parameter adaptation. ICLR, 2018.
- Stepleton  Tom Stepleton. The pycolab game engine. https://github.com/deepmind/pycolab/tree/master/pycolab, 2017.
- Bellemare et al.  Marc G Bellemare, Yavar Naddaf, Joel Veness, and Michael Bowling. The arcade learning environment: An evaluation platform for general agents. Journal of Artificial Intelligence Research (JAIR), 47:253–279, 2013.
7 Detailed Algorithms
8 Simple example
The task is to collect a given number of coins in the minimum number of steps possible; it can be thought of as a very simple variant of the travelling salesman problem. At the beginning of each episode, the agent and the coins are placed at random locations of a grid with size . An example of the environment with random initial locations for the agent (cyan square) and the coins (yellow squares) is shown in Figure 5. The purple squares correspond to walls. The agent can take four possible actions (left, right, up, down) and receives a reward of  when collecting a coin and a reward of  at every step. If the agent takes an action that would move it into a wall, it stays at its current position. We restrict the maximum length of an episode to  steps.
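A minimal sketch of this environment is given below. The grid size, number of coins, reward magnitudes, and episode cap are illustrative assumptions, not the values used in the paper, and interior walls are omitted for brevity:

```python
import random

class CoinGridWorld:
    """Sketch of the coin-collection task. Grid size, coin count, reward
    magnitudes, and episode cap are placeholder assumptions."""

    ACTIONS = {"left": (0, -1), "right": (0, 1), "up": (-1, 0), "down": (1, 0)}

    def __init__(self, size=5, n_coins=3, coin_reward=1.0,
                 step_reward=-0.01, max_steps=100, seed=0):
        self.size, self.n_coins = size, n_coins
        self.coin_reward, self.step_reward = coin_reward, step_reward
        self.max_steps = max_steps
        self.rng = random.Random(seed)
        self.reset()

    def reset(self):
        # Place the agent and the coins at distinct random cells.
        cells = [(r, c) for r in range(self.size) for c in range(self.size)]
        picks = self.rng.sample(cells, self.n_coins + 1)
        self.agent, self.coins = picks[0], set(picks[1:])
        self.t = 0
        return self.agent

    def step(self, action):
        dr, dc = self.ACTIONS[action]
        r, c = self.agent[0] + dr, self.agent[1] + dc
        # Moving into the boundary wall leaves the agent in place.
        if 0 <= r < self.size and 0 <= c < self.size:
            self.agent = (r, c)
        reward = self.step_reward
        if self.agent in self.coins:
            self.coins.remove(self.agent)
            reward += self.coin_reward
        self.t += 1
        done = not self.coins or self.t >= self.max_steps
        return self.agent, reward, done
```

The episode terminates when all coins are collected or the step limit is reached; in the actual experiments the environment is built with the pycolab engine (Stepleton, 2017).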
9 Atari Experiment Details
Human normalised scores are 100 for human level performance and 0 for a random agent.
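This is the standard linear rescaling used in the Atari literature; the per-game random and human reference scores are assumed given:

```python
def human_normalised_score(agent, random_agent, human):
    """Rescale a raw game score so a random agent maps to 0 and
    human-level play maps to 100; values above 100 are super-human."""
    return 100.0 * (agent - random_agent) / (human - random_agent)
```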
For our Atari experiments, we used all the preprocessing steps of DQN except for termination on life loss. Our DQN implementation differs slightly from the original: DQN runs a single environment and performs one batch of replay every 4 agent steps (i.e. every 16 frames), whereas we run 4 environments in parallel and perform replay every agent step, so the ratio of replay batches to frames seen is roughly the same as in the original DQN implementation. In the figures in the main paper this is denoted DQN (ours). We found this change beneficial in terms of runtime, as it allows us to batch the observations before passing them to the neural network. Our evaluation procedure also differs: the original DQN implementation trains the agent for 1 million frames and then evaluates over 500 thousand frames to obtain a score, whereas we simply accumulate episode scores during training and report the average over the last training period. We found this speeds up the computation without majorly impacting scores. We list all our hyper-parameters in Table 1. Here 'no training period' denotes the number of frames before we start using replay. We only apply EVA once the replay buffer is fully occupied, i.e. after 500k steps (or 2M frames). We use Adam as the optimizer with all settings at the TensorFlow defaults except the learning rate. As in DQN we use a target network, but we update it every 50 steps; we found this to work better in combination with Adam.
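The equivalence of the two replay schedules is simple arithmetic; the sketch below assumes the standard action repeat of 4 frames per agent step:

```python
def replay_batches_per_frame(n_envs, steps_between_replays, action_repeat=4):
    """Replay batches performed per environment frame seen. One agent step
    advances each of the parallel environments by `action_repeat` frames."""
    frames_per_replay = n_envs * steps_between_replays * action_repeat
    return 1.0 / frames_per_replay

# Original DQN: 1 env, replay every 4 agent steps  -> 1 batch / 16 frames.
# Our variant:  4 envs, replay every agent step    -> 1 batch / 16 frames.
```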
| Hyper-parameter | Value |
| --- | --- |
| No training period | 40000 |
| Replay buffer capacity | 500000 |
| Value buffer size | 2000 |
| Training batch size | 48 |
| Target network period | 50 |
| Number of parallel environments | 4 |
| Filter sizes | [8, 4, 3] |
| Strides | [4, 2, 1] |
| Channels | [16, 32, 32] |
| Number of fully connected activations | |
10 Additional Baselines
We want to highlight that EVA should not be seen as an alternative to variants of DQN, but rather as a strategy that can easily be combined with any of them. Since EVA provides a way of exploiting the replay buffer to improve data efficiency, it can be plugged into any existing algorithm that uses this device. That said, a comparative experiment is useful to provide intuition about the proposed method, so here we provide a preliminary comparison to other DQN enhancements. Figure 7 shows results for Double DQN (DDQN) and DDQN trained with prioritized replay (DDQN+PR). (These curves were provided by the authors of the original papers; unfortunately, they did not provide curves for PR alone.)
We observe that EVA+DQN significantly outperforms DDQN in early stages of training, but achieves a lower final score. As motivated by CLS, EVA is intended to be particularly helpful in terms of data efficiency, and this is what we observe. Although these comparisons are interesting in themselves, we emphasize that the most meaningful comparisons would be between DDQN and EVA+DDQN, and between DDQN+PR and EVA+DDQN+PR. DDQN+PR achieved higher performance than either enhancement in isolation, and we are confident that EVA will boost the performance of both DDQN and DDQN+PR as well, since using the replay buffer to augment the behaviour policy should interfere with neither the modified parameter updates used by DDQN nor the skewed data distribution induced by PR. We believe that demonstrating the complementary effects of EVA with other algorithms is a worthy pursuit for future work, but this paper is focused on conceptual clarity in presenting EVA, so such additional experiments are out of scope.
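Why EVA composes cleanly with these variants can be caricatured as follows: EVA only changes how the agent *acts*, by mixing the parametric Q-values with non-parametric estimates read back from the value buffer. The mixing weight `lam`, the function names, and the example numbers below are illustrative assumptions, not values quoted in this appendix:

```python
import numpy as np

def eva_q_values(q_param, q_nonparam, lam=0.4):
    """Sketch: the behaviour policy acts greedily w.r.t. a convex
    combination of parametric Q-values and value-buffer estimates.
    `lam` is an illustrative setting."""
    return lam * q_param + (1.0 - lam) * q_nonparam

# Hypothetical per-action estimates for one state:
q_theta = np.array([1.0, 0.5, 0.2])   # from the network
q_np = np.array([0.0, 2.0, 0.1])      # from episodic memory
action = int(np.argmax(eva_q_values(q_theta, q_np)))  # -> action 1
```

Because the parameter updates (DDQN's target correction) and the sampling distribution (PR's priorities) are untouched, the adjustment composes with either.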