
Averaged-DQN: Variance Reduction and Stabilization for Deep Reinforcement Learning

11/07/2016
by   Oron Anschel, et al.

Instability and variability of Deep Reinforcement Learning (DRL) algorithms tend to adversely affect their performance. Averaged-DQN is a simple extension to the DQN algorithm, based on averaging previously learned Q-values estimates, which leads to a more stable training procedure and improved performance by reducing approximation error variance in the target values. To understand the effect of the algorithm, we examine the source of value function estimation errors and provide an analytical comparison within a simplified model. We further present experiments on the Arcade Learning Environment benchmark that demonstrate significantly improved stability and performance due to the proposed extension.


1 Introduction

In Reinforcement Learning (RL) an agent seeks an optimal policy for a sequential decision making problem (Sutton & Barto, 1998). It does so by learning which action is optimal for each environment state. Over the course of time, many algorithms have been introduced for solving RL problems, including Q-learning (Watkins & Dayan, 1992), SARSA (Rummery & Niranjan, 1994; Sutton & Barto, 1998), and policy gradient methods (Sutton et al., 1999). These methods are often analyzed in the setup of linear function approximation, where convergence is guaranteed under mild assumptions (Tsitsiklis, 1994; Jaakkola et al., 1994; Tsitsiklis & Van Roy, 1997; Even-Dar & Mansour, 2003). In practice, real-world problems usually involve high-dimensional inputs, forcing linear function approximation methods to rely upon hand-engineered features for problem-specific state representation. These problem-specific features diminish the agent's flexibility, and so the need for an expressive and flexible non-linear function approximator emerges. Except for a few successful attempts (e.g., TD-gammon, Tesauro (1995)), the combination of non-linear function approximation and RL was considered unstable and was shown to diverge even in simple domains (Boyan & Moore, 1995).

The recent Deep Q-Network (DQN) algorithm (Mnih et al., 2013) was the first to successfully combine a powerful non-linear function approximation technique known as Deep Neural Network (DNN) (LeCun et al., 1998; Krizhevsky et al., 2012) together with the Q-learning algorithm. DQN presented a remarkably flexible and stable algorithm, showing success in the majority of games within the Arcade Learning Environment (ALE) (Bellemare et al., 2013). DQN increased the training stability by breaking the RL problem into sequential supervised learning tasks. To do so, DQN introduces the concept of a target network and uses an Experience Replay buffer (ER) (Lin, 1993).

Following the DQN work, additional modifications and extensions to the basic algorithm further increased training stability. Schaul et al. (2015) suggested a sophisticated ER sampling strategy. Several works extended standard RL exploration techniques to deal with high-dimensional input (Bellemare et al., 2016; Tang et al., 2016; Osband et al., 2016). Mnih et al. (2016) showed that sampling from ER could be replaced with asynchronous updates from parallel environments (which enables the use of on-policy methods). Wang et al. (2015) suggested a network architecture based on the advantage function decomposition (Baird III, 1993).

In this work we address issues that arise from the combination of Q-learning and function approximation. Thrun & Schwartz (1993) were the first to investigate one of these issues, which they termed the overestimation phenomenon. The max operator in Q-learning can lead to overestimation of state-action values in the presence of noise. Van Hasselt et al. (2015) suggested Double-DQN, which uses the Double Q-learning estimator (Van Hasselt, 2010), as a solution to the problem. Additionally, Van Hasselt et al. (2015) showed that Q-learning overestimation does occur in practice (at least in the ALE).

This work suggests a different solution to the overestimation phenomenon, named Averaged-DQN (Section 3), based on averaging previously learned Q-values estimates. The averaging reduces the target approximation error variance (Sections 4 and 5), which leads to stability and improved results. Additionally, we provide experimental results on selected games of the Arcade Learning Environment.

We summarize the main contributions of this paper as follows:

  • A novel extension to the DQN algorithm which stabilizes training, and improves the attained performance, by averaging over previously learned Q-values.

  • Variance analysis that explains some of the DQN problems, and how the proposed extension addresses them.

  • Experiments with several ALE games demonstrating the favorable effect of the proposed scheme.

2 Background

In this section we elaborate on relevant RL background, and specifically on the Q-learning algorithm.

2.1 Reinforcement Learning

We consider the usual RL learning framework (Sutton & Barto, 1998). An agent is faced with a sequential decision making problem, where interaction with the environment takes place at discrete time steps (t = 0, 1, ...). At time t the agent observes state s_t ∈ S, selects an action a_t ∈ A, which results in a scalar reward r_t, and a transition to a next state s_{t+1} ∈ S. We consider infinite horizon problems with a discounted cumulative reward objective R_t = Σ_{t'=t}^{∞} γ^{t'−t} r_{t'}, where γ ∈ [0, 1] is the discount factor. The goal of the agent is to find an optimal policy π : S → A that maximizes its expected discounted cumulative reward.

Value-based methods for solving RL problems encode policies through the use of value functions, which denote the expected discounted cumulative reward from a given state s, following a policy π. Specifically we are interested in state-action value functions:

Q^π(s, a) = E^π[ Σ_{t=0}^{∞} γ^t r_t | s_0 = s, a_0 = a ].

The optimal value function is denoted as Q*(s, a) = max_π Q^π(s, a), and an optimal policy π* can be easily derived by π*(s) ∈ argmax_a Q*(s, a).

2.2 Q-learning

One of the most popular RL algorithms is the Q-learning algorithm (Watkins & Dayan, 1992). This algorithm is based on a simple value iteration update (Bellman, 1957), directly estimating the optimal value function Q*. Tabular Q-learning assumes a table that contains old action-value function estimates and performs updates using the following update rule:

Q(s_t, a_t) ← Q(s_t, a_t) + α( r_t + γ max_{a'} Q(s_{t+1}, a') − Q(s_t, a_t) ),   (1)

where s_{t+1} is the resulting state after applying action a_t in the state s_t, r_t is the immediate reward observed for action a_t at state s_t, γ is the discount factor, and α is a learning rate.
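
For concreteness, here is a minimal Python sketch of the tabular update in (1); the table Q, the transition variables, and the step-size values are illustrative choices, not tied to any particular implementation.

import numpy as np

def q_learning_update(Q, s, a, r, s_next, alpha=0.1, gamma=0.99):
    # Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
    td_target = r + gamma * np.max(Q[s_next])
    Q[s, a] += alpha * (td_target - Q[s, a])

# Example: a table for 10 states and 4 actions, updated after one observed transition.
Q = np.zeros((10, 4))
q_learning_update(Q, s=0, a=2, r=1.0, s_next=1)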

When the number of states is large, maintaining a look-up table with all possible state-action pairs values in memory is impractical. A common solution to this issue is to use function approximation parametrized by θ, such that Q(s, a) ≈ Q(s, a; θ).

2.3 Deep Q Networks (DQN)

We present in Algorithm 1 a slightly different formulation of the DQN algorithm (Mnih et al., 2013). In iteration i the DQN algorithm solves a supervised learning problem to approximate the action-value function Q(s, a; θ_i) (line 6). This is an extension of implementing (1) in its function approximation form (Riedmiller, 2005).

1:  Initialize Q(s, a; θ_0) with random weights θ_0
2:  Initialize Experience Replay (ER) buffer B
3:  Initialize exploration procedure Explore(·)
4:  for i = 1, 2, ..., N do
5:     y^i_{s,a} = E_B[ r + γ max_{a'} Q(s', a'; θ_{i−1}) | s, a ]
6:     θ_i ≈ argmin_θ E_B[ (y^i_{s,a} − Q(s, a; θ))^2 ]
7:     Explore(·), update B
8:  end for
output: Q^{DQN}(s, a; θ_N)
Algorithm 1 DQN

The target values y^i_{s,a} (line 5) are constructed using a designated target-network Q(s, a; θ_{i−1}) (using the previous iteration parameters θ_{i−1}), where the expectation (E_B) is taken w.r.t. the sample distribution of experience transitions in the ER buffer B. The DQN loss (line 6) is minimized using a Stochastic Gradient Descent (SGD) variant, sampling mini-batches from the ER buffer. Additionally, DQN requires an exploration procedure (which we denote as Explore(·)) to interact with the environment (e.g., an ε-greedy exploration procedure). The number of new experience transitions added by exploration to the ER buffer in each iteration is small relative to the size of the ER buffer. Thereby, θ_{i−1} can be used as a good initialization for θ_i in iteration i.

Note that in the original implementation (Mnih et al., 2013, 2015), transitions are added to the ER buffer simultaneously with the minimization of the DQN loss (line 6). Using the hyperparameters employed by Mnih et al. (2013, 2015) (detailed for completeness in Appendix D), 1% of the experience transitions in the ER buffer are replaced between target network parameter updates, and 8% are sampled for minimization.
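
As an illustration of the formulation above, the following is a minimal PyTorch-style sketch of a single DQN update step. It assumes a replay mini-batch of (states, actions, rewards, next_states, dones) tensors and two networks, q_net (playing the role of θ_i) and target_net (θ_{i−1}); these interfaces are assumptions for the sketch, not the original implementation.

import torch
import torch.nn.functional as F

def dqn_update(q_net, target_net, optimizer, batch, gamma=0.99):
    states, actions, rewards, next_states, dones = batch
    with torch.no_grad():                                # target built from theta_{i-1} (line 5)
        max_q_next, _ = target_net(next_states).max(dim=1)
        targets = rewards + gamma * (1.0 - dones) * max_q_next
    q_sa = q_net(states).gather(1, actions.unsqueeze(1)).squeeze(1)
    loss = F.mse_loss(q_sa, targets)                     # squared DQN loss (line 6)
    optimizer.zero_grad()
    loss.backward()                                      # SGD-variant step on a sampled mini-batch
    optimizer.step()
    return loss.item()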

3 Averaged DQN

The Averaged-DQN algorithm (Algorithm 2) is an extension of the DQN algorithm. Averaged-DQN uses the K previously learned Q-values estimates to produce the current action-value estimate (line 5). The Averaged-DQN algorithm stabilizes the training process (see Figure 1) by reducing the variance of the target approximation error, as we elaborate in Section 5. The computational effort compared to DQN is K-fold more forward passes through a Q-network while minimizing the DQN loss (line 7). The number of back-propagation updates (which is the most demanding computational element) remains the same as in DQN. The output of the algorithm is the average over the last K previously learned Q-networks.


Figure 1: DQN and Averaged-DQN performance in the Atari game of Breakout. The bold lines are averages over seven independent learning trials. Every 1M frames, a performance test using an ε-greedy policy for 500,000 frames was conducted. The shaded area presents one standard deviation. For both DQN and Averaged-DQN the hyperparameters used were taken from Mnih et al. (2015).
1:  Initialize Q(s, a; θ_0) with random weights θ_0
2:  Initialize Experience Replay (ER) buffer B
3:  Initialize exploration procedure Explore(·)
4:  for i = 1, 2, ..., N do
5:     Q^A_{i−1}(s, a) = (1/K) Σ_{k=1}^{K} Q(s, a; θ_{i−k})
6:     y^i_{s,a} = E_B[ r + γ max_{a'} Q^A_{i−1}(s', a') | s, a ]
7:     θ_i ≈ argmin_θ E_B[ (y^i_{s,a} − Q(s, a; θ))^2 ]
8:     Explore(·), update B
9:  end for
output: Q^A_N(s, a) = (1/K) Σ_{k=0}^{K−1} Q(s, a; θ_{N−k})
Algorithm 2 Averaged DQN
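
To make lines 5-6 of Algorithm 2 concrete, here is a minimal sketch of the Averaged-DQN target computation in PyTorch; prev_networks is assumed to hold the last K learned Q-networks, and the batch tensor layout is an illustrative assumption rather than the authors' implementation. Only the forward passes multiply by K; the loss in line 7 is still minimized over a single network.

import torch

def averaged_dqn_target(prev_networks, rewards, next_states, dones, gamma=0.99):
    # y = r + gamma * max_a' (1/K) * sum_k Q(s', a'; theta_{i-k})   (lines 5-6)
    with torch.no_grad():
        q_avg = torch.stack([net(next_states) for net in prev_networks]).mean(dim=0)
        max_q_next, _ = q_avg.max(dim=1)
        return rewards + gamma * (1.0 - dones) * max_q_next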

In Figures 1 and 2 we can see the performance of Averaged-DQN compared to DQN (and Double-DQN); further experimental results are given in Section 6.

We note that recently-learned state-action value estimates are likely to be better than older ones; therefore, we have also considered a recency-weighted average. In practice, a weighted average scheme did not improve performance and therefore is not presented here.


Figure 2: DQN, Double-DQN, and Averaged-DQN performance (left), and average value estimates (right), in the Atari game of Asterix. The bold lines are averages over seven independent learning trials. The shaded area presents one standard deviation. Every 2M frames, a performance test using an ε-greedy policy for 500,000 frames was conducted. The hyperparameters used were taken from Mnih et al. (2015).

4 Overestimation and Approximation Errors

Next, we discuss the various types of errors that arise due to the combination of Q-learning and function approximation in the DQN algorithm, and their effect on training stability. We refer to DQN’s performance in the Breakout game in Figure 1. The source of the learning curve variance in DQN’s performance is an occasional sudden drop in the average score that is usually recovered in the next evaluation phase (for another illustration of the variance source see Appendix A). Another phenomenon can be observed in Figure 2, where DQN initially reaches a steady state (after 20 million frames), followed by a gradual deterioration in performance.

For the rest of this section, we list the above-mentioned errors, and discuss our hypotheses as to the relations between each error and the instability phenomena depicted in Figures 1 and 2.

We follow terminology from Thrun & Schwartz (1993), and define some additional relevant quantities. Letting Q(s, a; θ_i) be the value function of DQN at iteration i, we denote Δ_i = Q(s, a; θ_i) − Q*(s, a) and decompose it as follows:

Δ_i = [Q(s, a; θ_i) − y^i_{s,a}] + [y^i_{s,a} − ŷ^i_{s,a}] + [ŷ^i_{s,a} − Q*(s, a)].

Here y^i_{s,a} is the DQN target, and ŷ^i_{s,a} is the true target:

y^i_{s,a} = E_B[ r + γ max_{a'} Q(s', a'; θ_{i−1}) | s, a ],
ŷ^i_{s,a} = E_B[ r + γ max_{a'} ( y^{i−1}_{s',a'} ) | s, a ].

Let us denote by Z^i_{s,a} the target approximation error, and by R^i_{s,a} the overestimation error, namely

Z^i_{s,a} = Q(s, a; θ_i) − y^i_{s,a},
R^i_{s,a} = y^i_{s,a} − ŷ^i_{s,a}.

The optimality difference ŷ^i_{s,a} − Q*(s, a) can be seen as the error of a standard tabular Q-learning; here we address the other errors. We next discuss each error in turn.

4.1 Target Approximation Error (TAE)

The TAE (Z^i_{s,a}) is the error in the learned Q(s, a; θ_i) relative to y^i_{s,a}, which is determined after minimizing the DQN loss (Algorithm 1 line 6, Algorithm 2 line 7). The TAE is a result of several factors: firstly, the sub-optimality of θ_i due to inexact minimization; secondly, the limited representation power of a neural net (model error); lastly, the generalization error for unseen state-action pairs due to the finite size of the ER buffer.

The TAE can cause a deviation from a policy to a worse one. For example, such a deviation to a sub-optimal policy occurs in case y^i_{s,a} = ŷ^i_{s,a} = Q*(s, a) and argmax_a Q(s, a; θ_i) ≠ argmax_a y^i_{s,a}, i.e., when the TAE alone changes the greedy action.

We hypothesize that the variability in DQN’s performance in Figure 1, that was discussed at the start of this section, is related to deviating from a steady-state policy induced by the TAE.

4.2 Overestimation Error

The Q-learning overestimation phenomenon was first investigated by Thrun & Schwartz (1993). In their work, Thrun and Schwartz considered the TAE Z^i_{s,a} as a random variable uniformly distributed in the interval [−ε, ε]. Due to the max operator in the DQN target y^i_{s,a}, the expected overestimation errors E_z[R^i_{s,a}] are upper bounded by γε(n−1)/(n+1) (where n is the number of applicable actions in state s). The intuition for this upper bound is that in the worst case, all Q-values are equal, and we get equality to the upper bound:

E_z[R^i_{s,a}] = γ E_z[ max_{a'} Z^{i−1}_{s',a'} ] = γε(n−1)/(n+1).

The overestimation error R^i_{s,a} is different in its nature from the TAE since it presents a positive bias that can cause asymptotically sub-optimal policies, as was shown by Thrun & Schwartz (1993), and later by Van Hasselt et al. (2015) in the ALE environment. Note that a uniform bias in the action-value function will not cause a change in the induced policy. Unfortunately, the overestimation bias is uneven and is bigger in states where the Q-values are similar for the different actions, or in states which are at the start of a long trajectory (as we discuss in Section 5 on accumulation of TAE variance).

Following from the above-mentioned overestimation upper bound, the magnitude of the bias is controlled by the variance of the TAE.
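
As a quick, purely illustrative Monte Carlo check of the bound above: when the TAE of each of n actions is drawn i.i.d. from Uniform(−ε, ε), the expected maximum equals ε(n−1)/(n+1), which the DQN target then scales by γ. The values of ε, n, and the sample count below are arbitrary choices.

import numpy as np

rng = np.random.default_rng(0)
eps, n, samples = 1.0, 4, 200_000
z = rng.uniform(-eps, eps, size=(samples, n))      # one TAE per action
print("empirical E[max_a Z_a]:", z.max(axis=1).mean())
print("bound eps*(n-1)/(n+1) :", eps * (n - 1) / (n + 1))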

Double Q-learning and its DQN implementation (Double-DQN) (Van Hasselt, 2010; Van Hasselt et al., 2015) is one possible approach to tackle the overestimation problem, which replaces the positive bias with a negative one. Another possible remedy to the adverse effects of this error is to directly reduce the variance of the TAE, as in our proposed scheme (Section 5).

In Figure 2 we repeated the experiment presented in Van Hasselt et al. (2015) (along with the application of Averaged-DQN). This experiment is discussed in Van Hasselt et al. (2015) as an example of overestimation that leads to asymptotically sub-optimal policies. Since Averaged-DQN reduces the TAE variance, this experiment supports the hypothesis that the main cause for overestimation in DQN is the TAE variance.

5 TAE Variance Reduction

To analyse the TAE variance we first must assume a statistical model on the TAE, and we do so in a similar way to Thrun & Schwartz (1993). Suppose that the TAE Z^i_{s,a} is a random process such that E[Z^i_{s,a}] = 0, Var[Z^i_{s,a}] = σ²_s, and for i ≠ j: Cov[Z^i_{s,a}, Z^j_{s',a'}] = 0. Furthermore, to focus only on the TAE we eliminate the overestimation error by considering a fixed policy for updating the target values. Also, we can conveniently consider a zero reward everywhere since it has no effect on variance calculations.

Denote by Q_i = (Q(s_1; θ_i), ..., Q(s_M; θ_i))^T the vector of value estimates in iteration i (where the fixed action a is suppressed), and by Z_i the vector of corresponding TAEs. For Averaged-DQN we get:

Q_i = Z_i + γ P (1/K) Σ_{k=1}^{K} Q_{i−k},

where P is the transition probabilities matrix for the given policy. Assuming stationarity of Q_i, its covariance can be obtained using standard techniques (e.g., as a solution of a linear equations system). However, to obtain an explicit comparison, we further specialize the model to an M-state unidirectional MDP as in Figure 3.


Figure 3: M-state unidirectional MDP. The process starts at state s_0, then in each time step moves to the right, until the terminal state s_{M−1} is reached. A zero reward is obtained in any state.

5.1 DQN Variance

We assume the statistical model mentioned at the start of this section. Consider a unidirectional Markov Decision Process (MDP) as in Figure 3, where the agent starts at state s_0, state s_{M−1} is a terminal state, and the reward in any state is equal to zero.

Employing DQN on this MDP model, we get that for i > M:

Q^{DQN}(s_0, a; θ_i) = Z^i_{s_0,a} + y^i_{s_0,a}
                     = Z^i_{s_0,a} + γ Q(s_1, a; θ_{i−1})
                     = Z^i_{s_0,a} + γ Z^{i−1}_{s_1,a} + ... + γ^{M−1} Z^{i−(M−1)}_{s_{M−1},a},

where in the last equality we have used the fact that y^j_{s_{M−1},a} = 0 for all j (terminal state). Therefore,

Var[ Q^{DQN}(s_0, a; θ_i) ] = Σ_{m=0}^{M−1} γ^{2m} σ²_{s_m}.

The above example gives intuition about the behavior of the TAE variance in DQN. The TAE is accumulated over the past DQN iterations on the updates trajectory. Accumulation of TAE errors results in bigger variance with its associated adverse effect, as was discussed in Section 4.

1:  Initialize K Q-networks Q(s, a; θ^k) with random weights θ^k_0 for k = 1, ..., K
2:  Initialize Experience Replay (ER) buffer B
3:  Initialize exploration procedure Explore(·)
4:  for i = 1, 2, ..., N do
5:     Q^E_{i−1}(s, a) = (1/K) Σ_{k=1}^{K} Q(s, a; θ^k_{i−1})
6:     y^i_{s,a} = E_B[ r + γ max_{a'} Q^E_{i−1}(s', a') | s, a ]
7:     for k = 1, 2, ..., K do
8:        θ^k_i ≈ argmin_θ E_B[ (y^i_{s,a} − Q(s, a; θ))^2 ]
9:     end for
10:    Explore(·), update B
11: end for
output: Q^E_N(s, a) = (1/K) Σ_{k=1}^{K} Q(s, a; θ^k_N)
Algorithm 3 Ensemble DQN

5.2 Ensemble DQN Variance

We consider two approaches for TAE variance reduction. The first one is Averaged-DQN and the second we term Ensemble-DQN. We start with Ensemble-DQN, which is a straightforward way to obtain a variance reduction, with a computational effort of K-fold learning problems compared to DQN. Ensemble-DQN (Algorithm 3) solves K DQN losses in parallel, then averages over the resulting Q-values estimates.

For Ensemble-DQN on the unidirectional MDP in Figure 3, we get for i > M:

Q^E_i(s_0, a) = Σ_{m=0}^{M−1} γ^m (1/K) Σ_{k=1}^{K} Z^{k, i−m}_{s_m, a},

Var[ Q^E_i(s_0, a) ] = Σ_{m=0}^{M−1} (1/K) γ^{2m} σ²_{s_m} < Var[ Q^{DQN}(s_0, a; θ_i) ],

where for k ≠ k': Z^{k,i}_{s,a} and Z^{k',i'}_{s',a'} are two uncorrelated TAEs. The calculations of Q^E_i(s_0, a) are detailed in Appendix B.

5.3 Averaged DQN Variance

We continue with Averaged-DQN, and calculate the variance in state s_0 for the unidirectional MDP in Figure 3. We get that for i > KM:

Var[ Q^A_i(s_0, a) ] = Σ_{m=0}^{M−1} B_{K,m} γ^{2m} σ²_{s_m},

where B_{K,m} = (1/N) Σ_{n=0}^{N−1} |U_n / K|^{2(m+1)}, with U = (U_n)_{n=0}^{N−1} denoting a Discrete Fourier Transform (DFT) of a rectangle pulse, and |U_n / K| ≤ 1. The calculations of Q^A_i(s_0, a) and B_{K,m} are more involved and are detailed in Appendix C.

Furthermore, for K > 1 and m > 0 we have that B_{K,m} < 1/K (Appendix C) and therefore the following holds:

Var[ Q^A_i(s_0, a) ] < Var[ Q^E_i(s_0, a) ] < Var[ Q^{DQN}(s_0, a; θ_i) ],

meaning that Averaged-DQN is theoretically more efficient in TAE variance reduction than Ensemble-DQN, and at least K times better than DQN. The intuition here is that Averaged-DQN averages over TAEs averages, which are the value estimates of the next states.
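
The following Python sketch simulates the TAE model on the unidirectional MDP under the stated assumptions (zero reward, fixed policy, i.i.d. zero-mean TAE) and compares the empirical variance of the value estimate at s_0 for DQN-, Ensemble-DQN-, and Averaged-DQN-style updates. The values of M, K, σ, and the iteration/trial counts are arbitrary illustrative choices.

import numpy as np

M, K, gamma, sigma = 5, 10, 0.99, 1.0
iterations, trials = 200, 2000
rng = np.random.default_rng(0)

dqn_vals, ens_vals, avg_vals = [], [], []
for _ in range(trials):
    q_dqn = np.zeros(M)          # DQN: only the previous iterate is needed
    q_hist = np.zeros((K, M))    # Averaged-DQN: last K iterates of one network
    q_ens = np.zeros((K, M))     # Ensemble-DQN: K parallel networks
    for _ in range(iterations):
        # DQN: Q_i(s_m) = Z_i(s_m) + gamma * Q_{i-1}(s_{m+1}); terminal target is 0.
        q_dqn = rng.normal(0.0, sigma, M) + gamma * np.append(q_dqn[1:], 0.0)

        # Averaged-DQN: the target uses the average of the last K iterates.
        avg_prev = q_hist.mean(axis=0)
        q_new = rng.normal(0.0, sigma, M) + gamma * np.append(avg_prev[1:], 0.0)
        q_hist = np.vstack([q_new, q_hist[:-1]])

        # Ensemble-DQN: all K members share the averaged target, each with its own TAE.
        ens_prev = q_ens.mean(axis=0)
        q_ens = rng.normal(0.0, sigma, (K, M)) + gamma * np.append(ens_prev[1:], 0.0)

    dqn_vals.append(q_dqn[0])
    avg_vals.append(q_hist.mean(axis=0)[0])
    ens_vals.append(q_ens.mean(axis=0)[0])

theory_dqn = sum(gamma ** (2 * m) for m in range(M)) * sigma ** 2
print("DQN      var: %.3f (theory %.3f)" % (np.var(dqn_vals), theory_dqn))
print("Ensemble var: %.3f (theory %.3f)" % (np.var(ens_vals), theory_dqn / K))
print("Averaged var: %.3f (< Ensemble)" % np.var(avg_vals))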


Figure 4: The top row shows Averaged-DQN performance for different numbers of averaged networks on three Atari games. For K = 1, Averaged-DQN reduces to DQN. The bold lines are averaged over seven independent learning trials. Every 2M frames, a performance test using an ε-greedy policy for 500,000 frames was conducted. The shaded area presents one standard deviation. The bottom row shows the average value estimates for the three games. It can be seen that as the number of averaged networks is increased, overestimation of the values is reduced, performance improves, and less variability is observed. The hyperparameters used were taken from Mnih et al. (2015).

Game       DQN                 Averaged-DQN       Averaged-DQN        Averaged-DQN         Human      Random
           Avg. (std. dev.)    (K=5)              (K=10)              (K=15)
Breakout   245.1   (124.5)     381.5   (20.2)     381.8   (24.2)      -                    31.8       1.7
Seaquest   3775.2  (1575.6)    5740.2  (664.79)   9961.7  (1946.9)    10475.1  (2926.6)    20182.0    68.4
Asterix    195.6   (80.4)      6960.0  (999.2)    8008.3  (243.6)     8364.9   (618.6)     8503.0     210.0

Table 1: The columns present the average performance of DQN and Averaged-DQN after 120M frames, using an ε-greedy policy for 500,000 frames. The standard deviation represents the variability over seven independent trials. Average performance improved with the number of averaged networks. Human and random performance were taken from Mnih et al. (2015).

6 Experiments

The experiments were designed to address the following questions:

  • How does the number of averaged target networks affect the error in value estimates, and in particular the overestimation error?

  • How does the averaging affect the quality of the learned policies?

To that end, we ran Averaged-DQN and DQN on the ALE benchmark. Additionally, we ran Averaged-DQN, Ensemble-DQN, and DQN on a Gridworld toy problem where the optimal value function can be computed exactly.

6.1 Arcade Learning Environment (ALE)

To evaluate Averaged-DQN, we adopt the typical RL methodology where agent performance is measured at the end of training. We refer the reader to Liang et al. (2016) for further discussion about DQN evaluation methods on the ALE benchmark. The hyperparameters used were taken from Mnih et al. (2015), and are presented for completeness in Appendix D. DQN code was taken from McGill University RLLAB and is available online (https://bitbucket.org/rllabmcgill/atari_release), together with our Averaged-DQN implementation (https://bitbucket.org/oronanschel/atari_release_averaged_dqn).

We have evaluated the Averaged-DQN algorithm on three Atari games from the Arcade Learning Environment (Bellemare et al., 2013). The game of Breakout was selected due to its popularity and the relative ease with which DQN reaches a steady-state policy. In contrast, the game of Seaquest was selected due to its relative complexity, and the significant improvement in performance obtained by other DQN variants (e.g., Schaul et al. (2015); Wang et al. (2015)). Finally, the game of Asterix was presented in Van Hasselt et al. (2015) as an example of overestimation in DQN that leads to divergence.

As can be seen in Figure 4 and in Table 1, for all three games, increasing the number of averaged networks in Averaged-DQN results in lower average value estimates, better-performing policies, and less variability between the runs of independent learning trials. For the game of Asterix, we see, similarly to Van Hasselt et al. (2015), that the divergence of DQN can be prevented by averaging.

Overall, the results suggest that in practice Averaged-DQN reduces the TAE variance, which leads to smaller overestimation, stabilized learning curves and significantly improved performance.

6.2 Gridworld

The Gridworld problem (Figure 5) is a common RL benchmark (e.g., Boyan & Moore (1995)). As opposed to the ALE, Gridworld has a smaller state space that allows the ER buffer to contain all possible state-action pairs. Additionally, it allows the optimal value function to be accurately computed.

For the experiments, we have used Averaged-DQN and Ensemble-DQN with an ER buffer containing all possible state-action pairs. The network architecture that was used is a small fully connected neural network with one hidden layer of 80 neurons. For minimization of the DQN loss, the ADAM optimizer (Kingma & Ba, 2014) was used on 100 mini-batches of 32 samples per target network parameters update in the first experiment, and 300 mini-batches in the second.
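
A minimal sketch of the Gridworld network described above (one hidden layer of 80 neurons, trained with ADAM on mini-batches of 32); the input dimension depends on the one-hot grid encoding and is left as a parameter, and the ReLU activation is an assumption since the text does not specify one.

import torch.nn as nn
import torch.optim as optim

def make_gridworld_net(input_dim, num_actions=4):
    net = nn.Sequential(
        nn.Linear(input_dim, 80),    # single hidden layer of 80 neurons
        nn.ReLU(),                   # activation choice is an assumption
        nn.Linear(80, num_actions),  # one Q-value per compass direction
    )
    optimizer = optim.Adam(net.parameters())   # ADAM optimizer (Kingma & Ba, 2014)
    return net, optimizer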

6.2.1 Environment Setup

In this experiment on the problem of Gridworld (Figure 5), the state space contains pairs of points from a 2D discrete grid. The algorithm interacts with the environment through raw pixel features with a one-hot feature map φ(s_t). There are four actions corresponding to steps in each compass direction, a reward of +1 in the upper-right corner state, and 0 otherwise. We consider the discounted return problem with a discount factor γ.


Figure 5: Gridworld problem. The agent starts at the left-bottom of the grid. In the upper-right corner, a reward of +1 is obtained.

6.2.2 Overestimation

In Figure 6 it can be seen that increasing the number of averaged target networks eventually leads to reduced overestimation. Also, more averaged target networks seem to reduce the overshoot of the values, and lead to smoother and more consistent convergence.


Figure 6: Averaged-DQN average predicted value in Gridworld. Increasing the number of averaged target networks leads to a faster convergence with less overestimation (positive bias). The bold lines are averages over 40 independent learning trials, and the shaded area presents one standard deviation. In the figure, curves A, B, C, and D present the average overestimation of DQN and of Averaged-DQN for K = 5, 10, 20, respectively.

6.2.3 Averaged versus Ensemble DQN

In Figure 7, it can be seen that, as was predicted by the analysis in Section 5, Ensemble-DQN is inferior to Averaged-DQN regarding variance reduction, and as a consequence overestimates the values far more. We note that Ensemble-DQN was not implemented for the ALE experiments due to its demanding computational effort, and the empirical evidence that was already obtained in this simple Gridworld domain.


Figure 7: Averaged-DQN and Ensemble-DQN predicted value in Gridworld. Averaging over past learned values is more beneficial than learning in parallel. The bold lines are averages over 20 independent learning trials, and the shaded area presents one standard deviation.

7 Discussion and Future Directions

In this work, we have presented the Averaged-DQN algorithm, an extension to DQN that stabilizes training and improves performance by efficient TAE variance reduction. We have shown both in theory and in practice that the proposed scheme is superior in TAE variance reduction, compared to a straightforward but computationally demanding approach such as Ensemble-DQN (Algorithm 3). We have demonstrated in several games of Atari that increasing the number of averaged target networks leads to better policies while reducing overestimation. Averaged-DQN is a simple extension that can be easily integrated with other DQN variants such as Schaul et al. (2015); Van Hasselt et al. (2015); Wang et al. (2015); Bellemare et al. (2016); He et al. (2016). Indeed, it would be of interest to study the added value of averaging when combined with these variants. Also, since Averaged-DQN has a variance reduction effect on the learning curve, a more systematic comparison between the different variants can be facilitated, as discussed in Liang et al. (2016).

In future work, we may dynamically learn when and how many networks to average for best results. One simple suggestion may be to correlate the number of networks with the state TD-error, similarly to Schaul et al. (2015). Finally, incorporating averaging techniques similar to Averaged-DQN within on-policy methods such as SARSA and Actor-Critic methods (Mnih et al., 2016) can further stabilize these algorithms.


Appendix A DQN Variance Source Example

Figure 8 presents a single learning trial of DQN compared to Averaged-DQN, which emphasizes that the source of variability in DQN between learning trials is due to occasional drops in the average score within the learning trial. As suggested in Section 4, this effect can be related to the TAE causing a deviation from the steady-state policy.

Appendix B Ensemble DQN TAE Variance Calculation in a unidirectional MDP (Section 5.2)

Recall that E[Z^{k,i}_{s,a}] = 0, Var[Z^{k,i}_{s,a}] = σ²_s, for all i ≠ j: Cov[Z^{k,i}_{s,a}, Z^{k,j}_{s',a'}] = 0, and for all k ≠ k': Cov[Z^{k,i}_{s,a}, Z^{k',j}_{s',a'}] = 0. Following the Ensemble-DQN update equations in Algorithm 3:

Q^E_i(s_0, a) = (1/K) Σ_{k=1}^{K} Q(s_0, a; θ^k_i)
             = (1/K) Σ_{k=1}^{K} Z^{k,i}_{s_0,a} + γ Q^E_{i−1}(s_1, a).

By iteratively expanding Q^E_{i−1}(s_1, a) as above, and noting that y_{s_{M−1},a} = 0 for all times (terminal state), we obtain

Q^E_i(s_0, a) = Σ_{m=0}^{M−1} γ^m (1/K) Σ_{k=1}^{K} Z^{k, i−m}_{s_m, a}.

Since the TAEs are uncorrelated by assumption, we get

Var[ Q^E_i(s_0, a) ] = Σ_{m=0}^{M−1} (1/K) γ^{2m} σ²_{s_m}.


Figure 8: DQN and Averaged-DQN performance in the Atari game of Breakout. The bold lines are single learning trials of the DQN and Averaged-DQN algorithms. The dashed lines present the average of 7 independent learning trials. Every 1M frames, a performance test using an ε-greedy policy for 500,000 frames was conducted. The shaded area presents one standard deviation (from the average). For both DQN and Averaged-DQN the hyperparameters used were taken from Mnih et al. (2015).

Appendix C Averaged DQN TAE Variance Calculation in a unidirectional MDP (Section 5.3)

Recall that E[Z^i_{s,a}] = 0, Var[Z^i_{s,a}] = σ²_s, and for i ≠ j: Cov[Z^i_{s,a}, Z^j_{s',a'}] = 0. Further assume that for all s ≠ s': Cov[Z^i_{s,a}, Z^i_{s',a}] = 0. Following the Averaged-DQN update equations in Algorithm 2:

Q^A_i(s_0, a) = (1/K) Σ_{k_1=1}^{K} Q(s_0, a; θ_{i+1−k_1})
             = (1/K) Σ_{k_1=1}^{K} [ Z^{i+1−k_1}_{s_0,a} + γ Q^A_{i−k_1}(s_1, a) ].

By iteratively expanding Q^A_{i−k_1}(s_1, a) as above, and noting that y_{s_{M−1},a} = 0 for all times (terminal state), we get

Q^A_i(s_0, a) = (1/K) Σ_{k_1=1}^{K} Z^{i+1−k_1}_{s_0,a}
             + γ (1/K²) Σ_{k_1,k_2=1}^{K} Z^{i+1−k_1−k_2}_{s_1,a}
             + ...
             + γ^{M−1} (1/K^M) Σ_{k_1,...,k_M=1}^{K} Z^{i+1−k_1−...−k_M}_{s_{M−1},a}.

Since the TAEs in different states are uncorrelated by assumption, the latter sums are uncorrelated and we may calculate the variance of each separately. For m = 0, ..., M−1, denote

V_m = (1/K^{m+1}) Σ_{k_1=1}^{K} ... Σ_{k_{m+1}=1}^{K} Z_{k_1 + k_2 + ... + k_{m+1}},

where, to simplify notation, (Z_j) are independent and identically distributed (i.i.d.) TAE random variables, with E[Z_j] = 0 and Var[Z_j] = σ²_{s_m}.

Since the random variables Z_j are zero-mean and i.i.d., we have that:

Var[V_m] = (σ²_{s_m} / K^{2(m+1)}) Σ_j n_{m+1,j}²,

where n_{L,j} is the number of times Z_j is counted in the multiple summation above. The problem is now reduced to evaluating Σ_j n_{L,j}² for L = m+1, where n_{L,j} is the number of solutions for the following equation:

k_1 + k_2 + ... + k_L = j,   (2)

over k_1, ..., k_L ∈ {1, ..., K}. The calculation can be done recursively, by noting that

n_{L,j} = Σ_{k=1}^{K} n_{L−1, j−k}.

Since the final goal of this calculation is bounding the variance reduction coefficient, we will calculate the solution in the frequency domain where the bound can be easily obtained.

Denote by u = (u_j) the rectangle pulse over {1, ..., K}, i.e., u_j = 1 for 1 ≤ j ≤ K and u_j = 0 otherwise.

For L = 1 (base case), we trivially get that for any j:

n_{1,j} = u_j,

and we can rewrite our recursive formula for n_{L,j} as:

n_L = n_{L−1} * u,

where * is the discrete convolution.

To continue, we denote the Discrete Fourier Transform (DFT) of u = (u_j)_{j=0}^{N−1} as U = (U_n)_{n=0}^{N−1}, and by using Parseval's theorem we have that

Var[V_m] = (σ²_{s_m} / K^{2(m+1)}) Σ_j n_{m+1,j}²
         = (σ²_{s_m} / K^{2(m+1)}) (1/N) Σ_{n=0}^{N−1} |U_n|^{2(m+1)},

where N is the length of the vectors u and n_{m+1}, and is taken large enough so that the sum includes all non-zero elements of the convolution. We denote B_{K,m} = (1/N) Σ_{n=0}^{N−1} |U_n / K|^{2(m+1)}, and now we can write the Averaged-DQN variance as:

Var[ Q^A_i(s_0, a) ] = Σ_{m=0}^{M−1} B_{K,m} γ^{2m} σ²_{s_m}.

Next we bound B_{K,m} in order to compare Averaged-DQN to Ensemble-DQN, and to DQN.

For K > 1 and m > 0:

B_{K,m} = (1/N) Σ_{n=0}^{N−1} |U_n / K|^{2(m+1)}
        < (1/N) Σ_{n=0}^{N−1} |U_n / K|²
        = (1/K²) Σ_j u_j²
        = 1/K,

where we have used the easily verified facts that |U_n / K| ≤ 1, that |U_n / K| = 1 only if n = 0, and Parseval's theorem again.

Appendix D Experimental Details for the Arcade Learning Environment Domain

We have selected three popular Atari games to experiment with Averaged-DQN. We have used the exact setup proposed by Mnih et al. (2015) for the experiments; we provide the details here for completeness. The full implementation is available at https://bitbucket.org/oronanschel/atari_release_averaged_dqn.

Each episode starts by executing a no-op action between 1 and 30 times, chosen uniformly at random. We have used frame skipping, where each agent action is repeated four times before the next frame is observed. The rewards obtained from the environment are clipped between -1 and 1.
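
An illustrative sketch of the preprocessing just described (random number of initial no-ops, action repeat of four, reward clipping to [-1, 1]); the env.reset/env.step interface and the no-op action index are assumed, generic placeholders rather than the original code.

import random
import numpy as np

NOOP_ACTION = 0      # assumed index of the no-op action
FRAME_SKIP = 4

def reset_with_noops(env, max_noops=30):
    obs = env.reset()
    for _ in range(random.randint(1, max_noops)):      # 1 to 30 no-ops, uniformly at random
        obs, _, done, _ = env.step(NOOP_ACTION)
        if done:
            obs = env.reset()
    return obs

def step_with_skip_and_clip(env, action):
    total_reward, done, info, obs = 0.0, False, {}, None
    for _ in range(FRAME_SKIP):                        # repeat the agent's action four times
        obs, reward, done, info = env.step(action)
        total_reward += reward
        if done:
            break
    return obs, float(np.clip(total_reward, -1.0, 1.0)), done, info   # clip to [-1, 1]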

D.1 Network Architecture

The network input is an 84x84x4 tensor, containing a concatenation of the last four observed frames. Each frame is rescaled to an 84x84 image and converted to gray-scale. We have used three convolutional layers followed by a fully-connected hidden layer of 512 units. The first convolutional layer convolves the input with 32 filters of size 8 (stride 4), the second has 64 filters of size 4 (stride 2), and the final one has 64 filters of size 3 (stride 1). The activation unit for all of the layers was the Rectified Linear Unit (ReLU). The fully connected layer outputs the different Q-values (one for each action). For minimization of the DQN loss function, RMSProp (with momentum parameter 0.95) was used.
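
A minimal PyTorch sketch of the network described above; the layer sizes follow the text, while the class name and the flattened feature size (64*7*7 for an 84x84 input) are derived assumptions, not the original code.

import torch
import torch.nn as nn

class AtariQNetwork(nn.Module):
    def __init__(self, num_actions):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(4, 32, kernel_size=8, stride=4),   # 32 filters of size 8, stride 4
            nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=4, stride=2),  # 64 filters of size 4, stride 2
            nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=3, stride=1),  # 64 filters of size 3, stride 1
            nn.ReLU(),
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 7 * 7, 512),                  # fully-connected hidden layer of 512 units
            nn.ReLU(),
            nn.Linear(512, num_actions),                 # one Q-value per action
        )

    def forward(self, x):
        return self.head(self.features(x))               # x: batch of 84x84x4 frame stacks (NCHW)

net = AtariQNetwork(num_actions=4)
optimizer = torch.optim.RMSprop(net.parameters(), lr=0.00025, momentum=0.95)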

D.2 Hyper-parameters

The discount factor was set to γ = 0.99, and the optimizer learning rate to α = 0.00025. The number of steps between target network updates was 10,000. Training is done over 120M frames. The agent is evaluated every 1M/2M steps (according to the figure). The size of the experience replay memory is 1M tuples. The ER buffer is sampled to update the network every 4 steps with mini-batches of size 32. The exploration policy used is an ε-greedy policy with ε decreasing linearly from 1 to 0.1 over 1M steps.
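
For reference, the hyperparameters listed in this appendix collected into a single Python dictionary; the key names are illustrative, the values are the ones stated above.

HYPERPARAMS = {
    "discount_factor": 0.99,
    "learning_rate": 0.00025,
    "target_update_steps": 10_000,       # steps between target network updates
    "total_training_frames": 120_000_000,
    "replay_memory_size": 1_000_000,     # tuples
    "update_frequency": 4,               # network updated every 4 steps
    "minibatch_size": 32,
    "initial_epsilon": 1.0,
    "final_epsilon": 0.1,
    "epsilon_decay_steps": 1_000_000,    # linear decay from 1 to 0.1
    "rmsprop_momentum": 0.95,            # from the network architecture section
}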