Recently, reinforcement learning (RL) using deep neural networks (Mnih et al., 2013; Van Hasselt et al., 2016; Mnih et al., 2016) has achieved massive success in control systems with complex input states and actions, and has been applied to various research fields (Silver et al., 2016; Abbeel et al., 2007). The RL problem is difficult to solve directly as a cost-minimization problem because the consequence of an action cannot be observed immediately after the input is given. Therefore, various methods such as Q-learning (Bellman, 1957) and policy gradient (Sutton et al., 1999) have been proposed to solve RL problems.
The recent neural-network (NN)-based RL methods (Mnih et al., 2013; Van Hasselt et al., 2016; Mnih et al., 2016) approximate the dynamic-programming-based (DP-based) optimal reinforcement learning (Jaakkola et al., 1994) through a neural network. However, this process has the problem that the Q-values for independent state-action pairs become correlated, which violates the independence assumption; thus, the process is no longer optimal (Werbos, 1992). This results in differences in performance and convergence time depending on the experiences used to train the network. Hence, the effective selection of experiences becomes crucial for successful training of a deep RL framework.
To gather the experiences, most deep-learning-based RL algorithms (Mnih et al., 2013; Van Hasselt et al., 2016; Mnih et al., 2016; Schulman et al., 2015; Riedmiller, 2005) have utilized experience memory in the learning process (Lin, 1992), which stores batches of state-action pairs (experiences) that frequently appear in the network for repetitive use in the future learning process. Also, in (Schaul et al., 2015), a prioritized experience memory that computes a priority for each experience, based on which a batch is created, is proposed. Eventually, the key to creating such a memory is to compute the priorities of the credible experiences so that learning can focus on the reliable experiences.
However, in the existing methods, just a few episodes carry meaningful information, and the usability of the gathered episodes is highly algorithm-specific. This can be largely inefficient compared to humans, who can select semantically meaningful (credible) events for learning proper behaviors, regardless of the training method. Figure 1 shows examples of this case. In the situation shown in the left image, choosing an action does not make much difference to the agent. However, in the case of the right image, the result (bomb or money) of choosing an action (left or right) can be significantly different for the agent, and it is natural to think that the latter case is much more important for deciding the movement of the agent.
Inspired by this observation, in this paper we propose a method of extracting and storing important episodes that is invariant to diverse RL algorithms. First, we propose importance and priority measures that can capture the semantically important episodes among the entire set of experiences. More specifically, the importance of a state is measured by the difference of the rewards resulting from different actions, and the priority of a state is defined as the product of the importance and the frequency of the state in episodes.
Then, we gather experiences during an arbitrary deep RL learning procedure and store them into a dictionary-type memory called ‘BOOK’ (Brief Organization of Obtained Knowledge). The process of generating a BOOK during the learning of a writer agent will be termed ‘writing the BOOK’ in what follows. The stored episodes are quantized with respect to the state, and the quantized states are used as keys in the BOOK memory. All the experiences in the BOOK are dynamically updated by upcoming experiences having the same key. To efficiently manage the episodes in the BOOK, linguistics-inspired terms such as the linguistic function and the linguistic state are introduced.
We have shown that the BOOK memory is particularly effective in two aspects. First, we can use the memory as good initialization data for diverse RL training algorithms, which enables fast convergence. Second, we can achieve comparable, and sometimes higher, performance by using only the experiences in the memory when training an RL network, compared to the case where the entire set of experiences is used. The experiences stored in the memory are usually a few hundred times fewer than the experiences required in usual random-batch-based RL training (Mnih et al., 2013; Van Hasselt et al., 2016), and hence greatly reduce the time and memory space required for training.
The contributions of the proposed method are as follows:
(1) The dictionary termed BOOK that stores the credible experiences, expressed by the tuple (cluster of states, action, and the corresponding Q-value), which is useful for diverse RL network training algorithms, is proposed.
(2) A method for measuring the credibility of each experience, via importance and priority terms that are valid for arbitrary RL training algorithms, is proposed.
(3) The training method for RL that utilizes the BOOK is proposed, which is inspired by DP and is applicable to diverse RL algorithms.
To show the efficiency of the proposed method, it is applied to major deep RL methods such as DQN (Mnih et al., 2013) and A3C (Mnih et al., 2016). The qualitative as well as quantitative performance of the proposed method is validated through experiments on public environments published by OpenAI (Brockman et al., 2016).
2 Background
The goal of RL is to estimate the sequential actions of an agent that maximize cumulative rewards in a given environment. In RL, a Markov decision process (MDP) is used to model the motion of an agent in the environment. It is defined by the state $s_t$, the action $a_t$ taken in the state $s_t$, and the corresponding reward $r_t$, at a time step $t$. (Here, $\mathbb{R}$ and $\mathbb{N}$ denote the sets of real and natural numbers, respectively.) We term the function that maps a given state to an action the policy, and the future state $s_{t+1}$ is determined by the pair of the current state and the action. Then, the overall cost for the entire sequence from the MDP is defined as the accumulated discounted reward, $R_t = \sum_{k=0}^{\infty} \gamma^k r_{t+k}$, with a discount factor $\gamma \in [0, 1)$.
Therefore, we can solve the RL problem by finding the optimal policy that maximizes the cost $R$. However, it is difficult to apply conventional optimization methods to find the optimal policy, because we must wait until the agent reaches the terminal state to see the cost resulting from the action of the agent at time $t$. To solve the problem in a recursive manner, we define the function $Q(s_t, a_t)$ denoting the expected accumulated reward for the pair $(s_t, a_t)$ under a policy. Then, for the optimal policy, we can induce the recurrent Bellman equation (Bellman, 1957):

$$Q(s_t, a_t) = \mathbb{E}\big[\, r_t + \gamma \max_{a'} Q(s_{t+1}, a') \,\big]. \quad (1)$$
It is proven that the Q-values for all time steps satisfying (1) can be calculated by applying dynamic programming (DP), and the resultant Q-values are optimal (Jaakkola et al., 1994). However, it is practically impossible to apply the DP method when the number of states is large or the state is continuous. Recently, methods such as Deep Q-learning (DQN) and Double Deep Q-learning (DDQN) solve the RL problem with complex states by using approximate DP that trains a Q-network. The Q-network is designed so that it calculates the Q-value for each action when a state is given. Then, the Q-network is trained by the temporal difference (TD) method (Watkins & Dayan, 1992), which reduces the gap between Q-values acquired from the Q-network and those from (1).
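As an illustration of this TD update in the tabular case (a minimal sketch: the states, actions, learning rate `alpha`, and discount factor `gamma` below are arbitrary illustrative choices, not values from the paper):

```python
from collections import defaultdict

def q_learning_update(Q, s, a, r, s_next, actions, alpha=0.1, gamma=0.99):
    """One temporal-difference (TD) update of a tabular Q-value:
    Q(s, a) <- Q(s, a) + alpha * (r + gamma * max_a' Q(s', a') - Q(s, a))."""
    best_next = max(Q[(s_next, a2)] for a2 in actions)
    td_target = r + gamma * best_next
    Q[(s, a)] += alpha * (td_target - Q[(s, a)])
    return Q[(s, a)]

Q = defaultdict(float)  # all Q-values start at 0
# one illustrative transition: state 0, action 1, reward 1.0, next state 2
q_learning_update(Q, 0, 1, 1.0, 2, actions=[0, 1])  # Q[(0, 1)] becomes 0.1
```

A deep Q-network replaces the table `Q` with a parameterized function of the state and minimizes the same TD gap by gradient descent.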
3 Related Work
Recently, deep learning methods (Mnih et al., 2013; Hasselt, 2010; Van Hasselt et al., 2016; Wang et al., 2015; Mnih et al., 2016; Schaul et al., 2015; Salimans et al., 2017) have improved performance by incorporating neural networks to the classical RL methods such as Q-learning (Watkins & Dayan, 1992), SARSA (Rummery & Niranjan, 1994), evolution learning (Salimans et al., 2017), and policy searching methods (Williams, 1987; Peters et al., 2003) which use TD (Sutton, 1988).
Mnih et al. (2013), Hasselt (2010) and Van Hasselt et al. (2016) replaced the value function of Q-learning with a neural network by using a TD method. Wang et al. (2015) proposed an algorithm that shows faster convergence than the method based on Q-learning by applying dueling network method (Harmon et al., 1995). Furthermore, Mnih et al. (2016) applied the asynchronous method to Q-learning, SARSA, and Advantage Actor-Critic models.
The convergence and performance of deep-learning-based methods are greatly affected by the input data used to train an approximated solution (Bertsekas & Tsitsiklis, 1995) of classical RL methods. Mnih et al. (2013) and Van Hasselt et al. (2016) addressed the problem by saving experiences as batches in the form of an experience replay memory (Lin, 1993). In addition, Prioritized Experience Replay (Schaul et al., 2015) achieved higher performance by applying replay memory to recent Q-learning-based algorithms, calculating priorities based on the importance of each experience. Pritzel et al. (2017) proposed Neural Episodic Control (NEC), which applies a tabular Q-learning method to train the Q-network by first semantically clustering the states and then updating the value entries of the clusters.
Also, imitation learning (Ross & Bagnell, 2014; Krishnamurthy et al., 2015; Chang et al., 2015), which solves problems through an expert’s experience, is one of the main research directions. This approach trains a new agent in a supervised manner using state-action pairs obtained from an expert agent and shows faster convergence and better performance by exploiting the expert’s experiences. However, it is costly to gather experiences from experts.
The goal of our work differs from the mentioned approaches as follows. (1) Compared to imitation learning, the proposed method extracts credible data from past data in an unsupervised manner. (2) More importantly, compared to prioritized experience replay (Schaul et al., 2015), our work proposes a method to generate a memory that stores core experiences useful for training diverse RL algorithms. (3) Also, compared to NEC (Pritzel et al., 2017), our work aims to use the BOOK memory for good initialization and fast convergence when training an RL network, regardless of the algorithm used. In contrast, the dictionary of NEC cannot provide all the information necessary for learning, such as states, so it is difficult to use it to train other RL networks.
4 Proposed Method
In this paper, our algorithm aims to find the core experiences among many experiences and write them into a BOOK, which can be used to share knowledge with other agents that possibly use different RL algorithms. Figure 2 describes the main flow of the proposed algorithm. First, from the RL network, the terminated episodes of a writer agent are extracted. Then, among the experiences from these episodes, the core and credible experiences are gathered and stored into the BOOK memory. In this process, using the semantic cluster of states as a key, the BOOK stores the value information of the experiences related to the semantic cluster. This ‘writing’ process is iterated until the end of training. Then, the final BOOK is ‘published’ with the top core experiences of the memory, which can be directly exploited in the ‘training’ of other reader RL agents.
In the following subsections, how to design the BOOK and how to use BOOK in the training of RL algorithms are described in more detail.
4.1 Designing the BOOK Structure
Given a state $s$ and an action $a$, we define the memory termed ‘BOOK’, which stores the credible experiences in a form appropriate for lookup-table-inspired RL. Assuming there exists semantic correlation among states, the input states can be clustered into a set of core clusters $\{c_i\}$. To reduce the semantic redundancy, the BOOK stores the information related to each cluster $c_i$, and the corresponding information is updated by the information of the states included in the cluster. This means that the memory space of the BOOK in the ‘writing’ process is proportional to the number of clusters. To map a state $s$ to its cluster, we define the mapping function $f(s) = \bar{s}$, where $\bar{s}$ denotes the representative value of the cluster containing $s$. We term the mapping function $f$, the representative $\bar{s}$, and the reward of $\bar{s}$ the linguistic function, linguistic state, and linguistic reward, respectively. (The term ‘linguistic’ is used to represent both characteristics of ‘abstraction’ and ‘shared rule’.) To cluster the states and define the linguistic function, arbitrary clustering methods or quantization can be applied. For simplicity, we adopt quantization in this paper.
Consequently, an element of a BOOK is defined by a linguistic state, an action, the corresponding Q-value, and a hit frequency, where the Q-value is that of the (linguistic state, action) pair and the hit frequency counts how often the linguistic state has been visited. The information regarding an input state $s$ is stored under the key $\bar{s} = f(s)$. The Q-value is iteratively updated by the Q-values obtained from the credible experiences.
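A minimal sketch of a quantization-based linguistic function and the resulting dictionary-type BOOK (the state range [-1, 1], the number of levels, and the content layout are illustrative assumptions, not the paper's exact design):

```python
def linguistic_key(state, levels=64, low=-1.0, high=1.0):
    """Quantize each dimension of a continuous state into `levels` bins;
    the resulting tuple of bin indices is the linguistic state (BOOK key)."""
    def bin_of(x):
        clipped = min(max(x, low), high)
        return int((clipped - low) / (high - low) * (levels - 1))
    return tuple(bin_of(x) for x in state)

# BOOK: linguistic state -> {action: {"q": Q-value, "freq": hit frequency}}
book = {}
key = linguistic_key([0.13, -0.52])
book[key] = {1: {"q": 0.0, "freq": 1}}
```

Because nearby real states map to the same key (e.g. `linguistic_key([0.131, -0.519])` equals `key` here), their experiences all update one shared entry, which is what removes the semantic redundancy.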
4.2 Iterative Update of the BOOK using Credible Experiences
To fill the BOOK memory with credible experiences, we first extract the credible experiences from all possible experiences. We do so based on the observation that a terminated episode (a sequence of state-action-reward tuples until termination) holds valid information to judge whether an agent’s action was good or bad. At least in the terminal state, we can evaluate whether the state-action pair performed well or poorly by simply observing the result of the final action, for example, success or failure. Once we obtain a credible experience from the terminal step, we can derive the related credible experiences using equation (2). More specifically, the BOOK is updated using the experiences from the terminated episode in backward order, i.e., from the terminal step back to the initial step. Consider an experience at time $t$ consisting of the current state, current action, and future state. Then, the Q-value stored in the corresponding content of the BOOK is updated by the rule in (2).
Here, the frequency term refers to the hit frequency of the content, and the network term denotes the estimated Q-value acquired from the RL network. In (4), the stored Q-value is initialized to the network estimate when the corresponding entry is not yet stored in the BOOK. We note that we update the Q-values in backward order because only the terminal experience is fully credible among the episodes acquired from the RL network. The update rule for the frequency term in the content is given in (5), where the frequency is capped at a predefined limit.
The frequency is reduced by 1 for every predefined number of episodes to keep it from continually increasing. To extract the episodes, an arbitrary deep RL algorithm based on a Q-network can be used. Algorithm 1 summarizes the procedure of writing a BOOK.
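The backward writing pass can be sketched as follows. This is a hedged approximation: the paper's exact blending rule (Eq. (2)) is not reproduced here, so we substitute a frequency-weighted running average of the discounted return propagated back from the credible terminal step.

```python
GAMMA = 0.99  # discount factor (illustrative value)

def write_episode(book, episode, freq_limit=100):
    """Write one terminated episode into the BOOK in backward order.
    `episode` is a list of (linguistic_state, action, reward) tuples,
    ordered in time and ending at the terminal step."""
    g = 0.0  # return; fully credible at the terminal state
    for key, action, reward in reversed(episode):
        g = reward + GAMMA * g
        entry = book.setdefault(key, {}).setdefault(action, {"q": 0.0, "freq": 0})
        n = entry["freq"]
        # frequency-weighted running average of the stored Q-value
        entry["q"] = (entry["q"] * n + g) / (n + 1)
        entry["freq"] = min(n + 1, freq_limit)  # cap the frequency term
```

Iterating backward means each stored Q-value is anchored to an observed outcome rather than a bootstrapped network estimate.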
4.3 Priority Based Contents Recoding
In many cases, the number of clusters becomes large, and it is clearly inefficient to store all the contents without considering the priority of each cluster. Hence, we maintain the efficiency of the BOOK by continuously removing contents with lower priority. In our method, the priority is defined as the product of the frequency term and the importance term, as given in (6).
The importance term reflects the maximum gap in reward for choosing an action in a given linguistic state, as given in (7).
Figure 1 illustrates the concept of the importance term. At the first crossroad (state) in the left image, the penalty for choosing different branches (actions) is not severe. However, at the second crossroad, it is very important to choose a proper action. Obviously, the situation in the right image is much more crucial, and the RL agent should learn this situation more carefully. Now, we can keep the size of the BOOK as desired by eliminating the contents with lower priority (left image in the figure).
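Priority-based pruning can then be sketched as below; aggregating the per-action frequencies into a per-state frequency is our assumption, not the paper's exact formula.

```python
def importance(contents):
    """Maximum gap between the recorded Q-values of the actions available
    in one linguistic state (large gap = the action choice matters)."""
    qs = [c["q"] for c in contents.values()]
    return max(qs) - min(qs)

def prune(book, capacity):
    """Keep only the top-`capacity` linguistic states ranked by
    priority = frequency * importance."""
    def priority(contents):
        freq = sum(c["freq"] for c in contents.values())
        return freq * importance(contents)
    ranked = sorted(book.items(), key=lambda kv: priority(kv[1]), reverse=True)
    return dict(ranked[:capacity])
```

Note that a state visited often but with identical Q-values across its actions gets priority 0 and is dropped first, matching the crossroad intuition above.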
4.4 Publishing a BOOK
We have seen how to write a BOOK in the previous subsections. The ‘writing’ stage in Fig. 2 limits the contents to be kept according to priority, but maintains a considerable capacity so as to compare information across various states. After the training of the writer agent, however, our method finally publishes the BOOK with only the top-priority states, selected by the same priority rule as in Section 4.3. We have shown through experiments that good performance can be obtained even if a relatively small BOOK is used for training. See Section 5 for a more detailed analysis.
4.5 Training Reader Network using the BOOK
As shown in Figure 2, we train the RL network using the BOOK structure that stores the experiences from the episodes. The BOOK records the information of the representative states that is useful for RL training. The information required by a general reinforcement learning algorithm can be obtained from our recorded data, either as (state, action, Q-value) tuples or as (state, action, state-value, advantage) tuples, where the state value is the value of the state and the advantage is that of the state-action pair.
To utilize the BOOK in learning the environment, the linguistic state has to be converted to a real state. The state can be decoded by implementing the inverse of the linguistic function, or one of the real states in the cluster can be stored in the BOOK as a sample when the BOOK is made.
In the first case of using the Q-value in training, the recorded information can be used as it is. In the second case, the state value is calculated as the weighted sum of the recorded Q-values, and the difference between the Q-value and the state value is used as the advantage, as given in (8) and (9).
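A hedged sketch of this decoding step: taking the frequency-weighted mean as the state value is our reading of the "weighted sum" in (8), and the advantage then follows as the gap in (9).

```python
def decode_value_and_advantage(contents):
    """From one linguistic state's contents ({action: {"q", "freq"}}),
    recover a state value V as the frequency-weighted mean of the recorded
    Q-values, and the advantages A(a) = Q(a) - V."""
    total = sum(c["freq"] for c in contents.values())
    v = sum(c["q"] * c["freq"] for c in contents.values()) / total
    adv = {a: c["q"] - v for a, c in contents.items()}
    return v, adv

contents = {0: {"q": 1.0, "freq": 1}, 1: {"q": 3.0, "freq": 3}}
v, adv = decode_value_and_advantage(contents)  # v = 2.5, adv = {0: -1.5, 1: 0.5}
```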
A BOOK stores only the measured (experienced) data, regardless of the RL model and without bootstrapping. The learning method of each model is used as is in the training with the BOOK. Since DQN (Mnih et al., 2013) requires the state, action, and Q-value for learning, it learns by decoding this information from the BOOK. On the other hand, A3C (Mnih et al., 2016) and Dueling DQN (Wang et al., 2015) require the state, action, state-value, and advantage, so these decode the corresponding information from the BOOK as shown in equations (8) and (9). Because a BOOK has all the information needed to train an RL agent, the agent is not required to interact with the environment while learning from the BOOK.
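To make this "reading" step concrete, the sketch below fits a reader's Q-function to decoded BOOK contents by plain supervised regression, with no environment interaction. The linear model, SGD settings, and the sample data are illustrative stand-ins for the actual DQN/A3C readers.

```python
def train_reader(entries, lr=0.1, epochs=200):
    """Fit a linear Q-model (one weight vector per action) to BOOK targets.
    `entries` is a list of (state_vector, action, target_q) decoded from
    the BOOK; training is plain SGD on the squared error."""
    n_features = len(entries[0][0])
    actions = {a for _, a, _ in entries}
    weights = {a: [0.0] * n_features for a in actions}
    for _ in range(epochs):
        for x, a, q in entries:
            pred = sum(w * xi for w, xi in zip(weights[a], x))
            err = pred - q
            weights[a] = [w - lr * err * xi for w, xi in zip(weights[a], x)]
    return weights

# two decoded BOOK entries with one-hot states (hypothetical data)
entries = [([1.0, 0.0], 0, 0.5), ([0.0, 1.0], 0, -0.5)]
weights = train_reader(entries)  # weights[0] approaches [0.5, -0.5]
```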
We note that our learning process shares the essential philosophy of classical DP in that it explores the state-action space based on the credible Q-values stored in the BOOK, without bootstrapping, and dynamically updates the values in the solution space using the stored information. As verified by the experiments, our method achieves better performance with far fewer iterations compared to the existing approximate-DP-based RL algorithms (Mnih et al., 2013; 2016).
5 Experiments
To show the effectiveness of the proposed concept of the BOOK, we tested our algorithm on 4 problems from 3 domains: cartpole (Barto et al., 1983), acrobot (Geramifard et al., 2015), Box2D (Catto, 2011) lunar lander, and Q*bert from Atari 2600 games. All the experiments were performed using OpenAI Gym (Brockman et al., 2016).
The purpose of the experiments is to answer the following questions: (1) Can we effectively represent valuable information for RL within the entire state-action space and find the important states? If so, can this information be effectively transferred to train other RL agents? (2) Can the information generated in this way be utilized to train a network with a different architecture? For example, can a BOOK generated by DQN be effectively used to train an A3C network?
5.1 Performance Analysis
In these experiments, we first trained a conventional A3C or DQN network. During the training of this conventional writer network, a BOOK is written. Then, we tested the effectiveness of the BOOK in two different scenarios. First, we trained the RL networks using only the contents of the BOOK, as described in Section 4.5. Second, we conducted additional training of the RL networks that were already trained with the BOOK in the first scenario.
Performance of BOOK-based learning: Table 1 shows the performance when training the conventional RL algorithms with only the contents of the BOOK. The BOOK is written during the training of a writer network with DQN or A3C and published at a size of 1,000. Then, reader networks were trained with this BOOK using several different algorithms: DQN, A3C, and Dueling DQN. This normally took much less time (less than 1 minute in all experiments) than training a conventional network from scratch without the BOOK. Then, we tested the performance over 100 random episodes without updating the network. The column ‘Score’ in the table shows the average score in this setting. ‘Transition’ indicates the number of transitions (timesteps) that each network has to go through to achieve the same score without a BOOK. ‘Ratio’ is the ratio of the BOOK size to the number of transitions, which confirms the sample efficiency of our method. For example, if Dueling DQN learns the BOOK of size 1,000 from A3C in Q*bert, it can reach a score of 388.1. If this network learns without the BOOK, it has to go through 1,080K transitions. The ratio is 0.09%, i.e., 1,000 / 1,080K.
As shown in Table 1, even if RL agents learn only the small-sized BOOK, they can obtain scores similar to those obtained when learning from dozens to thousands of times more transitions. In the Cartpole environment, in particular, all models obtained the highest score of 500, except when DQN learns the BOOK written by DQN.
However, the obtained scores differ considerably depending on the model that wrote the BOOK and the model that learned from it. In most environments and training models, learning the BOOK written by A3C is better than learning the BOOK written by DQN. Also, even if the same BOOK is used, the performance differs according to the training algorithm. DQN has lower performance than A3C or Dueling DQN in most environments. The major difference among the methods is that DQN uses only the Q-value, while A3C and Dueling DQN use the state-value and the advantage. Dueling DQN obtained good scores in most environments, but in the case of Acrobot, using the BOOK by DQN, its score was lower than that of all other models. This indicates that the information stored in the BOOK can be more or less useful depending on the reader RL method.
Figure 3: The network was trained using the conventional method after learning the BOOK. An epoch corresponds to one hundred thousand transitions (across all threads). Light colors represent the raw scores and dark colors the smoothed scores.
Performance of additional training after learning the BOOK: The graphs in Figure 3 show the performance when the BOOK is used for pre-training the conventional RL networks. After learning the BOOK, each network is trained by each network-specific method. For this study, we conducted the experiments with two different settings: (1) training the RL network using the BOOK generated by the same learning method, (2) training the RL network using the BOOK generated by the different learning method. For the first setting, we trained the network and BOOK using A3C (Mnih et al., 2016), while in the second, we generated the BOOK using DQN (Mnih et al., 2013) and trained the network with A3C (Mnih et al., 2016).
The results of these two settings are shown in the upper and lower rows of Figure 3, respectively. In the upper row, the ‘blue’ line shows the score achieved by training an A3C network from scratch, and the ‘yellow’ horizontal line shows the base score achieved by training another A3C network only with a BOOK published by a trained A3C network. The ‘red’ line shows the results of additional training after training the A3C network with the BOOK. In the lower row, the three lines have the same meaning, except that the BOOK is published by a different RL network, DQN.
As shown in Figure 3, the scores achieved by networks pre-trained with a BOOK were almost the same as the highest scores achieved by the conventional methods. Furthermore, additional training of the pre-trained networks was quite effective, since they achieved higher scores than the conventional methods as training progressed. In particular, the BOOK was very powerful when applied to a simple environment like Cartpole, where it achieved a much higher score than conventional training. Some experiments show that the maximum score of ‘BOOK + A3C’ is the same as that of ‘A3C’, but this is because those environments have a limited maximum score. Also, in almost every experiment, the red score starts below the yellow baseline when additional training begins. This may seem strange, but it is a natural phenomenon for the following reasons: (1) as additional training begins, exploration is performed; (2) the BOOK stores Q-values from actual rewards without bootstrapping, whereas DQN and A3C use bootstrapped Q-values, so the two do not match exactly.
5.2 Qualitative Analysis
To further investigate the characteristics of the proposed method, we conducted some experiments by changing the hyper-parameters.
Learning with different sizes of BOOKs: To investigate the effect of the BOOK size, we tested the performance of the proposed method using published BOOK sizes of 250, 500, 1,000, and 2,000. Table 2 shows the score obtained by our baseline network when trained using only a BOOK of the specified size. The table also shows the number of transitions (experiences) that a conventional A3C has to go through to achieve the same score. This result shows that a relatively small number of linguistic states, i.e., only the published BOOK, can achieve a score similar to that of the conventional network. As shown in the table, training an agent in a complex environment requires more information and therefore a larger BOOK.
Effects of different quantization levels: In this experiment, we examined the performance difference according to the resolution of the linguistic function. We varied the quantization level and published a BOOK of size 1,000 for each level. Figure 4(a) shows the distribution of scores according to the quantization level (quartile bars) and the average number of hits for each linguistic state included in the BOOK (red line).
From Fig. 4(a), we find that the number of hits for each linguistic state decreases exponentially as the quantization level increases. Also, when the quantization level is high, the importance term in equation (7) could not be defined reliably and the score decreased because the hit ratio becomes low. It can be seen that the highest and most stable scores are obtained at quantization levels of 64 and 128.
Comparison of the priority methods: To verify the usefulness of our priority method (6), we tested the algorithm with different designs of the priority: random selection, frequency only, the method from prioritized experience replay (Schaul et al., 2015), and the proposed priority method. The BOOK capacity was set to 10,000 for this test.
As shown in Figure 4(b), the algorithm applying the proposed priority term achieved clearly superior performance over the other settings. We note that the case using only the frequency term marked the lowest performance, even lower than the random case. This is because, when the priority is set by frequency alone, the learning process proceeds only with the experiences that appear frequently. Correspondingly, the information of the critical but rarely occurring experiences is not reflected sufficiently in the training, which leads to inferior performance.
The priority term of prioritized experience replay also produced poor results: better than using frequency only, but lower than random selection. That algorithm is intended to give priority to the states that are not yet well learned within the entire experience replay memory, and is not designed to extract a few core states.
Figure 4: All data were tested in Cartpole, and scores were measured over 100 random episodes. The green triangle and the red bar indicate the mean and the median scores, respectively. Blank circles are outliers.
5.3 Implementation Detail
We set the maximum capacity of a BOOK to while writing the BOOK. To maintain the size of the BOOK, only the top % experiences are preserved and the remaining experiences are deleted to save new experiences when the capacity exceeds . As a linguistic rule, each dimension of the input state was quantized into levels. We set the discount factor for rewards to . Immediate reward was clipped from to at Q*bert and generalized with for the other 3 environments (Cartpole, Acrobot and Lunar Lander). The frequency limit was set to and the decay period was set to .
Our method adopted the same network architecture as A3C for Atari Q*bert. For the other 3 environments, we replaced the convolution layers of A3C with one fully connected layer of 64 units followed by ReLU activation. Each environment was randomly initialized. For Q*bert, a maximum of 30 initial frames were skipped for random initialization, as in (Bobrenko, 2016). We used 8 threads to train the A3C network, and instead of shared RMSProp, the ADAM (Kingma & Ba, 2014) optimizer was used. All the learning rates used in our experiments were set to . To write a BOOK, we trained only million steps (experiences) for Cartpole and Acrobot and million steps for Lunar Lander and Q*bert. After publishing a BOOK, we pre-trained a randomly initialized network for iterations with batch size , using only the contents of the published BOOK. It took less than a minute to learn a BOOK with 1 thread on an Nvidia Titan X (Pascal) GPU and 4 CPU cores for Q*bert.
6 Conclusion
In this paper, we have proposed a memory structure called BOOK that enables sharing knowledge among different deep RL agents. Experiments on multiple environments show that our method can achieve a high score by learning a small number of core experiences collected by each RL method. It is also shown that the knowledge contained in the BOOK can be effectively shared between different RL algorithms, which implies that the new RL agent does not have to repeat the same trial and error in the learning process and that the knowledge gained during learning can be kept in the form of a record.
As future work, we intend to apply our method to environments with a continuous action space. Linguistic functions can also be defined in other ways, such as with neural networks, for better clustering and feature representation.
- Abbeel et al. (2007) Abbeel, Pieter, Coates, Adam, Quigley, Morgan, and Ng, Andrew Y. An application of reinforcement learning to aerobatic helicopter flight. Advances in neural information processing systems, 19:1, 2007.
- Barto et al. (1983) Barto, Andrew G, Sutton, Richard S, and Anderson, Charles W. Neuronlike adaptive elements that can solve difficult learning control problems. IEEE transactions on systems, man, and cybernetics, (5):834–846, 1983.
- Bellman (1957) Bellman, Richard. A markovian decision process. Technical report, DTIC Document, 1957.
- Bertsekas & Tsitsiklis (1995) Bertsekas, Dimitri P and Tsitsiklis, John N. Neuro-dynamic programming: an overview. In Decision and Control, 1995., Proceedings of the 34th IEEE Conference on, volume 1, pp. 560–564. IEEE, 1995.
- Bobrenko (2016) Bobrenko, Dmitry. Asynchronous deep reinforcement learning from pixels. 2016. URL http://busoniu.net/files/repository/readme-approxrl.html.
- Brockman et al. (2016) Brockman, Greg, Cheung, Vicki, Pettersson, Ludwig, Schneider, Jonas, Schulman, John, Tang, Jie, and Zaremba, Wojciech. Openai gym. arXiv preprint arXiv:1606.01540, 2016.
- Catto (2011) Catto, E. Box2D: A 2D physics engine for games. 2011.
- Chang et al. (2015) Chang, Kai-Wei, He, He, Daumé III, Hal, and Langford, John. Learning to search for dependencies. arXiv preprint arXiv:1503.05615, 2015.
- Geramifard et al. (2015) Geramifard, Alborz, Dann, Christoph, Klein, Robert H, Dabney, William, and How, Jonathan P. Rlpy: a value-function-based reinforcement learning framework for education and research. Journal of Machine Learning Research, 16:1573–1578, 2015.
- Harmon et al. (1995) Harmon, Mance E, Baird III, Leemon C, and Klopf, A Harry. Advantage updating applied to a differential game. In Advances in neural information processing systems, pp. 353–360, 1995.
- Hasselt (2010) Hasselt, Hado V. Double Q-learning. In Advances in Neural Information Processing Systems, pp. 2613–2621, 2010.
- Jaakkola et al. (1994) Jaakkola, Tommi, Jordan, Michael I, and Singh, Satinder P. On the convergence of stochastic iterative dynamic programming algorithms. Neural computation, 6(6):1185–1201, 1994.
- Kingma & Ba (2014) Kingma, Diederik and Ba, Jimmy. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
- Krishnamurthy et al. (2015) Krishnamurthy, Akshay, EDU, CMU, Daumé III, Hal, and EDU, UMD. Learning to search better than your teacher. arXiv preprint arXiv:1502.02206, 2015.
- Langley (2000) Langley, P. Crafting papers on machine learning. In Langley, Pat (ed.), Proceedings of the 17th International Conference on Machine Learning (ICML 2000), pp. 1207–1216, Stanford, CA, 2000. Morgan Kaufmann.
- Lin (1992) Lin, Long-Ji. Self-improving reactive agents based on reinforcement learning, planning and teaching. Machine learning, 8(3-4):293–321, 1992.
- Lin (1993) Lin, Long-Ji. Reinforcement learning for robots using neural networks. PhD thesis, Carnegie Mellon University, 1993.
- Mnih et al. (2013) Mnih, Volodymyr, Kavukcuoglu, Koray, Silver, David, Graves, Alex, Antonoglou, Ioannis, Wierstra, Daan, and Riedmiller, Martin. Playing Atari with deep reinforcement learning. arXiv preprint arXiv:1312.5602, 2013.
- Mnih et al. (2016) Mnih, Volodymyr, Badia, Adria Puigdomenech, Mirza, Mehdi, Graves, Alex, Lillicrap, Timothy, Harley, Tim, Silver, David, and Kavukcuoglu, Koray. Asynchronous methods for deep reinforcement learning. In International Conference on Machine Learning, pp. 1928–1937, 2016.
- Peters et al. (2003) Peters, Jan, Vijayakumar, Sethu, and Schaal, Stefan. Reinforcement learning for humanoid robotics. In Proceedings of the third IEEE-RAS international conference on humanoid robots, pp. 1–20, 2003.
- Pritzel et al. (2017) Pritzel, Alexander, Uria, Benigno, Srinivasan, Sriram, Badia, Adrià Puigdomènech, Vinyals, Oriol, Hassabis, Demis, Wierstra, Daan, and Blundell, Charles. Neural episodic control. In International Conference on Machine Learning, pp. 2827–2836, 2017.
- Riedmiller (2005) Riedmiller, Martin. Neural fitted Q iteration – first experiences with a data efficient neural reinforcement learning method. In European Conference on Machine Learning, pp. 317–328. Springer, 2005.
- Ross & Bagnell (2014) Ross, Stephane and Bagnell, J Andrew. Reinforcement and imitation learning via interactive no-regret learning. arXiv preprint arXiv:1406.5979, 2014.
- Rummery & Niranjan (1994) Rummery, Gavin A and Niranjan, Mahesan. On-line Q-learning using connectionist systems. University of Cambridge, Department of Engineering, 1994.
- Salimans et al. (2017) Salimans, Tim, Ho, Jonathan, Chen, Xi, and Sutskever, Ilya. Evolution strategies as a scalable alternative to reinforcement learning. arXiv preprint arXiv:1703.03864, 2017.
- Schaul et al. (2015) Schaul, Tom, Quan, John, Antonoglou, Ioannis, and Silver, David. Prioritized experience replay. arXiv preprint arXiv:1511.05952, 2015.
- Schulman et al. (2015) Schulman, John, Levine, Sergey, Abbeel, Pieter, Jordan, Michael I, and Moritz, Philipp. Trust region policy optimization. In ICML, pp. 1889–1897, 2015.
- Silver et al. (2016) Silver, David, Huang, Aja, Maddison, Chris J, Guez, Arthur, Sifre, Laurent, Van Den Driessche, George, Schrittwieser, Julian, Antonoglou, Ioannis, Panneershelvam, Veda, Lanctot, Marc, et al. Mastering the game of Go with deep neural networks and tree search. Nature, 529(7587):484–489, 2016.
- Sutton (1988) Sutton, Richard S. Learning to predict by the methods of temporal differences. Machine learning, 3(1):9–44, 1988.
- Sutton et al. (1999) Sutton, Richard S, McAllester, David A, Singh, Satinder P, Mansour, Yishay, et al. Policy gradient methods for reinforcement learning with function approximation. In NIPS, volume 99, pp. 1057–1063, 1999.
- Van Hasselt et al. (2016) Van Hasselt, Hado, Guez, Arthur, and Silver, David. Deep reinforcement learning with double Q-learning. In AAAI, pp. 2094–2100, 2016.
- Wang et al. (2015) Wang, Ziyu, Schaul, Tom, Hessel, Matteo, van Hasselt, Hado, Lanctot, Marc, and de Freitas, Nando. Dueling network architectures for deep reinforcement learning. arXiv preprint arXiv:1511.06581, 2015.
- Watkins & Dayan (1992) Watkins, Christopher JCH and Dayan, Peter. Q-learning. Machine learning, 8(3-4):279–292, 1992.
- Werbos (1992) Werbos, Paul J. Approximate dynamic programming for real-time control and neural modeling. Handbook of intelligent control, 1992.
- Williams (1987) Williams, Ronald J. A class of gradient-estimating algorithms for reinforcement learning in neural networks. In Proceedings of the IEEE First International Conference on Neural Networks, volume 2, pp. 601–608, 1987.