Table-based reinforcement learning (RL) techniques are easy to use but suffer from the classic curse of dimensionality, which means they cannot solve problems with high-dimensional state spaces. For this type of problem we need RL methods that use function approximation, a technique commonly used in supervised learning (SL) Sutton and Barto (2018). The neural network is a popular function approximator in SL, and many scholars have tried to use it in RL. The first success story of combining RL with a neural network is that of TD-Gammon, a computer program that plays backgammon Tesauro (1995). Despite this initial success, however, subsequent applications of neural network-based RL, or neural RL, to large-scale RL problems failed, which led to the abandonment of neural RL. Some consider TD-Gammon a one-time exception and believe its success was due not to the RL algorithm used but to the setup of co-evolutionary self-play biased by the dynamics of backgammon Pollack and Blair (1997).
The inability of neural networks to act as good function approximators in many RL studies is mainly attributed to the fact that the samples generated during RL learning are correlated rather than independently and identically distributed. Additionally, a neural network often forgets what it has learned, forcing it to re-learn when it sees a sample that has previously been presented to it, a phenomenon known as the relearning problem Lin (1993).
There are two ways to overcome these shortcomings: by using experience replay Lin (1993) or by employing multiple agents working asynchronously to update a centralised network Mnih et al. (2016). Both have proven effective but have weaknesses as well. Both too small and too large a replay buffer can hurt the learning process Zhang and Sutton (2017). Asynchronous parallel agents are sample-inefficient and require a complex application architecture Wang et al. (2016).
In this paper we propose discrete-to-deep supervised policy learning (D2D-SPL), a variant of supervised policy learning, to successfully train a neural network as a function approximator to solve high-dimensional RL problems. D2D-SPL is simple and based on the standard actor-critic with eligibility traces algorithm Barto et al. (1983). Our tests show that our method learns much faster than techniques that are based on experience replay and parallel agents.
2. Related Work
The first successful large-scale neural RL application is TD-Gammon Tesauro (1995), a computer program that plays backgammon. TD-Gammon learned a policy by playing against itself. To get to the level where it could beat another computer-based backgammon player, it self-played 300,000 games. To become even more skilful, to the point where it could defeat a grandmaster, it self-played 1.5 million games. After this initial success, however, researchers attempting to exploit neural networks to solve large-scale RL problems were faced with repeated failures, as reported in Boyan and Moore (1995). This resulted in a move away from neural RL, before several techniques were proposed that led to future successes. Those techniques included methods based on experience replay Lin (1993), such as Neural Fitted Q Iteration (NFQ) Riedmiller (2005) and DQN Mnih et al. (2013), and methods based on asynchronous parallel agents such as asynchronous advantage actor-critic (A3C) Mnih et al. (2016) and actor-critic with experience replay (ACER) Wang et al. (2016).
Experience replay is an interesting invention because it inspired the successful large-scale neural RL applications that came many years later. An experience is a tuple (s, a, r, s′), where s is the state the agent is in when taking action a, resulting in reward r and a new state s′. Experience replay is a method whereby a collection of experiences is presented repeatedly to the learning algorithm as if the learning agent experienced it again and again. Two benefits of experience replay are that it speeds up credit assignment and that it gives the agent the opportunity to refresh what it has learned Lin (1993).
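The replay mechanism described above can be sketched as a fixed-size store of (s, a, r, s′) tuples. This is a minimal illustration; the class name and capacity are our own, not from any of the cited systems:

```python
import random
from collections import deque

class ReplayBuffer:
    """Fixed-size store of (state, action, reward, next_state) experiences."""
    def __init__(self, capacity=10000):
        self.buffer = deque(maxlen=capacity)  # oldest experiences are evicted first

    def add(self, state, action, reward, next_state):
        self.buffer.append((state, action, reward, next_state))

    def sample(self, batch_size):
        # Uniform random sampling breaks the temporal correlation
        # between consecutive experiences.
        return random.sample(list(self.buffer), batch_size)

    def __len__(self):
        return len(self.buffer)
```

Replaying random batches from such a buffer is what allows the learner to see each experience "again and again" while decorrelating consecutive samples.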
NFQ is based on experience replay and uses a multi-layer perceptron (MLP). The principal idea of NFQ is to collect all experience tuples within an episode and present the collection at the end of the episode to train the MLP, rather than training after each timestep. Targets are generated using a cost function. This is repeated for a preset number of episodes, and the experience collection at each episode includes all the collections from prior episodes Riedmiller (2005).
One notable success of experience replay is its use in deep Q-network (DQN) for playing ATARI games, where incoming samples are stored in a buffer and a subset of these samples are randomly selected from the buffer at every timestep to train the neural network Mnih et al. (2013).
The same team extends DQN by introducing a second network, called the target network, to better de-correlate samples. At the beginning of a learning episode, the primary network is cloned and the clone is used to produce targets for the primary network. At fixed intervals, the weights of the target network are updated using those of the primary network. This technique is known as DQN with a target network Mnih et al. (2015). Its success can be attributed to two innovations: 1. off-policy training with samples from a replay buffer to minimise correlations between samples; 2. the use of a target Q-network to give consistent targets during temporal-difference backups.
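The periodic hard update of the target network can be sketched in a framework-agnostic way. The parameter dictionaries and the interval value below are hypothetical stand-ins, not the authors' code:

```python
def sync_target(primary_params, target_params):
    """Copy the primary network's weights into the target network (hard update)."""
    for name, value in primary_params.items():
        target_params[name] = value

# Targets are computed from the frozen copy, which is refreshed
# only every `sync_interval` timesteps.
sync_interval = 1000

def maybe_sync(step, primary_params, target_params):
    if step % sync_interval == 0:
        sync_target(primary_params, target_params)
```

Between synchronisations the target network stays fixed, which is what gives the temporal-difference backups consistent targets.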
Other derivatives of DQN include a parallel implementation called Gorila Nair et al. (2015), double DQN van Hasselt et al. (2016), prioritised experience replay Schaul et al. (2015), dueling D-DQN Wang et al. (2015), distributed prioritised experience replay Horgan et al. (2018) and deep deterministic policy gradient (DDPG) Lillicrap et al. (2015), which covers continuous action spaces. Rainbow, another variant of DQN, combines features in other DQN variants and proves able to improve performance in some ATARI games Hessel et al. (2018). In addition, ACER Wang et al. (2016) improves A3C’s sample efficiency by using multiple agents and experience replay.
The main difference between our work and the DQN family of techniques is that our method does not use experience replay. Our method works with discrete action spaces, unlike DDPG, which is suitable for continuous action spaces. Unlike NFQ, our method only trains the neural network once, with data obtained when the actor-critic policy has somewhat stabilised.
Using data generated from an RL policy to train a neural network has also been explored in studies on guided policy search Levine et al. (2015) and supervised policy learning Chebotar et al. (2016). Both studies differ from ours because they used model-based RL, whereas our RL algorithm, actor-critic, is model-free. Additionally, they used data from multiple policies to train a classifier, whereas our technique only requires one policy. Furthermore, in Chebotar et al. (2016) an optimisation step is needed before data can be fed to the neural network. Our technique does not require optimisation between the RL part and the classifier.
A technique called supervised actor-critic Rosenstein and Barto (2002) and its variant Wang et al. (2018) also combine RL and SL. In this architecture, the actor receives a signal from the critic as well as from a neural network. By contrast, D2D-SPL is a two-step process that first uses standard actor-critic and then selects the data generated from the first step to train a classifier.
3. Discrete-to-Deep Supervised Policy Learning (D2D-SPL)
Combining RL and SL, D2D-SPL is suitable for solving continuous-state RL problems with discrete actions. It uses off-policy data from an actor-critic policy to train a neural network that can thereafter be used as a controller for an RL problem. D2D-SPL works in two phases, an RL phase and an SL phase. First, it discretises the continuous state space and learns a policy using actor-critic with eligibility traces. This policy, being based on a coarse discretisation, can be learned more quickly than a policy based on the full, continuous state space. Second, it uses data generated during reinforcement learning to train a classifier. Not all samples are used. The method selects from each discrete state an input value and the action with the highest preference as an input/target pair. The classifier learns when all input/target pairs are presented to it at the same time, thus eliminating the need for online learning.
To use D2D-SPL, we start by discretising the state space into discrete states. The number of discrete states varies depending on the complexity of the problem. For example, our Cartpole solution can achieve its targets with 162 discrete states. By contrast, the aircraft manoeuvring simulation problem that we use to test our algorithm requires 14,000 states to learn a good policy. As will be seen shortly, the number of discrete states is also the maximum number of samples for the second-stage supervised learning. The number of continuous state variables is the same as the number of input nodes to the classifier.
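A discretisation of this kind can be sketched as a generic binning function. The bin boundaries below are illustrative values for Cartpole's four state variables, chosen only so that the bin counts multiply to the 162 states mentioned above; they are not the exact boundaries from Barto et al. (1983):

```python
import bisect

def discretise(state, boundaries):
    """Map a continuous state (a tuple of variables) to a single discrete
    state index, given per-variable bin boundaries."""
    index = 0
    for value, bounds in zip(state, boundaries):
        bin_id = bisect.bisect_left(bounds, value)  # 0 .. len(bounds)
        index = index * (len(bounds) + 1) + bin_id
    return index

# Illustrative boundaries: 3 x 3 x 6 x 3 = 162 discrete states.
CARTPOLE_BOUNDS = [
    [-0.8, 0.8],                      # cart position     -> 3 bins
    [-0.5, 0.5],                      # cart velocity     -> 3 bins
    [-0.10, -0.02, 0.0, 0.02, 0.10],  # pole angle (rad)  -> 6 bins
    [-0.87, 0.87],                    # pole angular vel. -> 3 bins
]
```

Every continuous observation maps to one of the 162 indices, which then serves as the row of the tabular actor-critic policy.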
Once discrete states are identified, we start learning by using actor-critic with eligibility traces. In every episode we group state variables by discrete state and store the total reward of the episode. Once learning is finished, we select the top 5% of episodes with the highest total rewards and average the values of each state variable in each discrete state. We end up with n samples as inputs to the neural network, where n <= the number of discrete states. n can be lower than the number of discrete states because we filter out discrete states that were never visited during reinforcement learning.
Algorithm 1 shows the reinforcement phase of our method. It is basically the actor-critic with eligibility traces algorithm Barto et al. (1983) with a buffer for storing tuples of state variable values and the number of times a discrete state has been visited. We then use the buffer and the resulting policy as inputs to the supervised_phase function in Algorithm 2, which shows how samples are selected and prepared for training the classifier.
Figure 1 shows how data is consolidated every episode. For simplicity it shows a case with only four discrete states (each represented by a box in the top diagram) and two state variables per state. At every timestep, the values of the state variables of the visited state are recorded. Figure 1 shows the data collected after eight timesteps, in which Discrete State 1 has been visited three times with state variable tuples (1,1), (2,2) and (3,1), Discrete State 2 visited once with state variable tuple (7,7), and so on. Since we are only concerned with the average values of the state variables in each discrete state, we can save memory by keeping only the running totals of the state variable values in each discrete state and a count of how many times each state has been visited. Along with the total reward for the episode, these are inserted as a tuple into the buffer. At the end of Algorithm 1, the number of tuples in the buffer equals the number of episodes run.
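The per-episode consolidation illustrated in Figure 1 can be sketched as follows. The function and variable names are our own; the authors' buffer additionally stores the episode's total reward alongside these totals and counts:

```python
from collections import defaultdict

def consolidate_episode(transitions, num_state_vars=2):
    """Consolidate one episode's visits into per-discrete-state totals.

    `transitions` is a list of (discrete_state, state_variables) pairs,
    one pair recorded at each timestep.
    """
    totals = defaultdict(lambda: [0.0] * num_state_vars)  # sums per state
    counts = defaultdict(int)                             # visits per state
    for discrete_state, variables in transitions:
        for i, v in enumerate(variables):
            totals[discrete_state][i] += v
        counts[discrete_state] += 1
    return dict(totals), dict(counts)
```

Applied to the example in Figure 1, Discrete State 1's three visits (1,1), (2,2) and (3,1) collapse into the totals (6,4) with a count of 3, and so on for the other states.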
The buffer and the actor-critic policy are passed to the supervised_phase function in Algorithm 2. The buffer contains the aforementioned tuples and the policy contains numerical preferences (one for each action) for each discrete state. The objective of this function is to produce an input/target pair for every discrete state, excluding states that were never visited during training. The function starts by selecting the top 5% of tuples with the highest total rewards in the buffer and consolidates the selected tuples into at most one tuple per discrete state. Data is consolidated by summing the values of each state variable in a discrete state and dividing them by the number of times the state was visited. For each discrete state, we look up the policy and select as the target the action with the highest numerical preference. We remove from the target set all states that are not present in the input set and pass the training set to the classifier.
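The supervised phase can be sketched as below. The name supervised_phase comes from Algorithm 2; the exact shapes of the buffer tuples and the policy are our assumptions (per-episode totals/counts dictionaries and a mapping from discrete state to per-action preferences):

```python
def supervised_phase(buffer, policy, top_fraction=0.05):
    """Build an input/target training set from the best episodes.

    `buffer` - list of (totals, counts, total_reward) episode tuples
    `policy` - maps discrete state -> list of per-action preferences
    """
    # 1. Keep only the top 5% of episodes by total reward.
    n_keep = max(1, int(len(buffer) * top_fraction))
    best = sorted(buffer, key=lambda t: t[2], reverse=True)[:n_keep]

    # 2. Merge the selected episodes' totals and visit counts.
    merged_totals, merged_counts = {}, {}
    for totals, counts, _ in best:
        for s, sums in totals.items():
            acc = merged_totals.setdefault(s, [0.0] * len(sums))
            for i, v in enumerate(sums):
                acc[i] += v
            merged_counts[s] = merged_counts.get(s, 0) + counts[s]

    # 3. Average each visited state's variables; the target is the
    #    action with the highest preference. Unvisited states are
    #    absent from merged_totals and so drop out automatically.
    inputs, targets = [], []
    for s, sums in merged_totals.items():
        inputs.append([v / merged_counts[s] for v in sums])
        prefs = policy[s]
        targets.append(max(range(len(prefs)), key=prefs.__getitem__))
    return inputs, targets
```

The resulting input/target pairs are presented to the classifier in a single batch, which is what removes the need for online learning.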
4. Experiments and Discussion
We test our method with two RL environments, the classic Cartpole and an aircraft manoeuvring simulator. We choose Cartpole because it is a well-known problem and the aircraft manoeuvring simulator because it represents a large-scale problem that is not easy to solve using table-based methods.
In each of the two problems, the agent learns an actor-critic policy for N episodes. We call the policy it learns Discrete N and use it as the base for several other solutions. In the first solution we clone the policy and use the clone as the base policy to run another N episodes of actor-critic learning, and we call the resulting policy Discrete 2N because it is the result of learning for 2N episodes. In the second solution, which we call D2D-SPL, we use the data generated from Discrete N to train a classifier.
Separately, for comparison, we use DQN Mnih et al. (2013), Double DQN (DDQN) van Hasselt et al. (2016) and A3C Mnih et al. (2016) to solve the same problems. A3C, despite being four years old, is currently a state-of-the-art method. We use four parallel agents in all our A3C tests.
For Cartpole, we compare eight systems: Discrete 1,000, Discrete 2,000, D2D-SPL, DQN 1,000, DQN 2,000, DDQN 1,000, DDQN 2,000 and A3C 4,000. For Aircraft Manoeuvring we use Discrete 20,000, Discrete 40,000, D2D-SPL, DQN 20,000, DQN 40,000, DDQN 20,000, DDQN 40,000, A3C 20,000 and A3C 200,000. All experiments are run on an Intel i9-7900X (10 cores, 20 threads) machine with two Nvidia GTX 1080 cards. For all the methods involving neural networks, we use a relatively simple architecture containing a single hidden layer with twelve nodes in Cartpole and fifty nodes in Aircraft Manoeuvring.
Cartpole is a pole-balancing control problem posed in Michie and Chambers (1968) for which Barto et al. proposed a solution using actor-critic reinforcement learning Barto et al. (1983). A solution aims to control the free-moving cart to which the pole is attached by exerting a force to the left or to the right of the cart to keep the pole standing. We use the OpenAI Gym Cartpole environment Brockman et al. (2016), which is based on Barto et al.'s solution but differs from the original system in that the four state variables in Gym are randomised at the beginning of every episode, whereas in the original system the variables are always set to zero.
OpenAI Gym’s Cartpole comes in two flavours, one that considers the problem solved if the pole remains standing for 200 consecutive timesteps and another whose target is 500 timesteps. We make the problem more complex by raising the target to 100,000.
Our code is published on https://github.com/budi-kurniawan/d2d-spl.
The agent gets a +1 reward for each timestep, including when the agent lands on a terminal state. Each episode is terminated when one of two things occurs: the pole falls (failure) or the target is achieved (success). Therefore, the time spent in each episode is proportional to the total reward. The more successful the learning in an episode, the longer the episode takes.
For each method we run ten trials, each of which involves training the agent once and then testing the final policy or model on 100 runs from different starting positions. We use the same random seeds for all methods, making sure the initial values for all methods are the same in each trial. The average rewards for all runs are shown in Table 1. Table 2 shows how many tests in each trial achieve the target reward of 100,000.
Tables 1 and 2 show that D2D-SPL outperforms the Discrete 1,000 agent on which it is based, demonstrating that the neural network's ability to generalise from the discrete policy is beneficial. D2D-SPL is also more effective than all the other methods except DDQN. A3C, being sample-inefficient and trained for only 4,000 episodes, performs much worse than the other solutions.
4.2. Aircraft Manoeuvring Simulator
4.2.1. The Environment
The second environment we test our method with is an air combat simulator called Ace Zero, which was developed by the Defence Science and Technology Group, Australia, and used in Ramirez et al. (2018), Masek et al. (2018) and Lam et al. (2019). We set the simulator for one-on-one fights in two-dimensional space, representing a continuous-space sequential-decision problem much larger than Cartpole. In this domain we aim to develop an agent that can learn to execute aerial manoeuvres for autonomous aircraft. The goal is for our pilot agent to learn to pursue another autonomous aircraft that is itself manoeuvring. The basic manoeuvre that we are exploring is known as a "pure pursuit" manoeuvre and can be considered a basic building block for more complex aerial manoeuvres such as formation flying or even within-visual-range air-to-air combat. We would like our pilot agent not only to learn how to manoeuvre, but also to adapt to the manoeuvres of a dynamic opponent.
For our agent, the environment offers an action space with five discrete actions: do nothing, turn left by 10°, turn right by 10°, increase speed by 10% and decrease speed by 10%. The opposing agent, however, is allowed to perform continuous changes of speed and direction within its physical limits. Our desired goal state is for our learning pilot to manoeuvre their aircraft into a specific relative geometry with respect to the aircraft being pursued. To describe this geometry, we use standard measurements such as the range, the aspect angle (AA) and the antenna train angle (ATA). Figure 2 shows this geometry. By convention, blue is used to depict the aircraft controlled by the subject agent and red to represent the opposing aircraft. The angles are shown from the point of view of the blue aircraft.
The aircraft centres of mass are connected by the line of sight (LOS) line, which is also used to calculate the range between the two aircraft. The aspect angle (AA) is the angle between the LOS line and the tail of the red aircraft. It is an angular measure of how far the pursuer is off the pursued aircraft's tail. The antenna train angle (ATA) is the angle between the nose of the blue aircraft and the LOS line. It is an angular measure of how far the pursued aircraft is off the pursuer's nose. The values of the AA and ATA lie between 0° and 180°. By convention, angles to the right side of the aircraft are considered positive and angles to the left negative McGrew et al. (2010).
The range, AA, ATA and the speed difference between the two aircraft are the four state variables making up each state.
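The range, AA and ATA can be computed from the two aircraft's positions and headings. The sketch below is our own illustration in 2D; it returns unsigned angles in 0..180° and omits the left/right sign convention described above:

```python
import math

def relative_geometry(blue_pos, blue_hdg, red_pos, red_hdg):
    """Range, aspect angle (AA) and antenna train angle (ATA) in 2D.

    Positions are (x, y); headings are in degrees from the X axis.
    Angles are returned in degrees in 0..180 (sign omitted).
    """
    dx, dy = red_pos[0] - blue_pos[0], red_pos[1] - blue_pos[1]
    rng = math.hypot(dx, dy)                 # length of the LOS line
    los = math.degrees(math.atan2(dy, dx))   # direction of the LOS line

    def off(angle_a, angle_b):
        # Smallest unsigned difference between two bearings.
        d = abs(angle_a - angle_b) % 360.0
        return d if d <= 180.0 else 360.0 - d

    ata = off(blue_hdg, los)  # how far red is off blue's nose
    aa = off(red_hdg, los)    # how far blue is off red's tail
    return rng, aa, ata
```

With blue directly behind red and both flying along the X axis, AA and ATA are both 0; with the aircraft head-on, AA is 180°.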
The reward for each action taken is proportional to how favourable our agent's position is relative to the opposing agent, which can be measured quantitatively using the McGrew score McGrew et al. (2010). The score incorporates the range, AA and ATA. The McGrew score consists of two components, the McGrew angular score (S_A) and the McGrew range score (S_R).
The McGrew angular score is defined as follows:

$S_A = \frac{1}{2}\left[\left(1 - \frac{AA}{180°}\right) + \left(1 - \frac{ATA}{180°}\right)\right]$
Here, AA and ATA are in degrees and are described in Figure 2. The maximum possible value for S_A is 1, which is achieved when AA = ATA = 0.
The McGrew range score is defined as follows:

$S_R = \exp\left(-\frac{|R - R_d|}{180k}\right)$
where R is the current range between the two aircraft and R_d the desired range. R_d, the midpoint between the minimum gun range (500 feet = 153 m) and the maximum gun range (3,000 feet = 914 m), is about 380 m Shaw (1985). The value of k, the hyper-parameter scaling factor, determines the width of the function peak around R_d. The larger the value of k, the bigger the spread. A small value of k dictates that a high McGrew range score can only be achieved if the two aircraft are very close to the desired range. By default k = 5.
The McGrew score ranges from 0.0 to 1.0 (inclusive). It approaches 1.0 when our agent is following the opponent within the desired range. By contrast, when it is being pursued by the opponent, the McGrew score is close to 0.
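The score can be sketched as a small function. This assumes, following McGrew et al. (2010), that the two components are combined multiplicatively and that the range score decays exponentially with the distance from the desired range; the length scale in the exponential and the default values are assumptions of this sketch:

```python
import math

def mcgrew_score(aa, ata, rng, desired_range=380.0, k=5.0):
    """Combined McGrew score in [0, 1].

    aa, ata in degrees (0..180); rng and desired_range in metres.
    Assumes the product form S = S_A * S_R from McGrew et al. (2010).
    """
    s_a = 0.5 * ((1.0 - aa / 180.0) + (1.0 - ata / 180.0))   # angular component
    s_r = math.exp(-abs(rng - desired_range) / (180.0 * k))  # range component
    return s_a * s_r
```

The score is 1 only when the agent sits exactly on the opponent's tail (AA = ATA = 0) at the desired range, and falls towards 0 as the geometry reverses.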
For all agents, we offset the McGrew score by -0.5 to make them learn faster, as described in our previous study Kurniawan et al. (2019).
4.2.2. Test Results
For the discretised actor-critic solutions, we discretise the state space into 14,000 discrete states. The range is split into fourteen regions, the AA into ten, the ATA into ten, and the speed difference into ten, resulting in 14 × 10 × 10 × 10 = 14,000 states. For training, in all solutions, we start the opponent (the Red aircraft) from a position (x, y, ψ), where x and y are Cartesian coordinates and ψ is the flying direction (heading) in degrees relative to the X axis. Our aircraft (Blue) always starts from the origin with heading 0° and an initial speed of 125 m/s, which means it starts by flying along the X axis. Red always starts from (1500, 300, 50°), perturbed by a small positive random number, and flies in a straight line at a constant speed of 125 m/s. The initial positions, headings and speeds of both aircraft are the same for all episodes. All episodes are terminated after 700 timesteps.
We start by running the agent for 20,000 episodes, resulting in Discrete 20,000. This base policy is then cloned for the same agent to continue learning for another 20,000 episodes, resulting in Discrete 40,000. The base is also used for D2D-SPL. Separately, we use DQN, DDQN and A3C to produce DQN 20,000, DQN 40,000, DDQN 20,000, DDQN 40,000, A3C 20,000 and A3C 200,000. The choice of 200,000 episodes for the second A3C solution is due to the fact that A3C is sample-inefficient and needs many more episodes to achieve comparable scores.
The policies from Discrete 20,000 and Discrete 40,000 as well as models from D2D-SPL, DQN, DDQN and A3C solutions are then used to test agents against an opponent that flies along paths that were not seen during training. Figures 3 to 6 show four test scenarios applied against the D2D-SPL models. The red paths represent the opponent’s trajectories and the blue ones our agent’s.
Table 3 shows the learning times for all the solutions. All values are relative to the average of Discrete 20,000. Discrete 40,000 takes about twice the time taken by Discrete 20,000, DQN 40,000 runs in about twice the time taken by DQN 20,000, and DDQN 40,000 completes in twice the time taken by DDQN 20,000. A3C 200,000 runs ten times longer than A3C 20,000. Because A3C uses four concurrent agents, its learning is much faster than that of its DQN and DDQN rivals. Among all the solutions being compared, D2D-SPL learns the fastest, as it only takes 0.01% longer than Discrete 20,000.
Figure 7 shows the average reward per episode for all the solutions. The graphs have been smoothed by replacing every 200 consecutive rewards with their mean. It shows that using data from Discrete 20,000 to train the D2D-SPL network results in an average score of 0.95 when the training set is re-applied to the resulting network. This score is much higher than the scores of the other methods.
It can also be seen that DQN suffers from overestimation, as reported in van Hasselt et al. (2016), which adversely affects the policy. Overestimation did not occur in DDQN.
Tables 4 to 7 show the results for the four test scenarios. In the discrete cases, Discrete 40,000 is generally better than Discrete 20,000 because the learning curve of actor-critic is more stable and not as noisy as those of DQN or A3C. This means that longer learning tends to produce a better policy. In all cases, D2D-SPL performs better than its base Discrete 20,000 and even Discrete 40,000. It is also better than DQN 20,000 in three of four cases and better than DQN 40,000 in all cases. Because of the possible overestimation in DQN, there is no guarantee that longer learning (in this case, DQN 40,000) will produce a better model than shorter learning (in this case, DQN 20,000).
5. Conclusions and Future Work
We build and test solutions for Cartpole and an aircraft manoeuvring simulator. The differences among the solutions lie in the learning time and the performance of the generated policy or model. It is shown that actor-critic works for both problems and that longer learning with more episodes tends to produce a better policy, even though, as shown in Figure 7, the result is unstable in the sense that the policy after n episodes is not always better than the policy before that. This means that when we decide to stop learning after episode n, we need to record the policies from before episode n, test them against some pre-set criteria, and choose the best policy. For instance, if we decide to run a learning session for 20,000 episodes, we might want to compare all policies resulting from the last 500 episodes or so.
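The checkpoint-selection idea above can be sketched as follows. This is an illustration only; `evaluate` is a stand-in for whatever pre-set criteria are used to score a policy:

```python
def best_recent_policy(policies, evaluate, window=500):
    """Pick the best of the last `window` recorded policies.

    `policies` - policies saved during the final episodes, in order
    `evaluate` - returns a score for a policy against pre-set criteria
    """
    candidates = policies[-window:]
    return max(candidates, key=evaluate)
```

Keeping only the last few hundred checkpoints bounds the storage cost while still guarding against the instability of the final policy.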
We also show that D2D-SPL can shorten the learning time of the actor-critic algorithm. The D2D-SPL results are consistently better than those of the base policy used to train it and even better than the policy obtained by resuming the actor-critic learning. In addition, because D2D-SPL gets its data from the top 5% of episodes, the resulting model is stable. As D2D-SPL uses a neural network, which is generally known to be a good function approximator, it is not surprising that D2D-SPL is better than discrete actor-critic at generalising to test cases not seen during learning.
In both Cartpole and Aircraft Manoeuvring, DQN, DDQN and A3C can be used to train a neural network. However, D2D-SPL learns much faster and performs better in Cartpole and in the majority of the test cases in Aircraft Manoeuvring than its competitors.
One difficulty in using D2D-SPL is finding a discretisation scheme that leads to a good policy. In the case of Cartpole, the discretisation scheme was already available in Barto et al. (1983). In the case of Aircraft Manoeuvring it took us many experiments to come up with a good discretisation scheme. Generally, the more state variables there are, the more possible combinations there are and the harder it is to get the discretisation right. Future studies may focus on using D2D-SPL in environments with higher numbers of state variables.
Since this is the first time D2D-SPL has been used, there are a number of areas where further work with D2D-SPL can be undertaken. Future work may utilise other tabular RL algorithms, such as Q-learning and SARSA, in the RL part of the system. It is also possible to further train the resulting network of D2D-SPL, using an existing or new method, to improve performance. Extending this work to more complex domains that are inherently multi-objective is also of interest Roijers et al. (2013) Vamplew et al. (2011).
This research is supported by the Defence Science and Technology Group, Australia; the Defence Science Institute, Australia; and an Australian Government Research Training Program Fee-offset scholarship. Associate Professor Joarder Kamruzzaman of the Centre for Multimedia Computing, Communications, and Artificial Intelligence Research (MCCAIR) at Federation University contributed some of the computing resources for this project.
- Barto et al. (1983) A. Barto, R.S. Sutton, and C.W. Anderson. 1983. Neuronlike adaptive elements that can solve difficult learning control problems. IEEE Transactions on Systems, Man, and Cybernetics (1983), 833–836.
- Boyan and Moore (1995) J.A. Boyan and A.W. Moore. 1995. Generalization in reinforcement learning: Safely approximating the value function. NIPS-7. San Mateo, CA: Morgan Kaufmann (1995).
- Brockman et al. (2016) G. Brockman, V. Cheung, L. Pettersson, J. Schneider, J Schulman, J. Tang, and W. Zaremba. 2016. OpenAI Gym. (2016).
- Chebotar et al. (2016) Y. Chebotar, K. Hausman, O. Kroemer, G.S. Sukhatme, and S. Schaal. 2016. Generalizing regrasping with supervised policy learning. International Symposium on Experimental Robotics (2016).
- Hessel et al. (2018) M. Hessel, J. Modayil, H. Van Hasselt, T. Schaul, G. Ostrovski, W. Dabney, D. Horgan, B. Piot, M. Azar, and D. Silver. 2018. Rainbow: Combining improvements in deep reinforcement learning. AAAI Conference on Artificial Intelligence (2018).
- Horgan et al. (2018) D. Horgan, J. Quan, D. Budden, G. Barth-Maron, M. Hessel, H. van Hasselt, and D. Silver. 2018. Distributed prioritized experience replay. ICLR (2018).
- Kurniawan et al. (2019) B. Kurniawan, P. Vamplew, M. Papasimeon, R. Dazeley, and C. Foale. 2019. An empirical study of reward structures for actor-critic reinforcement learning in air combat manoeuvring simulation. 32nd Australasian Joint Conference on Artificial Intelligence (2019).
- Lagoudakis and Parr (2003) M. G. Lagoudakis and R. Parr. 2003. Reinforcement learning as classification: leveraging modern classifiers. Proc. 20th Int. Conf. Mach. Learn. (2003), 424–431.
- Lam et al. (2019) C.P. Lam, M. Masek, L. Kelly, M. Papasimeon, and L. Benke. 2019. A simheuristic approach for evolving agent behaviour in the exploration for novel combat tactics. Operations Research Perspectives 6 (2019).
- Levine et al. (2015) S. Levine, N. Wagener, and P. Abbeel. 2015. Learning contact-rich manipulation skills with guided policy search. IEEE International Conference on Robotics and Automation (2015), 156–163.
- Lillicrap et al. (2015) T.P. Lillicrap, J.J. Hunt, A. Pritzel, N. Heess, T. Erez, Y. Tassa, D. Silver, and D Wierstra. 2015. Continuous control with deep reinforcement learning. arXiv preprint arXiv:1509.02971 (2015).
- Lin (1993) L.J. Lin. 1993. Reinforcement learning for robots using neural networks. Phd thesis, Carnegie Mellon University (1993).
- Masek et al. (2018) M. Masek, C.P. Lam, L. Benke, L. Kelly, and M. Papasimeon. 2018. Discovering emergent agent behaviour with evolutionary finite state machines. Int. Conf. on Principles and Practice of Multi-Agent Systems (2018).
- McGrew et al. (2010) J. McGrew, J.P. How, B. Williams, and N. Roy. 2010. Air-combat strategy using approximate dynamic programming. Journal of Guidance, Control, and Dynamics 33 (2010).
- Michie and Chambers (1968) D. Michie and R.A. Chambers. 1968. BOXES: An experiment in adaptive control. Machine Intelligence 2, E. Dale and D. Michie, Eds. Edinburgh: Oliver and Boyd 2 (1968), 137–152.
- Mnih et al. (2016) V. Mnih, A.P. Badia, M. Mirza, A. Graves, T. Harley, T.P. Lillicrap, D. Silver, and K. Kavukcuoglu. 2016. Asynchronous methods for deep reinforcement learning. Proc. 33rd Int. Conf. Mach. Learn. 48 (2016).
- Mnih et al. (2013) V. Mnih, K. Kavukcuoglu, D. Silver, A. Graves, I. Antonoglou, D. Wierstra, and M. Riedmiller. 2013. Playing Atari with deep reinforcement learning. NIPS Deep Learning Workshop (2013).
- Mnih et al. (2015) V. Mnih, K. Kavukcuoglu, D. Silver, A.A. Rusu, J. Veness, M.G. Bellemare, A. Graves, M. Riedmiller, A.K. Fidjeland, G. Ostrovski, S. Petersen, C. Beattie, A. Sadik, I. Antonoglou, H. King, D. Kumaran, D. Wierstra, S. Legg, and D. Hassabis. 2015. Human-level control through deep reinforcement learning. Nature (Feb. 2015), 29–33.
- Nair et al. (2015) A. Nair, P. Srinivasan, S. Blackwell, C. Alcicek, R. Fearon, A. DeMaria, V. Panneershelvam, M. Suleyman, C. Beattie, S. Petersen, S. Legg, V. Mnih, K. Kavukcuoglu, and D. Silver. 2015. Massively parallel methods for deep reinforcement learning. ICML Deep Learning Workshop (2015).
- Pollack and Blair (1997) J.B. Pollack and A.D. Blair. 1997. Why did TD-Gammon work? Advances in Neural Information Processing Systems (1997), 10–16.
- Ramirez et al. (2018) M. Ramirez, M. Papasimeon, N. Lipovetzky, L. Benke, T. Miller, A.R. Pearce, E. Scala, and M. Zamani. 2018. Integrated hybrid planning and programmed control for real time UAV maneuvering. Proc. 17th International Conference on Autonomous Agents and MultiAgent Systems (2018), 1318–1326.
- Riedmiller (2005) M. Riedmiller. 2005. Neural fitted Q iteration – first experiences with a data efficient neural reinforcement learning method. Machine Learning: ECML 2005 (2005).
- Roijers et al. (2013) D.M. Roijers, P. Vamplew, S. Whiteson, and R. Dazeley. 2013. A survey of multi-objective sequential decision-making. Journal of Artificial Intelligence Research 48 (2013).
- Rosenstein and Barto (2002) M.T. Rosenstein and A. G. Barto. 2002. Supervised learning combined with an actor critic architecture. Technical report, Amherst, MA, USA (2002).
- Schaul et al. (2015) T. Schaul, J. Quan, I. Antonoglou, and D. Silver. 2015. Prioritized experience replay. arXiv preprint arXiv:1511.05952 (2015).
- Shaw (1985) R.L. Shaw. 1985. Fighter Combat: Tactics and Maneuvering.
- Sutton and Barto (2018) R. Sutton and A. Barto. 2018. Reinforcement Learning: An Introduction: Second Edition. MIT Press.
- Tesauro (1995) G. Tesauro. 1995. TD-Gammon: A self-teaching backgammon program. Applications of Neural Networks (1995).
- Vamplew et al. (2011) P. Vamplew, R. Dazeley, A. Berry, R. Issabekov, and E. Dekker. 2011. Empirical evaluation methods for multiobjective reinforcement learning algorithms. Machine Learning 84, 51 (2011).
- van Hasselt et al. (2016) H. van Hasselt, A. Guez, and D. Silver. 2016. Deep reinforcement learning with double Q-learning. Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence (2016).
- Wang et al. (2018) L. Wang, W. Zhang, X. He, and H. Zha. 2018. Supervised reinforcement learning with recurrent neural network for dynamic treatment recommendation. International Conference on Knowledge Discovery and Data Mining (2018), 2447–2456.
- Wang et al. (2016) Z. Wang, V. Bapst, N. Heess, V. Mnih, R. Munos, K. Kavukcuoglu, and N. de Freitas. 2016. Sample efficient actor-critic with experience replay. arXiv preprint arXiv:1611.01224 (2016).
- Wang et al. (2015) Z. Wang, N. de Freitas, and M. Lanctot. 2015. Dueling network architectures for deep reinforcement learning. ArXiv e-prints (2015).
- Zhang and Sutton (2017) S. Zhang and R. S. Sutton. 2017. A deeper look at experience replay. CoRR, abs/1712.01275 (2017).