1 Introduction
In recent years, the field of deep reinforcement learning (DRL) has witnessed striking empirical achievements in complicated sequential decision making problems that were once believed unsolvable. One active area of application of DRL methods is designing artificial intelligence (AI) for games. The success of DRL in the game of Go
(Silver et al., 2016) provides a promising methodology for game AI. In addition to Go, DRL has been widely used in other games such as Atari (Mnih et al., 2015), Robot Soccer (Hausknecht and Stone, 2016; Masson et al., 2016), and Torcs (Lillicrap et al., 2016) to achieve superhuman performance. However, most existing DRL methods require the action space to be either finite and discrete (e.g., Go and Atari) or continuous (e.g., MuJoCo and Torcs). For example, the algorithms for discrete action spaces include deep Q-network (DQN) (Mnih et al., 2013), Double DQN (Hasselt et al., 2016), and A3C (Mnih et al., 2016); the algorithms for continuous action spaces include deterministic policy gradients (DPG) (Silver et al., 2014) and its deep version DDPG (Lillicrap et al., 2016).
Motivated by applications in Real Time Strategy (RTS) games, we consider the reinforcement learning problem with a discrete-continuous hybrid action space. Different from the completely discrete or continuous actions that are widely studied in the existing literature, in our setting the action is defined by the following hierarchical structure. We first choose a high level action $k$ from a discrete set $\{1, 2, \ldots, K\}$ (denoted by $[K]$ for short); upon choosing $k$, we further choose a low level parameter $x_k \in \mathcal{X}_k$ which is associated with the $k$-th high level action. Here $\mathcal{X}_k$ is a continuous set for all $k \in [K]$.^1 Therefore, we focus on the discrete-continuous hybrid action space
^1 The low level continuous parameter can be optional, and different discrete actions can share some common low level continuous parameters; this does not affect any results or derivations in this paper.
$$\mathcal{A} = \big\{(k, x_k) \,\big|\, x_k \in \mathcal{X}_k \text{ for all } k \in [K]\big\}. \qquad (1)$$
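For concreteness, the hybrid action space in (1) can be sketched in code; the three action types and the box-shaped parameter sets below are made-up placeholders, not the paper's environments:

```python
import random
from dataclasses import dataclass

# Illustrative encoding of the discrete-continuous hybrid action space:
# a discrete type k in [K] paired with a continuous parameter x_k drawn
# from that type's own (hypothetical) box-shaped set X_k.
@dataclass
class HybridAction:
    k: int       # discrete action type
    x: tuple     # continuous parameter for type k

PARAM_BOUNDS = {0: [(-1.0, 1.0), (-1.0, 1.0)],  # e.g. a 2-d direction
                1: [(0.0, 100.0)],               # e.g. a 1-d power
                2: []}                           # e.g. a parameterless action

def sample_uniform():
    """Sample a hybrid action uniformly: first a type, then its parameter."""
    k = random.randrange(len(PARAM_BOUNDS))
    x = tuple(random.uniform(lo, hi) for lo, hi in PARAM_BOUNDS[k])
    return HybridAction(k, x)
```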
To apply existing DRL approaches to this hybrid action space, two straightforward ideas are possible:

Approximate $\mathcal{A}$ by a finite discrete set. We could approximate each $\mathcal{X}_k$ by a discrete subset, which, however, might lose the natural structure of $\mathcal{X}_k$. Moreover, when $\mathcal{X}_k$ is a region in the Euclidean space, establishing a good approximation usually requires a huge number of discrete actions.

Relax $\mathcal{A}$ into a continuous set. To apply existing DRL frameworks with continuous action spaces, Hausknecht and Stone (2016) define the following approximate space

$$\widetilde{\mathcal{A}} = \big\{(f_1, \ldots, f_K, x_1, \ldots, x_K) \,\big|\, f_k \in \mathbb{R},\; x_k \in \mathcal{X}_k \text{ for all } k \in [K]\big\}, \qquad (2)$$

where $f_1, \ldots, f_K$ are used to select the discrete action, either deterministically (by picking $\arg\max_k f_k$) or randomly (with probability derived from $f_1, \ldots, f_K$). Compared with the original action space $\mathcal{A}$, $\widetilde{\mathcal{A}}$ might significantly increase the complexity of the action space.
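The discrete-action selection rule of this relaxation can be sketched as follows; this is a minimal illustration, and the softmax used for the random variant is one common choice, not necessarily the exact rule of Hausknecht and Stone (2016):

```python
import math
import random

def select_discrete(f, deterministic=True):
    """Select a discrete action from the relaxation's continuous outputs
    f_1..f_K: deterministically via argmax, or randomly via a softmax."""
    if deterministic:
        return max(range(len(f)), key=lambda k: f[k])
    m = max(f)  # subtract the max for numerical stability
    exps = [math.exp(v - m) for v in f]
    total = sum(exps)
    r, acc = random.random() * total, 0.0
    for k, e in enumerate(exps):
        acc += e
        if r <= acc:
            return k
    return len(f) - 1
```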
In this paper, we propose a novel DRL framework, namely parametrized deep Q-network learning (PDQN), which directly works on the discrete-continuous hybrid action space without approximation or relaxation. Our method can be viewed as an extension of the famous DQN algorithm to hybrid action spaces. Similar to deterministic policy gradient methods, to handle the continuous parameters within actions, we first define a deterministic function which maps the state and each discrete action to its corresponding continuous parameter. Then we define an action-value function which maps the state and the finite hybrid actions to real values, where the continuous parameters are obtained from the deterministic function in the first step. With the merits of both DQN and DDPG, we expect our algorithm to find the optimal discrete action as well as to avoid an exhaustive search over the continuous action parameters.
To evaluate the empirical performance, we apply our algorithm to several environments. The empirical study indicates that PDQN is more efficient and robust than the method of Hausknecht and Stone (2016), which relaxes $\mathcal{A}$ into a continuous set and applies DDPG.
2 Background
In reinforcement learning, the environment is usually modeled by a Markov decision process (MDP) $\mathcal{M} = (S, A, p, p_0, r, \gamma)$, where $S$ is the state space, $A$ is the action space, $p$ is the Markov transition probability distribution, $p_0$ is the probability distribution of the initial state, $r$ is the reward function, and $\gamma \in (0, 1)$ is the discount factor. An agent interacts with the MDP sequentially as follows. At the $t$-th step, suppose the MDP is at state $s_t$ and the agent selects an action $a_t$; then the agent observes an immediate reward $r_t$ and the next state $s_{t+1}$. A stochastic policy $\pi$ maps each state to a probability distribution over $A$; that is, $\pi(a \mid s)$ is defined as the probability of selecting action $a$ at state $s$. A deterministic policy $\mu$ maps each state to a particular action in $A$. Let $R_t = \sum_{i \ge t} \gamma^{i-t} r_i$ be the cumulative discounted reward starting from timestep $t$. We define the state-value function and the action-value function of a policy $\pi$ as $V^{\pi}(s) = \mathbb{E}[R_t \mid s_t = s; \pi]$ and $Q^{\pi}(s, a) = \mathbb{E}[R_t \mid s_t = s, a_t = a; \pi]$, respectively. Moreover, we define the optimal state- and action-value functions as $V^*(s) = \sup_{\pi} V^{\pi}(s)$ and $Q^*(s, a) = \sup_{\pi} Q^{\pi}(s, a)$, respectively, where the supremum is taken over all possible policies. The goal of the agent is to find a policy that maximizes the expected total discounted reward, which can be achieved by estimating $Q^*$.
2.1 Reinforcement Learning Methods for Finite Action Space
Broadly speaking, reinforcement learning algorithms can be categorized into two classes: value-based methods and policy-based methods. Value-based methods first estimate $Q^*$ and then output the greedy policy with respect to that estimate, whereas policy-based methods directly optimize the expected total reward as a functional of the policy.
The Q-learning algorithm (Watkins and Dayan, 1992) is based on the Bellman equation
$$Q^*(s_t, a_t) = \mathbb{E}_{r_t, s_{t+1}}\Big[r_t + \gamma \max_{a \in A} Q^*(s_{t+1}, a) \,\Big|\, s_t, a_t\Big], \qquad (3)$$
which has $Q^*$ as its unique solution. During training, the $Q$ function is updated iteratively over the transition samples in a Monte Carlo way. For a finite state space $S$, the $Q$ values can be stored in a table. However, when $S$ is too large to fit in computer memory, function approximation of $Q$ has to be applied. The Deep Q-Network (DQN) (Mnih et al., 2013, 2015) approximates $Q^*$ using a neural network $Q(s, a; w)$, where $w$ denotes the network weights. In the $t$-th iteration, DQN updates the weights using the gradient of the least squares loss function
$$L_t(w) = \mathbb{E}_{s_t, a_t, r_t, s_{t+1}}\Big[\big(r_t + \gamma \max_{a \in A} Q(s_{t+1}, a; w_t) - Q(s_t, a_t; w)\big)^2\Big]. \qquad (4)$$
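A minimal sketch of the one-step target and squared Bellman error behind (4), assuming a hypothetical interface `q_fn(s, w)` that returns the vector of Q-values over the finite action set:

```python
import numpy as np

def dqn_target(r, s_next, q_fn, target_w, gamma, done):
    """One-step Q-learning target: r + gamma * max_a' Q(s', a'; w_t),
    with no bootstrap at terminal states."""
    if done:
        return r
    return r + gamma * float(np.max(q_fn(s_next, target_w)))

def td_loss(q_sa, y):
    """Squared Bellman error for a single transition; the target y is
    treated as a constant, so the gradient only flows through q_sa."""
    return (y - q_sa) ** 2
```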
A variety of extensions have been proposed to improve over DQN, including Double DQN (Hasselt et al., 2016), dueling DQN (Wang et al., 2016), bootstrapped DQN (Osband et al., 2016), asynchronous DQN (Mnih et al., 2016), averaged-DQN (Anschel et al., 2017), and prioritized experience replay (Schaul et al., 2016).
In addition to the value-based methods, the policy-based methods directly model the optimal policy. Their objective is to maximize the expected reward $J(\theta)$ of a stochastic policy $\pi_{\theta}$ parametrized by $\theta$. The policy gradient methods aim at finding a weight $\theta$ that maximizes $J(\theta)$ via gradient ascent. The stochastic policy gradient theorem (Sutton et al., 2000) states that
$$\nabla_{\theta} J(\theta) = \mathbb{E}_{s, a}\big[\nabla_{\theta} \log \pi_{\theta}(a \mid s)\, Q^{\pi_{\theta}}(s, a)\big]. \qquad (5)$$
The REINFORCE algorithm (Williams, 1992) updates $\theta$ using the gradient estimate $\nabla_{\theta} \log \pi_{\theta}(a_t \mid s_t) R_t$. Moreover, the actor-critic methods (Konda and Tsitsiklis, 2000) and vanilla A3C (Mnih et al., 2016) use a neural network $V(s; w)$ or $Q(s, a; w)$ to estimate the value function associated with the policy $\pi_{\theta}$. This family of algorithms combines the value-based and policy-based perspectives, and was recently used to achieve superhuman performance in the game of Go (Silver et al., 2017).
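For a softmax policy over finite actions, the REINFORCE gradient with respect to the logits has a simple closed form; the sketch below is illustrative and not tied to any particular network:

```python
import numpy as np

def softmax(logits):
    z = logits - np.max(logits)  # shift for numerical stability
    e = np.exp(z)
    return e / e.sum()

def reinforce_grad(logits, action, ret):
    """Score-function gradient for a softmax policy over finite actions:
    grad_logits log pi(a|s) * R_t = (one_hot(a) - pi) * R_t."""
    pi = softmax(logits)
    one_hot = np.zeros_like(pi)
    one_hot[action] = 1.0
    return (one_hot - pi) * ret
```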
2.2 Reinforcement Learning Methods for Continuous Action Space
When the action space is continuous, value-based methods are no longer computationally tractable because of the maximization over the action space in (4), which in general cannot be computed efficiently. The reason is that the neural network $Q(s, a; w)$ is nonconvex when viewed as a function of $a$; $\max_a Q(s, a; w)$ is the global maximum of a nonconvex function, which is NP-hard to obtain in the worst case.
To address this issue, continuous Q-learning (Gu et al., 2016) approximates the action-value function by a neural network in which the advantage term is further parameterized as a quadratic function of $a$, so the maximization over $a$ has an analytic solution.
Moreover, it is also possible to adapt policy-based methods to continuous action spaces by considering deterministic policies $\mu_{\theta}$. Similar to (5), the deterministic policy gradient (DPG) theorem (Silver et al., 2014) states that
$$\nabla_{\theta} J(\theta) = \mathbb{E}_{s}\big[\nabla_{\theta} \mu_{\theta}(s)\, \nabla_{a} Q^{\mu_{\theta}}(s, a) \big|_{a = \mu_{\theta}(s)}\big]. \qquad (6)$$
Furthermore, this deterministic version of the policy gradient theorem can be viewed as the limit of (5) with the variance of $\pi_{\theta}$ going to zero. Based on (6), the DPG algorithm (Silver et al., 2014) and the DDPG algorithm (Lillicrap et al., 2016) were proposed. A related line of work is policy optimization methods, which improve the policy gradient method using novel optimization techniques. These methods include natural gradient descent (Kakade, 2002), trust region optimization (Schulman et al., 2015), proximal gradient descent (Schulman et al., 2017), mirror descent (Montgomery and Levine, 2016), and entropy regularization (O'Donoghue et al., 2017).
2.3 Reinforcement Learning Methods for Hybrid Action Space
A related body of literature is the recent work on reinforcement learning with a structured action space, which contains finitely many actions, each parametrized by a continuous parameter.
To handle such parametrized actions, Hausknecht and Stone (2016) apply the DDPG algorithm directly on the relaxed action space (2). A more reasonable approach is to update the discrete action and the continuous parameter separately with two different methods.
Masson et al. (2016) propose a learning framework that alternately updates the network weights for discrete actions using Q-learning (Sarsa) and for continuous parameters using policy search (eNAC). Similarly, Khamassi et al. (2017) use Q-learning for discrete actions and policy gradient for continuous parameters. These two methods both need to assume a distribution over the continuous parameters, and both are on-policy.
3 Parametrized Deep Q-Networks (PDQN)
This section introduces the proposed framework for applications with a hybrid discrete-continuous action space. We consider an MDP with the parametrized action space $\mathcal{A}$ defined in (1). For an action $a = (k, x_k) \in \mathcal{A}$, we denote the action-value function by $Q(s, a) = Q(s, k, x_k)$, where $s \in S$, $k \in [K]$, and $x_k \in \mathcal{X}_k$. Let $k_t$ be the discrete action selected at time $t$ and let $x_{k_t}$ be the associated continuous parameter. Then the Bellman equation becomes
$$Q(s_t, k_t, x_{k_t}) = \mathbb{E}_{r_t, s_{t+1}}\Big[r_t + \gamma \max_{k \in [K]} \sup_{x_k \in \mathcal{X}_k} Q(s_{t+1}, k, x_k) \,\Big|\, s_t, k_t, x_{k_t}\Big]. \qquad (7)$$
Here, inside the conditional expectation on the right-hand side of (7), we first solve $x_k^* = \operatorname{arg\,sup}_{x_k \in \mathcal{X}_k} Q(s_{t+1}, k, x_k)$ for each $k \in [K]$, and then take the largest $Q(s_{t+1}, k, x_k^*)$. Note that taking the supremum over a continuous space is computationally intractable. However, the right-hand side of (7) can be evaluated efficiently provided $x_k^*$ is given.
To elaborate this idea, first note that when the function $Q$ is fixed, for any $s \in S$ and $k \in [K]$ we can view $\operatorname{arg\,sup}_{x_k \in \mathcal{X}_k} Q(s, k, x_k)$ as a function $x_k^Q \colon S \to \mathcal{X}_k$. Then we can rewrite the Bellman equation in (7) as

$$Q(s_t, k_t, x_{k_t}) = \mathbb{E}_{r_t, s_{t+1}}\Big[r_t + \gamma \max_{k \in [K]} Q\big(s_{t+1}, k, x_k^Q(s_{t+1})\big) \,\Big|\, s_t, k_t, x_{k_t}\Big].$$

Note that this new Bellman equation resembles the classical Bellman equation in (3), with the maximization taken over the finite set $[K]$. Similar to deep Q-networks, we use a deep neural network $Q(s, k, x_k; w)$ to approximate $Q$, where $w$ denotes the network weights. Moreover, for such a $Q(s, k, x_k; w)$, we approximate $x_k^Q$ with a deterministic policy network $x_k(\cdot\,; \theta) \colon S \to \mathcal{X}_k$, where $\theta$ denotes the network weights of the policy network. That is, when $w$ is fixed, we want to find $\theta$ such that
$$Q\big(s, k, x_k(s; \theta); w\big) \approx \sup_{x_k \in \mathcal{X}_k} Q(s, k, x_k; w) \quad \text{for each } k \in [K]. \qquad (8)$$
Then, similar to DQN, we can estimate $w$ by minimizing the mean-squared Bellman error via gradient descent. Specifically, in the $t$-th step, let $w_t$ and $\theta_t$ be the weights of the value network and the deterministic policy network, respectively. To incorporate multi-step algorithms, for a fixed $n \ge 1$ we define the $n$-step target $y_t$ by
$$y_t = \sum_{i=0}^{n-1} \gamma^{i} r_{t+i} + \gamma^{n} \max_{k \in [K]} Q\big(s_{t+n}, k, x_k(s_{t+n}; \theta_t); w_t\big). \qquad (9)$$
We use the least squares loss function for $w$, as in DQN. Moreover, since we aim to find $\theta$ that maximizes $Q(s, k, x_k(s; \theta); w_t)$ with $w_t$ fixed, we use the following loss function for $\theta$:
$$\ell_t^{\Theta}(\theta) = -\sum_{k=1}^{K} Q\big(s_t, k, x_k(s_t; \theta); w_t\big). \qquad (10)$$
By (10), we update the weights $\theta$ using stochastic gradient methods. In the ideal case, we would minimize the loss function in (10) with $w$ fixed. Following results from stochastic approximation methods (Kushner and Yin, 2006), we can approximately achieve this goal in an online fashion via a two-timescale update rule (Borkar, 1997). Specifically, we update $\theta$ with a stepsize that is asymptotically negligible compared with the stepsize for $w$. In addition, for the validity of the stochastic approximation, we require the stepsizes to satisfy the Robbins-Monro condition (Robbins and Monro, 1951). We present the PDQN algorithm with experience replay in Algorithm 1.
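The quantities above can be made concrete with a toy sketch; the bilinear Q and linear per-action policy below are deliberately tiny stand-ins for the two networks, chosen only so that the n-step target in (9) and the policy loss in (10) can be computed by hand:

```python
import numpy as np

# Hypothetical tiny parameterizations (not the paper's networks):
# Q(s, k, x; w) = w[k] * <s, x>   and   x_k(s; theta) = theta[k] * s.
def q_value(s, k, x, w):
    return w[k] * float(np.dot(s, x))

def x_param(s, k, theta):
    return theta[k] * s

def n_step_target(rewards, s_next, w, theta, gamma):
    """Eq. (9): discounted n-step return plus the bootstrap term
    gamma^n * max_k Q(s_{t+n}, k, x_k(s_{t+n}; theta); w)."""
    n = len(rewards)
    ret = sum(gamma**i * r for i, r in enumerate(rewards))
    boot = max(q_value(s_next, k, x_param(s_next, k, theta), w)
               for k in range(len(w)))
    return ret + gamma**n * boot

def policy_loss(s, w, theta):
    """Eq. (10): minus the sum over k of Q(s, k, x_k(s; theta); w);
    minimizing it in theta pushes every x_k toward a maximizer of Q."""
    return -sum(q_value(s, k, x_param(s, k, theta), w)
                for k in range(len(w)))
```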
Note that this algorithm requires a distribution defined on the action space for exploration. In practice, if each $\mathcal{X}_k$ is a compact set in Euclidean space (as in our case), this distribution can be defined as the uniform distribution over $\mathcal{A}$. In addition, as in the DDPG algorithm (Lillicrap et al., 2016), we can also add additive noise to the continuous part of the actions for exploration. Moreover, we use experience replay (Mnih et al., 2013) to reduce the dependencies among the samples, which can be replaced by more sample-efficient methods such as prioritized replay (Schaul et al., 2016).

Moreover, we note that our PDQN algorithm can easily incorporate asynchronous gradient descent to speed up the training process. Similar to the asynchronous $n$-step DQN (Mnih et al., 2016), we consider a centralized distributed training framework where each process computes its local gradient and communicates with a global "parameter server". Specifically, each local process runs an independent game environment to generate transition trajectories and uses its own transitions to compute gradients with respect to $w$ and $\theta$. These local gradients are then aggregated across multiple processes to update the global parameters. Aggregating independent stochastic gradients decreases the variance of the gradient estimation, which yields better algorithmic stability. We present the asynchronous $n$-step PDQN algorithm in Algorithm 2 in the Appendix. For simplicity, we only describe the algorithm for each local process, which fetches $w$ and $\theta$ from the parameter server and computes the gradients. The parameter server stores the global parameters $w$ and $\theta$, and updates them using the gradients sent from the local processes.
The key differences between the methods in Section 2.3 and PDQN are as follows.

In Hausknecht and Stone (2016), the discrete action types are parametrized as continuous values $f_1, \ldots, f_K$, and the discrete action that is actually executed is chosen via $\arg\max_k f_k$ or randomly with probabilities derived from $f_1, \ldots, f_K$. This trick relaxes the hybrid action space into a continuous action space, upon which the classical DDPG algorithm can be applied. In our framework, by contrast, the discrete action type is chosen directly by explicitly maximizing the action value, as illustrated in Figure 1.

Masson et al. (2016) and Khamassi et al. (2017) use on-policy update algorithms for the continuous parameters. The network in Hausknecht and Stone (2016) is also an on-policy action-value function estimator of the current policy if the discrete action is chosen via $\arg\max_k f_k$. PDQN, in contrast, is an off-policy algorithm.

Note that PDQN can use human players' data, whereas this is hard in Hausknecht and Stone (2016), because such data contain only the executed discrete action and not the continuous values used in the relaxation.
Figure 1: (a) Network of PDQN. (b) Network of DDPG.
3.1 The Asynchronous $n$-step PDQN Algorithm
Similar to the asynchronous $n$-step DQN in Mnih et al. (2016), we can use the asynchronous $n$-step PDQN algorithm to speed up the training process. We present it in Algorithm 2. Notice that when $n > 1$, $n$-step DQN and $n$-step PDQN are no longer off-policy algorithms. However, the $n$-step bootstrap tactic can improve the convergence speed for delayed-reward or long-episode reinforcement learning problems.
4 Experiments
We validate the proposed PDQN algorithm in 1) a simulation example, 2) scoring a goal in simulated RoboCup soccer, and 3) the solo mode of the game King of Glory (KOG).
To evaluate performance, we compare our algorithm with Hausknecht and Stone (2016) and DQN under fair conditions in all three scenarios. To our knowledge, Hausknecht and Stone (2016) is the only off-policy method that solves the hybrid action space problem with a deterministic policy, which can be estimated more efficiently than stochastic policies. DQN with a discrete approximation of the action space is also compared in the simulation example. In both DQN and PDQN, we use a dueling layer to replace the last fully-connected layer to accelerate training.
4.1 A Simulation Example
Suppose there is a square plate, and the goal is to "pull" a unit point mass into a small target circle. In each unit of time, a unit force with a constant direction can be applied to the point mass, or a soft "brake" can be used to immediately reduce the velocity of the point mass. The effect of the force follows Newtonian mechanics, and the plate is frictionless.
Let the coordinates of the point mass and of the target circle center be given in the plane. The state is represented as an 8-dimensional vector. The action space consists of "pull" (with a continuous direction parameter) and "brake", and the reward encourages reaching the target circle. The episode begins with random positions and terminates if the point mass stops in the circle, runs off the square plate, or the episode length exceeds 200. To deal with the periodicity of the direction of movement, we represent a direction by a normalized two-dimensional vector, which the network learns instead of an angle in degrees (in practice, we add a normalization layer at the end to ensure this). The following two experiments also use this transformation.
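The normalized two-dimensional direction representation can be sketched as follows; the function names are ours, not the paper's code:

```python
import numpy as np

def to_direction(v, eps=1e-8):
    """Normalize a learned 2-d output to a unit direction vector, avoiding
    the wrap-around discontinuity of predicting an angle directly."""
    v = np.asarray(v, dtype=float)
    return v / (np.linalg.norm(v) + eps)

def direction_to_degrees(d):
    """Recover the angle (in degrees) from a unit direction vector."""
    return float(np.degrees(np.arctan2(d[1], d[0])))
```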
Figure 2: (a) Episode reward vs. iteration in training. (b) Mean episode reward vs. iteration in test.
We compare the proposed PDQN with the DDPG architecture using the same hidden layer sizes. We also compare with DQN on a discretized action space with discretized "pull" directions and "brake", for completeness. We independently train 5 models for each method, and evaluate performance during the training process by averaging over 100 trials. Figure 2 and Figure 4 show the learning curves for PDQN. Figure 3 shows the evaluated performance with respect to mean reward, mean goal percentage, and mean episode length. PDQN clearly converges much faster and more stably than the prior work in our setting. DQN converges quickly but to a suboptimal solution and suffers from high variance because of the discretization of the "pull" direction. A demonstration of the learned PDQN policy can be found at goo.gl/XbdqHV.
4.2 HFO
The Half Field Offense (HFO) domain is an abstraction of the full RoboCup 2D game. We use the same experimental settings as Hausknecht and Stone (2016), scoring goals without a goalie, so we only briefly summarize the settings here and refer the reader to Hausknecht and Stone (2016) for details.
The state in the HFO example consists of 58 continuously-valued features derived through the HeliosAgent2D (Akiyama, 2010) world model. It provides the relative positions of several important objects such as the ball, the goal, and other landmarks. A full list of state features may be found at https://github.com/mhauskn/HFO/blob/master/doc/manual.pdf.
The full action space for HFO is {Dash(power, direction), Turn(direction), Kick(power, direction)}, where all directions are parameterized as angles in degrees and the powers take values in a bounded range. Note that moving forward is faster than moving sideways or backwards, so turning to the target direction before moving is crucial for scoring quickly.
We also use the same hand-crafted reward, which at each step rewards decreasing the distance between the ball and the agent and the distance between the ball and the center of the goal, gives an additional reward the first time the agent is close enough to kick the ball, and gives a final reward for a successful goal.
Figure 4: (a) Episode length in training. (b) Episode reward sum in training.
Table 1: Scoring percentage and average steps to goal for the baselines (left columns) and independently trained PDQN agents (right columns).

Method           | Scoring Percent | Avg. Steps to Goal | Method | Scoring Percent | Avg. Steps to Goal
Helios' Champion | .962            | 72.0               | PDQN   | .997            | 78.1
SARSA            | .81             | 70.7               | PDQN   | .997            | 78.1
DDPG             | 1               | 108.0              | PDQN   | .996            | 78.1
DDPG             | .99             | 107.1              | PDQN   | .994            | 81.5
DDPG             | .98             | 104.8              | PDQN   | .992            | 78.7
DDPG             | .96             | 112.3              | PDQN   | .991            | 79.9
DDPG             | .94             | 119.1              | PDQN   | .985            | 82.2
DDPG             | .84             | 113.2              | PDQN   | .984            | 87.9
DDPG             | .80             | 118.2              | PDQN   | .979            | 78.5
To accelerate training, we use the asynchronous version of Algorithm 1 with 24 workers. Figure 4 shows the learning curve of PDQN for the HFO scenario.
Additionally, we independently trained another 8 PDQN agents and compared their performance with the baseline results in Hausknecht and Stone (2016). The results are shown in Table 1. We can see that PDQN scores more accurately and faster than DDPG, with more stable performance. Training a PDQN agent costs about 1 hour on 2 Intel Xeon E5-2670 v3 CPUs. In comparison, it takes three days on an NVidia TitanX GPU to train a DDPG agent in Hausknecht and Stone (2016). A performance video for the PDQN agent can be found at https://youtu.be/fwJGRQJ9TE.
4.3 Solo mode of King of Glory
The game King of Glory is the most popular mobile MOBA game in China with more than 80 million daily active players and 200 million monthly active players, as reported in July 2017. Each player controls one hero, and the goal is to destroy the base of the opposing team. In our experiments, we focus on the oneversusone mode, which is called solo, with both sides being the hero Lu Ban, a hunter type hero with a large attack range. We play against the internal AI shipped with the game.
In our experiment, the state of the game is represented by a 179-dimensional feature vector which is manually constructed using the output from the game engine. These features consist of two parts. The first part is the basic attributes of the units, and the second part is the relative positions of other units with respect to the hero controlled by the player, as well as the attacking relations between units. We note that these features are directly extracted from the game engine without sophisticated feature engineering. We conjecture that the overall performance could be improved with a more carefully engineered set of features.
We simplify the actions of a hero into six discrete action types: Move, Attack, UseSkill1, UseSkill2, UseSkill3, and Retreat. Some of the actions may have additional continuous parameters to specify the precise behavior. For example, when the action type is Move, the direction of movement is given by a continuous angle parameter. Recall that each hero's skills are unique. For Lu Ban, the first skill is to throw a grenade at some specified location, the second skill is to launch a missile in a particular direction, and the last skill is to call an airship to fly in a specified direction. A complete list of actions as well as the associated parameters is given in Table 2.
In KOG, the 6 discrete actions are not always usable, due to skills not yet leveled up, lack of Magic Points (MP), or skill Cool Down (CD). To deal with this problem, we take the maximum only over the currently usable discrete actions, both when selecting the action to perform and when calculating the multi-step target as in Equation (9).
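Restricting the maximization to usable actions amounts to a masked argmax; a minimal sketch, assuming the availability mask comes from the game engine:

```python
import numpy as np

def select_action(q_values, usable):
    """Masked greedy selection: take the argmax of Q only over the discrete
    actions that are currently usable (enough MP, off cooldown, ...).
    `usable` is a boolean array over the K discrete action types."""
    q = np.where(usable, q_values, -np.inf)  # mask out unusable actions
    return int(np.argmax(q))
```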
Table 2: Discrete action types and their continuous parameters in the KOG solo mode.

ActionType | Parameter | Description
Move       | direction | Move in the given direction
Attack     | -         | Attack the default target
UseSkill1  | position  | Cast Skill 1 at the given position
UseSkill2  | direction | Launch Skill 2 in the given direction
UseSkill3  | direction | Cast Skill 3 in the given direction
Retreat    | -         | Retreat back to our base
4.4 Reward for KOG
To encourage winning the game, we adopt reward shaping, where the immediate reward takes into account gold earned, hero HP, kill/death statistics, etc. Specifically, we define a variety of statistics as follows (in the sequel, one subscript denotes the attributes of our side and the other those of the opponent):

Gold difference: this statistic measures the difference in gold gained from killing heroes and soldiers and destroying towers of the opposing team. The gold can be used to buy weapons and armor, which enhance the offensive and defensive attributes of the hero. Using this value as a reward encourages the hero to gain more gold.

Health Point difference: this statistic measures the difference between the Health Points of the two competing heroes. A hero with higher Health Points can bear more severe damage, while a hero with lower Health Points is more likely to be killed. Using this value as a reward encourages the hero to avoid attacks and last longer before being killed by the enemy.

Kill/Death: this statistic measures the historical performance of the two heroes. If a hero is killed multiple times, it is usually considered more likely to lose the game. Using this value as a reward encourages the hero to kill the opponent and avoid death.

Tower/Base HP difference: these two statistics measure the health differences of the towers and the bases of the two teams. Incorporating these statistics in the reward encourages our hero to attack the towers of the opposing team and defend its own towers.
Tower Destroyed: this counts the number of destroyed towers, which rewards the hero when it successfully destroys the opponent's towers.

Winning Game: this value indicates the winning or losing of the game.

Moving forward reward: a small reward proportional to the hero's forward progress along the lane, used to guide our hero to move forward and compete actively in the battlefield.
The overall reward is calculated as a weighted sum of the time differences of the statistics defined above. The coefficients are set roughly inversely proportional to the scale of each statistic. We note that our algorithm is not very sensitive to changes of these coefficients within a reasonable range.
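The shaped reward can be sketched as a weighted sum of per-step statistic differences; the statistic names and weights below are hypothetical placeholders, not the paper's actual coefficients:

```python
def shaped_reward(prev, curr, weights):
    """Weighted sum of the time differences of the shaping statistics.
    prev/curr: dicts mapping statistic name -> value at steps t-1 and t;
    weights: dict mapping statistic name -> coefficient (set roughly
    inversely proportional to the statistic's scale)."""
    return sum(w * (curr[k] - prev[k]) for k, w in weights.items())
```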
We use Algorithm 2 with 48 parallel workers and frame skipping. The training and validating performances are plotted in Figure 5.
Figure 5: (a1), (b1): episode length in training. (a2), (b2): episode reward sum in training. (a3), (b3): episode reward sum in validation.
From the experimental results in Figure 5, we can see that our PDQN algorithm learns the value network and the policy network much faster compared with Hausknecht and Stone (2016). In (a1), we see that the average length of games increases at first, reaches its peak when the two players' strengths are close, and decreases when our player can easily defeat the opponent. In addition, in (a2) and (a3), we see that the total reward per episode increases consistently in training as well as in the test setting.
5 Conclusion
Previous deep reinforcement learning algorithms mostly work with either a discrete or a continuous action space. In this work, we consider the scenario of a discrete-continuous hybrid action space. In contrast to the existing approaches of approximating the hybrid space by a discrete set or relaxing it into a continuous set, we propose the parametrized deep Q-network (PDQN), which extends the classical DQN with a deterministic policy for the continuous part of the actions. Empirical experiments with comparisons against other baselines demonstrate the efficiency and effectiveness of PDQN.
Appendix A
A.1 More information on King of Glory
The game King of Glory is a MOBA game, a special form of RTS game in which the players are divided into two opposing teams fighting against each other. Each team has a base located in either the bottom-left or the top-right corner, guarded by three towers on each of the three lanes. The towers can attack enemies within their attack range. Each player controls one hero, which is a powerful unit able to move, kill, perform skills, and purchase equipment. The goal of the heroes is to destroy the base of the opposing team. In addition, for both teams, computer-controlled units are spawned periodically that march towards the opposing base along the three lanes. These units can attack enemies but cannot perform skills or purchase equipment. An illustration of the map is in Figure 6(a), where the blue and red circles on each lane are the towers.
During game play, the heroes advance their levels and obtain gold by killing units and destroying towers. With gold, the heroes are able to purchase equipment such as weapons and armor to enhance their power. In addition, by upgrading to a new level, a hero is able to improve its unique skills. When a hero is killed by the enemy, it waits for some time to be reborn.
In this game, each team contains one, three, or five players. The five-versus-five mode is the most complicated mode, which requires strategic collaboration among the five players. In contrast, the one-versus-one mode, called solo, only depends on the player's control of a single hero. In a solo game, only the middle lane is active; both players move along the middle lane to fight against each other. The map and a screenshot of a solo game are given in Figure 6(b) and (c), respectively. In our experiments, we focus on the solo mode. We emphasize that a typical solo game lasts about 10 to 20 minutes, in which each player must make instantaneous decisions. Moreover, the players have to make different types of actions including attacking, moving, and purchasing. Thus, as a reinforcement learning problem, it has four main difficulties: first, the state space has a huge capacity; second, since there are various kinds of actions, the action space is complicated; third, the reward function is not well defined; and fourth, heuristic search algorithms are not feasible since the game runs in real time. Therefore, although we consider the simplest mode of King of Glory, it is still a challenging game for artificial intelligence.

Figure 6: (a) The map of a MOBA game. (b) The battlefield for a solo game. (c) A screenshot of a solo game.
A.2 Parameter setting
Simulation: In the simulation example, we use Algorithm 1 with experience replay. The value network $Q$ has hidden layers of sizes 64-32 with ReLU activations, and the policy network $x(s; \theta)$ has hidden layers of sizes 64-32-32. A uniform sample distribution is used in $\epsilon$-greedy exploration, where $\epsilon$ is annealed during an initial phase of training and then stays constant. The learning rate is also annealed over the course of training.

HFO: In the HFO example, each worker keeps its own replay memory. The value network $Q$ has hidden layers of sizes 256-128-64, and the policy network $x(s; \theta)$ has hidden layers of sizes 256-128-64-64. A uniform sample distribution is used in $\epsilon$-greedy exploration, where $\epsilon$ is annealed during an initial phase of training and then stays constant. The learning rate is also annealed over the course of training. Additionally, Hausknecht and Stone (2016) suggest using Inverting Gradients to force the bounded continuous parameters to stay in their valid ranges. Instead of using this complicated technique, we simply add a squared loss penalty on the out-of-range part.
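The out-of-range squared penalty can be sketched as follows; this is a simple alternative to the Inverting Gradients technique, and the interface is ours:

```python
import numpy as np

def out_of_range_penalty(x, low, high):
    """Squared penalty on the part of a continuous parameter that falls
    outside its valid range [low, high]; zero inside the range."""
    x = np.asarray(x, dtype=float)
    over = np.maximum(x - high, 0.0)   # amount above the upper bound
    under = np.maximum(low - x, 0.0)   # amount below the lower bound
    return float(np.sum(over**2 + under**2))
```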
KOG: The networks $Q$ and $x(s; \theta)$ have hidden layers of sizes 256-128-64 and 256-128-64-64, respectively. To reduce the long episode length, we set the frame skipping parameter to 2; this means that we take actions every 3 frames, or equivalently, every 0.2 seconds. Furthermore, we set the multi-step length in Algorithm 2 to correspond to 4 seconds to alleviate the delayed reward. To encourage exploration, we use $\epsilon$-greedy sampling in training: the first 5 action types are sampled with equal probability and the action "Retreat" with a smaller probability. Moreover, if the sampled action is infeasible, we execute the greedy policy over the feasible ones, so the effective exploration rate is less than $\epsilon$. The learning rate is fixed at 0.001 during training.
References
Akiyama (2010) H. Akiyama. Agent2D base code, 2010.

Anschel et al. (2017) O. Anschel, N. Baram, and N. Shimkin. Averaged-DQN: Variance reduction and stabilization for deep reinforcement learning. In Proceedings of the 34th International Conference on Machine Learning (ICML-17), pages 176–185, 2017.
Borkar (1997) V. S. Borkar. Stochastic approximation with two time scales. Systems & Control Letters, 29(5):291–294, 1997.
Gu et al. (2016) S. Gu, T. Lillicrap, I. Sutskever, and S. Levine. Continuous deep Q-learning with model-based acceleration. In Proceedings of the 33rd International Conference on Machine Learning (ICML-16), pages 2829–2838, 2016.
Hasselt et al. (2016) H. v. Hasselt, A. Guez, and D. Silver. Deep reinforcement learning with double Q-learning. In Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence, pages 2094–2100. AAAI Press, 2016.
 Hausknecht and Stone (2016) M. Hausknecht and P. Stone. Deep reinforcement learning in parameterized action space. In Proceedings of the International Conference on Learning Representations (ICLR), 2016.
Hinton et al. (2012) G. Hinton, N. Srivastava, and K. Swersky. Neural networks for machine learning, Lecture 6a: Overview of mini-batch gradient descent, 2012.
 Kakade (2002) S. M. Kakade. A natural policy gradient. In Advances in neural information processing systems, pages 1531–1538, 2002.
 Khamassi et al. (2017) M. Khamassi, G. Velentzas, T. Tsitsimis, and C. Tzafestas. Active exploration and parameterized reinforcement learning applied to a simulated humanrobot interaction task. In Robotic Computing (IRC), IEEE International Conference on, pages 28–35. IEEE, 2017.
 Konda and Tsitsiklis (2000) V. R. Konda and J. N. Tsitsiklis. Actorcritic algorithms. In Advances in neural information processing systems, pages 1008–1014, 2000.
 Kushner and Yin (2006) H. Kushner and G. Yin. Stochastic Approximation and Recursive Algorithms and Applications. Stochastic Modelling and Applied Probability. Springer New York, 2006. ISBN 9780387217697.
 Lillicrap et al. (2016) T. P. Lillicrap, J. J. Hunt, A. Pritzel, N. Heess, T. Erez, Y. Tassa, D. Silver, and D. Wierstra. Continuous control with deep reinforcement learning. In Proceedings of the International Conference on Learning Representations (ICLR), 2016.
 Masson et al. (2016) W. Masson, P. Ranchod, and G. Konidaris. Reinforcement learning with parameterized actions. In AAAI, pages 1934–1940, 2016.
 Mnih et al. (2013) V. Mnih, K. Kavukcuoglu, D. Silver, A. Graves, I. Antonoglou, D. Wierstra, and M. Riedmiller. Playing atari with deep reinforcement learning. arXiv preprint arXiv:1312.5602, 2013.
 Mnih et al. (2015) V. Mnih, K. Kavukcuoglu, D. Silver, A. A. Rusu, J. Veness, M. G. Bellemare, A. Graves, M. Riedmiller, A. K. Fidjeland, G. Ostrovski, et al. Humanlevel control through deep reinforcement learning. Nature, 518(7540):529–533, 2015.
 Mnih et al. (2016) V. Mnih, A. P. Badia, M. Mirza, A. Graves, T. Lillicrap, T. Harley, D. Silver, and K. Kavukcuoglu. Asynchronous methods for deep reinforcement learning. In Proceedings of the 33rd International Conference on Machine Learning (ICML16), pages 1928–1937, 2016.
 Montgomery and Levine (2016) W. Montgomery and S. Levine. Guided policy search as approximate mirror descent. arXiv preprint arXiv:1607.04614, 2016.
 O’Donoghue et al. (2017) B. O’Donoghue, R. Munos, K. Kavukcuoglu, and V. Mnih. Pgq: Combining policy gradient and learning. In Proceedings of the International Conference on Learning Representations (ICLR), 2017.
 Osband et al. (2016) I. Osband, C. Blundell, A. Pritzel, and B. Van Roy. Deep exploration via bootstrapped dqn. In Advances in Neural Information Processing Systems, pages 4026–4034, 2016.
 Robbins and Monro (1951) H. Robbins and S. Monro. A stochastic approximation method. The annals of mathematical statistics, pages 400–407, 1951.
 Schaul et al. (2016) T. Schaul, J. Quan, I. Antonoglou, and D. Silver. Prioritized experience replay. In Proceedings of the International Conference on Learning Representations (ICLR), 2016.
 Schulman et al. (2015) J. Schulman, S. Levine, P. Abbeel, M. Jordan, and P. Moritz. Trust region policy optimization. In Proceedings of the 32nd International Conference on Machine Learning (ICML15), pages 1889–1897, 2015.
 Schulman et al. (2017) J. Schulman, F. Wolski, P. Dhariwal, A. Radford, and O. Klimov. Proximal policy optimization algorithms. arXiv preprint arXiv:1707.06347, 2017.
 Silver et al. (2014) D. Silver, G. Lever, N. Heess, T. Degris, D. Wierstra, and M. Riedmiller. Deterministic policy gradient algorithms. In Proceedings of the 31st International Conference on Machine Learning (ICML14), pages 387–395, 2014.
 Silver et al. (2016) D. Silver, A. Huang, C. J. Maddison, A. Guez, L. Sifre, G. Van Den Driessche, J. Schrittwieser, I. Antonoglou, V. Panneershelvam, M. Lanctot, et al. Mastering the game of go with deep neural networks and tree search. Nature, 529(7587):484–489, 2016.
 Silver et al. (2017) D. Silver, J. Schrittwieser, K. Simonyan, I. Antonoglou, A. Huang, A. Guez, T. Hubert, L. Baker, M. Lai, A. Bolton, Y. Chen, T. Lillicrap, F. Hui, L. Sifre, G. van den Driessche, T. Graepel, and D. Hassabis. Mastering the game of go without human knowledge. Nature, 550(7676):354–359, 2017.
 Sutton et al. (2000) R. S. Sutton, D. A. McAllester, S. P. Singh, and Y. Mansour. Policy gradient methods for reinforcement learning with function approximation. In Advances in neural information processing systems, pages 1057–1063, 2000.
 Wang et al. (2016) Z. Wang, T. Schaul, M. Hessel, H. Hasselt, M. Lanctot, and N. Freitas. Dueling network architectures for deep reinforcement learning. In Proceedings of the 33rd International Conference on Machine Learning (ICML16), pages 1995–2003, 2016.
 Watkins and Dayan (1992) C. J. Watkins and P. Dayan. Qlearning. Machine learning, 8(34):279–292, 1992.
 Williams (1992) R. J. Williams. Simple statistical gradientfollowing algorithms for connectionist reinforcement learning. Machine learning, 8(34):229–256, 1992.