Parametrized Deep Q-Networks Learning: Reinforcement Learning with Discrete-Continuous Hybrid Action Space

by Jiechao Xiong, et al.

Most existing deep reinforcement learning (DRL) frameworks consider either a discrete action space or a continuous action space, but not both. Motivated by applications in computer games, we consider the scenario with a discrete-continuous hybrid action space. To handle such a hybrid action space, previous works either approximate it by discretization or relax it into a continuous set. In this paper, we propose a parametrized deep Q-network (P-DQN) framework for the hybrid action space without approximation or relaxation. Our algorithm combines the spirit of both DQN (which handles discrete action spaces) and DDPG (which handles continuous action spaces) by seamlessly integrating them. Empirical results on a simulation example, on scoring a goal in simulated RoboCup soccer, and on the solo mode of the game King of Glory (KOG) validate the efficiency and effectiveness of our method.



1 Introduction

In recent years, the field of deep reinforcement learning (DRL) has witnessed striking empirical achievements in complicated sequential decision-making problems once believed unsolvable. One active application area of DRL methods is designing artificial intelligence (AI) for games. The success of DRL in the game of Go (Silver et al., 2016) provides a promising methodology for game AI. Beyond Go, DRL has been widely used in other games such as Atari (Mnih et al., 2015), robot soccer (Hausknecht and Stone, 2016; Masson et al., 2016), and TORCS (Lillicrap et al., 2016) to achieve superhuman performance.

However, most existing DRL methods require the action space to be either finite and discrete (e.g., Go and Atari) or continuous (e.g. MuJoCo and Torcs). For example, the algorithms for discrete action space include deep Q-network (DQN) (Mnih et al., 2013), Double DQN (Hasselt et al., 2016), A3C (Mnih et al., 2016); the algorithms for continuous action space include deterministic policy gradients (DPG) (Silver et al., 2014) and its deep version DDPG (Lillicrap et al., 2016).

Motivated by applications in Real-Time Strategy (RTS) games, we consider the reinforcement learning problem with a discrete-continuous hybrid action space. Different from the completely discrete or continuous actions widely studied in the existing literature, in our setting the action is defined by the following hierarchical structure. We first choose a high-level action $k$ from a discrete set $\{1, \dots, K\}$ (which we denote by $[K]$ for short); upon choosing $k$, we further choose a low-level parameter $x_k \in \mathcal{X}_k$ which is associated with the $k$-th high-level action. Here $\mathcal{X}_k$ is a continuous set for all $k \in [K]$.¹ Therefore, we focus on the discrete-continuous hybrid action space

$$\mathcal{A} = \bigl\{ (k, x_k) : x_k \in \mathcal{X}_k \text{ for all } k \in [K] \bigr\}. \tag{1}$$

¹The low-level continuous parameter is optional, and different discrete actions may share common low-level continuous parameters; this does not affect any results or derivations in this paper.
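As a concrete (and purely illustrative) sketch, the hybrid space $\mathcal{A}$ can be encoded as a list of per-action parameter boxes; the specific bounds below are hypothetical examples, not values from the paper.

```python
import numpy as np

# Hypothetical encoding of A = {(k, x_k)}: each discrete action k has its own
# (possibly empty) continuous parameter box X_k. Bounds are illustrative only.
PARAM_BOUNDS = [
    (np.array([0.0]), np.array([360.0])),                 # k = 0: a direction in degrees
    (np.array([0.0, -180.0]), np.array([100.0, 180.0])),  # k = 1: (power, direction)
    (np.zeros(0), np.zeros(0)),                           # k = 2: parameter-free action
]

def sample_hybrid_action(rng):
    """Uniformly sample (k, x_k) -- a simple exploration distribution over A."""
    k = int(rng.integers(len(PARAM_BOUNDS)))
    low, high = PARAM_BOUNDS[k]
    x_k = rng.uniform(low, high) if low.size else np.zeros(0)
    return k, x_k
```

Such a uniform sampler is one natural choice for the exploration distribution used later in the paper's algorithms.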
To apply existing DRL approaches on this hybrid action space, two straightforward ideas are possible:

  • Approximate $\mathcal{A}$ by a finite discrete set. We could approximate each $\mathcal{X}_k$ by a discrete subset, which, however, might lose the natural structure of $\mathcal{X}_k$. Moreover, when $\mathcal{X}_k$ is a region in Euclidean space, establishing a good approximation usually requires a huge number of discrete actions.

  • Relax $\mathcal{A}$ into a continuous set. To apply existing DRL frameworks with continuous action spaces, Hausknecht and Stone (2016) define the following approximate space:

$$\widetilde{\mathcal{A}} = \bigl\{ (f_1, \dots, f_K, x_1, \dots, x_K) : f_k \in \mathbb{R},\ x_k \in \mathcal{X}_k \text{ for all } k \in [K] \bigr\}, \tag{2}$$

    where $f_{1:K}$ is used to select the discrete action, either deterministically (by picking $\arg\max_k f_k$) or randomly (with probability proportional to $f_k$). Compared with the original action space $\mathcal{A}$, $\widetilde{\mathcal{A}}$ might significantly increase the complexity of the action space.

In this paper, we propose a novel DRL framework, namely parametrized deep Q-network learning (P-DQN), which directly works on the discrete-continuous hybrid action space without approximation or relaxation. Our method can be viewed as an extension of the famous DQN algorithm to hybrid action spaces. Similar to deterministic policy gradient methods, to handle the continuous parameters within actions, we first define a deterministic function which maps the state and each discrete action to its corresponding continuous parameter. Then we define an action-value function which maps the state and finite hybrid actions to real values, where the continuous parameters are obtained from the deterministic function in the first step. With the merits of both DQN and DDPG, we expect our algorithm to find the optimal discrete action as well as avoid exhaustive search over continuous action parameters.

To evaluate the empirical performance, we apply our algorithm to several environments. Empirical study indicates that P-DQN is more efficient and robust than the method of Hausknecht and Stone (2016), which relaxes $\mathcal{A}$ into a continuous set and applies DDPG.

2 Background

In reinforcement learning, the environment is usually modeled by a Markov decision process (MDP) $\mathcal{M} = (\mathcal{S}, \mathcal{A}, p, p_0, r, \gamma)$, where $\mathcal{S}$ is the state space, $\mathcal{A}$ is the action space, $p(s_{t+1} \mid s_t, a_t)$ is the Markov transition probability distribution, $p_0(s_0)$ is the probability distribution of the initial state, $r(s, a)$ is the reward function, and $\gamma \in [0, 1)$ is the discount factor. An agent interacts with the MDP sequentially as follows. At the $t$-th step, suppose the MDP is at state $s_t$ and the agent selects an action $a_t \in \mathcal{A}$; then the agent observes an immediate reward $r(s_t, a_t)$ and the next state $s_{t+1} \sim p(\cdot \mid s_t, a_t)$. A stochastic policy $\pi$ maps each state to a probability distribution over $\mathcal{A}$; that is, $\pi(a \mid s)$ is defined as the probability of selecting action $a$ at state $s$. A deterministic policy $\mu \colon \mathcal{S} \to \mathcal{A}$ instead maps each state to a particular action in $\mathcal{A}$. Let $R_t = \sum_{i \ge t} \gamma^{\,i - t}\, r(s_i, a_i)$ be the cumulative discounted reward starting from time-step $t$. We define the state-value function and the action-value function of policy $\pi$ as $V^{\pi}(s) = \mathbb{E}[R_t \mid s_t = s;\, \pi]$ and $Q^{\pi}(s, a) = \mathbb{E}[R_t \mid s_t = s, a_t = a;\, \pi]$, respectively. Moreover, we define the optimal state- and action-value functions as $V^*(s) = \sup_{\pi} V^{\pi}(s)$ and $Q^*(s, a) = \sup_{\pi} Q^{\pi}(s, a)$, respectively, where the supremum is taken over all possible policies. The goal of the agent is to find a policy that maximizes the expected total discounted reward $J(\pi) = \mathbb{E}[R_0 \mid \pi]$, which can be achieved by estimating $Q^*$ and acting greedily with respect to it.
2.1 Reinforcement Learning Methods for Finite Action Space

Broadly speaking, reinforcement learning algorithms can be categorized into two classes: value-based methods and policy-based methods. Value-based methods first estimate $Q^*$ and then output the greedy policy with respect to that estimate, whereas policy-based methods directly optimize $J(\pi)$ as a functional of $\pi$.

The Q-learning algorithm (Watkins and Dayan, 1992) is based on the Bellman equation

$$Q^*(s, a) = \mathbb{E}_{s'}\Bigl[\, r(s, a) + \gamma \max_{a' \in \mathcal{A}} Q^*(s', a') \,\Big|\, s, a \Bigr], \tag{3}$$

which has $Q^*$ as its unique solution. During training, the $Q$-function is updated iteratively over the transition samples $(s_t, a_t, r_t, s_{t+1})$ in a Monte Carlo way. For a finite state space $\mathcal{S}$, the $Q$ values can be stored in a table. However, when $\mathcal{S}$ is too large to fit in computer memory, function approximation for $Q$ has to be applied. Deep Q-Networks (DQN) (Mnih et al., 2013, 2015) approximate $Q^*$ using a neural network $Q(s, a; w)$, where $w$ denotes the network weights. In the $t$-th iteration, DQN updates the weights using the gradient of the least-squares loss function

$$L_t(w) = \mathbb{E}\Bigl[\bigl( r_t + \gamma \max_{a' \in \mathcal{A}} Q(s_{t+1}, a'; w_t) - Q(s_t, a_t; w) \bigr)^2\Bigr]. \tag{4}$$
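The target and loss in (4) can be sketched in a few lines of numpy over a minibatch; this is a toy stand-in for the network outputs, not an implementation of the paper's training code.

```python
import numpy as np

def dqn_targets(rewards, next_q, dones, gamma=0.99):
    """Bellman targets y = r + gamma * max_a' Q(s', a'), zeroed at terminal states.
    next_q[i, a'] holds Q(s'_i, a') from the (frozen) previous-iteration weights."""
    return rewards + gamma * (1.0 - dones) * next_q.max(axis=1)

def dqn_loss(q_values, actions, targets):
    """Least-squares loss (1/2) * mean[(y - Q(s, a; w))^2] over the minibatch."""
    chosen = q_values[np.arange(len(actions)), actions]
    return 0.5 * np.mean((targets - chosen) ** 2)
```

In a real implementation the gradient of this loss with respect to `q_values` would be backpropagated through the network; here the arrays are plain data for illustration.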
A variety of extensions have been proposed to improve DQN, including Double DQN (Hasselt et al., 2016), dueling DQN (Wang et al., 2016), bootstrapped DQN (Osband et al., 2016), asynchronous DQN (Mnih et al., 2016), averaged DQN (Anschel et al., 2017), and prioritized experience replay (Schaul et al., 2016).

In addition to the value-based methods, the policy-based methods directly model the optimal policy. The objective of policy-based methods is to maximize the expected reward of a stochastic policy $\pi_\theta$ parametrized by $\theta$.

The policy gradient methods aim at finding a weight $\theta$ that maximizes $J(\pi_\theta)$ via gradient ascent. The stochastic policy gradient theorem (Sutton et al., 2000) states that

$$\nabla_\theta J(\pi_\theta) = \mathbb{E}_{s, a \sim \pi_\theta}\bigl[ \nabla_\theta \log \pi_\theta(a \mid s)\, Q^{\pi_\theta}(s, a) \bigr]. \tag{5}$$

The REINFORCE algorithm (Williams, 1992) updates $\theta$ using the Monte Carlo estimate $R_t \nabla_\theta \log \pi_\theta(a_t \mid s_t)$. Moreover, the actor-critic methods (Konda and Tsitsiklis, 2000) and A3C (Mnih et al., 2016) use a neural network $V(s; w)$ or $Q(s, a; w)$ to estimate the value function associated with policy $\pi_\theta$. This approach combines the value-based and policy-based perspectives, and was recently used to achieve superhuman performance in the game of Go (Silver et al., 2017).

2.2 Reinforcement Learning Methods for Continuous Action Space

When the action space $\mathcal{A}$ is continuous, value-based methods are no longer computationally tractable because of the maximum over the action space in (4), which in general cannot be computed efficiently. The reason is that the neural network $Q(s, a; w)$ is nonconvex when viewed as a function of $a$, so computing $\max_{a \in \mathcal{A}} Q(s, a; w)$ amounts to globally maximizing a nonconcave function, which is NP-hard in the worst case.

To address this issue, continuous Q-learning with normalized advantage functions (Gu et al., 2016) approximates the action-value function by a neural network of the form $Q(s, a; w) = V(s; w) + A(s, a; w)$, where the advantage term $A(s, a; w)$ is further parametrized as a quadratic function of $a$, so that the maximization over $a$ has an analytic solution.

Moreover, it is also possible to adapt policy-based methods to continuous action spaces by considering deterministic policies $\mu_\theta(s)$. Similar to (5), the deterministic policy gradient (DPG) theorem (Silver et al., 2014) states that

$$\nabla_\theta J(\mu_\theta) = \mathbb{E}_{s}\bigl[ \nabla_\theta \mu_\theta(s)\, \nabla_a Q^{\mu_\theta}(s, a) \big|_{a = \mu_\theta(s)} \bigr]. \tag{6}$$

Furthermore, this deterministic version of the policy gradient theorem can be viewed as the limit of (5) with the variance of the stochastic policy going to zero. Based on (6), the DPG algorithm (Silver et al., 2014) and the DDPG algorithm (Lillicrap et al., 2016) were proposed. A related line of work is policy optimization methods, which improve the policy gradient method using novel optimization techniques, including natural gradient descent (Kakade, 2002), trust region optimization (Schulman et al., 2015), proximal policy optimization (Schulman et al., 2017), mirror descent (Montgomery and Levine, 2016), and entropy regularization (O'Donoghue et al., 2017).
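On a toy problem where the critic and policy are available in closed form, the chain rule in (6) can be written out directly. The quadratic critic $Q(s, a) = -(a - s)^2$ and linear policy $\mu_\theta(s) = \theta s$ below are illustrative assumptions, not constructs from the paper.

```python
import numpy as np

def dpg_gradient(theta, states):
    """grad_theta J = E_s[ grad_theta mu(s) * grad_a Q(s, a)|_{a = mu(s)} ]
    for the toy choices Q(s, a) = -(a - s)^2 and mu(s) = theta * s."""
    a = theta * states               # a = mu_theta(s)
    dq_da = -2.0 * (a - states)      # grad_a Q(s, a) evaluated at a = mu(s)
    dmu_dtheta = states              # grad_theta mu_theta(s)
    return float(np.mean(dmu_dtheta * dq_da))
```

At $\theta = 1$ the policy already maximizes $Q$ at every state, so the gradient vanishes, as expected of a stationary point of $J$.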

2.3 Reinforcement Learning Methods for Hybrid Action Space

A related body of literature is the recent work on reinforcement learning with a structured action space, which contains finitely many actions, each parametrized by a continuous parameter.

To handle such parametrized actions, Hausknecht and Stone (2016) apply the DDPG algorithm on the relaxed action space (2) directly. A more reasonable approach is to update the discrete action and the continuous parameters separately with two different methods.

Masson et al. (2016) propose a learning framework that alternately updates the network weights for discrete actions using Q-learning (Sarsa) and for continuous parameters using policy search (eNAC). Similarly, Khamassi et al. (2017) use Q-learning for the discrete actions and policy gradient for the continuous parameters. Both methods need to assume a distribution over the continuous parameters, and both are on-policy.

3 Parametrized Deep Q-Networks (P-DQN)

This section introduces the proposed framework for applications with a hybrid discrete-continuous action space. We consider an MDP with the parametrized action space $\mathcal{A}$ defined in (1). For $a = (k, x_k) \in \mathcal{A}$, we denote the action-value function by $Q(s, a) = Q(s, k, x_k)$, where $s \in \mathcal{S}$, $k \in [K]$, and $x_k \in \mathcal{X}_k$. Let $k_t$ be the discrete action selected at time $t$ and let $x_{k_t}$ be the associated continuous parameter. Then the Bellman equation becomes

$$Q(s_t, k_t, x_{k_t}) = \mathbb{E}_{r_t, s_{t+1}}\Bigl[\, r_t + \gamma \max_{k \in [K]} \sup_{x_k \in \mathcal{X}_k} Q(s_{t+1}, k, x_k) \,\Big|\, s_t, a_t = (k_t, x_{k_t}) \Bigr]. \tag{7}$$

Inside the conditional expectation on the right-hand side of (7), we first solve $x_k^* = \arg\sup_{x_k \in \mathcal{X}_k} Q(s_{t+1}, k, x_k)$ for each $k \in [K]$, and then take the largest $Q(s_{t+1}, k, x_k^*)$. Taking the supremum over a continuous space is in general computationally intractable. However, the right-hand side of (7) can be evaluated efficiently provided $x_k^*$ is given.

To elaborate on this idea, first note that when the function $Q$ is fixed, for any $s \in \mathcal{S}$ and $k \in [K]$ we can view $\arg\sup_{x_k \in \mathcal{X}_k} Q(s, k, x_k)$ as a function $x_k^Q \colon \mathcal{S} \to \mathcal{X}_k$. Then we can rewrite the Bellman equation in (7) as

$$Q(s_t, k_t, x_{k_t}) = \mathbb{E}_{r_t, s_{t+1}}\Bigl[\, r_t + \gamma \max_{k \in [K]} Q\bigl(s_{t+1}, k, x_k^Q(s_{t+1})\bigr) \,\Big|\, s_t \Bigr].$$

Note that this new Bellman equation resembles the classical Bellman equation in (3) with the finite action set $[K]$. Similar to deep Q-networks, we use a deep neural network $Q(s, k, x_k; \omega)$ to approximate $Q$, where $\omega$ denotes the network weights. Moreover, for such a $Q(s, k, x_k; \omega)$, we approximate $x_k^Q$ with a deterministic policy network $x_k(\cdot\,; \theta) \colon \mathcal{S} \to \mathcal{X}_k$, where $\theta$ denotes the network weights of the policy network. That is, when $\omega$ is fixed, we want to find $\theta$ such that

$$Q\bigl(s, k, x_k(s; \theta); \omega\bigr) \approx \sup_{x_k \in \mathcal{X}_k} Q(s, k, x_k; \omega) \quad \text{for each } k \in [K]. \tag{8}$$
Then, similar to DQN, we can estimate $\omega$ by minimizing the mean-squared Bellman error via gradient descent. Specifically, in the $t$-th step, let $\omega_t$ and $\theta_t$ be the weights of the value network and the deterministic policy network, respectively. To incorporate multi-step algorithms, for a fixed $n \ge 1$ we define the $n$-step target $y_t$ by

$$y_t = \sum_{i=0}^{n-1} \gamma^i r_{t+i} + \gamma^n \max_{k \in [K]} Q\bigl(s_{t+n}, k, x_k(s_{t+n}; \theta_t); \omega_t\bigr). \tag{9}$$

We use the least-squares loss function for $\omega$, as in DQN:

$$\ell_t^Q(\omega) = \tfrac{1}{2}\bigl[ Q(s_t, k_t, x_{k_t}; \omega) - y_t \bigr]^2.$$

Moreover, since we aim to find $\theta$ that maximizes $Q(s, k, x_k(s; \theta); \omega)$ with $\omega$ fixed, we use the following loss function for $\theta$:

$$\ell_t^{\Theta}(\theta) = -\sum_{k=1}^{K} Q\bigl(s_t, k, x_k(s_t; \theta); \omega_t\bigr). \tag{10}$$

By (10) we update the weights using stochastic gradient methods. Ideally, we would minimize the loss function in (10) with $\omega$ fixed. From results in stochastic approximation (Kushner and Yin, 2006), we can approximately achieve this goal in an online fashion via a two-timescale update rule (Borkar, 1997). Specifically, we update $\theta$ with a stepsize $\beta_t$ that is asymptotically negligible compared with the stepsize $\alpha_t$ for $\omega$. In addition, for the validity of the stochastic approximation, we require the stepsizes to satisfy the Robbins-Monro condition (Robbins and Monro, 1951). We present the P-DQN algorithm with experience replay in Algorithm 1.
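The two objectives above can be checked numerically with plain arrays standing in for the network outputs, where `q_all[i, k]` plays the role of $Q(s_i, k, x_k(s_i; \theta); \omega)$. This is a sketch of the losses, not of the networks themselves.

```python
import numpy as np

def pdqn_losses(q_all, k_taken, y):
    """Value loss for omega: squared n-step Bellman error on the executed discrete
    action. Policy loss for theta: negated sum of Q over all K discrete actions,
    minimized while omega is held fixed (the two-timescale scheme in the text)."""
    q_executed = q_all[np.arange(len(k_taken)), k_taken]
    value_loss = 0.5 * np.mean((y - q_executed) ** 2)
    policy_loss = -np.mean(q_all.sum(axis=1))
    return value_loss, policy_loss
```

In training, the gradient of `value_loss` flows only into $\omega$, while the gradient of `policy_loss` flows only into $\theta$ through the parameters $x_k(s; \theta)$.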

  Input: stepsizes {α_t, β_t}, exploration parameter ε, minibatch size B, a probability distribution μ over the action space for exploration, and an empty replay memory D.
  Initialize network weights ω_1 and θ_1.
  for t = 1, 2, … do
     Compute action parameters x_k ← x_k(s_t; θ_t) for each k ∈ [K].
     Select action a_t = (k_t, x_{k_t}) according to the ε-greedy policy: with probability ε sample a_t from μ; otherwise set k_t = argmax_{k ∈ [K]} Q(s_t, k, x_k; ω_t) and a_t = (k_t, x_{k_t}).
     Take action a_t; observe the reward r_t and the next state s_{t+1}.
     Store the transition (s_t, a_t, r_t, s_{t+1}) into D.
     Sample B transitions {(s_b, a_b, r_b, s_{b+1})} randomly from D.
     Define the target y_b = r_b if s_{b+1} is terminal, and y_b = r_b + γ max_{k ∈ [K]} Q(s_{b+1}, k, x_k(s_{b+1}; θ_t); ω_t) otherwise.
     Use the sampled data to compute the stochastic gradients ∇_ω ℓ^Q(ω_t) and ∇_θ ℓ^Θ(θ_t).
     Update the weights by ω_{t+1} ← ω_t − α_t ∇_ω ℓ^Q(ω_t) and θ_{t+1} ← θ_t − β_t ∇_θ ℓ^Θ(θ_t).
  end for
Algorithm 1: Parametrized Deep Q-Network (P-DQN) with Experience Replay

Note that this algorithm requires a probability distribution $\mu$ defined on the action space for exploration. In practice, if each $\mathcal{X}_k$ is a compact set in Euclidean space (as in our case), $\mu$ can be defined as the uniform distribution over $\mathcal{A}$. In addition, as in the DDPG algorithm (Lillicrap et al., 2016), we can also add additive noise to the continuous part of the actions for exploration. Moreover, we use experience replay (Mnih et al., 2013) to reduce the dependencies among the samples; it can be replaced by more sample-efficient methods such as prioritized replay (Schaul et al., 2016).
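The action-selection rule of Algorithm 1 can be sketched as below; `q_values[k]` stands for $Q(s, k, x_k(s; \theta); \omega)$ and `params[k]` for $x_k(s; \theta)$, both assumed to be precomputed by the networks.

```python
import numpy as np

def epsilon_greedy_hybrid(q_values, params, epsilon, rng, sample_from_mu):
    """Epsilon-greedy over the hybrid space (sketch): with probability epsilon,
    draw (k, x_k) from the exploration distribution mu; otherwise act greedily,
    pairing the maximizing discrete action with its network-produced parameter."""
    if rng.random() < epsilon:
        return sample_from_mu(rng)
    k = int(np.argmax(q_values))
    return k, params[k]
```

Additive noise on `params[k]`, as in DDPG, can be layered on top of this rule for extra exploration of the continuous part.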

Moreover, we note that our P-DQN algorithm can easily incorporate asynchronous gradient descent to speed up the training process. Similar to the asynchronous $n$-step DQN (Mnih et al., 2016), we consider a centralized distributed training framework in which each process computes its local gradient and communicates with a global "parameter server". Specifically, each local process runs an independent game environment to generate transition trajectories and uses its own transitions to compute gradients with respect to $\omega$ and $\theta$. These local gradients are then aggregated across multiple processes to update the global parameters. Aggregating independent stochastic gradients decreases the variance of the gradient estimation, which yields better algorithmic stability. We present the asynchronous $n$-step P-DQN algorithm in Algorithm 2 in the Appendix. For simplicity, we only describe the algorithm for each local process, which fetches $\omega$ and $\theta$ from the parameter server and computes the gradients. The parameter server stores the global parameters $\omega$ and $\theta$ and updates them using the gradients sent from the local processes.

The key differences between the methods in Section 2.3 and P-DQN are as follows.

  • In Hausknecht and Stone (2016), the discrete action types are parametrized as continuous values $f_{1:K}$, and the discrete action actually executed is chosen via $\arg\max_k f_k$ or randomly with probability proportional to $f_k$. This trick relaxes the hybrid action space into a continuous action space, upon which the classical DDPG algorithm can be applied. In our framework, by contrast, the discrete action type is chosen directly by explicitly maximizing the action's $Q$ value, as illustrated in Figure 1.

  • Masson et al. (2016) and Khamassi et al. (2017) use on-policy update algorithms for the continuous parameters. The network in Hausknecht and Stone (2016) is also an on-policy action-value estimator of the current policy if the discrete action is chosen via $\arg\max_k f_k$. P-DQN, in contrast, is an off-policy algorithm.

  • Note that P-DQN can use human players' data, while it is hard to do so in Hausknecht and Stone (2016), because human data contain only the discrete action without the continuous values $f_{1:K}$.

(a) Network of P-DQN (b) Network of DDPG
Figure 1: Illustration of the networks of P-DQN and DDPG (Hausknecht and Stone, 2016). P-DQN selects the discrete action type by explicitly maximizing the $Q$ values, while in DDPG (Hausknecht and Stone, 2016) the discrete action is chosen via $\arg\max_k f_k$ or randomly with probability proportional to $f_k$, where $f_{1:K}$ can be seen as a continuous parametrization of the discrete action types. Note that more complex structures can be designed in the Q-network of P-DQN to encode the relation between $k$ and $x_k$. In the following experiments, we simply feed all the parameters into every $Q(s, k, \cdot)$, which means all the discrete actions share the whole set of continuous parameters.

3.1 The Asynchronous n-step P-DQN Algorithm

Similar to the asynchronous $n$-step DQN in Mnih et al. (2016), we can use an asynchronous $n$-step P-DQN algorithm to speed up the training process; we present it in Algorithm 2. Notice that when $n > 1$, $n$-step DQN or $n$-step P-DQN is no longer strictly an off-policy algorithm. However, the $n$-step bootstrap tactic can improve the convergence speed for delayed-reward or long-episode reinforcement learning problems.

  Input: exploration parameter ε, a probability distribution μ over the action space for exploration, the maximum length of multi-step returns t_max, and the maximum number of iterations T_max.
  Initialize global shared parameters ω and θ.
  Set the global shared counter T = 0.
  Initialize the local step counter t = 1.
  repeat
     Clear local gradients dω ← 0, dθ ← 0.
     Synchronize local parameters ω' ← ω and θ' ← θ from the parameter server.
     t_start ← t
     repeat
        Observe state s_t and compute x_k ← x_k(s_t; θ') for each k ∈ [K].
        Select action a_t = (k_t, x_{k_t}) according to the ε-greedy policy.
        Take action a_t; observe the reward r_t and the next state s_{t+1}.
        t ← t + 1, T ← T + 1
     until s_t is a terminal state or t − t_start ≥ t_max
     Set y = 0 if s_t is terminal, and y = max_{k ∈ [K]} Q(s_t, k, x_k(s_t; θ'); ω') otherwise.
     for i = t − 1, …, t_start do
        y ← r_i + γ y
        Accumulate gradients: dω ← dω + ∇_ω ℓ_i^Q(ω'), dθ ← dθ + ∇_θ ℓ_i^Θ(θ')
     end for
     Update the global ω and θ using dω and dθ with RMSProp (Hinton et al., 2012).
  until T > T_max
Algorithm 2: The Asynchronous n-step P-DQN Algorithm (local process)

4 Experiments

We validate the proposed P-DQN algorithm on 1) a simulation example, 2) scoring a goal in simulated RoboCup soccer, and 3) the solo mode of the game KOG.

To evaluate performance, we compare our algorithm with Hausknecht and Stone (2016) and with DQN under fair conditions in all three scenarios. Hausknecht and Stone (2016) appears to be the only off-policy method we are aware of that solves the hybrid-action-space problem with a deterministic policy, which can be estimated more efficiently than stochastic policies. DQN with a discrete-action approximation is also compared in the simulation example. In both DQN and P-DQN, we use a dueling layer in place of the last fully-connected layer to accelerate training.

4.1 A Simulation Example

Suppose there is a square plate, and the goal is to "pull" a unit point mass into a small target circle. In each unit of time, a unit force with a chosen constant direction can be applied to the point mass, or a soft "brake" can be used to immediately reduce the velocity of the point mass. The effect of the force follows Newtonian mechanics, and the plate is frictionless.

The state is represented as an 8-dimensional vector that includes the coordinates of the point mass and of the target circle center. The hybrid action space consists of the direction-parametrized "pull" action and the parameter-free "brake" action, and the reward encourages stopping inside the target circle. The episode begins with a random initial configuration and terminates if the point mass stops inside the circle, runs off the square plate, or the episode length exceeds 200.

To deal with the periodicity of the direction of movement, we represent each direction as $(\cos\theta, \sin\theta)$ and learn a normalized two-dimensional vector instead of an angle in degrees (in practice, we add a normalization layer at the end of the network to ensure the output lies on the unit circle). The following two experiments also use this transformation.
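The transformation above can be sketched as follows; the function names are illustrative, and the conversion back to an angle is only needed if the environment expects degrees.

```python
import numpy as np

def normalize_direction(v, eps=1e-8):
    """The final 'normalize layer' (sketch): project the raw 2-d network output
    onto the unit circle so it encodes (cos theta, sin theta), avoiding the
    wrap-around discontinuity of an angle in degrees."""
    v = np.asarray(v, dtype=float)
    return v / (np.linalg.norm(v) + eps)

def direction_to_degrees(v):
    """Recover an angle for the environment from the unit vector, if needed."""
    return float(np.degrees(np.arctan2(v[1], v[0])))
```

Because nearby directions map to nearby points on the unit circle, gradients behave smoothly across the 0/360-degree boundary.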

(a) Episode reward vs. iteration in training (b) Mean episode reward vs. iteration in test
Figure 2: Performance of P-DQN in the simulation example. (a) The learning curves of P-DQN in training. We smooth the original noisy curves (plotted in light colors) to their running averages (plotted in dark colors). (b) Mean episode reward over 100 trials in test; ε-greedy exploration is disabled during testing.
Figure 3: Comparison of three algorithms in the simulation example. Each algorithm independently trains 5 agents, and each dot (plotted in light colors) is an evaluation averaged over 100 trials. A smoothed curve (plotted in dark colors) is then fitted to the points of each agent. The proposed algorithm P-DQN converges quickly and stably to a better policy.

We compare the proposed P-DQN with the DDPG architecture using the same hidden layer sizes. For completeness, we also compare with DQN on a discretized action space with a finite set of "pull" directions plus "brake". We independently train 5 models for each method and evaluate performance during training by averaging over 100 trials. Figures 2 and 4 show the learning curves of P-DQN. Figure 3 shows the evaluated performance in terms of mean reward, mean goal percentage, and mean episode length. P-DQN converges much faster and more stably than its predecessor in our setting. DQN converges quickly but to a sub-optimal solution, and suffers from high variance because of the discretization of the "pull" direction. A demonstration of the learned P-DQN policy is available online.

4.2 HFO

The Half Field Offense (HFO) domain is an abstraction of the full RoboCup 2D game. We use the same experimental settings as Hausknecht and Stone (2016), scoring goals without a goalie, so we only briefly summarize the settings here and refer the reader to Hausknecht and Stone (2016) for details.

The state in the HFO task is a vector of 58 continuously-valued features derived from the Helios-Agent2D (Akiyama, 2010) world model. It provides the relative positions of several important objects such as the ball, the goal, and other landmarks; a full list of state features is given in Hausknecht and Stone (2016).

The full action space for HFO is {Dash(power, direction), Turn(direction), Kick(power, direction)}, where the directions are parametrized in degrees and the power is bounded. Note that moving forward is faster than moving sideways or backwards, so turning to the right direction before moving is crucial for scoring quickly.

We also use the same hand-crafted dense reward: at each step, the reward reflects the decrease in the distance between the agent and the ball and the decrease in the distance between the ball and the center of the goal; an additional bonus is given the first time the agent is close enough to kick the ball, and a final reward is given for a successful goal.

(a) Episode length in training (b) Episode reward sum in training
Figure 4: The learning curves of P-DQN in the HFO example. Different training workers are plotted in different colors. We further smooth the original noisy curves (plotted in light colors) to their running averages (plotted in dark colors). In the first 250k iterations, the agent learns to approach and kick the ball; the mean episode length increases because an episode is set to end if the ball is not kicked within 100 frames. After 250k iterations, the agent learns to score as quickly as possible, driven by the discount factor.
Method  | Scoring Percent | Avg. Steps to Goal || Method | Scoring Percent | Avg. Steps to Goal
Helios' | .962            | 72.0               || P-DQN  | .997            | 78.1
SARSA   | .81             | 70.7               || P-DQN  | .997            | 78.1
DDPG    | 1               | 108.0              || P-DQN  | .996            | 78.1
DDPG    | .99             | 107.1              || P-DQN  | .994            | 81.5
DDPG    | .98             | 104.8              || P-DQN  | .992            | 78.7
DDPG    | .96             | 112.3              || P-DQN  | .991            | 79.9
DDPG    | .94             | 119.1              || P-DQN  | .985            | 82.2
DDPG    | .84             | 113.2              || P-DQN  | .984            | 87.9
DDPG    | .80             | 118.2              || P-DQN  | .979            | 78.5
Table 1: Evaluation performance comparison of different methods. The results in the left columns are borrowed from Hausknecht and Stone (2016); the performance of each P-DQN agent is evaluated over 1000 trials.

To accelerate training, we use the asynchronous version of Algorithm 1 with 24 workers. Figure 4 shows the learning curve of P-DQN in the HFO scenario.

Additionally, we independently trained another 8 P-DQN agents and compared their performance with the baseline results in Hausknecht and Stone (2016); the results are shown in Table 1. P-DQN scores more accurately and faster than DDPG, with more stable performance. Training a P-DQN agent takes about 1 hour on 2 Intel Xeon E5-2670 v3 CPUs; in comparison, it takes three days on an NVIDIA Titan X GPU to train a DDPG agent in Hausknecht and Stone (2016). A performance video of the P-DQN agent is available online.

4.3 Solo mode of King of Glory

The game King of Glory is the most popular mobile MOBA game in China with more than 80 million daily active players and 200 million monthly active players, as reported in July 2017. Each player controls one hero, and the goal is to destroy the base of the opposing team. In our experiments, we focus on the one-versus-one mode, which is called solo, with both sides being the hero Lu Ban, a hunter type hero with a large attack range. We play against the internal AI shipped with the game.

In our experiment, the state of the game is represented by a 179-dimensional feature vector manually constructed from the output of the game engine. These features consist of two parts: the first part is the basic attributes of the units, and the second is the relative positions of other units with respect to the hero controlled by the player, together with the attacking relations between units. We note that these features are extracted directly from the game engine without sophisticated feature engineering, and we conjecture that the overall performance could be improved with a more carefully engineered set of features.

We simplify the actions of a hero into six discrete action types: Move, Attack, UseSkill1, UseSkill2, UseSkill3, and Retreat. Some of the actions have additional continuous parameters to specify the precise behavior; for example, when the action type is Move, the direction of movement is given by a continuous angle parameter. Recall that each hero's skills are unique. For Lu Ban, the first skill throws a grenade at a specified location, the second launches a missile in a particular direction, and the last calls an airship to fly in a specified direction. A complete list of actions and the associated parameters is given in Table 2.

In KOG, the six discrete actions are not always usable, due to skills not yet leveled up, lack of Magic Points (MP), or skill cooldowns (CD). To deal with this, we replace the max over all discrete actions with a max over the currently usable actions, both when selecting the action to perform and when calculating the multi-step target as in Equation (9).
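This availability trick amounts to a masked argmax over the discrete actions, sketched below with a hypothetical boolean usability mask supplied by the environment.

```python
import numpy as np

def masked_greedy_action(q_values, usable):
    """Take the max over the currently usable discrete actions only (sketch):
    actions unavailable due to level, MP, or cooldown are masked to -inf so they
    can never be selected, either when acting or when computing the n-step target."""
    masked = np.where(usable, q_values, -np.inf)
    return int(np.argmax(masked))
```

The same mask would be applied inside the target of Equation (9), so that the bootstrap value never comes from an action the agent could not have taken.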

ActionType | Parameter | Description
Move       | direction | Move in the given direction
Attack     | –         | Attack the default target
UseSkill1  | position  | Use skill 1 at the given position
UseSkill2  | direction | Use skill 2 in the given direction
UseSkill3  | direction | Use skill 3 in the given direction
Retreat    | –         | Retreat back to our base
Table 2: Action parameters in KOG

4.4 Reward for KoG

To encourage winning the game, we adopt reward shaping, where the immediate reward takes into account gold earned, hero HP, kills/deaths, etc. Specifically, we define a variety of statistics as follows. (In the sequel, subscripts distinguish the attributes of our side from those of the opponent.)

  • Gold difference. This statistic measures the difference in gold, which is gained from killing heroes and soldiers and from destroying the towers of the opposing team. Gold can be used to buy weapons and armor, which enhance the offensive and defensive attributes of the hero. Using this value as part of the reward encourages the hero to gain more gold.

  • Health Point difference. This statistic measures the difference between the Health Points of the two competing heroes. A hero with more Health Points can absorb more damage, while a hero with fewer Health Points is more likely to be killed. Using this value as part of the reward encourages the hero to avoid attacks and survive longer.

  • Kill/Death . This statistic measures the historical performance of the two heroes. If a hero is killed multiple times, it is usually considered more likely to lose the game. Using this value as the reward can encourage the hero to kill the opponent and avoid death.

  • Tower/Base HP difference. These two statistics measure the health difference of the towers and bases of the two teams. Incorporating them into the reward encourages our hero to attack the towers of the opposing team and defend its own.

  • Tower destroyed. This counts the number of destroyed towers, rewarding the hero when it successfully destroys the opponent's towers.

  • Winning game. This value indicates whether the game is won or lost.

  • Moving-forward reward. A value based on the coordinate of our hero, used as part of the reward to guide our hero to move forward and engage actively on the battlefield.

The overall reward is calculated as a weighted sum of the time-differenced statistics defined above. The coefficients are set roughly inversely proportional to the scale of each statistic. We note that our algorithm is not very sensitive to changes of these coefficients within a reasonable range.
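The weighted sum of time-differenced statistics can be sketched generically; the statistic names and weights below are placeholders, since the paper does not reproduce its exact coefficients.

```python
def shaped_reward(stats_now, stats_prev, weights):
    """Reward shaping (sketch): a weighted sum of the per-step changes in the
    statistics listed above. Keys and weights here are hypothetical examples."""
    return sum(w * (stats_now[name] - stats_prev[name])
               for name, w in weights.items())
```

Setting each weight roughly inversely proportional to the scale of its statistic, as the text describes, keeps any single term from dominating the shaped reward.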

We use Algorithm 2 with 48 parallel workers and frame skipping. The training and validation performance is plotted in Figure 5.

(a1) Episode length in training (b1) Episode length in training
(a2) Episode reward sum in training (b2) Episode reward sum in training
(a3) Episode reward sum in validation (b3) Episode reward sum in validation
Figure 5: Comparison of P-DQN and DDPG for solo games with the same hero Lu Ban. The learning curves for different training workers are plotted in different colors. We further smooth the original noisy curves (plotted in light colors) to their running average (plotted in dark colors). Usually a positive reward sum indicates a winning game, and vice versa. (a) Performance of P-DQN. (b) Performance of DDPG

From the experimental results in Figure 5, we can see that P-DQN learns the value network and the policy network much faster than Hausknecht and Stone (2016). In (a1), we see that the average length of games increases at first, reaches its peak when the two players' strengths are close, and decreases once our player can easily defeat the opponent. In addition, in (a2) and (a3), we see that the total reward in an episode increases consistently in both training and test settings.

5 Conclusion

Most previous deep reinforcement learning algorithms work with either a discrete or a continuous action space. In this work, we consider the scenario with a discrete-continuous hybrid action space. In contrast to existing approaches, which approximate the hybrid space by a discrete set or relax it into a continuous set, we propose the parametrized deep Q-network (P-DQN), which extends the classical DQN with a deterministic policy for the continuous part of the actions. Empirical experiments against several baselines demonstrate the efficiency and effectiveness of P-DQN.

Appendix A Appendix

a.1 More information on King of Glory

The game King of Glory is a MOBA game, a special form of RTS game in which the players are divided into two opposing teams fighting against each other. Each team has a base located in either the bottom-left or the top-right corner, guarded by three towers on each of the three lanes. The towers attack enemies within their attack range. Each player controls one hero, a powerful unit that is able to move, kill, perform skills, and purchase equipment. The goal of the heroes is to destroy the base of the opposing team. In addition, for both teams, computer-controlled units are spawned periodically and march towards the opposing base along all three lanes. These units can attack enemies but cannot perform skills or purchase equipment. An illustration of the map is in Figure 6-(a), where the blue and red circles on each lane are the towers.

During game play, the heroes advance their levels and obtain gold by killing units and destroying towers. With gold, the heroes are able to purchase equipment such as weapons and armor to enhance their power. In addition, by advancing to a new level, a hero is able to improve its unique skills. When a hero is killed by the enemy, it must wait for some time to be reborn.

In this game, each team contains one, three, or five players. The five-versus-five mode is the most complicated, requiring strategic collaboration among the five players. In contrast, the one-versus-one mode, called solo, depends only on the player's control of a single hero. In a solo game, only the middle lane is active; the two players both move along the middle lane to fight each other. The map and a screenshot of a solo game are given in Figure 6-(b) and (c), respectively. In our experiments, we focus on the solo mode. We emphasize that a typical solo game lasts about 10 to 20 minutes, during which each player must make instantaneous decisions. Moreover, the players have to make different types of actions, including attacking, moving, and purchasing. Thus, as a reinforcement learning problem, it has four main difficulties: first, the state space has huge capacity; second, since there are various kinds of actions, the action space is complicated; third, the reward function is not well defined; and fourth, heuristic search algorithms are not feasible since the game is in real time. Therefore, although we consider the simplest mode of King of Glory, it is still a challenging game for artificial intelligence.

Figure 6: (a) An illustration of the map of a MOBA game, where three lanes connect the two bases, with three towers on each lane for each side. (b) The map (battle field) of a solo game of King of Glory, where only the middle lane is active. (c) A screenshot of a solo game of King of Glory, where the unit under a blue bar is the hero controlled by our algorithm and the rest of the units are computer-controlled.

a.2 Parameter setting

Simulation: In the simulation example, we use Algorithm 1 with a replay memory. The policy (parameter) network has hidden layers of 64-32 nodes, with the ReLU activation function, and the Q-network has hidden layers of 64-32-32 nodes. A uniform sampling distribution is used in the ε-greedy exploration; ε is annealed over the first iterations and then stays constant. The learning rate is annealed over training as well.
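The annealing schedules above can be sketched as a linear interpolation that stays constant after a horizon; the start/end values and horizon below are placeholders, not the values actually used in the experiments.

```python
def linear_anneal(start, end, horizon, t):
    """Linearly interpolate from `start` at t=0 to `end` at t=horizon,
    then stay constant at `end` afterwards."""
    frac = min(t / horizon, 1.0)
    return start + frac * (end - start)

# Placeholder schedule: epsilon from 1.0 down to 0.1 over 10,000 iterations.
eps = [linear_anneal(1.0, 0.1, 10_000, t) for t in (0, 5_000, 10_000, 20_000)]
# eps starts at 1.0, reaches 0.55 halfway, and stays at 0.1 after the horizon.
```

The same schedule shape applies to the learning rate, with different start and end values.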

HFO: In the HFO example, we use a replay memory for each worker; the policy (parameter) network has hidden layers of 256-128-64 nodes, and the Q-network has hidden layers of 256-128-64-64 nodes. A uniform sampling distribution is used in the ε-greedy exploration; ε is annealed over the first iterations and then stays constant, and the learning rate is annealed over training as well. Additionally, Hausknecht and Stone (2016) suggest using Inverting Gradients to keep the bounded continuous parameters within their value range. Instead of this complicated technique, we simply add a squared loss penalty on the out-of-range part.
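The out-of-range penalty can be sketched as below. This is a pure-Python illustration; in practice the penalty would be computed on the network output inside the autodiff framework, so that its gradient pushes the continuous parameters back toward the valid range.

```python
def out_of_range_penalty(params, low, high):
    """Squared penalty on the part of each continuous parameter that
    falls outside [low, high]; zero inside the valid range."""
    pen = 0.0
    for x in params:
        if x < low:
            pen += (low - x) ** 2
        elif x > high:
            pen += (x - high) ** 2
    return pen

# -1.5 exceeds the lower bound by 0.5, and 2.0 exceeds the upper bound by 1.0.
p = out_of_range_penalty([-1.5, 0.0, 2.0], low=-1.0, high=1.0)
# p == 0.5**2 + 1.0**2 == 1.25
```

Because the penalty is zero inside the range and grows quadratically outside, it leaves in-range parameters untouched while smoothly discouraging violations, which is the property the Inverting Gradients technique also provides.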

KOG: The policy (parameter) networks are both of 256-128-64 nodes in each hidden layer, and the Q-network is of 256-128-64-64 nodes. To reduce the long episode length, we set the frame-skipping parameter to 2; that is, we take an action every 3 frames, or equivalently every 0.2 seconds. Furthermore, we set the lookahead horizon in Algorithm 2 to 4 seconds to alleviate the delayed reward. To encourage exploration, we use ε-greedy sampling in training. Specifically, the first five action types are each sampled with a fixed small probability, and the action “Retreat” with a smaller one. Moreover, if the sampled action is infeasible, we execute the greedy policy over the feasible actions, so the effective exploration rate is less than ε. The learning rate is fixed at 0.001 throughout training.
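The masked ε-greedy sampling with a greedy fallback for infeasible actions can be sketched as follows. The per-action exploration probabilities, Q-values, and feasibility mask below are placeholders, not the experiment's actual values.

```python
import random

def select_action(q_values, feasible, explore_probs):
    """epsilon-greedy with per-action exploration probabilities. If the
    sampled exploratory action is infeasible, fall back to the greedy
    choice among feasible actions, so the effective exploration rate is
    at most sum(explore_probs)."""
    greedy = max((a for a in range(len(q_values)) if feasible[a]),
                 key=lambda a: q_values[a])
    r, cum = random.random(), 0.0
    for a, p in enumerate(explore_probs):
        cum += p
        if r < cum:
            return a if feasible[a] else greedy
    return greedy  # no exploration triggered: act greedily

# Placeholder probabilities: equal mass on the first five action types,
# a smaller mass on "Retreat" (index 5).
probs = [0.04] * 5 + [0.01]
q = [1.0, 2.0, 3.0, 4.0, 5.0, 0.5]
feasible = [True, True, True, True, False, True]
a = select_action(q, feasible, probs)  # index 4 is never returned
```

Setting all exploration probabilities to zero recovers the plain greedy policy over feasible actions.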


  • Akiyama (2010) H. Akiyama. Agent2d base code. 2010.
  • Anschel et al. (2017) O. Anschel, N. Baram, and N. Shimkin. Averaged-DQN: Variance reduction and stabilization for deep reinforcement learning. In Proceedings of the 34th International Conference on Machine Learning (ICML-17), pages 176–185, 2017.
  • Borkar (1997) V. S. Borkar. Stochastic approximation with two time scales. Systems & Control Letters, 29(5):291–294, 1997.
  • Gu et al. (2016) S. Gu, T. Lillicrap, I. Sutskever, and S. Levine. Continuous deep Q-learning with model-based acceleration. In Proceedings of the 33rd International Conference on Machine Learning (ICML-16), pages 2829–2838, 2016.
  • Hasselt et al. (2016) H. v. Hasselt, A. Guez, and D. Silver. Deep reinforcement learning with double Q-learning. In Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence, pages 2094–2100. AAAI Press, 2016.
  • Hausknecht and Stone (2016) M. Hausknecht and P. Stone. Deep reinforcement learning in parameterized action space. In Proceedings of the International Conference on Learning Representations (ICLR), 2016.
  • Hinton et al. (2012) G. Hinton, N. Srivastava, and K. Swersky. Neural networks for machine learning-lecture 6a-overview of mini-batch gradient descent, 2012.
  • Kakade (2002) S. M. Kakade. A natural policy gradient. In Advances in neural information processing systems, pages 1531–1538, 2002.
  • Khamassi et al. (2017) M. Khamassi, G. Velentzas, T. Tsitsimis, and C. Tzafestas. Active exploration and parameterized reinforcement learning applied to a simulated human-robot interaction task. In Robotic Computing (IRC), IEEE International Conference on, pages 28–35. IEEE, 2017.
  • Konda and Tsitsiklis (2000) V. R. Konda and J. N. Tsitsiklis. Actor-critic algorithms. In Advances in neural information processing systems, pages 1008–1014, 2000.
  • Kushner and Yin (2006) H. Kushner and G. Yin. Stochastic Approximation and Recursive Algorithms and Applications. Stochastic Modelling and Applied Probability. Springer New York, 2006. ISBN 9780387217697.
  • Lillicrap et al. (2016) T. P. Lillicrap, J. J. Hunt, A. Pritzel, N. Heess, T. Erez, Y. Tassa, D. Silver, and D. Wierstra. Continuous control with deep reinforcement learning. In Proceedings of the International Conference on Learning Representations (ICLR), 2016.
  • Masson et al. (2016) W. Masson, P. Ranchod, and G. Konidaris. Reinforcement learning with parameterized actions. In AAAI, pages 1934–1940, 2016.
  • Mnih et al. (2013) V. Mnih, K. Kavukcuoglu, D. Silver, A. Graves, I. Antonoglou, D. Wierstra, and M. Riedmiller. Playing Atari with deep reinforcement learning. arXiv preprint arXiv:1312.5602, 2013.
  • Mnih et al. (2015) V. Mnih, K. Kavukcuoglu, D. Silver, A. A. Rusu, J. Veness, M. G. Bellemare, A. Graves, M. Riedmiller, A. K. Fidjeland, G. Ostrovski, et al. Human-level control through deep reinforcement learning. Nature, 518(7540):529–533, 2015.
  • Mnih et al. (2016) V. Mnih, A. P. Badia, M. Mirza, A. Graves, T. Lillicrap, T. Harley, D. Silver, and K. Kavukcuoglu. Asynchronous methods for deep reinforcement learning. In Proceedings of the 33rd International Conference on Machine Learning (ICML-16), pages 1928–1937, 2016.
  • Montgomery and Levine (2016) W. Montgomery and S. Levine. Guided policy search as approximate mirror descent. arXiv preprint arXiv:1607.04614, 2016.
  • O’Donoghue et al. (2017) B. O’Donoghue, R. Munos, K. Kavukcuoglu, and V. Mnih. PGQ: Combining policy gradient and Q-learning. In Proceedings of the International Conference on Learning Representations (ICLR), 2017.
  • Osband et al. (2016) I. Osband, C. Blundell, A. Pritzel, and B. Van Roy. Deep exploration via bootstrapped dqn. In Advances in Neural Information Processing Systems, pages 4026–4034, 2016.
  • Robbins and Monro (1951) H. Robbins and S. Monro. A stochastic approximation method. The annals of mathematical statistics, pages 400–407, 1951.
  • Schaul et al. (2016) T. Schaul, J. Quan, I. Antonoglou, and D. Silver. Prioritized experience replay. In Proceedings of the International Conference on Learning Representations (ICLR), 2016.
  • Schulman et al. (2015) J. Schulman, S. Levine, P. Abbeel, M. Jordan, and P. Moritz. Trust region policy optimization. In Proceedings of the 32nd International Conference on Machine Learning (ICML-15), pages 1889–1897, 2015.
  • Schulman et al. (2017) J. Schulman, F. Wolski, P. Dhariwal, A. Radford, and O. Klimov. Proximal policy optimization algorithms. arXiv preprint arXiv:1707.06347, 2017.
  • Silver et al. (2014) D. Silver, G. Lever, N. Heess, T. Degris, D. Wierstra, and M. Riedmiller. Deterministic policy gradient algorithms. In Proceedings of the 31st International Conference on Machine Learning (ICML-14), pages 387–395, 2014.
  • Silver et al. (2016) D. Silver, A. Huang, C. J. Maddison, A. Guez, L. Sifre, G. Van Den Driessche, J. Schrittwieser, I. Antonoglou, V. Panneershelvam, M. Lanctot, et al. Mastering the game of go with deep neural networks and tree search. Nature, 529(7587):484–489, 2016.
  • Silver et al. (2017) D. Silver, J. Schrittwieser, K. Simonyan, I. Antonoglou, A. Huang, A. Guez, T. Hubert, L. Baker, M. Lai, A. Bolton, Y. Chen, T. Lillicrap, F. Hui, L. Sifre, G. van den Driessche, T. Graepel, and D. Hassabis. Mastering the game of go without human knowledge. Nature, 550(7676):354–359, 2017.
  • Sutton et al. (2000) R. S. Sutton, D. A. McAllester, S. P. Singh, and Y. Mansour. Policy gradient methods for reinforcement learning with function approximation. In Advances in neural information processing systems, pages 1057–1063, 2000.
  • Wang et al. (2016) Z. Wang, T. Schaul, M. Hessel, H. Hasselt, M. Lanctot, and N. Freitas. Dueling network architectures for deep reinforcement learning. In Proceedings of the 33rd International Conference on Machine Learning (ICML-16), pages 1995–2003, 2016.
  • Watkins and Dayan (1992) C. J. Watkins and P. Dayan. Q-learning. Machine learning, 8(3-4):279–292, 1992.
  • Williams (1992) R. J. Williams. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine learning, 8(3-4):229–256, 1992.