Enhanced Rolling Horizon Evolution Algorithm with Opponent Model Learning: Results for the Fighting Game AI Competition

03/31/2020 · by Zhentao Tang et al. · Queen Mary University of London

The Fighting Game AI Competition (FTGAIC) provides a challenging benchmark for 2-player video game AI. The challenge arises from the large action space, diverse styles of characters and abilities, and the real-time nature of the game. In this paper, we propose a novel algorithm that combines the Rolling Horizon Evolution Algorithm (RHEA) with opponent model learning. The approach is readily applicable to any 2-player video game. In contrast to conventional RHEA, an opponent model is introduced and optimized, based on historical observations of the opponent, by supervised learning with a cross-entropy loss and by reinforcement learning with policy gradient and Q-learning respectively. The model is learned during live gameplay. With the learned opponent model, the extended RHEA is able to make more realistic plans based on what the opponent is likely to do, which tends to lead to better results. We compared our approach directly with the bots from the 2018 FTGAIC and found that our method significantly outperforms all of them, for all three characters. Furthermore, our proposed bot with the policy-gradient-based opponent model was the only one among the top five bots in the 2019 competition not to use Monte-Carlo Tree Search (MCTS); it achieved second place while using much less domain knowledge than the winner.




I Introduction

Video games can model a range of real-world environments without much burden or unpredictable disruption, making them ideal testbeds for algorithms that might be slow or dangerous to run in reality. Two-player zero-sum games have gained more and more attention from the game Artificial Intelligence (AI) community, and many algorithms and frameworks have been proposed to address two-player game problems. Monte-Carlo Tree Search (MCTS) is one of the most famous and has been widely used in turn-based games such as Go [21, 23], Chess [22], and Poker [6]. MCTS-based approaches have reached or even surpassed top human players in many turn-based games in recent years, especially when combined with Deep Reinforcement Learning algorithms. Nevertheless, MCTS-based algorithms require a large number of iterations to select an effective planning path, and their computational overhead has limited their application to real-time video games, where decisions are needed within a few tens of milliseconds.

In recent years, another algorithm, the Rolling Horizon Evolution Algorithm (RHEA), has successfully solved many real-time control tasks [15] and video games [16, 17]. RHEA has closed the gap to MCTS and even outperforms it in some games [2]. In contrast to MCTS, RHEA uses a rolling horizon evolution technique to search for optimal action sequences. Compared to MCTS, RHEA has a lower computational overhead and a better memory of its preferred course of action, which can make it better suited to real-time video games.

Fig. 1: Screenshot of FightingICE.

The FightingICE game platform is shown in Fig. 1, and was developed by the ICE Lab of Ritsumeikan University. It has been used as the platform for the Fighting Game AI Competition (FTGAIC) series since 2015, with the platform undergoing a number of enhancements over the years to provide a faster and more robust forward model, to overcome various exploits, and to provide a greater challenge.

Each player chooses a character and takes actions, such as walk, jump, crouch, punch, kick, and guard, to fight. The goal is to beat down the opponent while avoiding being hit. The main challenges of the fighting game are its real-time demands (a quick response must be made within a short moment), incomplete information (simultaneous moves, without knowledge of the opponent's instant response from current observations), and a complex and changeable state-action space (each character has its own action space). In this work, FightingICE [11] is chosen as our test platform; it notably provides a simulator that serves as the forward model for evolutionary planning. This simulator can be used as the model planner and provides a way to apply search-based algorithms in the game. However, at each frame there can be at most 56 actions for each character, and the search must be completed within 16.67 ms.

In this paper, we first apply the Rolling Horizon Evolution Algorithm to design a fighting AI. It optimizes the candidate action sequences through an evolutionary process over a finite number of iterations. Experimental results show that RHEA matches MCTS performance and even outperforms some MCTS-based bots. However, RHEA neglects the opponent's action selection, which limits the competitiveness of the optimized results. To deal with that, a variety of opponent models are introduced into RHEA, which we call RHEAOM, to represent opponent action selection. A neural network is designed to infer the opponent's next action according to its current state, and the network is optimized based on new observations after each round. According to the experimental results, the RHEAOM bot shows promising adaptability. The learned model is adapted after each round of the game, so it provides no advantage in the first round, but usually leads to a significant improvement in subsequent rounds.

The rest of this paper is organized as follows. Section II briefly reviews related work of Fighting AI games. Section III presents the main algorithm and model proposed in this paper. Section IV describes the experimental setup and gives the results. Finally, Section V draws our conclusion and discusses future work.

II Related Work

Script-based methods are widely used in game AI design and depend completely on human design and expert experience [31, 14]. For FightingICE, [18] adopts the UCB algorithm to select among rule-based controllers at a given time, and [13] designs a real-time dynamic scripting AI, which won the 2015 competition. Nevertheless, script-based agents are constrained to a limited set of cases and are easily exploited by opponents.

MCTS-based agents have dominated FTGAIC since 2016. The FightingICE platform provides a built-in simulator, which makes MCTS applicable to a real-time fighting game [33]. To take account of opponent behavior, [9] incorporates a manually specified action table for opponent strategy modeling. But this method could not defeat the 2016 FTGAIC winner, because it is constrained by a small action table and fails to consider more complicated cases.

Deep reinforcement learning (DRL) has demonstrated impressive results in many real-time video games [34, 26, 19, 27, 20, 28], such as Atari, Go, StarCraft, Dota2, and VizDoom. [32] uses deep Q-learning to show its potential for a two-player real-time fighting game. [25] applies Hybrid Reward Architecture (HRA) based DRL to a fighting game AI; HRA decomposes the reward function into multiple components and learns their value functions separately. Though HRA-based DRL has shown promising performance, it was still defeated by the MCTS-based AI GigaThunder, the champion of the 2017 FTGAIC.

Opponent modeling approaches are mainly categorized as implicit or explicit. The implicit way is to maximize the agent's own expected reward without having to estimate the opponent's behavior [5], while the explicit way directly predicts the strategy of the opponent [3]. Compared with the implicit way, the explicit way is more efficient in training, more explainable in inference, and easier to combine with other existing algorithms. For these reasons we use an explicit opponent model in this work.

III Main Algorithm and Model

In this section, we propose a new algorithm that combines the rolling horizon evolution algorithm with an opponent learning model to design our real-time fighting game agent.

III-A Rolling Horizon Evolution Algorithm

RHEA is an optimization process that evolves action sequences through a forward model. After the optimization process, RHEA selects the first action of the sequence with the best fitness to perform in the task [15, 1]. The flowchart of RHEA is shown in Fig. 2.

Fig. 2: Flow diagram of RHEA.

The population consists of multiple individuals that represent different action sequences, and each action in a sequence is viewed as a gene. A certain number of individuals are selected according to their fitness. These individuals then undergo crossover, with a probability of mutation, to generate potentially more powerful offspring. Afterwards, individuals are rolled out as action sequences, which are fed in order into the forward model to infer future states. The future states are evaluated by the score function to obtain new fitness values. The above optimization process is repeated until the time budget is exhausted.
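The selection, crossover, and mutation step described above can be sketched as follows. This is a minimal illustration, not the competition code: the action IDs, the `fitness_fn` argument, and the constant values are placeholders (the constants happen to follow Table I, but any settings would do).

```python
import random

# Illustrative sketch of one RHEA generation: elite selection,
# uniform crossover, and single-gene mutation.
ACTIONS = list(range(56))   # FightingICE allows at most 56 actions
SEQ_LEN = 4                 # length of each action sequence
POP_SIZE = 7                # individuals per generation
N_ELITES = 1                # elites preserved unchanged
MUTATION_PROB = 0.85        # chance of mutating one gene of a child

def evolve_one_generation(population, fitness_fn, rng=random):
    """Return the next generation, ranked parents -> elites + children."""
    ranked = sorted(population, key=fitness_fn, reverse=True)
    elites, rest = ranked[:N_ELITES], ranked[N_ELITES:]
    children = []
    for _ in range(POP_SIZE - N_ELITES):
        p1, p2 = rng.choice(elites), rng.choice(rest)
        # Uniform crossover: each gene is taken from either parent.
        child = [rng.choice(pair) for pair in zip(p1, p2)]
        # With some probability, mutate one randomly chosen gene.
        if rng.random() < MUTATION_PROB:
            child[rng.randrange(SEQ_LEN)] = rng.choice(ACTIONS)
        children.append(child)
    return elites + children
```

In the real agent this loop would be repeated until the per-frame time budget runs out, with `fitness_fn` calling the forward model.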

In our implementation of RHEA, the action sequence in each individual is initialized randomly. After initialization, a group of individuals are created with the same length, and each individual is evaluated by the fitness function as

F(a, a°) = w · S(s_L) + (1 − w) · D(a),   s_L = FM(s_0, a, a°)   (1)

S(s_L) = (HP_self(s_L) − HP_opp(s_L)) / HP_init   (2)

D(a) = (1/L) Σ_{i=1..L} (1 − C(a_i)/N)   (3)

Here w is the weight balancing diversity and score. a represents the action sequence of an individual and a° is the opponent action sequence, both of length L. s_t is the state at timestep t, and HP is the hit point of the corresponding player. F is the fitness function, a weighted average of the score function S and the diversity function D. The score function evaluates the value of the final state s_L, and the diversity function is used to avoid falling into a locally optimal solution. N is the population size, a_i is the i-th action in the sequence, and C counts the occurrences of one gene in the current population. FM is the forward model, which determines the future state from the current state and the two action sequences.

Suppose we have a total of N individuals in the current generation. The top E highest-scored individuals are picked as elites and preserved in the next generation. The remaining N − E individuals are evolved with the elites by uniform crossover, with parents drawn randomly from the elites and the remaining individuals respectively. Afterwards, one gene is selected randomly from each new individual and mutated into another valid gene drawn from a uniform distribution. Finally, these new individuals are re-evaluated by the fitness function. If there is still time budget left, the top N sorted individuals are selected for the next generation and the evolution above is repeated; otherwise, the first action of the highest-sorted individual is issued. The whole process of the rolling horizon evolution algorithm with the opponent model for the fighting game is given in Algorithm 1.

Input:
  A: candidate action set.
  N, E: number of population and elites.
  p_m: threshold of mutation probability.
  w: weight for score and diversity.
  π°: an opponent model to infer the enemy action.
Output: action for the fighting game.

P ← randomly generate N action sequences of length L from A.
Sort P according to Evaluate(Forward(P)) from high to low.
while time budget remains do
      P_E ← top E elites from P.
      P_R ← remaining N − E individuals from P.
      Create N − E new individuals by uniform crossover of individuals x and y, where x is drawn from P_E and y from P_R, repeated (N − E) times.
      if random number < mutation probability p_m then
            Mutate one gene of the new individual.
      Sort P by Evaluate(Forward(P)) from high to low.
Choose the highest-sorted action sequence from P.
return the first action of the sequence.

function Forward(P)
      for each individual in P do
            Initialize s_0 as the current frame.
            for each timestep t = 0, …, L − 1 do
                  Infer the opponent action with π° and step the forward model.
      return each action sequence's final frame.

function Evaluate(states)
      S ← score evaluation (2).
      D ← diversity evaluation (3).
      return fitness evaluation (1).

Algorithm 1: Rolling Horizon Evolution Algorithm with Opponent Model (RHEAOM) for the fighting game.

Our fitness definition includes a criterion based on action occurrence frequency, which reflects the gene diversity in the population. According to (3), high diversity is preferred, mainly because high diversity helps explore more feasible solutions and avoids getting stuck in a local optimum.

Fig. 3: Flow diagram of RHEAOM for Fighting Game AI.

III-B Opponent Learning Model

Since RHEA only considers its own future behavior, it cannot directly infer which action will be taken by the opponent. Obviously, an evaluation based on self-consideration alone is misleading. Although [9] incorporated an opponent model into MCTS by setting up an activity table, such a model does not lead to a significant improvement in performance.

To deal with that, we propose an opponent learning model for the real-time fighting game, which provides RHEA with a more credible rolling horizon evolution. The opponent learning model uses a one-step look-ahead for opponent behavior inference and learning. Inspired by the excellent fitting performance of neural networks, a neural-network-based model is constructed as the opponent model.

B.1. Supervised Learning based Opponent Model

Under the real-time restriction and memory limitation, a simple neural network model is suitable for this task. It has 18 numerical inputs, including the hit points, energies, and x- and y-coordinates in the arena of both characters, their character states, and the relative distance between the two characters in the x and y directions. The details of the input features are described in Section IV. The output layer has a total of 56 nodes, corresponding to all actions in FightingICE. The loss function is the standard cross-entropy

L_CE = − Σ_i y_i log ŷ_i   (4)

where y is the actual action label from the opponent history in one-hot vector form, and ŷ is the network output. In contrast to conventional off-line supervised learning for recognition tasks, this neural-network-based opponent model uses on-line training on the latest observations to adapt to different opponents.
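A minimal sketch of this single-layer model and its cross-entropy training step is shown below, in NumPy. The input/output sizes follow the paper; the plain gradient step stands in for the Adam optimizer, and the small random initialization is an illustrative stand-in for Xavier initialization.

```python
import numpy as np

# One linear layer (18 features -> 56 actions) trained with cross-entropy.
N_FEATURES, N_ACTIONS = 18, 56
rng = np.random.default_rng(0)
W = rng.normal(0, 0.1, (N_FEATURES, N_ACTIONS))  # small random init
b = np.zeros(N_ACTIONS)

def predict(state):
    """Softmax distribution over the 56 opponent actions."""
    logits = state @ W + b
    e = np.exp(logits - logits.max())
    return e / e.sum()

def train_step(state, action_id, lr=1e-4):
    """One cross-entropy gradient step on an observed (state, action) pair."""
    global W, b
    p = predict(state)
    grad = p.copy()
    grad[action_id] -= 1.0        # d(CE)/d(logits) = p - onehot(action)
    W -= lr * np.outer(state, grad)
    b -= lr * grad
```

Repeated calls to `train_step` on the recorded opponent pairs raise the predicted probability of the actions the opponent actually chose.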

B.2. Reinforcement Learning based Opponent Model

In addition to cross-entropy-based supervised learning, we also train the opponent model with two reinforcement learning methods: Q-learning [30] and policy gradient [24]. In a traditional reinforcement learning setting, the agent samples actions to interact with the environment through techniques such as ε-greedy, UCB, or Thompson Sampling. Here, however, the reinforcement learning is used only for the evolution in RHEA, and the opponent has its own policy for interacting with the fighting game.

B.2.1. Q-Learning based Opponent Model

In order to accelerate the convergence of the learning process, the training target is the n-step return

y_t = Σ_{k=0..n−1} γ^k r_{t+k} + γ^n max_a Q(s_{t+n}, a; θ)   (5)

The opponent model parameters θ are updated by minibatch gradient descent to minimize the mean-square loss

L_Q = E[(y_t − Q(s_t, a_t; θ))²]   (6)
B.2.2. Policy-Gradient based Opponent Model

We also adopt another typical reinforcement learning update rule, the policy gradient. This method directly optimizes an agent policy π_θ, parameterized by θ, by performing gradient ascent on an estimate of the expected discounted total reward from scratch. The gradient of the policy-gradient method is

∇_θ J(θ) = E[∇_θ log π_θ(a_t | s_t) G_t]   (7)

where the cumulative reward G_t = Σ_{k≥0} γ^k r_{t+k}. It is noteworthy that the reward r_t used in both the Q-learning target and the policy-gradient return is the hit point difference between the two sides, defined as

r_t = (HP_opp(s_{t+1}) − HP_self(s_{t+1})) / HP_init   (8)

where HP_init is the initial hit point of each player. A positive reward means the opponent has more hit points than our own player at the next state, and vice versa.
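The policy-gradient update and the hit-point reward can be sketched as below. The linear softmax policy mirrors the opponent network; the single-sample REINFORCE-style step, the learning rate, and all names are illustrative assumptions rather than the paper's exact training code.

```python
import numpy as np

# REINFORCE-style sketch: linear softmax policy over 56 actions,
# updated toward actions with a positive discounted return.
N_FEATURES, N_ACTIONS = 18, 56
theta = np.zeros((N_FEATURES, N_ACTIONS))

def policy(state):
    logits = state @ theta
    e = np.exp(logits - logits.max())
    return e / e.sum()

def hp_reward(my_hp_next, opp_hp_next, hp_init=400):
    """Positive when the modeled opponent leads on hit points."""
    return (opp_hp_next - my_hp_next) / hp_init

def pg_step(state, action_id, discounted_return, lr=1e-2):
    """theta += lr * G_t * grad log pi(a_t | s_t)."""
    global theta
    p = policy(state)
    # d log pi(a|s) / d theta[i, j] = state[i] * (1[j == a] - p[j])
    grad_log = -np.outer(state, p)
    grad_log[:, action_id] += state
    theta += lr * discounted_return * grad_log
```

A positive return makes the observed opponent action more probable under the model, which is how the model learns the opponent's most advantageous actions rather than merely imitating it.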

With the aid of the opponent model, a fictitious player generates opponent behaviors, so RHEAOM is able to evaluate its own action sequences more effectively. The flow diagram is presented in Fig. 3. The notation in Fig. 3 matches that of Algorithm 1, except that a denotes the action of the individual currently being processed.

IV Experiments

IV-A Experimental Setup

In this section, we introduce the fighting game AI platform (FightingICE) to which we apply RHEAOM, and describe the details of the opponent model's state features, network architecture, and training regime.

Platform & Setup. FightingICE provides a real-time fighting game platform that is well suited to fighting game AI testing. The player is required to choose one of 56 actions to perform within 16.67 ms each frame. To simulate the reaction delay of human players, FightingICE imposes a delay of 15 frames, meaning the bot only has access to game states from 15 frames earlier. Furthermore, there are three different characters in FightingICE: ZEN, GARNET (GAR), and LUD. These three characters have entirely different strong action combinations for the same state. In summary, FightingICE poses the challenges of real-time planning and decision making, simultaneous moves by both sides with time-delayed observation, and generalization across various characters.

By FTGAIC rules, both sides of a match are forced to use the same character for fair play. The goal of each player is to defeat the opponent within a limited time, which is 60 s per round. In this game, we design the agent to choose from a set of discrete actions: Stand, Dash, Crouch, Jump[direction], Guard, Walk[direction], Air Attack[type], Ground Attack[type], and so forth. In FightingICE, the execution of each action consists of three stages: startup, active, and recovery, so each action takes a certain number of frames to execute. Once an action is taken, it cannot be interrupted unless the player is hit by the opponent.

Comparative evaluations are set up to verify the performance of RHEA agents with and without the opponent model, both by self-comparison and against the 2018 FTGAIC bots. For game balance, we use the same character (GAR, LUD, or ZEN) for both sides. Each opponent model is initialized randomly and evaluated over 200 rounds, repeated five times. According to the FTGAIC rules, the initial hit points of each character are set to 400 and the initial energy to 0. A round ends when the hit points of either player reach 0 or 60 seconds elapse, and the player with the higher hit points wins the round. The relevant hyper-parameters of RHEAOM are given in Table I.

Parameter | Value | Description
w         | 0.5   | Weight for diversity and score.
lr        | 1e-4  | Learning rate.
p_m       | 0.85  | Probability of mutation.
N         | 7     | Number of individuals in the population.
L         | 4     | Length of the action sequence.
E         | 1     | Number of elites in the population.

TABLE I: Hyper-parameters of the RHEAOM Agent

To meet the real-time requirement, the hyper-parameters are relatively small, except for the probability of mutation; this higher probability encourages exploration of better solutions. We also tried the Shift Buffer technique, but it did not make much difference, since it is better suited to long-horizon planning and the action sequences here are relatively short.

Model →  |  N: GAR / LUD / ZEN     |  R: GAR / LUD / ZEN     |  SL: GAR / LUD / ZEN    |  Q: GAR / LUD / ZEN     |  PG: GAR / LUD / ZEN
vs N     |  -       -       -      |  58.2(3) 59.7(3) 56.3(3)|  50.9(3) 82.2(1) 67.8(3)|  34.2(4) 80.1(4) 72.2(3)|  51.5(4) 81.9(6) 79.3(3)
vs R     |  41.8(3) 40.3(3) 43.7(3)|  -       -       -      |  49.0(4) 61.4(3) 61.5(2)|  43.4(3) 64.8(3) 65.5(4)|  52.0(3) 68.0(2) 66.7(5)
vs SL    |  49.1(3) 17.8(1) 32.2(3)|  51.0(4) 38.6(3) 38.5(2)|  -       -       -      |  46.2(3) 47.4(3) 51.2(3)|  53.8(4) 52.0(3) 59.4(3)
vs Q     |  65.8(4) 20.0(4) 27.9(3)|  56.6(3) 35.2(3) 34.5(4)|  53.9(3) 52.6(3) 48.8(3)|  -       -       -      |  54.1(5) 53.0(4) 54.2(4)
vs PG    |  48.5(4) 18.1(6) 20.7(3)|  48.0(3) 32.0(2) 33.3(5)|  46.2(4) 48.1(3) 40.6(3)|  45.9(5) 47.0(4) 45.9(4)|  -       -       -
Mean     |  51.3    24.1    31.1   |  53.5    41.4    40.7   |  50.0    61.1    54.7   |  42.4    59.8    58.7   |  52.8    63.7    64.9

TABLE II: Win rates of the self-comparison; each entry is the win rate of the column model against the row opponent, for characters GAR, LUD, and ZEN. Every test is repeated five times to average the win rate. The five variant opponent models with RHEA are N: None, R: Random, SL: Supervised Learning, Q: Q-Learning, PG: Policy Gradient; the last Mean row is the average win rate over the four opponents. The highest values are in bold in the original. The values in parentheses denote the 95% confidence interval, e.g. 58.2(3) means 58.2% ± 3%.

State Features of Opponent Model. The opponent model is used to estimate which action the opponent will most probably and effectively take. There are a total of 18 input features to the network, detailed as follows:

  • Hit point (1-2), hit points of p1 and p2.

  • Energy (3-4), energies of p1 and p2.

  • Coordinate x and y (5-8), locations of p1 and p2.

  • States (9-16), one-hot encoding of the character states (Stand, Crouch, Air, Down) of p1 and p2.

  • Distance (17-18), relative distance between p1 and p2 in the directions of x-axis and y-axis.

Here p1 and p2 represent player 1 and player 2, respectively. Except for the one-hot character-state encodings, all input features are normalized into [0, 1] by their maximum values, such as the maximum hit points and energy, and the width and height of the fighting stage for the x- and y-axes. It is noteworthy that the game states provided by the game engine are delayed by a certain number of frames, so a bot that used these states directly would make biased decisions. To address this problem, we use the forward model to project the currently observed state ahead before planning.
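The construction of the 18-dimensional feature vector could look as follows. This is a sketch under assumptions: the stage dimensions, the maximum energy, the exact feature ordering, and the dictionary-based player representation are all illustrative, not values taken from the platform.

```python
# Sketch of the 18-dimensional, [0, 1]-normalized state feature vector.
HP_MAX, ENERGY_MAX = 400, 300    # assumed maxima for normalization
STAGE_W, STAGE_H = 960, 640      # assumed stage dimensions
STATES = ("STAND", "CROUCH", "AIR", "DOWN")

def one_hot(state_name):
    return [1.0 if s == state_name else 0.0 for s in STATES]

def features(p1, p2):
    """p1/p2: dicts with keys hp, energy, x, y, state."""
    return (
        [p1["hp"] / HP_MAX, p2["hp"] / HP_MAX,               # 1-2
         p1["energy"] / ENERGY_MAX, p2["energy"] / ENERGY_MAX,  # 3-4
         p1["x"] / STAGE_W, p1["y"] / STAGE_H,               # 5-8
         p2["x"] / STAGE_W, p2["y"] / STAGE_H]
        + one_hot(p1["state"]) + one_hot(p2["state"])        # 9-16
        + [abs(p1["x"] - p2["x"]) / STAGE_W,                 # 17-18
           abs(p1["y"] - p2["y"]) / STAGE_H]
    )
```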

Architecture & Training. Since RHEAOM optimizes the solution through evolution and iteration, the opponent model has to be as simple and concise as possible to allow fast, repeated inference. The opponent model consists only of an input and an output layer, without any hidden layer. As mentioned above, there are 18 input units (state features) and 56 output units (the discrete action set). The two layers are fully connected with a linear activation function, and the discrete action distribution is generated from the output layer via the softmax function. We adopt Xavier initialization [4] for the network and Adam [10] as the optimizer. More complicated network architectures, such as a multilayer perceptron and a long short-term memory network, were also tested, but their performance was worse than that of the simple architecture.

In each round, opponent state-action pairs are first recorded in a dataset. At the end of the round, the opponent model is trained on this latest dataset; there are about five seconds of preparation time before the next round. Once a new round starts, the dataset is emptied. There are two reasons for postponing training to the end of the round. First, it concedes the per-frame time budget to RHEA's action decisions. Second, it improves training stability and reliability compared to an instant update after every frame.
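The round-based regime above can be sketched as follows. The `RoundTrainer` class and `train_step` callback are hypothetical names for illustration; `train_step` would be any of the update rules described earlier.

```python
# Sketch of the round-based training regime: record opponent
# (state, action) pairs during play, train between rounds, then reset.
class RoundTrainer:
    def __init__(self, train_step):
        self.buffer = []            # opponent observations of one round
        self.train_step = train_step

    def observe(self, state, opp_action):
        """Called every frame with the opponent's observed choice."""
        self.buffer.append((state, opp_action))

    def end_of_round(self):
        # Train only between rounds, so in-round frames keep their
        # full 16.67 ms budget for RHEA planning.
        for state, action in self.buffer:
            self.train_step(state, action)
        self.buffer.clear()
```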

IV-B Self Comparison

We perform comparative evaluations to validate three key factors of RHEAOM. First, we test the effect of the opponent model in the RHEA framework by comparing it against a None opponent model and a Random opponent model. The None opponent model assumes the opponent takes no action and simply stands on the ground, while the Random opponent model samples the opponent's action randomly from the valid candidate action set. For instance, the opponent cannot take any ground action while in the air, so the candidate set then consists only of air actions, such as flying attack and flying guard. Second, we verify the effect of different training rules for the opponent model: cross-entropy-based supervised learning, and Q-learning-based and policy-gradient-based reinforcement learning. Third, we observe how quickly the win rate converges with and without an opponent learning model for the three characters, and verify whether the opponent model helps accelerate the approach to equilibrium, i.e., equal performance for both sides.

Here we set up five variant versions of RHEA to test their performance:

  • RHEA, vanilla RHEA without an opponent model.

  • RHEAOM-R, RHEA combined with a random opponent model.

  • RHEAOM-SL, RHEA combined with a supervised-learning-based opponent model.

  • RHEAOM-Q, RHEA combined with a Q-learning-based opponent model.

  • RHEAOM-PG, RHEA combined with a policy-gradient-based opponent model.

All these RHEA-based algorithms are tested against every variant except themselves. As shown in Table II, RHEAOM-PG has superior performance for all three characters when fighting the other variants. Although RHEAOM-PG does not achieve the highest mean win rate for GAR, it still exceeds 50 percent against the other models for all three characters, and it defeats RHEAOM-R, the variant with the highest mean win rate for GAR.

Fig. 4: Average win rate curves of RHEA-based bots playing against themselves as the number of iterations increases.

To test the convergence efficiency of self-play, we set up RHEA with and without the opponent model to play against itself for all three characters. According to the results in Table II, RHEAOM-PG performs better than the other RHEA-based approaches. To observe the convergence behavior, we compare RHEA, RHEAOM-R, RHEAOM-SL, RHEAOM-Q, and RHEAOM-PG. As presented in Fig. 4, the win rate converges through the iterative process. RHEA and RHEAOM-R show more pronounced fluctuations in win rate in the early and middle phases than RHEAOM-PG, and RHEAOM-Q fluctuates noticeably in the final phase. This experiment also shows that RHEAOM-SL and RHEAOM-PG converge to their equilibria faster than the other variants.

Opponent |  RHEA: GAR / LUD / ZEN  |  R: GAR / LUD / ZEN     |  SL: GAR / LUD / ZEN    |  Q: GAR / LUD / ZEN     |  PG: GAR / LUD / ZEN
Th       |  82.3(6) 62.3(4) 70.5(7)|  78.3(6) 62.2(3) 50.5(5)|  84.0(2) 83.6(2) 69.8(2)|  84.5(5) 88.5(2) 69.0(3)|  89.8(4) 89.3(1) 76.4(3)
KT       |  66.0(7) 90.3(3) 88.9(4)|  74.0(3) 86.9(2) 11.3(1)|  78.6(2) 95.8(1) 77.1(2)|  66.1(3) 92.2(1) 72.8(6)|  78.2(4) 92.5(1) 72.1(7)
Jay      |  98.5(1) 62.3(4) 63.7(1)|  100.0   81.5(2) 64.3(4)|  96.6(2) 91.0(3) 87.8(8)|  98.0(1) 94.7(2) 92.0(2)|  98.0(1) 94.6(1) 97.0(1)
Mo       |  78.5(2) 64.0(1) 70.0(6)|  77.1(2) 81.6(2) 39.1(6)|  80.8(6) 90.9(3) 88.5(1)|  83.6(2) 87.2(3) 82.8(2)|  82.1(3) 89.0(3) 93.8(4)
Ut       |  97.3(1) 69.1(2) 73.9(6)|  96.8(1) 87.3(3) 87.9(1)|  98.1(1) 94.6(1) 96.9(2)|  99.7(1) 98.6(2) 96.9(2)|  100.0   100.0   97.1(1)
Mean     |  84.5    69.6    73.4   |  85.2    79.9    50.6   |  87.6    91.2    84.0   |  86.4    92.3    82.7   |  89.6    93.1    87.3

TABLE III: Win rates of the RHEA variants (columns; R, SL, Q, PG as in Table II) against the 2018 FTGAIC bots (rows), for characters GAR, LUD, and ZEN. Every test is repeated five times to average the win rate. The five Java-based bots from the 2018 FTGAIC are Th: Thunder, KT: KotlinTestAgent, Jay: JayBotGM, Mo: MogakuMono, Ut: UtalFighter. The last Mean row denotes the average win rate over the five competition bots. The highest values are in bold in the original. The values in parentheses denote the 95% confidence interval.

IV-C Versus 2018 FTGAIC Bots

To measure the performance of our proposed framework in FTGAIC, we choose five Java-based bots from the 2018 FTGAIC as opponents. Since FightingICE is equipped with a simulator that can serve as a forward model, most bots are designed based on MCTS. The five bots considered here are listed below according to their ranks in the 2018 FTGAIC:

  • Thunder, based on MCTS with different heuristic settings for different characters.

  • KotlinTestAgent, utilizes a hybrid solution based on MCTS selection optimization and a smart corner-case strategy.

  • JayBotGM [7], makes use of a combination of a genetic algorithm and MCTS.

  • MogakuMono, applies a hierarchical reinforcement learning framework to the fighting agent.

  • UtalFighter, adopts a simple finite state machine to make decisions.

Since UtalFighter is a script-based agent, it can be used directly to trace the learning curves of our proposed opponent learning models. Fig. 5 presents the hit point (hp) difference of our variant bots against UtalFighter. Each curve is averaged over the latest 50 round results at each epoch and tends to converge over time; a larger hp difference indicates a more competitive bot. Though RHEAOM-SL improves steadily, RHEAOM-Q and RHEAOM-PG both achieve better performance in the early phase, with RHEAOM-PG performing better than RHEAOM-Q. The trend of RHEAOM-R differs from the other RHEAOM variants, and its curve still varies randomly, but RHEAOM-R also achieves a larger hp difference than RHEA. Because of the absence of an opponent model, RHEA has the worst performance among all variants.

Fig. 5: Mean hp difference curves of RHEA-based bots fighting against UtalFighter as the number of iterations increases.

According to Table III, RHEA is able to beat all bots from the 2018 FTGAIC with all three characters. However, RHEAOM-R performs worse than RHEA, especially with character ZEN, since the random opponent model may ruin the rolling-horizon evaluation process and mislead the bot into inappropriate decisions. Combining RHEA with either the supervised-learning-based or the reinforcement-learning-based opponent model results in a significant improvement. For instance, RHEA is only slightly better than UtalFighter, while all RHEAOMs defeat UtalFighter in more than 85 percent of games.

All variants of RHEAOM show competitive performance against all opponents. The opponent model treats the opponent as part of the observation and can respond to adaptive opponents, with the change in policy explicitly encoded by the neural network model. RHEAOM-PG achieves the best performance since it does not just mimic the opponent's behavior but also finds the most advantageous actions available to the opponent.

Since all these strong opponent agents are based on MCTS and time is limited in a real-time video game, there is no guarantee that their optimization process converges to an optimal solution, as sampling-based optimization methods require many simulations. Compared with MCTS, RHEA is more efficient because of its simplicity and the correlation within its action sequences. In addition, a fighting game is a dynamic process without a fixed optimal strategy that wins every game; a bot should respond adaptively to the opponent's behavior, and it cannot be sure of finding a suitable response to the opponent unless one side is always defeated.

IV-D Comparisons between RHEAOM and MCTSOM

To examine whether our opponent learning model also benefits other statistical forward planning algorithms such as Monte-Carlo Tree Search, we set up a comparison experiment between RHEAOM and MCTSOM. In the experiments above, the supervised-learning-based and policy-gradient-based opponent models showed the best performance among the RHEA variants, so these two models are introduced into Thunder, the strongest MCTS-based fighting bot among the above opponents. We call the resulting variants of Thunder ThunderOM-SL (supervised-learning-based) and ThunderOM-PG (policy-gradient-based), respectively.

The results of RHEAOM against the ThunderOMs are presented in Table IV. In terms of win rate, the ThunderOMs improve on Thunder for all three characters, and ThunderOM-PG is slightly better than the RHEAOMs when the character is ZEN. These results indicate that our proposed opponent learning model suits statistical forward planning algorithms in general, not only RHEA but also MCTS.

Opponent        |  RHEAOM-SL: GAR / LUD / ZEN  |  RHEAOM-PG: GAR / LUD / ZEN
ThunderOM-SL    |  75.3(4) 68.4(5) 58.1(6)     |  81.1(4) 69.6(2) 54.2(4)
ThunderOM-PG    |  77.2(3) 55.6(4) 49.6(4)     |  79.1(5) 60.3(3) 48.3(3)
Thunder         |  84.0(2) 83.6(2) 69.8(2)     |  89.8(4) 89.3(1) 76.4(3)

TABLE IV: Win rates of RHEAOM-SL and RHEAOM-PG against the ThunderOM bots (MCTS with supervised-learning-based and policy-gradient-based opponent models) and the original Thunder. Every test is repeated five times to average the win rate. The values in parentheses denote the 95% confidence interval.

IV-E Results on 2019 FTGAIC

To further verify the performance of our proposed framework for fighting game AI, and based on the above experimental results, we chose Rolling Horizon Evolution with the policy-gradient-based opponent learning model (entered as RHEAPI, which is in fact RHEAOM-PG) to participate in the 2019 FTGAIC, sponsored by the 2019 IEEE Conference on Games (CoG).

Name of Bot Score Rank
ReiwaThunder 133 1
RHEAPI (ours) 122 2
Toothless 91 3
FalzAI 68 4
LGISTBot 67 5
SampleMctsAi (baseline) 52 6
HaibuAI 32 7
DiceAI 19 8
MuryFajarAI 17 9
TOVOR 9 10
TABLE V: Final ranking of the 2019 FTGAIC.
Fig. 6: Win rate, hp difference, and frame cost of the 2019 FTGAIC top five bots in the Standard League.
Fig. 7: Win rate, hp difference, and frame cost of the 2019 FTGAIC top five bots in the Speedrunning League.

Performance in competition. A total of 10 bots participated in this competition. As presented in Table V, ReiwaThunder, an improved version of the 2018 champion Thunder, won first place. Our RHEAPI was the runner-up, with a score very close to first place. According to the official statistics, RHEAPI won only two or three games fewer than ReiwaThunder overall, and beat ReiwaThunder for the character LUD, whose action data is not available in advance. This demonstrates that RHEA with an opponent learning model is a competitive unified framework for fighting game AI. Brief descriptions of the top five bots in the 2019 FTGAIC follow:

  • ReiwaThunder: based on Thunder, replacing MCTS with MiniMax plus a set of heuristic rules for each character.

  • RHEAPI: RHEA combined with the policy-gradient-based opponent model.

  • Toothless [29]: based on KotlinTestAgent, combining MiniMax, MCTS, and some basic rules.

  • FalzAI: an MCTS-based bot combined with a switchable general strategy, including aggressive and defensive modes.

  • LGISTBot [8]: a hybrid method combining MCTS with genetic action sequences.

Note that two leagues are used to fully assess the participants. In the Standard League, all bots compete against each other, and the winner is the bot with the highest average win rate against all others. In the Speedrunning League, each bot fights the official SampleMctsAi bot, and the winner is the bot that beats SampleMctsAi in the shortest average time. To evaluate our agent directly, we focus on three key factors from the competition logs: win rate, hp difference, and frame cost. The results of the top five bots in the 2019 FTGAIC are shown in Fig. 6 and Fig. 7.

The win rate of RHEAPI is quite close to ReiwaThunder's in both the Standard League and the Speedrunning League. In the Standard League, RHEAPI inflicts the largest hp difference on its opponents of any bot, and its average frame cost is the lowest among all bots for all three characters, indicating that RHEAPI is aggressive and effective at beating its opponents. In the Speedrunning League, RHEAPI needs more frames than ReiwaThunder to beat the baseline bot, except for the character LUD. This suggests that ReiwaThunder's heuristic knowledge still plays an important role when fighting a specific, known opponent. On the whole, there is a degree of negative correlation between hp difference and frame cost.

IV-F Discussion

RHEA is an efficient statistical forward planning approach whose goal is to search for the best action sequence for decision-making and planning. However, a fighting game is a real-time two-player zero-sum game, so it is insufficient to consider only one side's action sequence while neglecting the other's.
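The planning loop can be sketched as follows: candidate action sequences for our player are evolved, and each is scored by rolling the forward model forward while the opponent's actions come from the learned model rather than being ignored. This is a minimal sketch under simplifying assumptions; the function names and toy hyperparameters (population size, horizon, mutation rate) are illustrative, not the competition settings.

```python
import random

N_ACTIONS, HORIZON, POP, GENERATIONS, MUT_RATE = 4, 5, 8, 10, 0.3

def evaluate(seq, state, forward_model, predict_opponent):
    """Roll a candidate sequence forward against the predicted opponent."""
    total = 0.0
    for my_action in seq:
        opp_action = predict_opponent(state)      # learned opponent model
        state, reward = forward_model(state, my_action, opp_action)
        total += reward
    return total

def rhea_step(state, forward_model, predict_opponent, rng=random):
    """One decision step: evolve sequences, return the first action of the best."""
    pop = [[rng.randrange(N_ACTIONS) for _ in range(HORIZON)]
           for _ in range(POP)]
    for _ in range(GENERATIONS):
        elite = max(pop, key=lambda s: evaluate(s, state, forward_model,
                                                predict_opponent))
        # Elitism plus per-gene mutation of the elite sequence.
        pop = [elite] + [[a if rng.random() > MUT_RATE
                          else rng.randrange(N_ACTIONS) for a in elite]
                         for _ in range(POP - 1)]
    best = max(pop, key=lambda s: evaluate(s, state, forward_model,
                                           predict_opponent))
    return best[0]
```

Only the first action of the winning sequence is executed each frame; the search then restarts from the new state, which is what makes the horizon "rolling".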

To address this deficiency, we propose three variants of the opponent learning model: RHEAOM-SL, RHEAOM-Q, and RHEAOM-PG. RHEAOM-SL directly mimics the opponent's in-game behavior; however, it is easily misled when the opponent responds poorly to our own actions. Unlike supervised learning, reinforcement learning does not directly learn the mapping from states to actions but instead learns the optimal opponent policy from reward signals. From this point of view, we propose two opponent learning models based on two effective reinforcement learning approaches: Q-learning and policy gradient. As Fig. 4 shows, compared with the overestimation and underestimation problems of Q-learning, the policy-gradient-based approach finds the optimal opponent policy more rapidly and stably, which leads to the strong performance of RHEAOM-PG in the fighting game.
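The policy-gradient update can be illustrated with a minimal softmax REINFORCE sketch: observed opponent actions are reinforced in proportion to the reward the opponent received, rather than imitated uniformly as in the supervised variant. The single-state (tabular) logits and the reward definition below are simplifying assumptions of ours, not the paper's network.

```python
import math

class PGOpponentModel:
    """Softmax policy over opponent actions, updated by REINFORCE."""
    def __init__(self, n_actions, lr=0.1):
        self.theta = [0.0] * n_actions   # single-state logits for brevity
        self.lr = lr

    def probs(self):
        m = max(self.theta)              # subtract max for numerical stability
        e = [math.exp(t - m) for t in self.theta]
        z = sum(e)
        return [x / z for x in e]

    def update(self, action, reward):
        """REINFORCE step: grad log pi(a) = one_hot(a) - probs."""
        p = self.probs()
        for i in range(len(self.theta)):
            grad = (1.0 if i == action else 0.0) - p[i]
            self.theta[i] += self.lr * reward * grad

    def predict(self):
        p = self.probs()
        return p.index(max(p))
```

Actions that earned the opponent positive reward have their probability pushed up, while frequently observed but unrewarding actions are pushed down, which is exactly the distinction the supervised model cannot make.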

Each character's limited energy leads to an uneven distribution of the state-action pairs generated during a fight: deadly skills, which consume energy, occur far less often than common actions with no energy cost, so using energy wisely is a challenge for intelligent play. Reinforcement learning balances accurate opponent modelling against modelling an opponent that plays well. Although the supervised-learning-based opponent model achieves higher prediction accuracy than the reinforcement-learning-based ones, it mainly predicts common actions rather than deadly skills. Because the reward is a measuring signal, it can express the varying importance of actions, which makes the inference of the reinforcement-learning-based opponent models more effective than that of the supervised-learning-based one.
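As a toy illustration of this imbalance effect (our own construction, with made-up action names, not code or data from the bot): the maximum-likelihood prediction learned from an imbalanced action log is the majority class, so a rare but decisive skill is never the predicted action no matter how much it matters.

```python
from collections import Counter

# Imbalanced action log: the special move costs energy, so it is rare.
history = ["punch"] * 45 + ["kick"] * 40 + ["special"] * 5

# The maximum-likelihood (purely supervised) prediction is the majority
# class, so the rare but decisive "special" is never predicted.
ml_prediction = Counter(history).most_common(1)[0][0]
```

A reward-weighted model, by contrast, can still assign high importance to "special" despite its low frequency, because its learning signal is reward rather than occurrence count.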

V Conclusion & Future Work

This paper presents RHEAOM, a novel fighting game AI framework that combines an evolutionary algorithm with opponent modeling to search for the best action sequence in a real-time fighting game. We propose three variants: RHEAOM-SL, RHEAOM-Q, and RHEAOM-PG. With the aid of the opponent model, RHEAOM is able to outperform the state-of-the-art MCTS-based fighting bots. Experimental results suggest that our method can efficiently find the weaknesses of opponents and select competitive actions for all three characters in the 2018 FTGAIC. Moreover, RHEAOM-PG was the runner-up of the FTGAIC at the 2019 IEEE CoG.

Even though RHEAOM achieves impressive performance in our experiments, it still cannot completely defeat every opponent in the competition. We will consider introducing a model-based deep reinforcement learning method, instead of relying on the built-in forward model, to improve the adaptability and generalization of the whole learning algorithm. Moreover, it is not easy for an opponent model to predict the opponent's actions accurately, since this depends mainly on the predictability of the opponent and is restricted by the real-time constraint. We will investigate this topic further.

Although the results in this paper are all on the FightingICE platform, the approach of enhancing RHEA with a learned opponent model is generally applicable to any two-player game. The only part of the system that is specific to FightingICE is the set of 18 features used as input to the opponent model neural network. We plan to also test the RHEAOM methods on other two-player real-time video games, such as Planet Wars [12].

An interesting and important challenge is to make the approach more general, while still achieving the same degree of very rapid learning. It is worth emphasizing that our method learns the opponent model from scratch after the first round of play, and all this is conducted within the constraints of a real-time tournament.


  • [1] R. D. Gaina, J. Liu, S. M. Lucas, and D. Pérez-Liébana (2017) Analysis of vanilla rolling horizon evolution parameters in general video game playing. In European Conference on the Applications of Evolutionary Computation, pp. 418–434. Cited by: §III-A.
  • [2] R. D. Gaina, S. M. Lucas, and D. Perez-Liebana (2017) Rolling horizon evolution enhancements in general video game playing. In 2017 IEEE Conference on Computational Intelligence and Games, CIG, pp. 88–95. Cited by: §I.
  • [3] S. Ganzfried and T. Sandholm (2011) Game theory-based opponent modeling in large imperfect-information games. In International Conference on Autonomous Agents and Multiagent Systems, Vol. 2, pp. 533–540. Cited by: §II.
  • [4] X. Glorot and Y. Bengio (2010) Understanding the difficulty of training deep feedforward neural networks. In Proceedings of the thirteenth international conference on artificial intelligence and statistics, pp. 249–256. Cited by: §IV-A.
  • [5] H. He, J. Boyd-Graber, K. Kwok, and H. Daumé III (2016) Opponent modeling in deep reinforcement learning. In International Conference on Machine Learning, pp. 1804–1813. Cited by: §II.
  • [6] J. Heinrich and D. Silver (2014) Self-play Monte-Carlo tree search in computer poker. In Twenty-Eighth AAAI Conference on Artificial Intelligence, pp. 19–25. Cited by: §I.
  • [7] M. Kim and C. W. Ahn (2018) Hybrid fighting game AI using a genetic algorithm and Monte Carlo tree search. In Proceedings of the Genetic and Evolutionary Computation Conference Companion, pp. 129–130. Cited by: 3rd item.
  • [8] M. Kim, J. S. Kim, D. Lee, S. J. Kim, M. Kim, and C. W. Ahn (2019) Integrating agent actions with genetic action sequence method. In Proceedings of the Genetic and Evolutionary Computation Conference Companion, pp. 59–60. Cited by: 5th item.
  • [9] M. Kim and K. Kim (2017) Opponent modeling based on action table for MCTS-based fighting game AI. In 2017 IEEE Conference on Computational Intelligence and Games, CIG, Vol. 100, pp. 178–180. Cited by: §II, §III-B.
  • [10] D. P. Kingma and J. Ba (2014) Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980. Cited by: §IV-A.
  • [11] F. Lu, K. Yamamoto, L. H. Nomura, S. Mizuno, Y. Lee, and R. Thawonmas (2013) Fighting game artificial intelligence competition platform. In IEEE Global Conference on Consumer Electronics, pp. 320–323. Cited by: §I.
  • [12] S. M. Lucas (2018) Game AI research with fast Planet Wars variants. In 2018 IEEE Conference on Computational Intelligence and Games (CIG), pp. 1–4. Cited by: §V.
  • [13] K. Majchrzak, J. Quadflieg, and G. Rudolph (2015) Advanced dynamic scripting for fighting game AI. In Lecture Notes in Computer Science 9353, pp. 86–99. Cited by: §II.
  • [14] I. Millington (2019) AI for Games. CRC Press. Cited by: §II.
  • [15] D. Perez, S. Samothrakis, S. M. Lucas, and P. Rohlfshagen (2013) Rolling horizon evolution versus tree search for navigation in single-player real-time games. In Proceedings of the Conference on Genetic and Evolutionary Computation, GECCO, pp. 351–358. Cited by: §I, §III-A.
  • [16] D. Perez-Liebana, S. Samothrakis, J. Togelius, T. Schaul, S. M. Lucas, A. Couëtoux, J. Lee, C. Lim, and T. Thompson (2015) The 2014 general video game playing competition. IEEE Transactions on Computational Intelligence and AI in Games 8 (3), pp. 229–243. Cited by: §I.
  • [17] D. Perez-Liebana, S. Samothrakis, J. Togelius, T. Schaul, and S. M. Lucas (2016) General video game AI: Competition, challenges and opportunities. In Thirtieth AAAI Conference on Artificial Intelligence, Cited by: §I.
  • [18] N. Sato, S. Temsiririrkkul, S. Sone, and K. Ikeda (2015) Adaptive fighting game computer player by switching multiple rule-based controllers. In 3rd International Conference on Applied Computing and Information Technology/2nd International Conference on Computational Science and Intelligence, pp. 52–59. Cited by: §II.
  • [19] K. Shao, Z. Tang, Y. Zhu, N. Li, and D. Zhao (2019) A survey of deep reinforcement learning in video games. arXiv preprint arXiv:1912.10944. Cited by: §II.
  • [20] K. Shao, D. Zhao, N. Li, and Y. Zhu (2018) Learning Battles in ViZDoom via Deep Reinforcement Learning. In IEEE Conference on Computational Intelligence and Games, CIG, pp. 1–4. Cited by: §II.
  • [21] D. Silver, A. Huang, C. J. Maddison, A. Guez, L. Sifre, G. V. Den Driessche, J. Schrittwieser, I. Antonoglou, V. Panneershelvam, M. Lanctot, et al. (2016) Mastering the game of Go with deep neural networks and tree search. Nature 529 (7587), pp. 484–489. Cited by: §I.
  • [22] D. Silver, T. Hubert, J. Schrittwieser, I. Antonoglou, M. Lai, A. Guez, M. Lanctot, L. Sifre, D. Kumaran, T. Graepel, et al. (2018) A general reinforcement learning algorithm that masters chess, shogi, and Go through self-play. Science 362 (6419), pp. 1140–1144. Cited by: §I.
  • [23] D. Silver, J. Schrittwieser, K. Simonyan, I. Antonoglou, A. Huang, A. Guez, T. Hubert, L. Baker, M. Lai, A. Bolton, et al. (2017) Mastering the game of Go without human knowledge. Nature 550 (7676), pp. 354–359. Cited by: §I.
  • [24] R. S. Sutton, D. A. McAllester, S. P. Singh, and Y. Mansour (2000) Policy gradient methods for reinforcement learning with function approximation. In Advances in neural information processing systems, pp. 1057–1063. Cited by: §III-B.
  • [25] Y. Takano, W. Ouyang, S. Ito, T. Harada, and R. Thawonmas (2018) Applying hybrid reward architecture to a fighting game AI. In IEEE Conference on Computational Intelligence and Games, CIG, pp. 1–4. Cited by: §II.
  • [26] Z. Tang, K. Shao, D. Zhao, and Y. Zhu (2017) Recent progress of deep reinforcement learning: from AlphaGo to AlphaGo Zero. Control Theory & Applications 34 (12), pp. 1529–1546. Cited by: §II.
  • [27] Z. Tang, K. Shao, Y. Zhu, D. Li, D. Zhao, and T. Huang (2018) A review of computational intelligence for StarCraft AI. In Proceedings of the 2018 IEEE Symposium Series on Computational Intelligence, SSCI, pp. 1167–1173. Cited by: §II.
  • [28] Z. Tang, D. Zhao, Y. Zhu, and P. Guo (2018) Reinforcement Learning for Build-Order Production in StarCraft II. In 2018 Eighth International Conference on Information Science and Technology, ICIST, pp. 153–158. Cited by: §II.
  • [29] L. G. Thuan, D. Logofătu, and C. Badică (2019) A Hybrid Approach for the Fighting Game AI Challenge: Balancing Case Analysis and Monte Carlo Tree Search for the Ultimate Performance in Unknown Environment. In International Conference on Engineering Applications of Neural Networks, pp. 139–150. Cited by: 3rd item.
  • [30] C. J. Watkins and P. Dayan (1992) Q-learning. Machine learning 8 (3-4), pp. 279–292. Cited by: §III-B.
  • [31] G. N. Yannakakis and J. Togelius (2018) Artificial Intelligence and Games. Vol. 2, Springer. Cited by: §II.
  • [32] S. Yoon and K. Kim (2017) Deep Q networks for visual fighting game AI. In 2017 IEEE Conference on Computational Intelligence and Games, CIG, pp. 306–308. Cited by: §II.
  • [33] S. Yoshida, M. Ishihara, T. Miyazaki, Y. Nakagawa, T. Harada, and R. Thawonmas (2016) Application of Monte-Carlo tree search in a fighting game AI. In 2016 IEEE 5th Global Conference on Consumer Electronics, GCCE, Vol. 1, pp. 1–2. Cited by: §II.
  • [34] D. Zhao, K. Shao, Y. Zhu, D. Li, Y. Chen, H. Wang, D. Liu, T. Zhou, and C. Wang (2016) Review of deep reinforcement learning and discussions on the development of computer Go. Control Theory & Applications 33 (6), pp. 701–717. Cited by: §II.