Starcraft AI Research Dataset
This paper advocates the exploration of the full state of recorded real-time strategy (RTS) games, by human or robotic players, to discover how to reason about tactics and strategy. We present a dataset of StarCraft games encompassing most of the games' state (not only the players' orders). We demonstrate one possible use of this dataset by clustering armies on their compositions. This reduction of army compositions to mixtures of Gaussians allows for strategic reasoning at the level of the components. We evaluated this clustering method by predicting the outcomes of battles based on the mixture components of the armies' compositions.
Real-time strategy (RTS) game AI is not yet at a level high enough to compete with trained/skilled human players. In particular, adaptation to different strategies (which include army composition) and to tactics (army moves) are strong indicators of human-played games [Hagelbäck and Johansson2010]. So, while micro-management (low-level unit control) has seen tremendous improvements in recent years, broad high-level strategic reasoning is not yet an exemplary feature of either commercial games or StarCraft AI competition entries. At best, StarCraft bots have an estimation of the available technology of their opponents and use rules encoding players' knowledge to adapt their strategy. We believe that better strategic reasoning is a matter of abstracting and combining low-level states at a more expressive, higher level of reasoning. Our approach is to learn unsupervised representations of low-level features.
We worked on StarCraft: Brood War, which is a canonical RTS game, as Chess is to board games. It has been around since 1998, has sold 10 million licenses, and was the premier competitive RTS for more than a decade. There are 3 factions (Protoss, Terran and Zerg) that are totally different in terms of units, build trees / tech trees (directed acyclic graphs of the buildings and technologies) and thus gameplay styles. StarCraft and most RTS games provide a tool to record game logs into replays that can be re-simulated by the game engine. It is this trace mechanism that we used to download and simulate games of professional gamers and highly skilled international competitors.
This paper is separated into two parts. The first part explains what is in the dataset of StarCraft games that we put together. The second part showcases army composition reduction to a mixture of Gaussian distributions, and gives an evaluation of this clustering.
There are several ways to produce strategic abstractions: from using high-level gamers’ vocabulary, and the game rules (build/tech trees), to salient low-level (shallow) features. Other ways include combining low-level and higher-level strategic representation and/or interdependencies between states and sequences.
Case-based reasoning (CBR) approaches often use extensions of build trees as state lattices (and sets of tactics for each state), as in [Aha, Molineaux, and Ponsen2005, Ponsen and Spronck2004] for Wargus. Ontañón et al. base their real-time case-based planning (CBP) system on a plan dependency graph which is learned from human demonstration in Wargus. In [Mishra, Ontañón, and Ram2008], they use "situation assessment for plan retrieval" from annotated replays, which recognizes distance to behaviors (a goal and a plan), and selected only the low-level features with the highest information gain. Hsieh and Sun based their work on [Aha, Molineaux, and Ponsen2005] and used StarCraft replays to construct states and building sequences. Strategies are choices of building construction order in their model.
Schadd et al. describe opponent modeling through hierarchically structured models of the opponent's behavior, and they applied their work to the Spring RTS game (an open source Total Annihilation clone). Balla and Fern applied upper confidence bounds on trees (UCT, a Monte-Carlo planning algorithm) to tactical assault planning in Wargus; their tactical abstraction combines units' hit points and locations. In [Synnaeve and Bessière2011b], they predict the build trees of the opponent a few buildings before they are built. Another approach is to use the gamers' vocabulary of strategies (and openings) to abstract even more what strategies represent (a set of states, of sequences and of intentions), as in [Weber and Mateas2009, Synnaeve and Bessière2011a]. Dereszynski et al. used a hidden Markov model (HMM) whose states are extracted from (unsupervised) maximum likelihood on a StarCraft dataset. The HMM parameters are learned from unit counts (both buildings and military units) every 30 seconds, and "strategies" are the most frequent sequences of HMM states according to observations.
Few models have incorporated army compositions in their strategy abstractions, except sparsely, as an aggregate or as the boolean existence of unit types. Most strategy abstractions are based on build trees (or tech trees), although a given set of buildings can produce different armies. What we present here is complementary to these strategic abstractions and should help with military situation assessment.
We downloaded more than 8000 replays and kept 7649 uncorrupted, 1 vs. 1 replays from professional gamers' leagues and international tournaments of StarCraft, from specialized websites (http://www.teamliquid.net, http://www.gosugamers.net, http://www.iccup.com). We then ran them using the Brood War API (BWAPI, http://code.google.com/p/bwapi/) and dumped: units' positions, regions' positions, pathfinding distance between regions, resources (every 25 frames), all players' orders, vision events (when units are seen) and attacks (types, positions, outcomes). Basically, we recorded every BWAPI event, plus interesting states and attacks. The dataset is freely available (http://emotion.inrialpes.fr/people/synnaeve/TLGGICCUP_gosu_data.7z); the source code and documentation are also provided (http://snippyhollow.github.com/bwrepdump/).
Forbus et al. have shown the importance of qualitative spatial reasoning, and it would be too space-consuming to dump the ground distance from every position to every other position. For these reasons, we discretized StarCraft maps into two types of regions. As battles often happen at chokes, in addition to the base regions we also produced choke-dependent regions (CDR), which are created by an additional (distance-limited) Voronoi tessellation seeded at chokes; this yields a second, finer-grained region set.
We trigger an attack-tracking heuristic when one unit dies and there are at least two military units around. We then update this attack until it ends, recording every unit which took part in the fight. For each player we log the position, the participating units and the fallen units, the attack type, and of course the attacker and the defender. Algorithm 1 shows how we detect attacks.
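Algorithm 1 itself is not reproduced here; the following is a minimal sketch of how such a death-triggered tracking heuristic can work. The `Unit` and `Attack` records, and the `RADIUS` value for "around", are hypothetical stand-ins, not the dataset's actual structures:

```python
from dataclasses import dataclass, field

@dataclass
class Unit:
    owner: int       # player id
    military: bool   # True for combat units
    x: float
    y: float

@dataclass
class Attack:
    position: tuple
    participants: set = field(default_factory=set)  # ids of units taking part
    deaths: set = field(default_factory=set)        # ids of fallen units

RADIUS = 200.0  # hypothetical "around" distance, in pixels

def on_unit_death(dead_id, units, ongoing):
    """Start (or update) an attack when a unit dies with >= 2 military units nearby."""
    dead = units[dead_id]
    nearby = [uid for uid, u in units.items()
              if uid != dead_id and u.military
              and (u.x - dead.x) ** 2 + (u.y - dead.y) ** 2 <= RADIUS ** 2]
    if ongoing is None:
        if len(nearby) < 2:
            return None  # isolated death: not an attack
        ongoing = Attack(position=(dead.x, dead.y))
    ongoing.participants.update(nearby)
    ongoing.deaths.add(dead_id)
    return ongoing
```

The real heuristic also tracks the attack's spatial extent over time and closes it when the fighting stops; this sketch only shows the triggering and update conditions.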
We annotated attacks by four types (but researchers can also produce their own annotations given the state available):
ground attacks, which may use all types of units (and so form the large majority of attacks).
air raids, air attacks, which can use only flying units.
invisible (ground) attacks, which can use only a few specific units in each race (Protoss Dark Templars, Terran Ghosts, Zerg Lurkers).
drop attacks, which need a transport unit (Protoss Shuttle, Terran Dropship, Zerg Overlord with upgrade).
Table 1 shows some metrics about the dataset. Note that the number of attacks for a given race has to be divided by (approximately) two in a given non-mirror match-up. So, there are 7072 Protoss attacks in PvP, but there are not 70,089 attacks by Protoss in PvT, only about half that.
| | PvP | PvT | PvZ | TvT | TvZ | ZvZ |
|number of games|445|2408|2027|461|2107|199|
|number of attacks|7072|70089|40121|16446|42175|2162|
|mean time (frames) / game|32342|37772|39137|37717|35740|23898|
|mean time (minutes) / game|22.46|26.23|27.18|26.19|24.82|16.60|
|actions issued (game engine) / game|24584|33209|31344|26998|29869|21868|
|mean regions / game|19.59|19.88|19.69|19.83|20.21|19.31|
|mean CDR / game|41.58|41.61|41.57|41.44|42.10|40.70|
|mean region–region ground distance (for regions connected by ground, pathfinding-aware, in pixels)|2569|2608|2607|2629|2604|2596|
|mean CDR–CDR ground distance (for choke-dependent regions connected by ground, pathfinding-aware, in pixels)|2397|2405|2411|2443|2396|2401|
By running the recorded games (replays) through StarCraft, we were able to recreate the full state of the game. Time is always expressed in game frames (24 frames per second). We recorded three types of files:
general data (*.rgd files): records the players' names, the map's name, and all information about events like creation (along with morphing), destruction, discovery (by one player), and change of ownership (special spell/ability), for each unit. It also records attack events (detected by a heuristic, see below) and dumps the current economic situation every 25 frames: minerals, gas, supply (count and total: max supply).
order data (*.rod files): records all the orders which are given to the units (individually), like move, harvest, or attack unit, along with the orders' positions and their issue times.
location data (*.rld files): records the positions of mobile units every 100 frames, and their position in regions and choke-dependent regions if it changed since the last measurement. It also stores the ground-distance (pathfinding-wise) matrices between regions and between choke-dependent regions in the header.
From this data, one can recreate most of the state of the game: the map's key characteristics (or load the map separately), the economy of all players, their tech (all researches and upgrades), and all the buildings and units, along with their orders and positions.
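As an illustration, a player's economy timeline can be rebuilt by walking the periodic resource dumps. The exact *.rgd line format is documented with the dataset; the comma-separated layout assumed below is a hypothetical stand-in used only for this sketch:

```python
def parse_economy(lines):
    """Rebuild each player's economy timeline from resource-dump lines.

    Assumed (hypothetical) line layout: frame,player,minerals,gas,supply,max_supply
    """
    timeline = {}  # player id -> list of (frame, minerals, gas, supply, max_supply)
    for line in lines:
        frame, player, minerals, gas, supply, max_supply = (
            int(field) for field in line.split(","))
        timeline.setdefault(player, []).append(
            (frame, minerals, gas, supply, max_supply))
    return timeline

# Frames run at 24 per second, so frame 7200 is 5 minutes into the game.
```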
We will consider units engaged in these attacks as armies and will seek a compact description of army compositions.
The idea behind army clustering is to give one "composition" label to each army depending on its ratios of the different unit types. Giving a "hard" (unique) label to each army does not work well because armies contain different combinations of unit-type components. For instance, a Protoss army can have only a "Zealots+Dragoons" component, but that will often be just one component (sometimes the backbone) of the army composition, augmented for instance with "High Templars+Archons".
Because a hard clustering is not an optimal solution, we used a Gaussian mixture model (GMM), which assumes that an army is a mixture (i.e. weighted sum) of several (Gaussian) components. We present the model in the Bayesian programming framework [Diard, Bessière, and Mazer2003]: we first describe the variables, the decomposition (independence assumptions) and the forms of the distributions. Then, we explain how we identified (learned) the parameters and lay out the questions that we will ask this model in the following parts.
C, our army cluster/component variable (C ∈ {1, …, K}). There are K unit clusters, and K depends on the race (the mixture components are not the same for Protoss/Terran/Zerg).
U (of length N), our N-dimensional vector of unit type proportions, i.e. U ∈ [0, 1]^N with U_i the proportion of the i-th unit type in the army. N depends on the race and is the total number of unit types. For instance, an army with equal numbers of two unit types (say, Zealots and Dragoons) and nothing else is represented as U = (0.5, 0.5, 0, …, 0), if those are the first two components of the vector. So Σ_{i=1}^{N} U_i = 1 whatever the composition of the army.
For the battles, the army compositions are independent across battles, and the unit type proportions vector U (army composition) is generated by a mixture of Gaussian components and thus depends on C.
U given C = c follows a Gaussian distribution, so U follows a mixture of Gaussian distributions:

P(U) = Σ_{c=1}^{K} P(C = c) · 𝒩(U; μ_c, σ_c)
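Concretely, such a proportions vector U can be computed from raw unit counts. A minimal sketch (the unit-type ordering and the truncated type list are illustrative choices, not the dataset's):

```python
def composition(counts, unit_types):
    """Turn raw unit counts into a proportions vector U with sum(U) == 1."""
    total = sum(counts.get(t, 0) for t in unit_types)
    if total == 0:
        raise ValueError("empty army")
    return [counts.get(t, 0) / total for t in unit_types]

# Truncated, illustrative unit-type ordering for Protoss:
PROTOSS_TYPES = ["Zealot", "Dragoon", "High Templar", "Archon"]

u = composition({"Zealot": 12, "Dragoon": 12}, PROTOSS_TYPES)
# u == [0.5, 0.5, 0.0, 0.0]
```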
We learned the Gaussian mixture model (GMM) parameters with the expectation-maximization (EM) algorithm on 5 to 15 mixture components, with spherical, tied, diagonal and full covariance matrices, using scikit-learn [Pedregosa et al.2011]. We kept the best-scoring models (varying the number of components) according to the Bayesian information criterion (BIC) [Schwarz1978].
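With scikit-learn's current API (the `GaussianMixture` class has since replaced the `GMM` class available at the time), this model-selection loop can be sketched as follows; the function name and grid are our own choices:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def best_gmm(X, k_range=range(5, 16),
             cov_types=("spherical", "tied", "diag", "full")):
    """Fit GMMs over a grid of component counts and covariance shapes,
    keeping the model with the lowest BIC."""
    best, best_bic = None, np.inf
    for cov in cov_types:
        for k in k_range:
            gmm = GaussianMixture(n_components=k, covariance_type=cov,
                                  random_state=0).fit(X)
            bic = gmm.bic(X)
            if bic < best_bic:
                best, best_bic = gmm, bic
    return best
```

The posterior responsibilities P(C | U) used later are then given by `best_gmm(X).predict_proba(compositions)`.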
Let μ_{1…K}, σ_{1…K} be respectively the K different N-dimensional means (μ_c ∈ ℝ^N) and the variances (σ_c) of the normal distributions. Initialize them randomly, and iterate until convergence (of the log-likelihood): for the i-th battle (one army with composition u^i), compute the responsibilities P(C = c | U = u^i) for each component c (E step), then re-estimate each μ_c and σ_c from the responsibility-weighted data (M step).
In a battle there are two armies (one for each player), so we can apply this clustering to both armies. If we have K clusters and N unit types, the opponent has K′ clusters and N′ unit types. We introduce C′ and U′, with the same semantics as C and U but for the enemy. In a given battle, we observe u and u′, respectively our army composition and the enemy's army composition. We can ask P(C | U = u) and P(C′ | U′ = u′).
As StarCraft unit types have strengths and weaknesses against other types, we can learn which clusters should beat other clusters (at equivalent investment) as a probability table. We use Laplace's law of succession ("add-one smoothing"), counting battles and weighting according to battle results (c ≻ c′ means "c beats c′", i.e. we won against the enemy):

P(c ≻ c′) = (1 + count(c ≻ c′)) / (2 + count(battles of c vs c′))
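Assuming each battle record carries both sides' most probable cluster labels and a boolean outcome (an assumption about the record layout, not the dataset's own format), the smoothed table can be built as:

```python
from collections import Counter

def win_probability_table(battles, n_clusters):
    """Estimate P(c beats c') with add-one (Laplace) smoothing.

    `battles` is an iterable of (c, c_enemy, we_won) triples: our cluster,
    the enemy's cluster, and whether we won.
    """
    wins, totals = Counter(), Counter()
    for c, ce, we_won in battles:
        totals[(c, ce)] += 1
        if we_won:
            wins[(c, ce)] += 1
    return {(c, ce): (1 + wins[(c, ce)]) / (2 + totals[(c, ce)])
            for c in range(n_clusters) for ce in range(n_clusters)}
```

A useful property of the smoothing is that cluster pairs never observed in a battle get probability 1/2 rather than an undefined 0/0.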
We used the dataset presented in this paper to learn all the parameters and to perform the benchmarks (setting 100 test matches aside and learning on the remainder of the dataset). First, we analyze the posteriors of clustering a single army, and then we evaluate the clustering as a means to predict the outcomes of battles.
Figure 1 shows a parallel plot of army compositions. We removed the least frequent unit types to keep only the 8 most important unit types of the PvP match-up, and we display an 8-dimensional representation of the army composition; each vertical axis represents one dimension. Each line (a trajectory in this 8-dimensional space) represents an army composition (engaged in a battle) and gives the percentage of each of the unit types. These lines (armies) are colored with their most probable mixture component, shown on the rightmost axis. We have 8 clusters (Gaussian mixture components): this is not related to the 8 unit types used, as the number of components was chosen by BIC score. Expert StarCraft players will directly recognize the clusters of typical armies; here are some of them:
Light blue corresponds to the "Reaver Drop" tactical squads, whose aim is to transport (with the flying Shuttle) the slow Reaver (zone-damage artillery) inside the opponent's base to cause massive economic damage.
Red corresponds to the “Nony” typical army that is played in PvP (lots of Dragoons, supported by Reaver and Shuttle).
Green corresponds to a High Templar and Archon-heavy army: the gas invested in such high-tech units means there are fewer Dragoons, complemented by more Zealots (which cost no gas).
Purple corresponds to Dark Templar (“sneaky”, as Dark Templars are invisible) special tactics (and opening).
Figure 2 showcases the dynamics of the cluster components: P(C_{t+1} | C_t), for Zerg (vs. Protoss) with a time step of 2 minutes. The diagonal components correspond to those which do not change between t and t+1 (in minutes), so it is normal that they are very high. The other components show the shifts between clusters. For instance, the square at position (0, 6) (first line, seventh column) shows an abrupt transition from the first component (0) to the seventh (6). This may be the production of Mutalisks (flying units which require unlocking several technologies, and for which players save up resources while opening their tech tree) from a previously very low-tech army (Zerglings).
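Such a transition matrix can be estimated from per-game sequences of most-probable cluster labels (one label per fixed time step). A minimal sketch, using the same add-one smoothing as the win-probability table:

```python
def transition_matrix(label_sequences, n_clusters):
    """Estimate P(C_{t+1} = j | C_t = i) from per-game sequences of cluster
    labels (one label every fixed time step, e.g. 2 minutes)."""
    # Add-one (Laplace) prior so unseen transitions get nonzero probability.
    counts = [[1] * n_clusters for _ in range(n_clusters)]
    for seq in label_sequences:
        for a, b in zip(seq, seq[1:]):  # consecutive (C_t, C_{t+1}) pairs
            counts[a][b] += 1
    return [[c / sum(row) for c in row] for row in counts]
```

Each row of the result sums to 1 and can be read as the distribution over the next cluster given the current one.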
We then used the learned table to estimate the outcomes of battles. For that, we used battles with limited disparity (the maximum strength ratio of one army over the other) of 1.1 to 1.5. Note that the army with superior numbers has more than a linear advantage over its opponent because of focus firing (efficiently micro-managed, an army 1.5 times the size of its opponent's can keep much more than one third of its units alive), so a disparity of 1.5 is very high. For information, there is an average of 5 battles per game at a 1.3 disparity threshold, and the number of battles (used) per game increases with the disparity threshold.
We also made a baseline heuristic, which uses the sum of the values of the units to decide which side should win. If we denote by v(u) the value of a unit u, the heuristic computes Σ_{u ∈ army} v(u) for each army and predicts that the winner is the one with the biggest score. For the value of a unit we used a combination of its mineral cost, gas cost, and supply cost.
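Such a value heuristic can be sketched as below. The mineral/gas/supply costs for Zealot and Dragoon are the in-game ones, but the weights `w_gas` and `w_supply` are illustrative assumptions, not the coefficients used in the paper:

```python
# In-game cost table (minerals, gas, supply) for two Protoss units.
COSTS = {"Zealot": (100, 0, 2), "Dragoon": (125, 50, 2)}

def value(unit_type, w_gas=1.0, w_supply=25.0):
    """Score one unit as a weighted sum of its resource costs.
    The weights here are illustrative assumptions."""
    minerals, gas, supply = COSTS[unit_type]
    return minerals + w_gas * gas + w_supply * supply

def predict_winner(army_a, army_b):
    """Baseline heuristic: the army with the larger summed value should win."""
    score = lambda army: sum(value(t) for t in army)
    return "A" if score(army_a) >= score(army_b) else "B"
```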
Of course, we recall that a random predictor would predict the result of the battle correctly 50% of the time.
A summary of the main metrics is shown in Table 2. The first line can be read as: for a forces disparity of 1.1, for Protoss vs. Protoss (first column),
considering only military units:
the heuristic predicts the outcome of the battle correctly 63% of the time.
the probability of one cluster mixture winning against another (P(c ≻ c′)), without taking the forces' sizes into account, predicts the outcome correctly 54% of the time.
the probability of one cluster mixture winning against another, also taking the forces' sizes into account, predicts the outcome correctly 61% of the time.
considering all units involved in the battle (military units, plus static defenses and workers): same as above.
Results are given for all match-ups (columns) and different forces disparities (lines). The last column sums up the means over all match-ups, with the whole army (military units plus static defenses and workers involved), for the three metrics.
Also, without explicitly labeling clusters, one can apply thresholding on special units (Observers, Arbiters, Defilers…) to generate more specific clusters. We did not include these results here (they require too much expertise/tuning), but they sometimes drastically increase prediction scores, as one Observer can change the course of a battle.
We can see that predicting battle outcomes (even with a high disparity) with "just probabilities" P(c ≻ c′) (without taking the forces into account) gives relevant results, as they are always above random predictions. Note that this is a very high-level (abstract) view of a battle: we do not consider tactical positions, nor players' attention, actions, etc. Also, it is better (on average) to consider the heuristic combined with the composition of the army ("prob × heuristic") than to consider the heuristic alone, even for high forces disparities. Our heuristic augmented with the clustering seems to be the best indicator for battle situation assessment. These prediction results with "just prob.", and the fact that the heuristic weighted by P(c ≻ c′) tops the heuristic alone, show that modeling army compositions as Gaussian mixtures of clusters works.
Secondly, and perhaps more importantly, we can view the difference between the "just prob." results and random guessing (50%) as the military efficiency improvement that we can (at least) expect from having the right army composition. Indeed, for small forces disparities (up to 1.1, for instance), the prediction based only on army composition ("just prob.": 63.2%) is better than the prediction with the baseline heuristic (61.7%). It means that we can expect to win 63.2% of the time (instead of 50%) with an (almost) equal investment if we have the right composition. Also, when we correctly predict the result of battles with disparity up to 1.5 58.5% of the time from "just prob." alone, this prediction success is independent of the sizes of the armies. What we predicted is that the player with the better army composition won (and not necessarily the one with more, or more expensive, units).
We delivered a rich StarCraft dataset which enables the study of tactical and strategic elements of RTS gameplay. Our (successful) previous works on this dataset include learning a tactical model of where and how to attack (both for prediction and decision-making), and the analysis of unit movements. We provide the source code of the extraction program (using BWAPI), which can be run on other replays. We proposed and validated an encoding of army compositions which enables efficient situation assessment and strategy adaptation. We believe it can benefit all current StarCraft AI approaches. Moreover, the probabilistic nature of the model makes it deal natively with incomplete information about the opponent's army.
Balla, R.-K., and Fern, A. 2009. UCT for tactical assault planning in real-time strategy games. In International Joint Conference on Artificial Intelligence, IJCAI, 40–45. San Francisco, CA, USA: Morgan Kaufmann Publishers Inc.
Pedregosa, F., et al. 2011. Scikit-learn: Machine Learning in Python. Journal of Machine Learning Research 12:2825–2830.