
Adversarial Decisions on Complex Dynamical Systems using Game Theory

We apply computational Game Theory to a unification of physics-based models that represent decision-making across a number of agents within both cooperative and competitive processes. Here the competitors each try to positively influence their own returns while negatively affecting those of their competitors. Modelling these interactions with the so-called Boyd-Kuramoto-Lanchester (BKL) complex dynamical system model yields results that can be applied to business, gaming and security contexts. This paper studies a class of decision problems on the BKL model, where a large set of coupled, switching dynamical systems are analysed using game-theoretic methods. Due to their size, the computational cost of solving these BKL games becomes the dominant factor in the solution process. To resolve this, we introduce a novel Nash Dominant solver, which is both numerically efficient and exact. The performance of this new solution technique is compared to traditional exact solvers, which traverse the entire game tree, as well as to approximate solvers such as Myopic and Monte Carlo Tree Search (MCTS). These techniques are assessed, and used to gain insights into both nonlinear dynamical systems and strategic decision making in adversarial environments.



1 Introduction

In this paper we study, through the lens of Game Theory, a complex dynamical system that unifies physics-originating models in a competitive decision-making context. The paper solves the model using modern computational methods and presents a parameter analysis to guide practical applications. The physics-based model seeks to represent the tension between cooperation and competition that is inherent to adversarial decision-making processes involving multiple agents. The model thus touches upon both cognitive and computational science. In order to study such processes, this paper presents a novel numerical treatment for game-theoretic solutions of large-scale simultaneous-move adversarial games conducted between rival agents.

In this model, each player is connected to others within their group through a nodal network structure representing agents (or subsystems) aligning with the player's goals through a Kuramoto model [Kuramoto1984]. This network model for oscillator synchronisation has been used as the basis of representation of a diverse set of natural, technological and social systems [doerfler2014, kalloniatis2019controlsync, wu2020synchronization]. In this context the model is designed to represent a continuous competitive Perception-Action cycle [Neisser1976] between any two agents, known in some contexts as the Boyd Observe-Orient-Decide-Act (OODA) loop [BoydOODA]. This model has been diversely applied to business, cybersecurity, and military contexts [negash2008business, demazy2018game, andrade2019cognitive]. The representation of the OODA loop through the Kuramoto model has been shown to apply both to competing sets of decision-makers [KALLONIATIS201621, HOLDER201710] and to decision-makers acting in isolation [kalloniatis2020HQ]. The Kuramoto model for a single group may be seen as a mathematical sociological model, as seen in applications to opinion dynamics [pluchino2006opinion] for example. The competitive aspect of the two-network variation of the Kuramoto-Sakaguchi system [Sakaguchi1986] naturally lends itself to a game-theoretic treatment. This version thus provides a representation of a competitive Command-and-Control (C2) context in a more generic approach than previous physics-based treatments [SONG20135206, SONG2015322]. The model captures two such 'social' systems with cooperation sought within, and competition across, each.

Coupling these oscillator models to a Lanchester model [Lanchester1916, MorseKimball-1951, mackay2006lanchester] allows for outcomes of the decision-making process to be quantified, such that success in coordinated decision-making results in enhanced resources of one side and depletion of the resources of the competitor. The Lanchester model is itself an adaptation of predator-prey dynamics, namely the multi-species Lotka-Volterra model. As representations of growth and decay of entities, these models describe physical processes often obeying conservation laws, with wide application in ecology [BRADSHAW1998107]. The unification of these models, Kuramoto and Lanchester, was first proposed in [Ahern2020], and in this context is called the Boyd-Kuramoto-Lanchester (BKL) dynamical system. In essence, because both models admit treatment in continuous-time differential equations, their unification is entirely natural.

Of particular interest is the influence of different network structures on the cohesion and adaptability of players. For this work we deliberately consider an asymmetric arrangement in which one group of players is subject to a hierarchical network structure. The other group has a more organic and interlinked network topology coupling its oscillators. These structures are represented respectively by the Blue and Red players of Figure 1. The interactions between these two groups of players are fixed such that only a subset of the nodes from each directly interacts with those of the opponent group; nodes not connected in this way may be said to play leadership roles within the group.

This model of two players engaged in adversarial decision making under constrained resources rewards internal synchronisation, but also incorporates the potential for adversary driven outcomes that undermine the capacity for coupling. The competitive nature of these components creates a process that is inherently Game-Theoretic in nature [Ahern2020], and exists alongside other recent works tying dynamical systems to Game-Theory [li2020exploring]. Due to their successful application to multiple adversarial environments, the mathematical and conceptual framework of Security Games [alpcan-book] is applied to two-player adversarial BKL games. When the outcome of these systems are determined by a multi-stage decision making process, these games present an as yet unexplored challenge, in terms of both their analysis and the development of appropriate solution strategies under computational constraints. As such, particular focus is placed upon both establishing the theoretical basis for such a multidisciplinary framework, and developing and implementing computational tools that are suitable for such a model.

Figure 1: Conceptual diagram of the Boyd-Kuramoto-Lanchester (BKL) games. Nodes represent agents, with the player corresponding to the aggregate set of nodes. Solid and dashed links respectively represent networked connections between allied agents and between adversaries, and the different shades of blue indicate the relative state of synchronisation of agents in the hierarchical structure.

The contributions of this paper include:

  • Constructing a novel union of dynamical systems and game theory through the BKL model of networked oscillators.

  • Introducing novel numerical algorithms for solving game theoretic problems, with a focus upon numerical scaling and large game trees.

  • Detailed numerical analysis of the outcomes of game theoretic dynamical systems, with a particular focus upon understanding asymmetric adversarial decision making processes, which provides insights for practical applications.

To support this, the paper begins by introducing the dynamical systems model for BKL dynamics. Section 3 then introduces a specific game-theoretic formulation. To facilitate the solution of such games, a range of computational techniques for solving the discrete dynamic games is presented in Section 4. The behaviour of the game solutions under various parameters and scenarios is discussed in Section 6. The paper concludes with remarks and a discussion of future research directions.

2 Boyd-Kuramoto-Lanchester Complex Dynamical Systems

In the following we present first the deterministic two-network Kuramoto-Sakaguchi [Sakaguchi1986] oscillator model, and discuss how it is mapped to an adversarial context; at this level the representation is called the ‘Boyd-Kuramoto’ (BK) Model as it captures competing OODA loop cycles as a continuous process in the phase oscillator at the heart of the formulation. Next, we incorporate into this the well-known Lanchester model to provide the combined BKL Model. This summarises the original proposal in [Ahern2020].

2.1 Boyd-Kuramoto Dynamical Model

Let 𝓑 and 𝓡 be the respective sets of Blue and Red Agents. Each Blue Agent i ∈ 𝓑 has a frequency ω_i and phase β_i, and similarly each Red Agent j ∈ 𝓡 has frequency ν_j and phase ρ_j. The Blue Agents are connected to each other through a symmetric adjacency matrix A^B, and the Red Agents via the matrix A^R. The matrix M represents the unidirectional external links from Blue to Red Agents. While asymmetric interactions are available, for this work we impose that the interactions from Red to Blue are simply the transpose of M. Figure 1 visualises one possible configuration, where common shades of blue (for the hierarchical group of players) indicate agents close in synchronisation to each other. The quantities σ_B, σ_R, ζ_B, and ζ_R are the respective coupling constants for Blue and Red internally, and for Blue to Red and vice versa. The resulting Boyd-Kuramoto model is inherently nonlinear, and admits complex and chaotic dynamics that can be derived through (typically numerical) solutions of

\dot{\beta}_i = \omega_i - \sigma_B \sum_{j \in \mathcal{B}} A^B_{ij} \sin(\beta_i - \beta_j) - \zeta_B \sum_{j \in \mathcal{R}} M_{ij} \sin(\beta_i - \rho_j - \phi_B), \quad i \in \mathcal{B},
\dot{\rho}_i = \nu_i - \sigma_R \sum_{j \in \mathcal{R}} A^R_{ij} \sin(\rho_i - \rho_j) - \zeta_R \sum_{j \in \mathcal{B}} (M^\top)_{ij} \sin(\rho_i - \beta_j - \phi_R), \quad i \in \mathcal{R}, \qquad (1)

where ⊤ is the transpose operator, and φ_B and φ_R are the phase lags (frustrations) [KALLONIATIS201621]. These two lags capture the essence of Boyd's proposal that advantage is sought by one side over the other insofar as the coupled dynamics influence the realisation of one side being ahead of the other by the desired amount: φ_B for Blue, and φ_R for Red. Whether collectively the intended 'aheadness' of one side or the other is achieved depends on the evolution of the non-linear dynamics.
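As a concrete illustration, the two-network frustrated Kuramoto dynamics described above can be integrated numerically. The sketch below assumes all-to-all internal topologies, a small cross-network link matrix, and arbitrary parameter values purely as placeholders; all names are illustrative, not drawn from the paper's implementation.

```python
import numpy as np
from scipy.integrate import solve_ivp

def bk_rhs(t, theta, omega, A_B, A_R, M, sigma_B, sigma_R,
           zeta_B, zeta_R, phi_B, phi_R, nB):
    """Right-hand side of a two-network frustrated Kuramoto (BK) model.

    theta stacks Blue phases beta (first nB entries) and Red phases rho.
    """
    beta, rho = theta[:nB], theta[nB:]
    d_beta = beta[:, None] - beta[None, :]   # Blue-Blue phase differences
    d_rho = rho[:, None] - rho[None, :]      # Red-Red phase differences
    d_br = beta[:, None] - rho[None, :]      # Blue-Red phase differences
    dbeta = (omega[:nB]
             - sigma_B * (A_B * np.sin(d_beta)).sum(axis=1)
             - zeta_B * (M * np.sin(d_br - phi_B)).sum(axis=1))
    drho = (omega[nB:]
            - sigma_R * (A_R * np.sin(d_rho)).sum(axis=1)
            - zeta_R * (M.T * np.sin(-d_br.T - phi_R)).sum(axis=1))
    return np.concatenate([dbeta, drho])

rng = np.random.default_rng(0)
nB, nR = 5, 5
A_B = np.ones((nB, nB)) - np.eye(nB)    # placeholder all-to-all topology
A_R = np.ones((nR, nR)) - np.eye(nR)
M = np.zeros((nB, nR))
M[-2:, :2] = 1.0                        # only a subset of nodes interact
omega = rng.normal(1.0, 0.1, nB + nR)
theta0 = rng.uniform(0.0, 2 * np.pi, nB + nR)
sol = solve_ivp(bk_rhs, (0.0, 20.0), theta0,
                args=(omega, A_B, A_R, M, 0.5, 0.5, 0.2, 0.2,
                      np.pi / 4, np.pi / 4, nB))
```

The restriction that only a subset of nodes carries cross-network links mirrors the leadership structure of Figure 1.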

2.2 Boyd-Kuramoto-Lanchester Dynamical Model

To extend the set of admitted dynamics, a Lanchester model of adversarial interactions can be incorporated within the model, in order to quantify the implications of the decision making processes. The combined Boyd-Kuramoto-Lanchester (BKL) model is immediately applicable to competitive decisions as it captures the complex cyclic decision processes and their effects on adversarial populations of networked heterogeneous agents, in a manner that builds complexity through the aggregated model dynamics.

To this point the Boyd-Kuramoto decision making is detached from any outcomes as might be realised in the physical state of the entities. Representing the outcomes of the competitive process can be achieved by coupling the Boyd-Kuramoto equations to a larger dynamical system. Some options for this include Colonel Blotto Games [Roberson06], Volley/Salvo models [Hughes95], and the Lanchester model [Lanchester1916]. Of these, the Lanchester model holds particular interest, due to its well-understood competitive properties and applicability to Operations Research. For this, the resources, or force strengths, B and R of the Blue and Red players are quantified by

\dot{B} = -\kappa_R R, \qquad \dot{R} = -\kappa_B B, \qquad (2)

where κ_B and κ_R are relative measures of adversarial effectiveness between the respective agent populations. When κ_B and κ_R are constant the Lanchester equations are integrable and admit a unique solution.
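The integrability of the constant-coefficient case is easy to check numerically: writing the attrition coefficients as kappa_B and kappa_R, the 'square-law' quantity kappa_B·B² − kappa_R·R² is conserved along trajectories. A minimal sketch with illustrative parameter values:

```python
import numpy as np
from scipy.integrate import solve_ivp

def lanchester(t, y, kappa_B, kappa_R):
    """Aimed-fire (square-law) Lanchester attrition dynamics."""
    B, R = y
    return [-kappa_R * R, -kappa_B * B]

kB, kR = 0.02, 0.03
sol = solve_ivp(lanchester, (0.0, 10.0), [100.0, 90.0], args=(kB, kR),
                rtol=1e-9, atol=1e-9)
B, R = sol.y
invariant = kB * B**2 - kR * R**2   # conserved square-law quantity
assert np.allclose(invariant, invariant[0], rtol=1e-5)
```

The sign of the conserved quantity determines which side is eventually depleted, which is the sense in which the constant-coefficient equations admit a unique, fully predictable outcome.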

In an adversarial environment it is reasonable to expect that each κ is no longer strictly constant, but rather exhibits a dependence upon the effectiveness of the decisions made. As such, the full BKL model takes the form

\dot{B} = -\kappa_R(\Delta)\, R, \qquad \dot{R} = -\kappa_B(\Delta)\, B, \qquad \Delta = \frac{1}{|\mathcal{B}|} \sum_{i \in \mathcal{B}} \beta_i - \frac{1}{|\mathcal{R}|} \sum_{j \in \mathcal{R}} \rho_j, \qquad (3)

with populations thresholded to prevent physically infeasible negative populations. This equation is coupled to (2.1) through the agent phases β_i and ρ_j, and |𝓑| and |𝓡| correspond to the cardinalities of the Blue and Red agent sets. Note that this model may be called a global model, in which the resources of the two sides are homogeneous. In [Ahern2020] a heterogeneous form of the model is also given, where the resources may also be structured through network parameters using the generalisation of the Lanchester model in [Kalloniatis2020NetLanch]. We do not treat this model here, in this first application of computational Game Theory to such a system.
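One way to realise the population layer with decision-dependent attrition and thresholding is sketched below. The sinusoidal modulation of the coefficients is an illustrative placeholder only, not the specific coupling function of the BKL model; all names are hypothetical.

```python
import numpy as np

def bkl_step(B, R, delta, kappa_B, kappa_R, phi_B, phi_R, dt):
    """One explicit Euler step of the population (Lanchester) layer.

    delta is the difference of Blue and Red phase centroids supplied by
    the oscillator layer. The 0.5*(1 + sin(...)) modulation below is an
    illustrative placeholder for the decision-effectiveness coupling.
    """
    eff_B = 0.5 * (1.0 + np.sin(delta - phi_B))   # Blue nearer its desired lead
    eff_R = 0.5 * (1.0 + np.sin(phi_R - delta))   # Red nearer its desired lead
    # Threshold at zero: populations cannot become negative
    B_new = max(B - dt * kappa_R * eff_R * R, 0.0)
    R_new = max(R - dt * kappa_B * eff_B * B, 0.0)
    return B_new, R_new
```

In a full simulation this step would be interleaved with the numerical integration of the phase equations, which supply delta at each step.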

3 A Dynamic Game Approach to Competitive Decisions on Complex Systems

In the above model, one may derive thresholds or optima for decisions by one side assuming fixed parameters for the other, as in [KALLONIATIS201621]. The decisions here are about choices of one or more of network structure, couplings, frequency lay-down or degree of aheadness. The fact that the competitor has a say in the outcome (the success of those choices) means that a game-theoretic treatment is essential. Our work follows a previously developed framework [basargame], in which the BKL engagement can be classified as a two-player, non-zero-sum, strategic game. Within this context, the players control their own sets of networked or connected populations of agents, for Blue and Red respectively, along with their corresponding adjacency matrices A^B and A^R, representing the underlying connection graphs, a conceptual representation of which is shown in Figure 1. These graphs play a significant role within both BK and BKL models as captured by the dynamic equations (2.1) and (3), respectively.

The actions of the players can be considered as control or input variables of the BK and BKL models. As a specific choice, we assume that the players decide on their strategic goals of leading or lagging targets φ_B and φ_R respectively, representing their desired position in the Boyd (OODA) cycle, as described in the BK and BKL equations (2.1) and (3). At discrete decision points in time t_1, …, t_K, which subdivide the overall engagement time T, the BKL equations are numerically integrated over a finite time horizon that is long enough to allow the system dynamics to meaningfully evolve over the decision window. Over this time horizon, the players decide on their actions in terms of the strategy vectors

\boldsymbol{\phi}_B = \left(\phi_B(t_1), \ldots, \phi_B(t_K)\right), \qquad \boldsymbol{\phi}_R = \left(\phi_R(t_1), \ldots, \phi_R(t_K)\right), \qquad (4)

over the finite set of time steps t_k, k = 1, …, K, corresponding to stages in the time interval [0, T]. The resulting game is formally defined by a tuple comprising the set of players, their decision sets, and the utilities u_B and u_R of the Blue and Red players after actions have occurred.

Each round of the game corresponds to a level of the game tree (extensive form), starting from the root node at the top. Under the assumption that the choice of (φ_B, φ_R) is sampled from a discrete action space (which allows the game to be considered in a computationally tractable manner), each level of the game is then a static bi-matrix game with utility values and player actions dictated by the underlying dynamic BKL model. This is not a repeated game, since the underlying game state changes with each round, that is, as we descend the tree level by level, as a result of the BKL dynamics.
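Since each stage of the tree is a static bi-matrix game, its pure-strategy equilibria can be located as intersections of the players' best responses. A small self-contained sketch:

```python
import numpy as np

def pure_nash(U_B, U_R):
    """Return all pure-strategy NE (i, j) of a bi-matrix stage game.

    U_B[i, j] and U_R[i, j] are the utilities received by Blue (row
    player) and Red (column player) under simultaneous actions i, j.
    """
    best_B = U_B == U_B.max(axis=0, keepdims=True)  # Blue best responses per column
    best_R = U_R == U_R.max(axis=1, keepdims=True)  # Red best responses per row
    return [(int(i), int(j)) for i, j in np.argwhere(best_B & best_R)]

# Illustrative 2x2 stage game with a single pure equilibrium at (1, 1)
U_B = np.array([[3, 0], [5, 1]])
U_R = np.array([[3, 5], [0, 1]])
```

An action pair is an equilibrium exactly when it is a best response for both players simultaneously, which is what the elementwise conjunction checks.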

As the game evolves, the game tree reaches a terminal state either when a fixed time point has been reached, or when one player breaches a termination condition. This termination condition can be flexibly defined, but in the BKL context it would typically be that a player's resources are depleted to the degree that they no longer have the capability to participate effectively within the game. At the game's end state the outcome is quantified by a pair of utility functions measuring the final balance of resources

u_B = B(T) - R(T), \qquad u_R = R(T) - B(T). \qquad (5)

The game-theoretic model of the players' dynamics assumes that the agents are rational decision makers who choose strategies and take actions that maximise their own utilities, in light of the predicted and observed behaviour of their opponent. Such decisions must also consider the order of play, which can be either simultaneous or sequential, with the latter leading to a 'Stackelberg' (or 'Leader-Follower') game structure. We focus on the case in which players take actions concurrently, so that neither player has an information advantage over the other at each decision point. Such a game structure is particularly well suited to high-tempo decision-making environments similar to the original military context of the OODA loop. It is worth noting that players having this information does not necessarily mean that they have the computing power to calculate the entire game tree, in other words all possible outcomes of the game. This combinatorial complexity distinguishes the game at hand from classical full-information games [basargame].

We use the deterministic pure-strategy classical Nash Equilibrium (NE) as the solution concept to explore the behaviour of the agents and their optimal behaviour within the competitive environment. Formally, the NE is the set of player strategies (and associated utilities) from which no player gains by deviating, when all other players also follow their own NE strategies. It corresponds to a fixed point, and to the intersection of the players' best responses [basargame]. It is worth noting that the bi-matrix games solved at each level always have a solution in mixed strategies, corresponding to a probability distribution over the actions (pure strategies) [basargame]. The choice of pure strategies, in contrast to probabilistic mixed strategies, reflects the low likelihood of repeated replay for BKL scenarios of interest.

However, a pure-strategy NE may not exist in the class of games considered here, and as such it is natural to consider the security strategies of the players, which ensure a minimum performance. Also known as minmax and maxmin strategies, these allow each player to establish a worst-case lower bound on their outcome [gtessentialsbook]. In the absence of a NE solution these can be overly conservative, which has led to alternative solution concepts such as regret minimisation. Another related solution concept is the ε-NE, which is an approximation to the NE solution [gtessentialsbook].

4 Strategic Solution Techniques

Solving a dynamic game formulated on a BKL complex system, and hence obtaining best-response strategies of the players, involves both constructing Game-Theoretic solutions to the overall game and numerically solving the BKL equations that the Game-Theoretic results depend upon. As the BKL ordinary differential equations (ODEs) are constant-coefficient, coupled initial value problems with trigonometric nonlinearities, solving them is relatively straightforward and is standard in studies of the Kuramoto model for complex networks. However, this process inherently becomes a hurdle as the size of the game tree increases.

In order to consider large-scale decision processes while being cognisant of computational constraints, we consider approximate solutions of the BKL equations using the Dormand-Prince Runge-Kutta method [dormand1996] with a coarse fixed step-size. While the coarse step-size introduces inaccuracies in the player utilities, our investigation has shown consistency between the optimal player strategies obtained from the more accurate and the approximate solvers, even when the player utilities deviate from each other. This change significantly decreases the overall computational cost without influencing the relative cost across solution methodologies, allowing the scaling properties of each algorithm to be assessed.

while Exploring Tree do
    while Depth < Terminal do
        Identify available actions;
        Select action and save action to path;
    Solve game for path;
    Backpropagate information;

Algorithm 1 Generalised Process for Tree Exploration

4.1 Full Competitive Decision (Game) Tree Solver

The Full Tree solver operates by constructing an extensive-form representation of the competitive decision process, comprising all potential choices of the action parameters φ_B and φ_R at each decision point. Performing a depth-first search across all terminal leaf states, and then backpropagating the NE solution at each depth recursively from the terminal depth to the root node, yields a NE utility representing the utility at the game's terminal state [gtessentialsbook], which corresponds to an exact solution of the game tree. This process directly follows Algorithm 1, where backpropagation only occurs once all potential action pairs from a point in the tree have been explored. When this condition has been met, and solved for, at the root node of the game tree, the game has been solved.
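The recursive depth-first backpropagation described above can be sketched as follows, with the dynamical update and terminal utility abstracted behind illustrative callables and a zero-sum security backup at each level:

```python
import numpy as np

def full_tree_value(state, depth, actions_B, actions_R, step, utility):
    """Exhaustive depth-first solution of a simultaneous-move game tree.

    step(state, aB, aR) advances the underlying dynamical system and
    utility(state) scores a terminal state for Blue (zero-sum). The
    names and the toy interface are illustrative only.
    """
    if depth == 0:
        return utility(state)
    U = np.empty((len(actions_B), len(actions_R)))
    for i, aB in enumerate(actions_B):
        for j, aR in enumerate(actions_R):
            # Every action pair spawns a full subgame: cost grows as (A_B * A_R)^D
            U[i, j] = full_tree_value(step(state, aB, aR), depth - 1,
                                      actions_B, actions_R, step, utility)
    return U.min(axis=1).max()   # Blue's security (max-min) value of the stage
```

The nested loops make the exponential cost visible: every one of the A_B·A_R action pairs recurses into a complete subgame.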

A saddle point, at which the exact NE would sit, is not guaranteed to exist for two-player simultaneous-move zero-sum games. Therefore the NE is approximated by adopting the security (or max-min and min-max) strategies of the players [gtessentialsbook] as the solution concept, via

\underline{u} = \max_{i \in \mathcal{D}_B} \min_{j \in \mathcal{D}_R} U_{ij}, \qquad \bar{u} = \min_{j \in \mathcal{D}_R} \max_{i \in \mathcal{D}_B} U_{ij}, \qquad (6)

where U is a matrix of the recorded utilities corresponding to each unique decision pairing. The indices i and j and the sets 𝒟_B and 𝒟_R correspond to the decisions and decision sets for each player at the currently explored component of the game tree.
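A small numerical illustration of the max-min and min-max security values and strategies for a single utility matrix (values chosen arbitrarily):

```python
import numpy as np

# Utility matrix U[i, j]: Blue (row player) receives U, Red receives -U
U = np.array([[ 2.0, -1.0, 0.5],
              [ 1.5,  0.0, 1.0],
              [-2.0,  3.0, 0.0]])

under_v = U.min(axis=1).max()         # max-min: Blue's guaranteed floor
over_v = U.max(axis=0).min()          # min-max: Red's guaranteed ceiling
i_star = int(U.min(axis=1).argmax())  # Blue's security strategy
j_star = int(U.max(axis=0).argmin())  # Red's security strategy
# Weak duality: the floor never exceeds the ceiling; equality
# holds exactly when a saddle point (pure-strategy NE) exists.
assert under_v <= over_v
```

Here the two values differ (0 versus 1), so this particular matrix has no saddle point and the security strategies are the conservative fallback.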

While any game tree corresponding to a zero-sum simultaneous-move game can be exactly solved in this manner, the computational burden of resolving all possible game states can make this process intractable. This limitation stems from the growth of the size of the game tree as the number of action states and decision points is increased. If A_B and A_R represent the sizes of the action spaces for each player, and D the number of decision points, then the size of the game, and the ensuing computational cost, can be shown to be O((A_B A_R)^D). For future reference, we shall impose that A_B = A_R = A, allowing the cost to be expressed as O(A^{2D}).

In the context of BKL games, the problematic nature of this growth is not a direct consequence of the size of the game trees themselves, but rather of how many times the ODEs need to be solved, as it is this part of the process that dominates the computational cost. As a consequence, the growth of computational complexity with the number of decision points makes constructing a solution using the Full Tree approach infeasible when the game involves large decision spaces, or long time horizons involving multiple decision points.

4.2 Nash Dominant Game Pruning

In sequential-move games, where player decisions follow one another, large decision trees can be solved by employing Alpha-Beta pruning, which is frequently used to reduce the portion of the game tree that must be explored to reach a solution. In the best case, Alpha-Beta pruning reduces the computational cost from O(A^{2D}) to O(A^D), by excising subgame branches that can be proven not to contain the NE, without needing to explore the subgame in question. Importantly, Alpha-Beta pruning provably preserves the solution of the game tree, and is thus classified as an exact solver, returning the same NE as solving over the full tree.

While Alpha-Beta pruning is a powerful approach for sequential-move games, it cannot be applied to simultaneous-move games. As such, we present Nash Dominant Game Pruning, a novel exact pruning algorithm. As with the Full Tree solver, the game is explored through a recursive process, with utilities calculated at the leaves and then back-propagated up the tree. In contrast to the Full Tree solver, Nash Dominant identifies action pairs that are strategically dominated (and so cannot correspond to the equilibrium state) and truncates all subsequent states within the game tree.

This process is performed by describing the action pairs as a matrix game and identifying whether any recorded utility within an incompletely explored column is smaller than the largest column minimum of any completely explored column. If so, then it follows from Equation (6) that the NE cannot exist inside that column, and the subgames that stem from those points in decision space can be excluded. This pruning only affects the selection of available actions within Algorithm 1; however, for completeness Algorithm 2 presents all steps of Nash Dominant Game Pruning. Here, evaluating the path refers to finding the terminal-state utility from the state of the two players, and scoring the result involves solving Equation (6).
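The column-pruning rule for a single stage game can be sketched as follows; `evaluate` stands in for the expensive recursive subgame solve, and all names are illustrative:

```python
import numpy as np

def prune_stage(columns, evaluate):
    """Nash-Dominant pruning of one stage (matrix) game.

    `columns` is a list of columns of unexplored entries; evaluate(entry)
    returns the backed-up subgame utility. A column is abandoned as soon
    as one of its entries falls below the best column minimum seen so far,
    since by the max-min rule the equilibrium cannot lie in that column.
    """
    alpha = -np.inf          # largest column minimum over explored columns
    best_col, evals = None, 0
    for c, col in enumerate(columns):
        col_min, pruned = np.inf, False
        for entry in col:
            u = evaluate(entry)
            evals += 1
            if u < alpha:    # dominated: skip the rest of this column
                pruned = True
                break
            col_min = min(col_min, u)
        if not pruned and col_min > alpha:
            alpha, best_col = col_min, c
    return best_col, alpha, evals

# Columns two and three are pruned part-way: 7 evaluations instead of 9
best_col, alpha, evals = prune_stage([[3, 4, 5], [0, 9, 9], [4, 4, 1]],
                                     lambda u: u)
```

Because each pruned entry removes an entire subgame from consideration, the savings compound multiplicatively down the tree.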

Data: NashDominant(P, i, j, α, D) for P the historic path, i and j the decision indices, α the largest column minimum found so far, and D the depth into the game tree.

Path = Path + (i, j);
if Depth < Terminal Depth then
    while any element of the decision space is unexplored do
        for all (i, j) action pairs in the column that have not been explored do
            results(i, j) = NashDominant(P, i, j, α, D + 1);
            if results(i, j) < α then
                Set all remaining elements in the column to results(i, j);
        if min of column > α then
            α = min of column;
        Increment row, increment column;
    return Score(results);
else
    return Evaluate(Path);

Algorithm 2 Pseudocode for the Nash Dominant solver.

This process provably yields the NE solution if it exists, and the security strategy if it does not. In the best case, the algorithm need only solve 2A − 1 of the A² subgame states from each game state, giving O((2A)^D) complexity; in the worst case the scheme scales with the full O(A^{2D}).

4.3 Myopic Approximation

One advantage of games like the BKL model is that the utility corresponding to a state can be exactly calculated, even if it does not correspond to a terminal node. This is a consequence of the game being scored in terms of changes in Lanchester resources, or force strengths, rather than a binary win-loss state. As such, an approximate solution for large game trees can be found by evaluating the utility resulting from all action pairings at the first depth, and then selecting the equilibrium solution (effectively a repeated call of Algorithm 1 for a game depth of one). At all subsequent depths, the tree is pruned so that only actions stemming from the equilibrium solution at the previous depth are considered. Repeating this process to the terminal depth of the tree results in what is known as the Myopic (or Greedy) approximate solution. The validity of this approach rests on the premise that the influence of early decisions likely dominates the NE, due to the dependence of the system outputs on the Lanchester exponential decay. Under this premise, Myopic search can yield an accurate and numerically efficient estimate of the NE through a limited breadth-first search scaling with O(D A²). However, the nonlinear nature of the BKL system admits the potential for chaotic dynamics, contradicting the assumptions of Myopic search.
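A sketch of the Myopic procedure, with the one-step stage utility and the state update abstracted behind illustrative callables, and security strategies used as the stage equilibrium:

```python
import numpy as np

def myopic_path(state, depth, actions_B, actions_R, step, stage_value):
    """Greedy (Myopic) approximation of a simultaneous-move game.

    At each decision point only the one-step stage game is solved;
    the chosen (max-min, min-max) pair is then committed to and the
    state advanced. stage_value(state, aB, aR) scores the immediate
    successor state for Blue. All names are illustrative.
    """
    path = []
    for _ in range(depth):
        U = np.array([[stage_value(state, aB, aR) for aR in actions_R]
                      for aB in actions_B])
        i = int(U.min(axis=1).argmax())   # Blue's max-min row
        j = int(U.max(axis=0).argmin())   # Red's min-max column
        path.append((actions_B[i], actions_R[j]))
        state = step(state, actions_B[i], actions_R[j])
    return path, state
```

Only D stage games of size A² are ever evaluated, which is the source of the method's efficiency and of its blindness to longer-horizon dynamics.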

4.4 Monte Carlo Tree Search (MCTS)

Monte Carlo techniques are well known for their ability to numerically approximate solutions through partial exploration of large and complex spaces. In the case of tree structures, Monte Carlo Tree Search (MCTS) encompasses a family of techniques for estimating the NE of sequential or simultaneous move games. The utility of MCTS for complex games has been demonstrated in a number of board games [gelly2012], as well as more general scenarios for decision making in complex environments [arneson2010, browne2012, lanctot2013, chen2015decentralized, haeri2017virtual].

Constructing a solution to a game with MCTS requires striking a balance between exploration of the game tree, in which visits to new subgame regions are prioritised, and exploitation of previously visited subgame regions, in order to improve knowledge about their subgame states. MCTS provides a framework for iteratively exploring game trees, with the aim of reaching an equivalence class with the full game tree. As the MCTS procedure explores the game tree, the observed game structure asymptotically converges to that of the full game. Under the restriction that the selection approach is ε-Hannan consistent, MCTS algorithms converge to an approximate NE of the extensive-form game [lisy2013]. Our investigation is based upon the Decoupled UCT (DUCT) algorithm, which balances exploration and exploitation of the game tree in a fashion inspired by multi-armed bandits [kocsis2006]. After each player has visited each possible action once, subsequent actions are chosen by

a^\ast = \arg\max_{a} \left[ f\!\left(\frac{X_{s,a}}{n_{s,a}}\right) + C \sqrt{\frac{\ln n_s}{n_{s,a}}} \right]. \qquad (7)

Here X_{s,a} is the sum of rewards for the subgame corresponding to action a from state s, where s corresponds to the position in the partially explored game tree, and n_s and n_{s,a} represent the number of visits of the current game state and of each of its subgame actions, respectively. The function f is a departure from previous implementations of DUCT, and serves to map its argument to within [0, 1], based upon the largest and smallest rewards observed to that point within the MCTS exploration process across all visited leaf nodes, in order to make MCTS more appropriate for games with a scored output rather than binary win-loss states. Thus the first term of Equation (7) rewards exploiting areas of the game tree that are known to be of high utility for each player, and the second biases the search process towards actions for which less knowledge about the terminal states is available.
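The decoupled selection rule can be sketched for one player at one state as follows; the linear rescaling stands in for the normalisation function described above, and all names are illustrative:

```python
import math

def duct_select(stats, n_s, c=math.sqrt(2), lo=0.0, hi=1.0):
    """Decoupled-UCT action choice for one player at one game state.

    stats maps action -> (X, n): summed reward and visit count for that
    action; n_s is the total visit count of the state. Rewards are
    rescaled to [0, 1] via the observed reward range (lo, hi), standing
    in for the normalisation function f of the selection rule.
    """
    def score(a):
        X, n = stats[a]
        if n == 0:
            return float('inf')      # visit every action once first
        mean = (X / n - lo) / (hi - lo) if hi > lo else 0.0
        return mean + c * math.sqrt(math.log(n_s) / n)

    return max(stats, key=score)
```

Because each player runs this rule over its own statistics independently, the two selections are decoupled, which is the defining feature of DUCT for simultaneous-move games.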

Data: MCTS(P, node, i, j, α, D) for P the historic path, i and j the decision indices, and D the depth into the game tree.

Path = Path + (i, j);
if node is a terminal state then
    return the node evaluated at that point;
if node is not terminal and the expansion criterion is met then
    score = MCTS(P, node, i, j, α, D);
    Update(node);
    return score;
(i, j) = Selectively exploit node;
Update(node);
return score;

Algorithm 3 MCTS DUCT approximate game solver pseudocode.

Algorithm 3 utilises backpropagation, in a similar fashion to the Full Tree and Nash Dominant solvers, to pass leaf information up the tree. However, due to memory concerns with large trees, the update procedure only adds to the average reward of each of the parent game states of a leaf node, as stored in X_{s,a}. Due to both this and MCTS's lack of guarantees of fully exploring subgames, directly solving for the NE is not possible, and would require an expensive leaf-to-root approach. Instead, the NE can be approximated following a top-down path by search heuristics [coulom2006, chaslot2008, schadd2009]. While many such approaches exist, our approach selects the action with the greatest expected utility by solving Equation (7) with the exploration term suppressed.

5 Numerical Analysis of BKL Game Solutions

Figure 2: Specific network structures for the Blue (Hierarchical, left) and Red (Random, right) players, as used in all the following experiments.

For a game where each player has the choice of four actions at each decision point, Figure 5 outlines the solved equilibrium dynamics. In this example, each player is able to re-position at each of the decision points, and follows the parameters of Table 1 and the force structures of Figure 2. The asymmetry between the players is deliberate, and has been chosen to produce a closely balanced competitive environment between two players with distinct organisational structures. As will be discussed later in this section, the interplay between the structure of a force and its strength relative to its opponent can be assessed by perturbing the parameters of this balanced environment.

Parameter Value
Initial population (B and R) &
mean()
mean()
Table 1: BKL parameter space used in Section 5, reflecting recent practical examples [KALLONIATIS201621, HOLDER201710]. The Blue and Red players respectively represent hierarchical and peer-to-peer randomised decision-making structures.

Simulations following these parameters demonstrate that the initial force strength advantage is preserved across time (Figure 5(a)), reflecting that the population asymmetry balances out the structural differences between the two players’ internal organisation, allowing Red to remain competitive with Blue. However, the difference between force strengths across each decision point is not conserved: there are distinct deviations at the decision points, seen in panel (b), so that at each such point Blue and Red are positioning themselves to optimise their final utility. These choices reflect the changes in each player’s positioning in , as observed within Figure 5(b), which shows Red’s need to re-position more frequently. From this we infer that at each decision point after the opening state Red seeks to position itself ahead of Blue in phase. However, through its initial force advantage, Blue is able to maintain its relative advantage over Red without this phase advantage.

(a) Population difference
(b) (Blue) and (Red), radius increasing with time
Figure 5: Force strength difference (a) and actions in (b) for the Red and Blue players (as per Figure 1) when following the NE. The game has four potential actions at each of four decision points, with each decision point denoted by the vertical green lines in panel (a). Parameters according to Table 1.

The nature of these outcomes is intrinsically linked to the game-theoretic solution of the game. Figure 11 demonstrates the evolution of the game if the Blue player maintains their Nash Equilibrium strategy while the Red player deviates from the equilibrium strategy by selecting  at each of the decision points. This change induces a deleterious outcome for Red relative to the equilibrium utility. Under the precepts of a Nash Equilibrium, the behaviour of the Blue player is not necessarily the optimal response to Red’s sub-optimal (“irrational”) decisions; rather, Blue is guaranteed not to be disadvantaged by them. The difference between the dynamics is persistently greater than , which indicates that no phase locking occurs [KALLONIATIS201621].
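The Nash property invoked here—that a unilateral deviation by Red cannot improve Red's outcome against an equilibrium-playing Blue—can be illustrated on a toy zero-sum stage game. The payoff matrix below is invented for illustration and is unrelated to the BKL payoffs.

```python
import itertools

# Toy zero-sum stage game: payoff[i][j] is Blue's utility for (Blue i, Red j).
payoff = [
    [ 3,  1,  2],
    [ 0, -2,  1],
    [-1,  0, -3],
]

def pure_nash_equilibria(payoff):
    """Pure-strategy NE of a zero-sum matrix game: no player can gain by a
    unilateral deviation (Blue over rows, Red over columns)."""
    rows, cols = len(payoff), len(payoff[0])
    equilibria = []
    for i, j in itertools.product(range(rows), range(cols)):
        blue_stable = all(payoff[k][j] <= payoff[i][j] for k in range(rows))
        red_stable = all(payoff[i][k] >= payoff[i][j] for k in range(cols))
        if blue_stable and red_stable:
            equilibria.append((i, j))
    return equilibria

equilibria = pure_nash_equilibria(payoff)  # → [(0, 1)]
i_star, j_star = equilibria[0]
# Any unilateral Red deviation leaves Blue no worse off than the equilibrium.
assert all(payoff[i_star][j] >= payoff[i_star][j_star] for j in range(3))
```

The final assertion is exactly the behaviour shown in Figure 11: deviating Red can only hold or worsen its position against an equilibrium-playing Blue.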

(a) Population Difference
(b) (Blue) and (Red), radius increasing with time
Figure 8: Force strength difference (a), and actions in (b) for the Red and Blue players when following the NE for and a tree with four actions for each player at each of the four decision points, with decision points indicated by the vertical green lines in panel (a). Parameters follow Table 1.
(a) Force strengths
(b) Population Difference
Figure 11: Revised player outcomes for the Red and Blue players (total force for each side in (a) and difference in (b)) when only the Blue player follows the NE strategy, based upon Figure 5.

The influence of structural changes can be considered by setting the initial populations of both players to . The equilibrium results for this scenario are presented within Figure 8, with Red demonstrating an improving advantage across the engagement, despite several fluctuations in the penultimate stage where Blue temporarily remains stable in force strength. In the phases of panel (b), Red seeks a phase advantage in the initial and final stages, with phase difference in the intermediate region. In this latter case, again, there appears to be periodic dynamics without the presence of any phase locking. The contrast between this outcome and the case for  demonstrates that the game-theoretic treatment introduces switching dynamics into the dynamical system of the BKL model. In some stages the Kuramoto phase dynamics may be steady-state (typically when the sought phase advantage is less than ), whereas others exhibit dynamical system characteristics.

5.1 Computational Performance of Exact and Approximate Solvers

The performance of solvers as the problem domain scales is of crucial importance. While such analysis is common for traditional numerical solutions of dynamical systems [cullen2019fast, doostan2009least, nguyen2008multiscale], the practice is less established for game-theoretic numerical solution concepts, with only basic examples considered in the literature [matsumoto2010evaluation]. The infancy of such analysis is a product of the types of games being studied, which are either small enough that considerations of accuracy dominate concerns of computational cost (as measured in terms of calculation time), or so large that the developed solution methodologies are heavily optimised to the specific game, producing scaling results that are not extensible.

To expand upon the extant work, the performance of the solvers of Section 4 was tested for a range of game tree sizes, with trees defined for depths between and , and action spaces for each player between and . The results of this testing—as seen in Figure 12—demonstrate the changes in the rate of growth of computational cost as a function of the tree size. That the approximate methods produce uniformly lower computational costs than the exact methods is unsurprising; however, it is important to emphasise that as the size of the game tree increases, the difference between our exact Nash Dominant method and MCTS rapidly diminishes, even though MCTS visits fewer than  of the leaf nodes.

In fact, the computational cost of the Nash Dominant solver scaled with (between the theoretical upper and lower bound scalings), with the cost of MCTS exhibiting a similar scaling of . It must be noted that while MCTS’s ability to repeatedly visit subgame regions of a tree should produce a computational cost less than the number of iterations, any savings here are balanced out by the additional cost incurred in managing the MCTS process. The Myopic solver also conforms with the theoretical scaling properties, scaling with (orange line). Based upon these results, for smaller game trees the performance advantages of the approximate solvers are not strong enough to justify their use relative to the exact Nash Dominant solver, due to approximation overheads. In larger games, Nash Dominant has the potential to notably decrease the overhead of solving the game tree relative to the Full Tree solver, while still producing an exact solution.
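Empirical scaling exponents of the kind quoted here are typically obtained by a least-squares fit of log cost against log tree size. A minimal sketch, using invented timing data:

```python
import math

def fit_power_law_exponent(sizes, costs):
    """Least-squares slope of log(cost) vs log(size), i.e. cost ≈ C * size**alpha."""
    xs = [math.log(s) for s in sizes]
    ys = [math.log(c) for c in costs]
    mean_x = sum(xs) / len(xs)
    mean_y = sum(ys) / len(ys)
    num = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    den = sum((x - mean_x) ** 2 for x in xs)
    return num / den

# Invented timing data growing as size**0.8 (sub-linear in the leaf count).
sizes = [10 ** k for k in range(2, 7)]
costs = [0.5 * s ** 0.8 for s in sizes]
alpha = fit_power_law_exponent(sizes, costs)  # ≈ 0.8
```

On real measurements the fitted slope summarises the growth rate across the whole tested range, which is how a single exponent can be attributed to each solver in Figure 12.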

Figure 12: Scaling of computational cost as a function of the size of the game tree, when solving with the Full Tree (Blue), Nash Dominant (Green), MCTS performing iterations equivalent to of leaf nodes (Red), and Myopic (Orange) methods. Calculations are based upon the average of runs.

Across the entire test space the Nash Dominant solver produced results that matched the exact Full Tree solutions, validating its status as an exact solver. Of the approximate solvers, Myopic produced an average error of only , with a standard deviation of , while the error from MCTS was (with a standard deviation of ). The strong relative performance of the Myopic solver across all tested tree morphologies was surprising, given that MCTS explores significantly more terminal leaves. That this is possible is a product of the structure of the BKL game itself. As the Lanchester model introduces quasi-exponential decay to the system dynamics, results at the leaves of the game tree are primarily determined by the behaviour at the initial decision points, in a fashion that favours Myopic exploration.
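The dominance of early decisions can be illustrated with a toy Lanchester square-law simulation: because the square-law invariant B² − R² changes more when reinforcements arrive while forces are still large, the same perturbation applied early shifts the end state more than when applied late. The parameter values below are invented for illustration.

```python
def lanchester(b, r, kappa=0.05, dt=0.01, steps=2000, bonus_step=None, bonus=5.0):
    """Forward-Euler integration of the Lanchester square-law attrition model:
    dB/dt = -kappa * R, dR/dt = -kappa * B, with populations floored at zero."""
    for step in range(steps):
        if step == bonus_step:
            b += bonus  # reinforcements injected at this step
        # Simultaneous update: both derivatives use the pre-step populations.
        b, r = max(b - kappa * r * dt, 0.0), max(r - kappa * b * dt, 0.0)
    return b, r

# The same reinforcement applied early (t = 1) shifts the end state more than
# when applied late (t = 19): leaf outcomes are dominated by early decisions.
early_b, _ = lanchester(100.0, 95.0, bonus_step=100)
late_b, _ = lanchester(100.0, 95.0, bonus_step=1900)
assert early_b > late_b > 0.0
```

This is the mechanism the text describes: a solver that gets the earliest decision points right (as Myopic does) captures most of the final utility, even while exploring far fewer leaves than MCTS.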

6 Parameter Analysis of Competitive Decisions to Guide Practical Applications

Having now characterised particular regimes of behaviour of the BKL model and the computational performance of the game-theoretic solvers, we test the behaviour of the model across a range of parameter values. The aim here is to observe transitions in the parameter space through a heatmap of Lanchester outcomes, as originally used in [Kalloniatis2020NetLanch], from one side holding the advantage to the other, within an equilibrium solution and subject to the constraints that each side brings into the scenario (for example, the size of initial resources and their respective network C2 design). Treating this as a larger meta-game, the designer of a system may then detect where risks are incurred given their design choices.

Due to their analytically determined importance for phase locking, the coupling parameters and are of particular interest, and as such we will explore distinct choices of each of these parameters. The game in question involves actions for each player at decision points, yielding a game with leaf nodes. In order to better understand the nature of these games, the parameters of Table 1 are modified to decrease and to . This change decreases the rate of attrition suffered by the resources of each player through the game, ensuring that all points within the parameter space yield games which do not terminate early, and involve decisions being taken at all four decision points. Exploring and also fits the meta-game context, as they determine how tightly the agents of each player interact within their respective decision-making structures, reflecting the difficulty organisations face in changing how they coordinate in the midst of an adversarial engagement.
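The heatmaps of this section are generated by sweeping a grid of coupling values and solving the full game at each grid point. A minimal sketch of that sweep, with `solve_bkl_game` as a hypothetical placeholder for any of the solvers of Section 4:

```python
# `solve_bkl_game` is a hypothetical placeholder: in the real study it returns
# the equilibrium Blue utility of the full BKL game at these couplings.
def solve_bkl_game(sigma_blue, sigma_red):
    return sigma_blue - 0.5 * sigma_red  # invented stand-in utility

def sweep(sigma_values):
    """2D grid of equilibrium utilities over all (sigma_blue, sigma_red) pairs."""
    return [[solve_bkl_game(sb, sr) for sr in sigma_values] for sb in sigma_values]

# One row per Blue coupling value, one column per Red coupling value; each
# cell becomes one pixel of the heatmap.
grid = sweep([0.5 * k for k in range(1, 9)])
```

Because every cell is an independent game solve, the sweep is embarrassingly parallel; the cost of the chosen solver (Section 5.1) directly bounds the resolution of the heatmap that is computationally feasible.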

6.1 Blue With An Initial Numerical Advantage

For the case where the initial resources are , Figure 16 explores the constructed solution space as per the Nash Dominant, Myopic and MCTS solvers. We emphasise that a red colour does not indicate defeat for Blue, for which a negative value of its utility would be required, but rather reflects the degree of attrition; and that the heat maps correspond not to fixed , but rather to the equilibrium actions at each pairing. Across all solution concepts, the evolution of the game state broadly preserves this advantage across the tested range of game states, with an overall range of admitted equilibrium utilities in . The player who gains advantage over the course of the game, by either increasing (for Blue) or decreasing (for Red) the overall utility relative to the initial equilibrium state of , is determined by the relative balance of and .

At face value the plot is consistent with the basic intuition that stronger relative internal coupling is beneficial. Thus, when , the admitted NE state favours Red, with the utility reaching a peak when . Elsewhere Blue is almost uniformly favoured, with the exception of a small band of isolated Red-favoured results at , and a general trend towards Red-favourable outcomes as . While there is a monotonic increase in utility for Red with respect to increases in over the range considered, the same cannot be said for the Blue response to . Instead the numerically superior Blue player exhibits a ‘sweet spot’ at , with further increases to producing diminishing returns across all choices of . This is consistent with previous work [Ahern2020], as excessive internal coupling in relation to the internal structure and other variables leads to a ‘rigidity’ in the system. A similar point of diminished returns also exists for Red beyond the range of considered here, where the broader range is a consequence of the higher connectivity of Red compared to the hierarchical structure of Blue.

While these observed behaviours are driven by the players attempting to find an equilibrium solution, the dynamics are still tied to the underlying dynamical equations. As per Equation 2.1, increasing (relative to , which is fixed at for these experiments) increases the importance of spreads in to the overall derivative , and thus to Blue’s positioning against Red. Increasing this component of Equation 2.1 allows the Blue player to more precisely tune its own evolution, in a fashion that is only weakly coupled to the behaviour of Red, theoretically allowing the Blue player to eke out greater advantages in terms of its organisational positioning and, in turn, the overall utility of the game. There is a symmetrical behaviour in Red; however, the differences in the underlying force organisations—hierarchical for Blue, and unstructured for Red—dictate the asymmetry in the observed responses to changing . Considering changes in these parameters as part of a larger meta-game implies that the meta-game itself must also admit an equilibrium state. In this example, this occurs at , yielding an overall NE utility of . This corresponds to Blue retaining a numerical advantage over the time period of the engagement.

(a) Nash Dominant (Exact)
(b) Myopic (Approx.)
(c) MCTS (Approx.)
Figure 16: Exploring the solution spaces across the solvers for . Here red colours denote states more favourable to the Red player than the initial population difference, and likewise blue colours for the opponent.

Both the Myopic (b) and MCTS (c) solvers accurately capture the dynamics exhibited by the exact solution (a), with errors consistent with Section 5. While under visual inspection both approximate solvers broadly replicate the dynamics of the exact solution, the MCTS solution exhibits the smallest absolute error, with a mean, max, and standard deviation of the absolute errors of , as compared to for the Myopic solver. The differences between the solution methodologies can primarily be seen at the extrema of and . The Myopic solver consistently overestimates the values in the regions where the equilibrium is at its largest, although it does accurately capture the location of the best response solutions for each player (the locations of the largest Red- and Blue-favoured scores). In contrast, while the MCTS solver is slightly more accurate overall, it fails to confidently capture the best response solutions for each player, although it does still capture the equilibrium of the meta-game.

The correspondence between the Nash Dominant and Myopic solutions is due to the tested position in parameter space being heavily dominated by the earlier decisions in the game tree. This is a consequence of the Lanchester model’s quasi-exponential resource decay, with the end-state Nash Equilibrium dominated by the decisions at the earliest game states. The Myopic solver also benefits from the action space for each player changing minimally as they move deeper into the game tree. As such, there is no incentive for players to make sub-optimal decisions in the early game states—which heavily influence the overall evolution of the equilibrium—in order to open up parts of the game tree that are more favourable as the game progresses. We hypothesise that extending the game to one where actions at one decision point influence the available action space at the next would lead the Myopic solver to underperform relative to MCTS.
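The stage-by-stage behaviour attributed to the Myopic solver can be sketched as follows. For simplicity this toy version commits to security (maximin/minimax) strategies in each stage game rather than resolving the stage NE exactly, and the stage payoff matrices are invented.

```python
def myopic_play(stage_payoffs):
    """Greedy stage-by-stage play: at each decision point, consider only the
    immediate stage game and commit to security strategies (Blue maximises
    its worst case; Red minimises Blue's best case)."""
    path = []
    for payoff in stage_payoffs:
        rows, cols = len(payoff), len(payoff[0])
        i = max(range(rows), key=lambda r: min(payoff[r]))  # Blue maximin
        j = min(range(cols),
                key=lambda c: max(payoff[r][c] for r in range(rows)))  # Red minimax
        path.append((i, j))
    return path

# Two identical invented stage games; the solver picks (row 0, column 1) twice.
stage = [[3, 1, 2], [0, -2, 1], [-1, 0, -3]]
path = myopic_play([stage, stage])  # → [(0, 1), (0, 1)]
```

The cost is linear in the number of decision points, which is why the Myopic solver scales so favourably in Figure 12, at the price of ignoring how an early action reshapes later subgames.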

Considering reveals broad structural similarities to the prior case. The primary change is that the final utilities are uniformly lower. Notably, the region of the space where Blue improves upon their starting position (relative to Red) has decreased in both area and peak magnitude. The location of this peak magnitude, the point of best response for Blue, has also shifted slightly to the left, towards weaker coupling, compared to Figure 16, although discerning the exact nature of the shift is complicated by the discretisation. For brevity, we do not show a plot of this case.

(a) Nash Dominant (Exact)
(b) Myopic (Approx.)
(c) MCTS (Approx.)
Figure 20: Exploring the solution spaces across the solvers for . Here red colours denote states more favourable to the Red player than the initial population difference, and likewise blue colours for the opponent.

6.2 Blue and Red Under Initial Parity

In a parity situation where both populations are initially , Figure 20 shows that, instead of the clear distinction between blue-dominant and red-dominant regions in response to the space, the Nash Equilibria lack regions of local homogeneity: small changes in or can lead to significant differences in the utility. While increasing still, in general, leads to increases in utility for the Red player, there are small isolated regions at which are more favourable to the Blue player, relative to the surrounding positions in parameter space. The presence of these isolated regions is driven by the individual solution profiles in the parity case exhibiting more chaotic solution dynamics, and a broader range of utilities within an individual game. This also drives the greater range in equilibrium solutions exhibited within Figure 20.

While it would be expected that further increases to would increase the proportion of the domain in which Red is favoured, the observed changes to the equilibrium outcomes are not uniform. Notably, the ‘sweet spot’ for Blue is now lower in coupling value and weaker in utility: through increases in Red’s initial force over the values , the maximal point of utility for Blue has steadily decreased, spanning values of . That these systems exhibit sensitivity to the initial conditions is a hallmark of the underpinning chaotic dynamical system. This initial resource-parity scenario creates the greatest sensitivity to the other parameters, where the decision-making process—as reflected through the phase dynamics—plays the dominant role. This is reflected in the time-dependent view of Figure 8, which corresponds to the centre of the heatmap in Figure 20, where Red’s phase advantage is the factor that leads to its superior resource strength at the end of the dynamics.

These dynamics demonstrate why the equal coupling for Figure 8 leads to Blue defeat—most of the heatmap leads to such outcomes as a consequence of Blue’s less favourable network structure. That the optimal response can be influenced by the initial relative force strengths accords with known domains in which there is interplay between competitive organisations. As alluded to earlier, Blue is persistently at a disadvantage through its hierarchical structure, gaining advantage only through initially superior resources and tighter internal effort in its decision making.

6.3 Analysis of Solution Concepts

To quantitatively assess the overall performance of the approximate solvers relative to the exact Nash Dominant solutions, Table 2 considers the average absolute error, the normalised average absolute error—when normalised against the range of solutions seen over —and the standard deviation of error across each of the tested scenarios. While the Myopic solution concept under-performs relative to MCTS when , increasing yields a notable deterioration in the performance of the more computationally expensive MCTS solution concept. Even when the number of iterations is increased to of the total number of leaf nodes, the MCTS average absolute error only decreases by , producing results that are still inferior to the Myopic solver.

Solver Mean Normalised Mean Std Norm. Std
MCTS
Myopic
MCTS
Myopic
MCTS
Myopic
Table 2: Mean and Standard Deviation of the absolute errors for final Blue utility values, and their normalised equivalents for all tested cases. Normalisation performed by dividing by the range of equilibrium states admitted across all solutions for and respectively.

The divergence between the and cases—both of which have reasonably well-structured meta-game spaces that show monotonic changes—and the more variable case is stark, with the latter scenario exhibiting errors that are up to an order of magnitude larger than the equivalent solutions. While the absolute errors for both solvers are approximately equivalent, the Myopic solutions err by overestimating both the most Red- and Blue-favoured equilibrium states, while MCTS biases the solutions towards the weaker player. That this occurs indicates that even with the corrections we have made to the MCTS algorithm, it is still more suited to games that approach binary win–loss states, and struggles to accurately resolve solutions in the Red-favoured game states.
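The error statistics reported in Table 2 can be reproduced from raw utility values as follows; the normalisation divides by the range of equilibrium utilities, as described in the table caption. The sample values are invented.

```python
def error_statistics(approx, exact):
    """Mean and standard deviation of the absolute error, plus equivalents
    normalised by the range of the exact equilibrium utilities."""
    errors = [abs(a - e) for a, e in zip(approx, exact)]
    mean = sum(errors) / len(errors)
    std = (sum((x - mean) ** 2 for x in errors) / len(errors)) ** 0.5
    span = max(exact) - min(exact)  # range of admitted equilibrium states
    return mean, std, mean / span, std / span

# Invented utility values for one solver across four parameter-space points.
exact = [1.0, 2.0, 4.0, 3.0]
approx = [1.1, 1.9, 4.4, 2.8]
mean, std, norm_mean, norm_std = error_statistics(approx, exact)
```

Normalising by the span is what makes the three scenarios comparable: the parity case admits a far wider range of equilibrium utilities, so its raw absolute errors alone would overstate the solvers' relative inaccuracy.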

7 Conclusion

We have shown that physics-based dynamical models may be employed to model competitive decision-making processes. Such a model is made possible through the incorporation of Game Theory, and yields insights that are both intuitively reasonable and have real world applications.

When exploring the parameter space of a domain of industrial significance, we uncovered that a player with greater internal connectivity was able to more appropriately position its response to an opposing player. This was true even in the face of a significant numerical disadvantage. For the hierarchically structured player we observed a limit to how far increasing internal coupling improved their position: beyond a certain point, increases became counter-productive, yielding unfavourable results against a more agile, better-connected counterpart. This demonstrates that coupling is not simply interchangeable with network connectivity: increased connectivity increases the range of coupling strengths over which advantage can be gained over a less connected competitor.

This work also uncovered that as the game moves away from a balanced equilibrium, the outcome of the game transitions away from a smooth response under changes in the parameters. The existence of such behaviours underscores the importance of being able to accurately solve these games in a numerically efficient manner, in order for players to most advantageously position themselves in competition.

In aid of this, we developed a novel exact solver, and tested several established approximate numerical schemes. While the approximate solvers were able to accurately resolve the dynamics when the game was in a balanced state, they failed to accurately resolve imbalanced states. These issues are pronounced for MCTS, which is more suited to win–loss games than to those with continuous outputs. In contrast to the established approximate solvers, our new Nash Dominant solver was able to construct exact solutions to these games in a numerically efficient fashion. Future work will apply this solver to the more computationally demanding form of the networked BKL model in [Ahern2020].

Acknowledgment

This research was funded in part by the Commonwealth of Australia through the Modelling Complex Warfighting Strategic Research Initiative of the Defence Science and Technology Group. Dr. Andrew Cullen was with the Department of Electrical and Electronic Engineering, The University of Melbourne during part of this research.

References