Connected Subgraph Defense Games

06/06/2019 ∙ by Eleni C. Akrida, et al. ∙ University of Liverpool

We study a security game over a network played between a defender and k attackers. Every attacker chooses, probabilistically, a node of the network to damage. The defender chooses, probabilistically as well, a connected induced subgraph of the network of λ nodes to scan and clean. Each attacker wishes to maximize the probability of escaping the defender's cleaning. On the other hand, the goal of the defender is to maximize the expected number of attackers that she catches. This game is a generalization of the model from the seminal paper of Mavronicolas et al., "The price of defense" (MFCS'06). We are interested in Nash equilibria (NE) of this game, as well as in characterizing defense-optimal networks which allow for the best equilibrium defense ratio, termed Price of Defense; this is the ratio of k over the expected number of attackers that the defender catches in a NE. We provide characterizations of the NEs of this game and of defense-optimal networks. This allows us to show that the NEs of the game coincide independently of whether the attackers coordinate or not. In addition, we give an algorithm for computing NEs. Our algorithm requires exponential time in the worst case, but it is polynomial-time for λ constantly close to 1 or n. For the special case of tree-networks, we refine our characterization, which allows us to derive a polynomial-time algorithm for deciding whether a tree is defense-optimal and, if this is the case, computing a defense-optimal NE. On the other hand, we prove that it is NP-hard to find a best-defense strategy if the tree is not defense-optimal. We complement this negative result with a polynomial-time constant-approximation algorithm that computes solutions that are close to optimal ones for general graphs. Finally, we provide asymptotically (almost) tight bounds for the Price of Defense for any λ.


1 Introduction

With technology becoming a ubiquitous and integral part of our lives, we find ourselves using several different types of “computer” networks. An important issue when dealing with such networks, which are often prone to security breaches [5], is to prevent and monitor unauthorized access and misuse of the network or its accessible resources. Therefore, the study of network security has attracted a lot of attention over the years [17]. Unfortunately, such breaches are often inevitable, since some parts of a large system are expected to have weaknesses that expose them to security attacks; history has indeed shown several successful and highly-publicized such incidents [16]. Therefore, the challenge for someone trying to keep those systems and networks of computers secure is to counteract these attacks as efficiently as possible, once they occur.

To that end, inventing and studying appropriate theoretical models that capture the essence of the problem is an important line of research, ongoing for a few years now [13, 12]. In this work, extending some known models for very simple cases of attacks and defenses [10, 11], we introduce and analyze a more general model for a scenario of network attacks and defenses, modeling it as a defense game.

The Network Security Game.

We follow the terminology established by the seminal paper of Mavronicolas et al. [11]. We consider a network whose nodes are vulnerable to infection by threats called attackers; think of those as viruses, worms, Trojan horses or eavesdroppers [6] infecting the components of a computer network. Available to the network is a security software (or firewall), called the defender. The defender is only able to “clean” a limited part of the network from threats that occur; the reason for the limited cleaning capacity of the defender may be, for example, the cost of purchasing a global security software. The defender seeks to protect the network as much as possible, and on the other hand, every attacker seeks to increase the likelihood of not being caught. Both the attackers and the defender make individual decisions for their positioning in the network with the aim to maximize their own objectives.

Every attacker targets (and attacks) a node chosen via her own probability distribution over the nodes of the network. The defender cleans a connected induced subgraph of the network of size λ, chosen via her own probability distribution over all connected induced subgraphs of the graph with λ nodes. The attack of a particular attacker is successful unless the node chosen by the attacker is incident to an edge (link) being cleaned by the defender, i.e. to an edge belonging to the induced subgraph chosen by the defender. One could equivalently think of the defender selecting a set of connected nodes to defend, and an attacker is successful if and only if she attacks a node that is not being defended. Since attacks and defenses over a large computer network are self-interested procedures that seek to maximize damage and protection, respectively, it is natural to model this network security scenario as a non-cooperative strategic game on graphs with two kinds of players: attackers, each playing a vertex of the graph, and a single defender playing a connected induced subgraph of the graph. The (expected) payoff of an attacker is the probability that she is not caught by the defender; the (expected) payoff of the defender is the expected number of attackers she catches. We are interested in the Nash equilibria [15, 14] associated with this graph-theoretic game, where no player can unilaterally improve her (expected) payoff by switching to another probability distribution. We are also interested in understanding and characterizing the networks that allow for a good defense ratio: given a strategy profile, i.e. a combination of strategies for the network entities (attackers and defender), the defense ratio of a network is the ratio of the total number of attackers over the defender's expected payoff in that strategy profile.

1.1 Our results

In this paper we depart from and significantly extend the line of work of Mavronicolas et al. in their seminal paper [11] on defense games in graphs; we term the type of games we consider CSD games. In our model the defender is more powerful than in [11], since her power is parameterized by the size, λ, of the defended part of the network. We allow λ to take values from 1 to n, while in [11] only the case where λ = 2 was studied. We study many questions related to CSD games. We extend the notions of defense ratio and defense-optimal graphs for CSD games. In fact, the defense ratio of a given graph G and a given strategy profile of the attackers and the defender is the ratio of the number of attackers, k, over the defender's expected payoff (the number of attackers she catches in expectation). We thoroughly investigate the notion of the defense ratio for Nash equilibrium strategy profiles.

Firstly, we precisely characterize the Nash equilibria and defense-optimal graphs in CSD games. This allows us to show that, in equilibrium, the game version of uncoordinated attackers and a single defender is equivalent to the version in which a single leader coordinates the attackers, meaning that both versions of the game have the same defense ratio. We present an LP-based algorithm to compute an exact equilibrium of any given CSD game, whose running time is polynomial in the number of pure defense strategies. Then, we focus on tree-graphs. There, we further refine our equilibrium characterization, which allows us to derive a polynomial-time algorithm for deciding whether a tree is defense-optimal and, if this is the case, it computes a defense-optimal Nash equilibrium. A tree is defense-optimal if and only if it can be partitioned into disjoint sub-trees of λ nodes each. On the other hand, we prove that it is NP-hard to find a best-defense strategy if the tree is not defense-optimal. We remark that a very crucial parameter for defense-optimality of a graph G is the "best" probability with which any vertex of G is defended in a NE; we call that probability the MaxMin probability and denote it by p*(G). Then, for any graph G, the defense ratio in equilibrium is shown to be exactly 1/p*(G). Although it is hard to exactly compute p*(G) even for trees, we complement this negative result with a polynomial-time constant-approximation algorithm that computes solutions that are close to the optimal ones, for any λ, for any given general graph. In particular, we approximate the (best) defense ratio of any graph within a constant factor. Finally, we provide asymptotically tight bounds for the Price of Defense for certain values of λ, and almost tight bounds for any other value of λ.

1.2 Related work

Our graph-theoretic game is a direct generalization of the defense game considered by Mavronicolas et al. [10, 11]. In the latter, the authors examined the case where the size of the defended part of the network is λ = 2, i.e. where the defender “cleans” an edge. This led to a nice connection between equilibria and (fractional) matchings in the graph [12]. But when λ is greater than 2, one has to investigate (as we shall see here) how to sparsely cover the graph by as small a number as possible of connected induced subgraphs of size λ. This direction can be seen as an extension of fractional matchings to covers of the graph by equisized connected subgraphs. Sparse covering of graphs by connected induced subgraphs (clusters), not necessarily equisized, is a notion known to be useful also for distributed algorithms, since it affects message communication complexity [4].

In another line of work, Kearns and Ortiz [8] study Interdependent Security games in which a large number of players must make individual decisions regarding security. Each player’s safety may depend on the actions of the entire population (in a complex way). The graph-theoretic game that we consider could be seen as a particular instance of such games with some sort of limited interdependence: the actions of the defender and an attacker are interdependent, while the actions of the attackers are not dependent on each other.

Aspnes et al. [3] consider a graph-theoretic game that models containment of the spread of viruses on a network; each node individually must choose to either install anti-virus software at some cost, or risk infection if a virus reaches it without being stopped by some intermediate node with installed anti-virus software. Aspnes et al. [3] prove several algorithmic properties for their graph-theoretic game and establish connections to a certain graph-theoretic problem called Sum-of-Squares Partition.

A game on a weighted graph with two players, the tree player and the edge player, was studied by Alon et al. [1]. At each play, the tree player chooses a spanning tree and the edge player chooses an edge of the graph, and the payoffs of the players depend on whether the chosen edge belongs in the spanning tree. Alon et al. investigate the theoretical aspects of the above game and its connections to the k-server problem and network design.

Finally, there is a long line of work on security games [2] where many scenarios are modelled using graph theoretic problems [7, 9, 18, 19].

2 Preliminaries

The game.

A Connected-Subgraph Defense (CSD) game is defined by a graph G = (V, E), a defender, k attackers, and a positive integer λ ≤ n, where n = |V|. Throughout the paper, λ is considered to be a given parameter of the game. A pure strategy for the defender is any induced connected subgraph of G with λ vertices, which we term λ-subgraph. For any λ-subgraph D of G we denote by V(D) its set of vertices. Since V(D) uniquely defines an induced subgraph of G, we will use the term λ-subgraph to denote either D or V(D). The action set of the defender is the set D of all λ-subgraphs of G and we will denote its cardinality by m, i.e. m = |D|. For ease of presentation, we will also refer to D as {D_1, D_2, …, D_m}. A pure strategy for each of the attackers is any vertex of G. So, the action set of every attacker is V, the vertex set of G; we similarly refer to V also as {v_1, v_2, …, v_n}.
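For concreteness, the defender's action set can be enumerated by brute force, checking each λ-sized vertex subset for connectivity with a DFS (this mirrors the procedure used later in the proof of Theorem 2). The following is a minimal sketch under our own graph representation, not code from the paper:

```python
from itertools import combinations

def lam_subgraphs(adj, lam):
    """Enumerate the defender's action set: all connected induced
    subgraphs with lam vertices, returned as frozensets of vertices.
    `adj` is an adjacency list {vertex: [neighbours]}."""
    def connected(subset):
        vs = set(subset)
        stack, seen = [next(iter(vs))], set()
        while stack:                      # DFS restricted to the subset
            v = stack.pop()
            if v not in seen:
                seen.add(v)
                stack.extend(u for u in adj[v] if u in vs)
        return seen == vs

    return [frozenset(c) for c in combinations(adj, lam) if connected(c)]

# The path 0-1-2-3 with lam = 2: the 2-subgraphs are exactly the edges.
path4 = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
print(lam_subgraphs(path4, 2))  # the three edges {0,1}, {1,2}, {2,3}
```

For n vertices this inspects C(n, λ) subsets, which is the exponential worst case discussed in Section 1.1; it is polynomial only when λ or n − λ is bounded by a constant.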

To play the game, the defender chooses a defense (mixed) strategy, i.e. a probability distribution over her action set, and each attacker chooses an attack (mixed) strategy, i.e. a probability distribution over the vertices of G. We denote a strategy over ℓ enumerated pure strategies by s = (s_1, …, s_ℓ) ∈ Δ^ℓ, where Δ^ℓ is the unit simplex over ℓ pure strategies. In a defense strategy s, each pure strategy D_i ∈ D is assigned a probability s_i.

We say that a pure strategy D of the defender, i.e. a specific λ-subgraph of G, covers a vertex v if v ∈ V(D). A defense strategy covers a vertex v if it assigns strictly positive probability to at least one λ-subgraph of G which contains v.

Definition 1 (Vertex Probability).

The vertex probability P_s(v) of vertex v is the probability that v will be covered under defense strategy s; formally, P_s(v) = Σ_{i : v ∈ V(D_i)} s_i.
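With a defense strategy represented as a map from λ-subgraphs to probabilities, the vertex probability of Definition 1 is just a sum over the subgraphs containing the vertex. A hypothetical sketch (the representation and names are ours):

```python
def vertex_probability(strategy, v):
    """P_s(v): the probability that vertex v is covered, i.e. the total
    probability that the defense strategy `strategy` (a dict mapping each
    lam-subgraph, as a frozenset of vertices, to its probability)
    assigns to lam-subgraphs containing v."""
    return sum(prob for D, prob in strategy.items() if v in D)

# Uniform strategy over the three 2-subgraphs of the path 0-1-2-3:
s = {frozenset({0, 1}): 1/3, frozenset({1, 2}): 1/3, frozenset({2, 3}): 1/3}
print(vertex_probability(s, 1))  # 2/3: vertex 1 lies in two of the three 2-subgraphs
```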

The support of a strategy s, denoted by supp(s), is the subset of the action set that is assigned strictly positive probability.

Payoffs and Strategy profiles.

A strategy profile is a tuple of strategies S = (s, a_1, …, a_k), where s denotes the defender's strategy and a_i denotes the i-th attacker's strategy, for i ∈ {1, …, k}. A strategy profile is pure if the support of every strategy has size one. The payoff of every attacker is 1 in any pure strategy profile where she does not choose a defended vertex, and 0 in all the rest. The payoff of the defender in a pure strategy profile where she defends D is the number of attackers that choose a vertex in V(D). Under a strategy profile, the expected payoff of the defender is the expected number of attackers that she catches, which we call the defense value, and the expected payoff of an attacker is the probability that she will not get caught. A best-response strategy for a participant is a strategy that maximizes her expected payoff, given that the strategies of the rest of the participants are fixed. A Nash equilibrium is a strategy profile where all the participants are playing a best-response strategy. In other words, neither the defender nor any of the attackers can increase their expected payoff by unilaterally changing their strategy.

Definition 2 (Defense Ratio).

For a given graph G we define a measure of the quality of a strategy profile S, called the defense ratio of S and denoted DR(G, S), as the ratio of the total number of attackers, k, over the defense value.

In this work we are only interested in the cases where S is an equilibrium. For a given graph, when in equilibrium, the defender's expected payoff is unique (due to Theorem 1 and Corollary 1(a)), and so is the equilibrium defense ratio DR(G, S*), where S* is an equilibrium. The defense strategy in S* which achieves this defense ratio will be termed a best-defense strategy.

Definition 3 (MaxMin Probability, p*(G)).

We call the MaxMin Probability of a graph G the maximum, over all defense strategies, of the minimum vertex probability in G, that is:

  p*(G) = max_s min_{v∈V} P_s(v).

As we will show in Lemma 1, the equilibrium defense ratio of a graph G turns out to be 1/p*(G).

Definition 4 (Price of Defense).

The Price of Defense, PoD(λ), for a given parameter λ of the game, is the worst defense ratio, over all graphs, achievable in equilibrium, that is:

  PoD(λ) = max_G DR(G, S*), where S* is an equilibrium of the CSD game on G.

Definition 5 (Defense-Optimal Graph).

For a given λ, a graph G that achieves the minimum equilibrium defense ratio over all graphs, i.e. DR(G, S*) = min_{G′} DR(G′, S′*), is called a defense-optimal graph.

In the following, for ease of presentation, whenever we refer to defense optimality, we implicitly assume that λ has a fixed value.

3 Nash equilibria

In this section, we provide a characterization of Nash equilibria in CSD games, as well as important properties of their structure which prove useful for the development of our algorithms in the remainder of the paper.

Theorem 1 (Equilibrium characterization).

For a given graph G, in any equilibrium with support supp(s) of the defender's strategy s and support supp(a_i) of each attacker i, the following conditions are necessary and sufficient:

  1. min_{v∈V} P_s(v) is maximized over all defense strategies, i.e. min_{v∈V} P_s(v) = p*(G), and

  2. supp(a_i) ⊆ V_min for every attacker i, where V_min = {v ∈ V : P_s(v) = p*(G)}, and

  3. every D ∈ supp(s) has the maximum expected total number of attackers on its vertices over all pure strategies.

Proof.

First we will prove that the conditions in the statement of the theorem hold in every equilibrium, i.e. that equilibrium implies the three conditions.

Condition 1. By definition, in an equilibrium the defender and each attacker have chosen a best response. Suppose that the defender has chosen some strategy s over her action set D, and we will consider this strategy to be a vector variable for now. Given s, each vertex v has a vertex probability P_s(v). Now consider the minimum vertex probability p_min := min_{v∈V} P_s(v), and the set V_min consisting of the vertices with vertex probability p_min, i.e. V_min = {v ∈ V : P_s(v) = p_min}. Since an attacker plays a best response, her support will be a subset of V_min; otherwise, if she assigns probability x > 0 to a vertex u ∉ V_min (with P_s(u) > p_min), her expected payoff (see quantity (2)) can be strictly increased by choosing to move all of x to another vertex w ∈ V_min, thus increasing her expected payoff by x · (P_s(u) − p_min). Therefore, every attacker's support will be a subset of V_min.

Now suppose that there are k attackers and let us denote the set of attackers by A = {1, …, k}. We will denote by a_i(v) the probability that the strategy of attacker i has assigned to vertex v. The expected payoff of the defender is:

  U_def = Σ_{i∈A} Σ_{v∈V} a_i(v) · P_s(v).   (1)

Since, as we argued above, in an equilibrium each attacker's strategy has support that is a subset of V_min, the expected payoff of the defender will be

  U_def = Σ_{i∈A} Σ_{v∈V_min} a_i(v) · p_min = p_min · Σ_{i∈A} Σ_{v∈V_min} a_i(v) = k · p_min,

where the first equality is due to the fact that supp(a_i) ⊆ V_min and P_s(v) = p_min for every v ∈ V_min, and the last equality is due to the fact that the support of any strategy of an attacker is a subset of V_min, so Σ_{v∈V_min} a_i(v) = 1. In an equilibrium, the defender also plays a best response, i.e. she maximizes her expected utility k · p_min over her strategies. Therefore, given the above quantity, the defender in an equilibrium has expected utility k · p*(G), and Condition 1 of the theorem's statement is satisfied.

Condition 2. The proof is by contradiction. Assume an equilibrium profile where the defender has strategy s and there is an attacker, i, with strategy a_i whose support includes a vertex u with P_s(u) > p_min, where p_min = min_{v∈V} P_s(v). Then i's expected payoff is

  U_i = Σ_{v∈V} a_i(v) · (1 − P_s(v)).   (2)

However, i can increase her expected payoff by moving all her probability to a vertex w for which P_s(w) = p_min, which contradicts the equilibrium assumption.

Condition 3. The proof is by contradiction. Suppose that in an equilibrium the defender has strategy s, where supp(s) ⊆ D. According to Condition 1, this strategy achieves p_min = p*(G), and let us define the set V_min = {v ∈ V : P_s(v) = p*(G)}. We denote by X_v the random variable that indicates the number of attackers on vertex v, under the strategy profile determined by the strategy of the defender and each attacker. The expected utility of the defender is as in (1), or equivalently, Σ_{D∈D} s_D · E[Σ_{v∈V(D)} X_v]. Since, as argued above, in an equilibrium each attacker has support in V_min, the defender's expected payoff is in fact Σ_{D∈supp(s)} s_D · E[Σ_{v∈V(D)} X_v].

For the sake of contradiction, suppose that for the expected total numbers of attackers on two pure defense strategies D ∈ supp(s) and D′ ∈ D it holds that E[Σ_{v∈V(D)} X_v] < E[Σ_{v∈V(D′)} X_v]. Then, the expected payoff of the defender can be strictly increased if she chooses the strategy s′ with s′_{D′} = s_{D′} + s_D, s′_D = 0, and s′_{D″} = s_{D″} for every other pure strategy D″. In particular, when the defender plays s her expected payoff is

  Σ_{D″∈supp(s)} s_{D″} · E[Σ_{v∈V(D″)} X_v],

whereas when she plays s′ it is

  Σ_{D″∈supp(s′)} s′_{D″} · E[Σ_{v∈V(D″)} X_v] > Σ_{D″∈supp(s)} s_{D″} · E[Σ_{v∈V(D″)} X_v],

which contradicts the equilibrium assumption. Therefore, for every pure defense strategy D ∈ supp(s) it holds that E[Σ_{v∈V(D)} X_v] ≥ E[Σ_{v∈V(D′)} X_v] for every D′ ∈ D.

Now we will prove the converse: the three conditions of the statement imply equilibrium. Suppose that all conditions hold and p*(G) is achieved by the defense strategy s. We will show that the defender and each attacker play a best response.

Consider an attacker i with strategy a_i and support supp(a_i) ⊆ V_min according to Condition 2. Her expected payoff is

  U_i = Σ_{v∈supp(a_i)} a_i(v) · (1 − p*(G)) = 1 − p*(G).

It suffices to consider unilateral deviations of i to pure strategies. Any pure strategy v ∈ supp(a_i) gives her expected payoff 1 − p*(G), since P_s(v) = p*(G) (because v ∈ V_min). Any pure strategy v ∈ V_min \ supp(a_i) also gives her expected payoff 1 − p*(G) for the same reason. Finally, any pure strategy u ∉ V_min gives her expected payoff 1 − P_s(u) < 1 − p*(G) by the definition of V_min. Therefore every attacker plays a best response.

Now consider the defender with strategy s and support supp(s). According to Condition 1 of the theorem's statement, s results in the vertices of V_min having vertex probability p*(G). By Condition 3, every D ∈ supp(s) attains the maximum expected total number of attackers over all pure defense strategies; let us denote this maximum by B, so that E[Σ_{v∈V(D)} X_v] = B for every D ∈ supp(s). Now consider a unilateral deviation s′ of the defender. Her expected payoff is

  Σ_{D∈D} s′_D · E[Σ_{v∈V(D)} X_v] ≤ Σ_{D∈D} s′_D · B = B = Σ_{D∈supp(s)} s_D · E[Σ_{v∈V(D)} X_v],

where the penultimate equation holds due to the fact that Σ_{D∈D} s′_D = 1. Therefore, s is a best response for the defender, and the three conditions of the theorem's statement imply a strategy profile that is an equilibrium. ∎

Lemma 1.

For any given graph G, the equilibrium defense ratio is DR(G, S*) = 1/p*(G), where S* is an equilibrium.

Proof.

As is apparent from Theorem 1, in an equilibrium, every attacker will have in her support only vertices that are defended with probability exactly p*(G). Therefore, the expected number of attackers that the defender catches is k · p*(G). By definition of the defense ratio, DR(G, S*) = k / (k · p*(G)) = 1/p*(G). ∎

Corollary 1.

The following hold:

  (a) For a given graph G, in any equilibrium, the expected payoff of the defender and each attacker is unique.

  (b) For a given graph G, in any equilibrium with support supp(s) of the defender, for every D ∈ supp(s) there exists a vertex v ∈ V(D) such that P_s(v) = p*(G).

  (c) In any CSD game on a graph G, the problem of finding the equilibrium defense ratio (or equivalently, p*(G)) for k attackers reduces to the same problem in the game with k = 1 attacker, which is a two-player constant-sum game.

Proof.
  (a) By Theorem 1, in an equilibrium the defender chooses a strategy s that induces probability p*(G) on some vertex of G (Condition 1). Also, each of the k attackers has in her support only vertices with vertex probability p*(G). Therefore, all attackers attack only such vertices and the expected payoff of the defender is k · p*(G). Consider also an attacker with strategy a_i. Her expected payoff is Σ_{v∈supp(a_i)} a_i(v) · (1 − P_s(v)), where P_s(v) is the vertex probability of vertex v. This value is equal to 1 − p*(G). Since p*(G) is unique for a graph G, the expected payoffs of the defender and each attacker are unique.

  (b) The proof is by contradiction. Consider an equilibrium where the defender's strategy is s with support supp(s), and there exists a pure strategy D ∈ supp(s) for which every vertex v ∈ V(D) has P_s(v) > p*(G). By Condition 2 of Theorem 1, no attacker has in her support a vertex in V(D). Therefore, the defender can strictly increase her expected payoff by moving all her probability from D to some other pure strategy that contains a vertex which is in the support of some attacker.

  (c) Observe that for any given graph G, the quantity p*(G), by definition, only depends on the graph and not on the number of attackers k. That is, p*(G) is the same for every k. Lemma 1 states that in any equilibrium S*, it is DR(G, S*) = 1/p*(G), therefore the defense ratio in an equilibrium does not depend on k. This means that when we are given G and we are interested in the equilibrium defense ratio, we might as well consider the game with the single defender and a single attacker. By definition of the game (see Section 2) the latter is a two-player constant-sum game. ∎

The following corollary implies that coordination (resp. individual selfishness) of the attackers cannot increase the attackers’ (resp. defender’s) expected payoff in equilibrium.

Corollary 2.

Every equilibrium with uncoordinated attackers (i.e. as described in Section 2) is an equilibrium with coordinated (i.e. centrally controlled) attackers, and vice versa.

Proof.

Let s be a best-defense strategy for the defender. Then, in any best response of any attacker, coordinated or not, every attacker plays only pure strategies that yield maximum payoff against s; i.e. they attack only vertices that are defended with probability p*(G). If this were not the case, either an uncoordinated attacker could increase her payoff by unilaterally changing her strategy, or the “coordinator” could increase the payoff the attackers collectively get by dictating that all the attackers play vertices that are covered with probability p*(G).

So, assume that we have an equilibrium in the uncoordinated case. This is an equilibrium for the coordinated case as well: according to Theorem 1, all attackers play vertices that are defended with probability p*(G), and thus the expected collective payoff of the attackers cannot be increased; furthermore, the expected total number of attackers on the vertices of a pure strategy that is in the support of the defender is maximized over all pure defense strategies, so no unilateral deviation of the defender can increase her expected payoff.

Conversely, in any equilibrium in the coordinated setting the “coordinator” dictates that all the attackers attack vertices that are covered with probability p*(G), satisfying Conditions 1 and 2 of Theorem 1. Also, in the equilibrium of the coordinated setting, similarly to Condition 3 of Theorem 1, the “coordinator” will have placed the attackers in a way such that the vertices of any pure defense strategy in the support have maximum expected total number of attackers over all pure defense strategies; otherwise the defender could increase her expected payoff by neglecting a pure strategy with smaller than maximum expected total number of attackers, and moving the probability assigned to that pure strategy to another one that has maximum expected total number of attackers. By Theorem 1, this is an equilibrium also for the uncoordinated setting. ∎

The following theorem provides an algorithm for computing an equilibrium of any CSD game, whose running time is polynomial in n when λ = c or λ = n − c, where c is a constant natural number.

Theorem 2.

For a given graph G and parameter λ, there is an algorithm that computes p*(G) and also finds an equilibrium in time polynomial in the binomial coefficient C(n, λ).

Proof.

Given a graph G, the number of attackers k, and some λ, the action set of the defender is constructed from the vertex sets of at most C(n, λ) λ-subgraphs, so for its cardinality it holds that |D| ≤ C(n, λ). Consider now the mixed strategy s of the defender, where each pure strategy D_i is assigned probability s_i. Consider also the vertex probability P_s(v) for each vertex v ∈ V. According to Corollary 1 (a) and (c), the unique p*(G) in the case of a single attacker can be used to derive an equilibrium for the case of k attackers. Therefore, we will find p*(G) for a single attacker, find an equilibrium for that case, and then extend this equilibrium to one in the case of k attackers. In more detail, after we find the defense strategy s that maximizes min_{v∈V} P_s(v) (Condition 1 of Theorem 1), i.e. yields P_s(v) = p*(G) on the set V_min, an equilibrium is achieved if the single attacker distributes her probability over the vertices of V_min in such a way that every pure strategy in supp(s) attains the maximum expected number of attackers on its vertices; that is because then all conditions of Theorem 1 are satisfied. Then, an equilibrium for k attackers is achieved if every attacker plays the same strategy as the single attacker; that is because again all conditions of Theorem 1 are satisfied.

The crucial observation that allows us to design such an algorithm is that we can compute p*(G) via a Linear Program which has O(N) variables and constraints, where N = C(n, λ), and therefore its running time is in the worst case polynomial in N. For the trivial cases λ = 1 and λ = n, it is |D| = n and |D| = 1 respectively, therefore p*(G) = 1/n and p*(G) = 1 respectively. So in the rest of the proof we will assume that 1 < λ < n. It remains to show how p*(G) is computed.

The computation of p*(G) can be done as follows. First, consider each of the N subsets of V of size λ, and find whether it is a proper λ-subgraph of G (i.e. connected); this can be done by running a Depth (or Breadth) First Search algorithm for each subset of size λ. If it is not, then continue with the next subset. If it is, we include it in the action set D, and assign to it a variable s_D which stands for its assigned probability in a general defense strategy. Now, by definition, for any vertex v, P_s(v) = Σ_{D∈D : v∈V(D)} s_D. Therefore, we consider only pure strategies which are λ-subgraphs in order to create the P_s(v)'s. To compute the minimum over all P_s(v)'s we introduce the variable p and write the following set of inequalities as constraints in our Linear Program:

  P_s(v) ≥ p, for every v ∈ V.

The variable constraints are s_D ≥ 0 for every D ∈ D and Σ_{D∈D} s_D = 1, and all of the aforementioned constraints can be written in canonical form by applying standard transformations. Finally, the objective of the Linear Program is to maximize the variable p, whose optimum value is p*(G). ∎
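The Linear Program above can be sketched with an off-the-shelf LP solver; the sketch below uses scipy.optimize.linprog and is an illustration of the LP, not the authors' implementation. The variables are the probabilities s_D plus the scalar p, and we maximize p subject to P_s(v) ≥ p for every vertex, Σ_D s_D = 1, and s_D ≥ 0:

```python
import numpy as np
from scipy.optimize import linprog

def maxmin_probability(subgraphs, n):
    """Solve: max p  s.t.  sum_{D containing v} s_D >= p for all v,
    sum_D s_D = 1, s_D >= 0.  `subgraphs` lists the lam-subgraphs of a
    graph on vertices 0..n-1; the optimum is p*(G)."""
    m = len(subgraphs)
    c = np.zeros(m + 1)
    c[-1] = -1.0                      # linprog minimizes, so minimize -p
    A_ub = np.zeros((n, m + 1))       # encodes p - sum_{D: v in D} s_D <= 0
    for v in range(n):
        for j, D in enumerate(subgraphs):
            if v in D:
                A_ub[v, j] = -1.0
        A_ub[v, -1] = 1.0
    A_eq = np.append(np.ones(m), 0.0).reshape(1, -1)   # sum_D s_D = 1
    res = linprog(c, A_ub=A_ub, b_ub=np.zeros(n),
                  A_eq=A_eq, b_eq=[1.0], bounds=[(0, 1)] * (m + 1))
    return res.x[-1]

# Path 0-1-2-3 with lam = 2: p* = lam/n = 1/2, via s({0,1}) = s({2,3}) = 1/2.
print(maxmin_probability([{0, 1}, {1, 2}, {2, 3}], 4))  # ~0.5
```

The LP has one variable per λ-subgraph plus one for p, matching the O(N) size bound in the proof.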

3.1 Connections to other types of games

Although CSD games are defined as normal-form games with k + 1 players, we can observe that they are equivalent to other well-studied types of games: polymatrix games and Stackelberg games.

A polymatrix game is defined by a graph where every vertex represents a player and every edge represents a two-player game played by the endpoints of the edge. Every player has the same set of pure strategies in every game she is involved in, and to play the game she plays the same (mixed) strategy in every one of them. The payoff of every player is the sum of the payoffs they get from every two-player game they participate in. In a CSD game we observe the following. Firstly, the payoff of every attacker depends only on the strategy the defender plays, thus every attacker is involved in only one two-player game. In addition, all the attackers have the same set of pure strategies and they share the same payoff matrix. Similarly, the payoff the defender gets from catching an attacker depends only on the strategy the defender and this specific attacker chose. Hence, the payoff of the defender can be decomposed into a sum of payoffs from two-player games. So, a CSD game can be seen as a polymatrix game where the underlying graph is a star with k leaves that correspond to the attackers, and the defender is the center of the star. Although many-player polymatrix games have exponentially smaller representation size compared to the equivalent normal-form representation, we should note that this polymatrix game is of exponential size in the worst case, since the defender can have exponentially many (in n) pure strategies to choose from.

A Stackelberg game is an extensive-form two-player game. In the first round, one of the players commits to a (mixed) strategy. In the second round, the other player chooses a best response against the committed strategy of her opponent. In a Stackelberg equilibrium the first player plays a strategy that maximizes her expected payoff, given that the second player plays a best response (mixed strategy). The MaxMin probability p*(G) of a CSD game on a graph G corresponds to a Stackelberg equilibrium. By Corollary 1(c), any CSD game with k attackers has the same equilibrium defense ratio as that of the case with k = 1. Furthermore, as in a Stackelberg game, in the CSD game with k = 1 the defender chooses a mixed strategy that maximizes her expected payoff, given that the attacker plays a best response (mixed strategy). Therefore, when we are interested in the equilibrium defense ratio of a CSD game for some arbitrary k, finding a Stackelberg equilibrium of the corresponding CSD game with k = 1 suffices.

4 Defense-Optimal Graphs

We now focus our attention on defense-optimal graphs. We first characterize defense-optimal graphs with respect to the MaxMin probability and then use this characterization to analyze more specific classes of graphs, like cycles and trees. We begin with an exact computation of the equilibrium defense ratio of any defense-optimal graph.

Theorem 3.

In any defense-optimal graph G, we have that p*(G) = λ/n.

Proof.

First we will show that n/λ is a lower bound on the Price of Defense and then prove that it is tight. According to Lemma 1, a lower bound on PoD(λ) can be found by equivalently finding an upper bound on p*(G) over all graphs G with n vertices. Let us show that p*(G) ≤ λ/n for every G.

Suppose there is a graph G such that p*(G) > λ/n. Suppose also that the defender has an action set D on G. Fix a strategy s that achieves p*(G). Then, by definition of p*(G), for the vertex probabilities it holds that P_s(v) ≥ p*(G) > λ/n for all v ∈ V. Therefore, it is

  Σ_{v∈V} P_s(v) > n · (λ/n) = λ.   (3)

Also, by definition of a defense strategy, if Y denotes the random variable corresponding to the number of vertices that the defender covers, then:

  E[Y] = Σ_{D∈D} s_D · |V(D)| = λ.   (4)

Let us introduce the indicator variables Y_v, for v ∈ V, with value 1 if vertex v is covered, and 0 otherwise. Then,

  E[Y] = E[Σ_{v∈V} Y_v] = Σ_{v∈V} E[Y_v] = Σ_{v∈V} P_s(v) > λ,   (5)

which contradicts (4).

It remains to show that the lower bound n/λ on the Price of Defense is tight. This is easy to do by showing that λ/n is a tight upper bound on p*(G): observe that every vertex of the line graph with n vertices, where λ divides n, can be covered by one of n/λ disjoint pure strategies of the defender. Therefore, the defender can assign probability λ/n to each of these pure strategies, and in that case, P_s(v) = λ/n for every vertex v, so p*(G) = λ/n. ∎
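The tightness construction can be checked numerically: on a path with n vertices where λ divides n, the strategy that plays each of the n/λ disjoint blocks of λ consecutive vertices with probability λ/n covers every vertex with probability exactly λ/n. A small sketch (vertex labels 0, …, n−1 along the path are our own convention):

```python
def partition_strategy(n, lam):
    """Defense strategy on the path 0-1-...-(n-1) when lam divides n:
    the n/lam disjoint blocks of lam consecutive vertices, each played
    with probability lam/n."""
    assert n % lam == 0
    blocks = [frozenset(range(i, i + lam)) for i in range(0, n, lam)]
    return {B: lam / n for B in blocks}

s = partition_strategy(6, 2)
cover = [sum(p for B, p in s.items() if v in B) for v in range(6)]
print(cover)  # every vertex is covered with probability 2/6 = 1/3
```

Each vertex lies in exactly one block, so its vertex probability equals the block's probability λ/n, matching the upper bound of Theorem 3.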

As an immediate corollary of Theorem 3 we get the following characterisation of defense-optimal graphs.

Corollary 3.

A graph is defense-optimal if and only if all of its vertices are defended with probability λ/n.

Proof.

Necessity of defense-optimality is trivial: if every vertex has vertex probability λ/n, then MaxMin(G) = λ/n, so by Theorem 3 the graph is defense-optimal.

Sufficiency of defense-optimality is also easy to see using equations (4) and (5) of the proof of Theorem 3. Suppose that the graph is defense-optimal and consider an equilibrium where the defense strategy is s. Then the sum of vertex probabilities is λ according to the aforementioned equations. Therefore, if there exists a vertex with vertex probability greater than λ/n, then there is another vertex with probability less than λ/n. This means that MaxMin(G) < λ/n, and as a result the graph is not defense-optimal, which contradicts our assumption. ∎

One may wonder whether Corollary 3 can be further exploited to prove that, in general, best-defense strategies in defense-optimal graphs are uniform, i.e., every pure strategy in the support of the defender is assigned the same probability. However, as we demonstrate in Figure 1, this is not the case. On the other hand, the claim is true for cycle graphs and trees.

Figure 1: A defense-optimal graph with no uniform best-defense strategy. The optimal vertex probability λ/n is achievable by assigning one probability to a distinguished pure strategy and a different probability to each of the four remaining pure strategies in the support, so the graph is defense-optimal. However, observe that one vertex cannot participate in more than one pure strategy, so in a uniform defense strategy with support of size σ its vertex probability would have to be 1/σ (by definition of uniformity), but it would also have to be λ/n. Since 1/σ ≠ λ/n here, this is a contradiction.
Observation 1.

All cycle graphs are defense-optimal.

Proof.

Consider an arbitrary cycle graph with vertices v_0, …, v_{n−1}. We will show that the graph can achieve vertex probability λ/n for every vertex, thus by Corollary 3 it is defense-optimal. Consider the whole action set of the defender, i.e., every path starting from a vertex v_i, going clockwise, and ending at vertex v_{(i+λ−1) mod n}. Observe that there are only n such paths, therefore the action set has size n. By assigning probability 1/n to each pure strategy, since each vertex is in exactly λ pure strategies, each vertex has vertex probability λ/n. ∎
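Observation 1's strategy is easy to verify computationally. The following sketch (our own code, not from the paper) enumerates the n clockwise λ-paths of an n-cycle and checks that the uniform strategy over them covers each vertex with probability λ/n.

```python
from fractions import Fraction

def cycle_uniform_strategy(n, lam):
    """All n clockwise lam-paths of an n-cycle; the uniform strategy
    (probability 1/n each) covers every vertex with probability lam/n."""
    paths = [[(start + j) % n for j in range(lam)] for start in range(n)]
    vertex_prob = {v: Fraction(0) for v in range(n)}
    for path in paths:
        for v in path:
            vertex_prob[v] += Fraction(1, n)  # each path played with prob 1/n
    return paths, vertex_prob

paths, p = cycle_uniform_strategy(7, 3)
assert len(paths) == 7                                # n pure strategies
assert all(p[v] == Fraction(3, 7) for v in range(7))  # lam/n for every vertex
```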

4.1 Tree Graphs

In this section we focus on the case where the graph is a tree. We first further refine the characterization of defense-optimal graphs for trees. Then, we utilise this characterisation to derive an algorithm that decides in polynomial time whether a given tree is defense-optimal and, if that is the case, constructs in polynomial time a defense-optimal strategy for it. On the other hand, in the case where the tree is not defense-optimal, we show that it is NP-hard to compute a best-defense strategy for it, namely it is NP-hard to compute MaxMin(T). We first provide Lemma 2, which will be used in our polynomial-time algorithm for checking defense-optimality on trees. Henceforth, we write that a graph is covered by a defense strategy s if every vertex of the graph is covered by some λ-subgraph that is in the support of s.

Lemma 2.

A tree T is defense-optimal if and only if T can be decomposed into n/λ disjoint λ-subgraphs.

Proof.

Let T be defense-optimal. We will show that the support of any best defense strategy on T must consist of pure strategies that are disjoint λ-subgraphs which altogether cover every vertex of T. Since those are disjoint and cover T, it follows that their number is n/λ in total.

If λ = n then the above trivially holds. Assume that λ < n and consider a best defense strategy on T whose support comprises a collection S of λ-subgraphs.

Let u be a leaf of T and let w be its parent. Any λ-subgraph in S covering u must also cover w, since λ ≥ 2 (the case λ = 1 is trivial) and w is the only neighbor of u. Also, any λ-subgraph in S covering w must also cover u, otherwise the vertex probability of w would be greater than that of u, contradicting Corollary 3. Now, consider the neighbors of w. For those of them that are leaves, the same must hold as holds for u, namely w and its leaf-children must all be covered by the exact same λ-subgraph(s).

Consider the case where there is a leaf u such that a single λ-subgraph contains u, its parent w, and all the other leaf-children of w (and, possibly, other vertices connected to w). Then we can remove this λ-subgraph from S and the corresponding vertices from T. This leaves the remainder of T being a forest comprising trees T_1, …, T_r, each of which has a (best) defense strategy comprising the corresponding subset of (the remainder of) S on T. Notice that it must be the case that every tree T_i has size at least λ (otherwise the initial collection S would not have covered T). So, if there is always a leaf u in some tree of the forest, such that a single λ-subgraph contains u, its parent, and all the other leaf-children of that parent (and, possibly, other vertices connected to it), we can proceed in the same fashion for each of the T_i's, always removing a λ-subgraph from S and the corresponding vertices from the tree, until we end up with an empty tree. This means that S was indeed a collection of disjoint λ-subgraphs covering T.

However, assume for the sake of contradiction that at some “iteration” the assumption does not hold, namely assume that there is a tree in the forest with no leaf u such that a single λ-subgraph contains u, its parent w, and all the other leaf-children of w (and, possibly, other vertices connected to w). This means that there are (at least) two λ-subgraphs in S, say B_1 and B_2, that cover u. Due to our initial observations, u, together with its parent w and all of w’s leaf-children, is contained in both B_1 and B_2. Since those are different λ-subgraphs, there is a vertex x in the tree which belongs to B_1 but does not belong to B_2. Since p_x = λ/n (due to the fact that S is the support of the defense-optimal strategy and Corollary 3), it must hold that there is a different λ-subgraph, B_3, which covers x but does not cover u or any of its leaf-children. If B_3 also covers a vertex in B_2 ∖ B_1 (we use B ∖ B′ for λ-subgraphs B, B′ to denote the set of vertices which are contained in B but not in B′), then there is a cycle in the tree, which is a contradiction. So B_3 must not cover vertices in B_2 ∖ B_1. Since B_3 is different from B_1, there must be a vertex y in the tree which belongs to B_3 but not to B_1 (and also not to B_2). Since p_y = λ/n (again by Corollary 3), it must hold that there is a different λ-subgraph, B_4, which covers y but does not cover x or any of the vertices in B_1 ∖ B_3. Similarly to before, if B_4 covers a vertex in B_1, then there is a cycle in the tree, which is a contradiction. So B_4 must not cover vertices in B_1 or in B_2.

Proceeding in the same way, we reach a contradiction, since the tree has a finite number of vertices and there will eventually be an overlap in the coverage of some B_i with some B_j, i ≠ j, which would mean that there is a cycle in the tree.

Therefore, there cannot be any overlaps between the λ-subgraphs of S, meaning that S comprises disjoint λ-subgraphs which altogether cover T.

Conversely, let S be a collection of n/λ disjoint λ-subgraphs that altogether cover T. Let the defender play each B ∈ S equiprobably, that is, with probability λ/n. Then every vertex is covered with probability λ/n, meaning that T is defense-optimal. ∎

With Lemma 2 in hand, we can derive a polynomial-time algorithm that decides whether a tree is defense-optimal and, if it is, produces a best-defense strategy.

Theorem 4.

There exists a polynomial-time algorithm that decides whether a tree is defense-optimal and, if so, produces a best-defense strategy for it; otherwise it outputs that the tree is not defense-optimal.

Proof.

The algorithm works as follows. Initially, there is a pointer associated with a counter in every leaf of the tree that moves “upwards” towards an arbitrary root of the tree. For every move of the pointer the corresponding counter increases by one. The pointer moves until one of the following happens: either the counter is equal to λ, or it reaches a vertex with degree greater than or equal to 3, where it “stalls”. In the case where the counter is equal to λ, we create a λ-subgraph of T, we delete this λ-subgraph from the tree, we move the pointer one position upwards, and we reset the counter back to zero. If a pointer stalls at a vertex v of degree at least 3, it waits until all pointers from the subtrees below v reach this vertex. Then, all these pointers are merged into a single one and a new counter is created whose value is equal to the sum of the counters of all merged pointers. If this sum is more than λ, then the algorithm returns that the graph is not defense-optimal. If this sum is less than or equal to λ, then we proceed as if there was initially only this pointer with its counter: if the new counter is equal to λ, then we create a λ-subgraph of T and reset the counter to 0; else the pointer moves upwards and the counter increases by one. To see why the algorithm requires polynomial time, observe that we need at most n pointers and counters, and in addition every pointer moves at most n times.

We now argue about the correctness of the algorithm described above. Clearly, if the algorithm does not output that the tree is not defense-optimal, it means that it partitioned T into n/λ disjoint λ-subgraphs. So, from Lemma 2 we get that T is defense-optimal, and the uniform probability distribution over the produced partition covers every vertex with probability λ/n. It remains to argue that when the algorithm outputs that the tree is not defense-optimal, this is indeed the case. Consider the case where we delete a λ-subgraph of the (remaining) tree. Observe that the vertices of the λ-subgraph our algorithm created and deleted must be covered by exactly this λ-subgraph in any decomposition into disjoint λ-subgraphs; any other choice of λ-subgraph covering them would overlap with some other λ-subgraph. Hence, the deletion of such a λ-subgraph was not a “wrong” move of our algorithm, and the remaining tree is defense-optimal if and only if the tree before the deletion was defense-optimal. This means that no deletion performed by our algorithm made the remaining graph non defense-optimal. So, consider the case where after a merge that occurred at vertex v the new counter exceeds λ. Then, we can deduce that all the subtrees rooted at v associated with the merged counters have strictly fewer than λ uncovered vertices. Hence, in order to cover all these vertices using λ-subgraphs, at least two of these λ-subgraphs must cover vertex v, so the condition of Lemma 2 is violated. But since every step of our algorithm so far was correct, it means that v cannot be covered by only one λ-subgraph. Hence, our algorithm correctly outputs that the tree is not defense-optimal. ∎
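The pointer-and-counter procedure can be condensed into a single bottom-up pass that tracks, for every vertex, the not-yet-cut vertices of its subtree: reaching exactly λ cuts off a λ-subgraph, while exceeding λ at a merge vertex rejects the tree. The sketch below is our own rendering of this idea, not the paper's exact pseudocode.

```python
def tree_defense_optimal(n, edges, lam):
    """Decide whether a tree on vertices 0..n-1 decomposes into disjoint
    connected parts of exactly lam vertices (the criterion of Lemma 2).
    Returns the list of parts, or None if the tree is not defense-optimal."""
    if n % lam != 0:
        return None
    adj = [[] for _ in range(n)]
    for a, b in edges:
        adj[a].append(b)
        adj[b].append(a)

    # iterative DFS preorder from an arbitrary root (vertex 0)
    order, parent, seen, stack = [], [None] * n, [False] * n, [0]
    while stack:
        v = stack.pop()
        seen[v] = True
        order.append(v)
        for w in adj[v]:
            if not seen[w]:
                parent[w] = v
                stack.append(w)

    parts = []
    pending = [None] * n  # vertices of the still-uncut piece rooted here
    for v in reversed(order):  # children are always processed before v
        piece = [v]
        for w in adj[v]:
            if parent[w] == v and pending[w]:
                piece.extend(pending[w])  # merge the counters at v
        if len(piece) > lam:
            return None              # merge vertex would need double coverage
        if len(piece) == lam:
            parts.append(piece)      # cut a lam-subgraph, reset the counter
            pending[v] = []
        else:
            pending[v] = piece       # counter keeps climbing upwards
    return parts if pending[0] == [] else None

# a path on 6 vertices decomposes into two connected parts of size 3
assert tree_defense_optimal(6, [(i, i + 1) for i in range(5)], 3) is not None
# a star with 3 leaves admits no partition into edges
assert tree_defense_optimal(4, [(0, 1), (0, 2), (0, 3)], 2) is None
```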

In Theorem 4 we showed that it is easy to decide whether a tree is defense-optimal and, if this is the case, to find a best-defense strategy for it. Now we prove that if a tree is not defense-optimal, then it is NP-hard to find a best-defense strategy for it.

Theorem 5.

Finding a best-defense strategy in CSD games is NP-hard, even if the graph is a tree.

Proof.

We will prove the theorem by reducing from 3-Partition. In an instance of 3-Partition we are given a multiset A = {a_1, …, a_{3m}} of 3m positive integers where ∑_i a_i = m·B for some integer B, and we ask whether A can be partitioned into m triplets such that the sum of the numbers in each triplet is equal. Let B = (∑_i a_i)/m. Observe then that the problem is equivalent to asking whether there is a partition of the integers into triplets such that the numbers in every triplet sum up to B. Without loss of generality we can assume that a_i < B for every i; if this was not the case, the problem could be trivially answered. So, given an instance of 3-Partition, we create a tree T with n = mB + 1 vertices and set λ = B + 1. The tree is created as follows. For every integer a_i, we create a path with a_i vertices. In addition, we create the vertex u and connect it to one of the two ends of each path. We will ask whether MaxMin(T) = 1/m.

Firstly, assume that the given instance of 3-Partition is satisfiable. Then, given a triplet that sums up to B, we create a λ-subgraph of T as follows. If a_i belongs to the triplet, then we add the corresponding path of T to the subgraph. Finally, we add vertex u to our λ-subgraph, and the resulting subgraph is connected (by the construction of T). Since the sum of the a_i’s in the triplet equals B, the constructed subgraph has B + 1 = λ vertices. If we assign probability 1/m to each of the m λ-subgraphs created this way, we get that p_v ≥ 1/m for every v ∈ V.

To prove the other direction, assume that MaxMin(T) ≥ 1/m and observe the following. Firstly, since as we argued it is a_i < B for every i, it holds that every λ-subgraph of T contains vertex u. Thus, p_u = 1 and ∑_{v ≠ u} p_v ≥ B, since there are mB vertices other than u and for each one of them it holds that p_v ≥ 1/m. In addition, observe that ∑_{v ∈ V} p_v = λ = B + 1. Hence, we get that p_v = 1/m for every vertex v ≠ u. In addition, observe that every pure defense strategy that covers a leaf of this tree covers all the vertices of the corresponding branch. Hence, for every branch of the tree, all its vertices are covered by the same set of pure strategies; if a vertex that is closer to u were covered by one strategy that does not cover the whole branch, then the leaf of the branch would be covered with probability less than 1/m. So, in order to have p_v = 1/m for every v ≠ u, there must exist λ-subgraphs that exactly cover subsets of the paths; this means that if a λ-subgraph covers a vertex in a path, then it covers every vertex of the path. Hence, by the construction of the graph, each such λ-subgraph of T corresponds to a subset of integers in the 3-Partition instance that sum up to B. Since 3-Partition is NP-hard, we get that finding a best defense strategy is NP-hard. ∎
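The reduction's tree is mechanical to construct. The sketch below (helper name and representation are ours) builds it from a 3-Partition multiset and checks the intended parameters n = mB + 1 and λ = B + 1, with the hub vertex u attached to one end of every path.

```python
def reduction_tree(a):
    """Build the tree of the reduction from a 3-Partition multiset `a`
    (|a| = 3m, sum(a) = m*B). Vertex 0 is the hub u; each integer a_i
    becomes a path with a_i vertices, one end attached to u.
    Returns (n, lam, edges)."""
    m, total = len(a) // 3, sum(a)
    assert len(a) == 3 * m and total % m == 0
    B = total // m
    assert all(ai < B for ai in a)  # the wlog assumption in the proof
    edges, nxt = [], 1
    for ai in a:
        path = list(range(nxt, nxt + ai))
        nxt += ai
        edges.append((0, path[0]))         # hub u -- end of the path
        edges.extend(zip(path, path[1:]))  # internal edges of the path
    return nxt, B + 1, edges

n, lam, edges = reduction_tree([1, 2, 3, 1, 2, 3])  # m = 2, B = 6
assert (n, lam) == (2 * 6 + 1, 6 + 1)               # n = mB + 1, lam = B + 1
assert len(edges) == n - 1                          # the construction is a tree
```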

4.2 General Graphs

We conjecture that, contrary to checking defense-optimality of trees and constructing a corresponding defense-optimal strategy in polynomial time, it is NP-hard to even decide whether a given (general) graph is defense-optimal.

Conjecture 1.

It is NP-hard to decide whether a graph is defense-optimal.

5 Approximation algorithm for MaxMin(G)

We showed in the previous section that, given a graph G, it is NP-hard to find the best-defense strategy, or equivalently, to compute MaxMin(G). We also presented in Theorem 2 an algorithm for computing the exact value MaxMin(G) of a given graph (and therefore its best defense ratio), but this algorithm has running time polynomial in the size of the input only in the cases where λ or n − λ is bounded by a constant. On the positive side, we present now a polynomial-time algorithm which, given a graph G of n vertices, returns a defense strategy whose defense ratio is within a constant factor of the best defense ratio. In particular, it achieves defense ratio 1/min_{v∈V} p_v, where p_v, v ∈ V, is the vertex probability determined by the constructed defense strategy. We henceforth write that a collection C of λ-subgraphs covers a graph G if every vertex of G is covered by some λ-subgraph in C. The algorithm presented in this section returns a collection C of λ-subgraphs that covers G. Therefore, the uniform defense strategy over C assigns probability 1/|C| to each λ-subgraph.

For any collection C of λ-subgraphs and for any v ∈ V, let us denote by c_v the number of λ-subgraphs in C which v belongs to. Observe that:

∑_{v ∈ V} c_v = λ · |C|,   (6)

where |C| denotes the cardinality of C.
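The identity in (6) is plain double counting, since each λ-subgraph contributes exactly λ to the sum of the c_v's; here is a quick sanity check with our own variable names:

```python
def coverage_counts(C):
    """c_v for each vertex v appearing in a collection C of lam-subgraphs,
    each subgraph given as a set of exactly lam vertices."""
    c = {}
    for B in C:
        for v in B:
            c[v] = c.get(v, 0) + 1
    return c

lam = 3
C = [{0, 1, 2}, {2, 3, 4}, {4, 5, 6}]  # three overlapping 3-subgraphs
c = coverage_counts(C)
assert sum(c.values()) == lam * len(C)  # identity (6): sum_v c_v = lam * |C|
```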

We first prove Lemma 3, to be used in the proof of the main theorem of this section. We henceforth denote by V(G) and E(G) the vertex set and edge set, respectively, of a graph G.

Lemma 3.

For any tree T of n ≥ λ vertices, and for any λ, we can find a collection C of distinct λ-subgraphs such that for every v ∈ V(T) it holds that c_v = 1, except maybe for (at most) λ − 1 vertices, where for each of them it holds that c_v = 2.

Proof.

We will prove the statement of the lemma by providing Algorithm 1, which takes as input T and λ and outputs the requested collection C of λ-subgraphs.

0:  A tree graph T of n vertices, and a natural λ ≤ n.
0:  A collection C of distinct λ-subgraphs that satisfies the statement of Lemma 3.
1:  i ← 1,  global variable.  % The index of the current λ-subgraph B_i.
2:  done ← 0,  global variable.  % Is 0 until the whole tree is covered, then it becomes 1 to allow for the last λ-subgraph to be completed, if it is not already.
3:  Covered ← ∅,  global variable.  % The set of vertices already covered by the algorithm.
4:  u,  global variable.  % The vertex considered to be inserted in a λ-subgraph.
5:  C ← ∅
6:  B_i ← ∅
7:  
8:  Pick an arbitrary vertex of T and consider it the root.
9:  u ← root
10:  
11:  while done = 0 do
12:     while |Covered| < n do
13:        while u ∈ Covered do  % The while-loop to ensure that the first element of B_i is uncovered.
14:           if u has a child w ∉ Covered then
15:              u ← w
16:           else
17:              u ← parent of u
18:        while |B_i| < λ do   % The while-loop that fills in the current λ-subgraph B_i.
19:           B_i ← B_i ∪ {u}
20:           Covered ← Covered ∪ {u}
21:           if