1 Introduction
The past few years have witnessed the huge success of game-theoretic reasoning in the security domain [Tambe 2011, An 2017]. Models based on the Stackelberg security game (SSG) have been deployed to protect high-profile infrastructure, natural resources, large public events, etc. (e.g., [Tsai et al. 2009, Jain et al. 2010, Yin, An, and Jain 2014, Fang et al. 2016, Basilico et al. 2017]). An SSG models the interaction between a defender and an attacker: the defender commits to a mixed strategy first, and the attacker best responds with knowledge of the defender's strategy.
The Stackelberg equilibrium is the standard solution concept for Stackelberg games [Leitmann 1978]. In such an equilibrium, no player has an incentive to deviate, and when evaluating the benefit of a deviation, the leader assumes that the follower responds optimally to it. Tie-breaking rules differentiate two forms of Stackelberg equilibria. The strong form, called the strong Stackelberg equilibrium (SSE), assumes that the follower always breaks ties by choosing the action best for the defender, whereas its counterpart, the weak Stackelberg equilibrium (WSE), assumes that the follower always chooses the worst action. The SSE is commonly adopted as the standard solution concept because the WSE may not exist [Von Stengel and Zamir 2004]; the counterintuitive tie-breaking rule is justified, implicitly or explicitly in the literature, by the assertion that the defender can often induce the favorable strong equilibrium by selecting a strategy arbitrarily close to the equilibrium.
Unfortunately, this assertion may break down, especially in scenarios with resource assignment constraints, such as scheduling constraints in the Federal Air Marshal Service (FAMS) domain, constraints on patrol paths for protecting ports, and constraints in the form of protection externalities [Tsai et al. 2009, Jain et al. 2010, Shieh et al. 2012, Gan, An, and Vorobeychik 2015]. Most existing works fail to recognize that the SSE may be impossible to induce in such domains; if the desired SSE cannot be induced, the claimed results are overly optimistic. Such over-optimism is problematic in its own right and may cause greater risks for the following reasons. First, these results may be used in making security resource acquisition decisions, i.e., what combination of security resources to procure [McCarthy et al. 2016]; over-optimism of the SSE may cause an insufficient number or the wrong types of resources to be deployed. Second, statements comparing the expected utility of the SSE with heuristic strategies or human-generated solutions to claim the superiority of SSE strategies would be in jeopardy [Pita et al. 2008, Tsai et al. 2009, Xu et al. 2017]. Third, the recommended SSE strategy may not be the optimal one, thus failing to optimize the use of limited security resources, which is the primary mission of security games.

In this paper, we remedy this inadequacy of the SSE in security games and make the following key contributions. 1) We formalize the notion of over-optimism by defining the utility guarantee of the defender's strategies, and show with a motivating example that the utility claimed to be guaranteed by the SSE can be much higher than the actually guaranteed utility. 2) Inspired by the notion of inducible strategy [Von Stengel and Zamir 2004], we characterize the solution concept with the highest utility guarantee and call it the inducible Stackelberg equilibrium (ISE). 3) We compare the ISE with the SSE and show that for games with certain structures the two concepts are equivalent, though in general the guaranteed utility of the SSE can be arbitrarily worse than that of the ISE; in addition, introducing the ISE does not invalidate existing algorithmic results, as the problem of computing an ISE polynomially reduces to that of computing an SSE. 4) We provide an algorithmic implementation for computing the ISE and conduct experiments to evaluate our results; the experiments unveil significant over-optimism and suboptimality of the SSE, which suggests the practical significance of the ISE.
1.0.1 Other Related Works
To the best of our knowledge, Okamoto, Hazon, and Sycara [2012] provide the only prior work raising the concern of lack of inducibility in security games, though their model is a very specific type of network security game that cannot be generalized to standard security games, especially games with scheduling constraints. Moreover, the more important questions regarding the over-optimism caused by the lack of inducibility, and the algorithmic remedies needed for it, were left unanswered (in particular, their solution algorithm only converges to a local optimum, even in their setting). These questions are addressed in this paper. The concept of an inducible target in our paper (Definition 2) is inspired by the inducible strategy first proposed by von Stengel and Zamir [2004] in their study of general Stackelberg games; however, their focus was solely on characterizing the range of the leader's utility in Stackelberg equilibria, with the aim of confirming the advantage of commitment [Von Stengel and Zamir 2004, von Stengel and Zamir 2010]. Some other works considered potential deviations of the attacker from optimal responses and proposed solution concepts robust to these deviations [Pita et al. 2009, Yang et al. 2014, Nguyen et al. 2013]. Our work differs from this line of research in that we consider perfectly rational attackers.
2 Preliminaries
2.1 Security Games with Arbitrary Schedules
A security game is a two-player Stackelberg game played between an attacker and a defender. The defender allocates resources to protect a set of targets T; let n = |T|. A resource can be assigned to a schedule s ⊆ T, which covers multiple targets and is chosen from a known and constrained set S. The attacker's pure strategy is choosing one target t ∈ T to attack, and his mixed strategy can be represented as a vector y = (y_t)_{t∈T}, where y_t denotes the probability of attacking t. The defender's pure strategy is a joint schedule which assigns each resource to at most one schedule. A joint schedule can be represented as a vector P = (P_t)_{t∈T} ∈ {0, 1}^n, where P_t indicates whether target t is covered in the joint schedule. The set of all feasible joint schedules is denoted by 𝒫. The defender's mixed strategy is a vector x = (x_P)_{P∈𝒫}, where x_P denotes the probability of playing joint schedule P. Let c = (c_t)_{t∈T} be the coverage vector corresponding to x, where c_t = Σ_{P∈𝒫} x_P P_t is the marginal probability of covering t.

The payoffs of the players are decided by the target chosen by the attacker and whether that target is protected by the defender. The defender's payoff for an uncovered attack on t is denoted by D_u(t), and for a covered attack by D_c(t). Similarly, A_u(t) and A_c(t) are the attacker's payoffs, respectively. A widely adopted assumption in security games is that D_c(t) > D_u(t) and A_c(t) < A_u(t); in other words, covering an attack is beneficial for the defender, while it hurts the attacker. Given a strategy profile (x, t), the expected utilities of the defender and the attacker are

U_d(x, t) = c_t D_c(t) + (1 − c_t) D_u(t),
U_a(x, t) = c_t A_c(t) + (1 − c_t) A_u(t),

where c is the coverage vector corresponding to x. The illustrated security game model has wide applicability in many security applications [Kiekintveld et al. 2009, Jain et al. 2010, Tsai et al. 2009, Gan, An, and Vorobeychik 2015].
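The expected-utility formulas above can be sketched directly. This is a minimal illustration in which the payoff names follow the text (D_u, D_c, A_u, A_c) and the numeric values in the usage note are ours, not from the paper.

```python
def defender_eu(c_t, D_u, D_c):
    """Defender's expected utility when a target with coverage c_t is attacked."""
    return c_t * D_c + (1 - c_t) * D_u

def attacker_eu(c_t, A_u, A_c):
    """Attacker's expected utility when attacking a target with coverage c_t."""
    return c_t * A_c + (1 - c_t) * A_u
```

For example, with coverage 0.5, defender penalty −4, and defender reward 2, `defender_eu(0.5, -4, 2)` evaluates to −1: both utilities are linear in the coverage, which is what the later perturbation arguments rely on.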
2.2 Stackelberg Equilibria and TieBreaking Rules
In an SSG, the defender acts first by committing to a mixed strategy, and the attacker moves after having observed the defender's commitment. The solution concept of Stackelberg games, called the Stackelberg equilibrium, captures the outcome in which the defender's strategy is optimal under the assumption that the attacker always responds optimally to the strategy the defender plays [Leitmann 1778]. A pair of strategies (x, BR(x)) forms a Stackelberg equilibrium iff:

- BR is a best-response function of the attacker, satisfying U_a(x′, BR(x′)) ≥ U_a(x′, t) for all x′ and t ∈ T;

- U_d(x, BR(x)) ≥ U_d(x′, BR(x′)) for all x′.
A tie represents a situation where multiple best-response strategies exist for the attacker. Ties are not rare corner cases, but a fundamentally recurring situation in security games: to achieve maximal usage of defense resources, algorithms avoid allocating too many or too few resources to each target, and in most cases generate a tied solution [Paruchuri et al. 2008, Kiekintveld et al. 2009]. Thus, a tie-breaking rule, i.e., how the attacker breaks ties, plays a central role in security games and is exploited to design efficient algorithms, such as ORIGAMI [Kiekintveld et al. 2009]. Different tie-breaking rules lead to different Stackelberg equilibria. The strong Stackelberg equilibrium (SSE) and the weak Stackelberg equilibrium (WSE) are two prevailing solution concepts, defined with the optimistic and pessimistic assumptions on the attacker's tie-breaking behavior, respectively:

- SSE: BR(x) ∈ argmax_{t ∈ Γ(x)} U_d(x, t) for every x;

- WSE: BR(x) ∈ argmin_{t ∈ Γ(x)} U_d(x, t) for every x;

where Γ(x) = argmax_{t ∈ T} U_a(x, t) is the attack set, the set of all best-response pure strategies (targets) of the attacker. In words, the attacker breaks ties in favor of the defender in the SSE, and against the defender in the WSE.
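The attack set and the two tie-breaking rules can be sketched as follows. Inputs are per-target expected utilities at a fixed defender strategy; the function names and the numerical tolerance are our own conventions, not from the paper.

```python
def attack_set(attacker_vals, tol=1e-9):
    """Indices of all best-response targets (ties identified within tol)."""
    best = max(attacker_vals)
    return [t for t, v in enumerate(attacker_vals) if v >= best - tol]

def sse_wse_values(attacker_vals, defender_vals, tol=1e-9):
    """Defender's value under SSE (optimistic) and WSE (pessimistic) tie-breaking."""
    ats = attack_set(attacker_vals, tol)
    return (max(defender_vals[t] for t in ats),   # SSE: tie broken for the defender
            min(defender_vals[t] for t in ats))   # WSE: tie broken against
```

With a tie between the first two targets, `sse_wse_values([2, 2, 1], [5, -1, 0])` returns `(5, -1)`, showing how far apart the two rules can place the defender's value at the very same strategy.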
2.2.1 WSE and SSE
The WSE follows the spirit of the maximin solution [Sandholm 2015], which provides the defender a guaranteed value, in the sense that if the attacker breaks ties in a different manner, the defender does not gain less. The SSE, however, does not provide such a guarantee. Despite this, the security game literature has adopted the SSE instead of the WSE, primarily because a WSE may not exist [Conitzer and Sandholm 2006]. In addition, the counterintuitive assumption that the attacker breaks ties in favor of the defender is justified by the assertion that the desired outcome can often be induced by playing a strategy arbitrarily close to the SSE strategy. Kiekintveld et al. [2009] were the first to explicitly make this claim in the security game domain, following the analysis for generic Stackelberg games [Von Stengel and Zamir 2004]. Since then, despite a lack of systematic research, the claim has been commonly used to support the SSE in security games of various types, including games with scheduling constraints (e.g., [Jain et al. 2010, Varakantham, Lau, and Yuan 2013, Gan, An, and Vorobeychik 2015]). The idea of the SSE is also integrated in real-world systems such as ARMOR, deployed at LAX [Pita et al. 2008], and IRIS, for the Federal Air Marshal Service [Tsai et al. 2009]. To see what can go wrong with the SSE assumption, we provide a concrete example in the next section.
3 Motivating Example
Consider the instance shown in the following figure. The defender has one resource. We first consider the scenario without resource assignment constraints, which has a unique SSE. In the SSE, the attacker breaks the tie by attacking the target best for the defender; this outcome can be induced by decreasing the coverage on that target by an infinitesimal amount and increasing the coverage on the other targets, making it strictly preferred by the attacker.
However, with resource assignment constraints, the defender cannot decrease the coverage on one target arbitrarily while simultaneously not decreasing the coverage on any other target. Suppose the joint schedules are as shown in the figure (there is only one resource). The game still has a unique SSE, in which the attacker is assumed to attack the target best for the defender. Such an outcome is explicitly or implicitly justified with the previously mentioned infinitesimal strategy deviation in the security game literature [Jain et al. 2010]. Unfortunately, no strategy arbitrarily close to the SSE strategy makes that target strictly preferred by the attacker: whichever coverage the defender decreases, the attacker comes to prefer some other target. Thus, any infinitesimal strategy deviation causes the attacker to attack one of the other targets, and the best outcome the defender can induce is strictly worse than the utility claimed by the SSE.
Can the defender do better? The answer is yes. There is a mixed strategy whose attack set contains a different target that the defender can induce the attacker to strictly prefer via an infinitesimal deviation, guaranteeing an expected utility arbitrarily close to a strictly better value. In fact, this is the best outcome the defender can achieve with infinitesimal strategy deviation. Such an optimal outcome is captured by the solution concept called the inducible Stackelberg equilibrium (ISE), proposed in the following section.
4 Inducible Stackelberg Equilibrium
The above example reveals a failure of the attempt to induce the desired SSE outcome by playing a strategy arbitrarily close to the SSE strategy. It is natural to ask: given any strategy x, what is the best outcome inducible by playing strategies arbitrarily close to x, and which strategy is optimal with respect to this best outcome? To answer these questions, inspired by the "pessimistic" view of the leader's payoff in Stackelberg games [Von Stengel and Zamir 2004], we define the utility guarantee of a defender strategy as the supremum of the worst-case expected utility that can be achieved by playing a strategy arbitrarily close to the measured one.
Definition 1 (Utility Guarantee).
The utility guarantee g(x) of a defender strategy x is defined as

(1)  g(x) = lim_{ε → 0} sup_{x′: ‖x′ − x‖ ≤ ε} min_{t ∈ Γ(x′)} U_d(x′, t).
The utility guarantee is well-defined since the limit in (1) always exists: the supremum is non-increasing in ε. It measures the inducibility of a defender strategy: g(x) is the best outcome at x that is inducible via infinitesimal strategy deviation. The assumption widely acknowledged in security games falsely claims that any SSE strategy x* provides utility guarantee g(x*) = U_d(x*, t*), where t* is the target attacked in the SSE. Therefore, we need to find the optimal strategy with respect to the utility guarantee. We notice that the optimal utility guarantee coincides with the "pessimistic" leader's payoff [Von Stengel and Zamir 2004] as follows:

(2)  max_x g(x) = sup_x min_{t ∈ Γ(x)} U_d(x, t).
Inspired by the analysis of “pessimistic” leader’s payoff [Von Stengel and Zamir2004], we introduce several useful notions for defining ISE. The first is inducible target.
Definition 2 (Inducible Target).
A target t ∈ T is inducible iff there exists at least one defender mixed strategy x such that Γ(x) = {t}.
An inducible target offers the defender a lower bound on the utility guarantee: g(x) ≥ U_d(x, t) holds for any inducible target t ∈ Γ(x). The intuition is as follows. Since t is inducible, there exists x̃ against which t is the unique best response of the attacker. Thus, from x, we can always play x′ = (1 − ε) x + ε x̃ with ε > 0, which makes t the unique best response as long as t ∈ Γ(x). Then it is easy to verify that the supremum in (1) is always at least U_d(x′, t), whose limit as ε → 0 is U_d(x, t).
The concept of inducible target is insufficient to fully characterize the utility guarantee of a strategy because a pair of targets might be indistinguishable from the attacker’s perspective as they always bring the attacker the same utility irrespective of the strategy the defender plays. Such targets are called identical targets.
Definition 3 (Identical Target).
A pair of targets t and t′ are identical iff U_a(x, t) = U_a(x, t′) for any defender strategy x.
Identical targets are non-inducible by Definition 2. However, it is possible that the optimal utility guarantee in (1) is achieved via an infinitesimal strategy deviation that induces a group of identical targets to be the "unique" best responses. Therefore, a more general notion, the inducible element, is defined to capture this special case. We begin by defining an element.
Definition 4 (Element).
An element is a set of targets in which: i) every pair of targets is identical, and ii) no target is identical to any target outside the set.
The reason we call it an element is as follows. First, from the attacker's perspective, an element generalizes a target, as it characterizes the extent to which the attacker can distinguish targets in terms of payoffs. Second, under a mild assumption that often holds in practice, one can easily verify that two targets are identical iff they have the same payoffs for the attacker and are covered by the same set of schedules; thus it is easy to enumerate all possible elements. A singleton {t} is an element if no other target is identical to t. The inducible element extends the concept of the inducible target as follows.
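The enumeration of elements via the practical criterion above can be sketched as follows: group targets by their attacker payoffs and by the column of the joint-schedule matrix that covers them. The matrix convention (rows as joint schedules, columns as targets) and all names are our own illustration.

```python
def elements(P, A_u, A_c):
    """Partition targets into elements (groups of pairwise identical targets),
    using the practical criterion: same attacker payoffs and same covering
    joint schedules. P is a 0/1 matrix as a list of rows (one per joint
    schedule); A_u, A_c are per-target attacker payoffs."""
    groups = {}
    for t in range(len(A_u)):
        key = (A_u[t], A_c[t], tuple(row[t] for row in P))
        groups.setdefault(key, []).append(t)
    return sorted(groups.values())
```

For instance, two targets always covered together and sharing attacker payoffs fall into one element, while a separately scheduled target forms a singleton element.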
Definition 5 (Inducible Element).
An element E is inducible iff there exists at least one defender mixed strategy x such that Γ(x) = E.
The observation that an inducible target offers a lower bound on the utility guarantee extends to inducible elements. To show this, we first define the utility functions in an element-based manner. For a defender strategy x and an element E, we define U_a(x, E) and U_d(x, E) as follows:

(3)  U_a(x, E) = U_a(x, t) for any t ∈ E;  U_d(x, E) = min_{t ∈ E} U_d(x, t).

One key observation here is that, if E is inducible, U_d(x, E) lower-bounds g(x). The explanation is similar to that for inducible targets: since E is inducible, there exists x̃ such that Γ(x̃) = E by definition, and we can "perturb" x towards x̃ by an infinitesimal amount so that the attack set becomes exactly E. The observation follows since U_d is a smooth function of x, so its change under an infinitesimal deviation is bounded.
With singleton elements defined, the target set T is partitioned into a disjoint set of elements ℰ. It is easy to see that, for any defender strategy x, the attack set Γ(x) is always a union of some elements in ℰ. Thus, we define Γ_E(x) = {E ∈ ℰ : E ⊆ Γ(x)}, and one can verify that Γ(x) = ∪_{E ∈ Γ_E(x)} E; in other words, Γ_E(x) can be interpreted as an "attack set" consisting of elements instead of targets. Let ℰ_I denote the set of inducible elements. The utility guarantee is actually decided by the inducible elements, as presented in the following equation:
(4)  g(x) = max_{E ∈ Γ_E(x) ∩ ℰ_I} U_d(x, E).
The correctness of this equation formally follows the analysis of the "pessimistic" leader's payoff by von Stengel and Zamir [2004]; here we provide an intuitive explanation. Since the players' utility functions are smooth, under an infinitesimal strategy deviation, U_d(x, t) and U_a(x, t) can be regarded as unchanged for any t. Moreover, with an infinitesimal deviation the defender can only "remove" targets from the attack set; it cannot add a new target, given the nonzero gap between the attacker's utilities on targets inside and outside the attack set. Thus, since the inducible outcome in (1) is defined under the worst-case tie-breaking rule, and an infinitesimal deviation does not change the defender's utility on any target, the defender always has an incentive to shrink the attack set via an infinitesimal deviation. Note that any inducible element in Γ_E(x) can be made the attack set itself by an infinitesimal deviation, as shown before, while a non-inducible element cannot be the unique best-response element. Besides, the definition of an element ensures that if one target from an element E is in the attack set, then so are all targets of E. Thus, the best the defender can get under the worst-case tie-breaking rule is U_d(x, E), achieved when E becomes the unique best-response element via an infinitesimal deviation. Equation (4) then follows.
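The computation behind Equation (4) can be sketched as follows, given the per-target utilities at a fixed strategy, the element partition, and the set of inducible elements. The function name, the index-set representation of inducibility, and the `None` return for the degenerate case are our own conventions.

```python
def utility_guarantee(attacker_vals, defender_vals, elems, inducible, tol=1e-9):
    """Equation-(4)-style guarantee: max over inducible elements inside the
    attack set of the element's worst defender utility. `elems` is a list of
    target-index lists; `inducible` holds indices into `elems`. Returns None
    if no inducible element lies inside the attack set."""
    best = max(attacker_vals)
    g = None
    for i, E in enumerate(elems):
        if i not in inducible:
            continue
        if all(attacker_vals[t] >= best - tol for t in E):  # E inside attack set
            worst = min(defender_vals[t] for t in E)        # WSE within the element
            g = worst if g is None else max(g, worst)
    return g
```

The inner `min` reflects the worst-case tie-breaking within an element, and the outer `max` reflects the defender's ability to steer the attack set toward the most favorable inducible element.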
To this end, we have characterized the inducible outcome with the well-defined concept of inducible elements, and we are ready to define the inducible Stackelberg equilibrium, which follows straightforwardly from the previous analysis.
Definition 6 (ISE).
A pair of strategies (x, BR(x)) forms an ISE if the following holds:

- U_d(x, BR(x)) ≥ U_d(x′, BR(x′)) for all x′;

- BR(x′) ∈ argmin_{t ∈ E*(x′)} U_d(x′, t) for every x′, where E*(x′) = argmax_{E ∈ Γ_E(x′) ∩ ℰ_I} U_d(x′, E).
The ISE tie-breaking rule partially shares the property of the SSE rule, as the attacker breaks ties between elements in favor of the defender; meanwhile, it behaves like the WSE rule when the attacker breaks ties between targets from the same element. Notice that the ISE successfully addresses the inducibility issue of the SSE, and it always exists by definition. In the next section, we conduct an extensive analysis to compare the ISE with the SSE.
5 ISE vs. SSE
In this section, we formally show that when the Subsets of Schedules Are Schedules (SSAS) property [Korzhyk et al. 2011] is satisfied, the ISE and the SSE are equivalent under a mild assumption. In general, however, the utility guarantee of the SSE can be much worse than that of the ISE, and we present one such example.
Formally, SSAS states that every subset of a schedule is also a schedule: s′ ∈ S for all s′ ⊆ s with s ∈ S. This can happen, for example, when the defender can choose to bypass arbitrary targets on their patrol route.
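A direct check of the SSAS property can be sketched as follows; the representation of schedules as frozensets and the function name are our own conventions, and the brute-force subset enumeration is only practical for small schedules.

```python
from itertools import combinations

def satisfies_ssas(schedules):
    """Check Subsets of Schedules Are Schedules: every subset of every
    schedule must itself be a schedule. `schedules` is a set of frozensets
    of target indices."""
    S = set(schedules)
    for s in schedules:
        for r in range(len(s) + 1):
            for sub in combinations(sorted(s), r):
                if frozenset(sub) not in S:
                    return False
    return True
```

A schedule set closed under subsets (including the empty schedule) passes; a lone multi-target schedule fails, since the defender cannot bypass any of its targets.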
Theorem 7.
If SSAS is satisfied, every SSE in which the attacked target has positive coverage is also an ISE.
Proof.
It is easy to see that when SSAS is satisfied, no pair of targets is identical: any target t can be covered alone via the singleton schedule {t}, which changes the attacker's utility on t but on no other target. Therefore, every element is a singleton. We then show that the attacked target t is inducible. If Γ(x) contains only t, then t is inducible by definition. If Γ(x) also contains other targets, then since SSAS is satisfied, we can construct a defender strategy x′ from x by replacing each joint schedule in the support of x that covers t with the joint schedule that bypasses t but is otherwise identical (feasible by SSAS). Mixing x with a small weight on x′, the coverage of t strictly decreases (using the positive coverage of t) while the coverage of all other targets remains the same. As a result, the attacker strictly prefers to attack t, so t is inducible. ∎
Under SSAS, the set of SSE strategies is also a subset of NE strategies [Korzhyk et al. 2011]. This suggests the relationship between SSE, ISE, and NE strategies illustrated in Figure 1(a). Notice that security games without schedules can be seen as ones with singleton schedules {t}, t ∈ T, so that SSAS is satisfied trivially. Although SSAS is valid in many real scenarios, it is risky to regard it as ubiquitous. For example, in the presence of protection externalities [Gan, An, and Vorobeychik 2015, Gan et al. 2017], the effect that a defense resource protects all targets within a certain radius can hardly be confined to a specific subset; in FAMS tasks [Tsai et al. 2009, Jain et al. 2010], when air marshals are allocated to a row of connected flights, it is unrealistic to make them "jump" over only a subset of the schedule. Our example below shows that in general security games, the SSE can be arbitrarily worse than the ISE in terms of the utility guarantee.
Example 1. The example is shown in Figure 1(b). The defender has only one resource. One can verify that the SSE strategy uniformly allocates this resource so as to place the defender's preferred target in the attack set. Unfortunately, that target is not inducible, since it is weakly dominated by another target for the attacker. The ISE strategy, on the other hand, uniformly assigns the resource to two schedules that together cover all the targets, yielding a strictly higher utility guarantee.
Example 2. In the previous examples, many targets have equal payoffs, but this is only for convenience of exposition; the ISE can differ from the SSE even when all targets have distinct payoffs. For example, consider 4 targets t1, t2, t3, and t4. The attacker's payoffs on a successful attack are 1, 2, 3, and 4, respectively, and on an unsuccessful attack are −1, −2, −3, and −8, respectively. The defender's payoffs on preventing an attack are 1, 100, 2, and 30, respectively, and on failing to cover the attacked target are −1, 0, −2, and −3, respectively. There are two schedules, and one resource available to the defender. One can verify that in the SSE the attacker is assumed to attack a target that is not inducible, whereas the ISE strategy induces the attacker to attack an inducible target.
6 Computing an ISE
We have shown that the ISE mitigates the inducibility risk of the SSE, which can cause severe losses in utility guarantee. In this section, we further show that, from a computational perspective, the ISE does not complicate existing solutions: the problem of computing an ISE polynomially reduces to that of computing an SSE with the same class of schedules. In addition to this theoretical result, a practical approach for computing an ISE is also presented.
6.1 A Polynomialtime Reduction to Computing an SSE
We start by defining the feasibility of a target. We say a target t is feasible if there exists a defender strategy x such that t ∈ Γ(x). We will henceforth refer to the problem of deciding whether a target is feasible as the feasibility problem, and that of deciding whether a target is inducible as the inducibility problem; the feasibility and inducibility of an element are defined analogously. We first restrict the investigation to games without identical targets. The reduction is presented in Theorem 9, where a series of feasibility checks are incorporated as subprocedures.
Lemma 8.
For any target in security games, the inducibility problem reduces to the feasibility problem on games with the same class of schedules in polynomial time.
Proof sketch.
The intuition behind Lemma 8 is the observation that whenever some strategy makes t the unique best response, there is a lower bound δ on the attacker's utility gap, such that some strategy satisfies U_a(x, t) ≥ U_a(x, t′) + δ for all t′ ≠ t, and the size of δ is bounded by a polynomial in the input size. Blending δ into the payoffs, we construct a new game such that t is inducible in the original game if and only if t is feasible in the newly constructed game. ∎
Theorem 9.
The problem of computing an ISE reduces to the problem of computing an SSE of games with the same class of schedules in polynomial time.
Proof.
An ISE can be computed in the following way:

Check the inducibility of every target and obtain the set of inducible targets T_I.

For each t ∈ T_I, solve max_x { U_d(x, t) : t ∈ Γ(x) }, which yields the defender's optimal strategy under the constraint that t is an optimal response of the attacker.

Among all the solutions obtained above, find the one with the highest defender utility. The corresponding target, say t*, and the optimal defender strategy corresponding to t* form an ISE.
Specifically, in Step 1, the inducibility problem reduces to the feasibility problem by Lemma 8. The feasibility of t can further be decided by computing the SSE of a game in which the defender's payoff parameters are modified so that both the reward and the penalty on t are strictly higher than the rewards on all the other targets (the attacker's payoffs remain the same as in the feasibility problem). In this game, since even the penalty on t is strictly higher than the rewards on all the other targets, the defender strictly prefers the attacker to choose t, irrespective of the coverage of the targets. Therefore, t is feasible if t is in the attacker's attack set in every SSE, so we can check whether this is true to decide the feasibility of t.
In Step 2, each of the optimizations can be solved, again, by computing the SSE of a game in which the defender's payoff parameters are modified in the same fashion, so that the defender strictly prefers the attacker to choose t (the attacker's payoffs remain the same as in the original game). Target t is inducible and hence feasible in the original game, and it remains feasible in the modified game as the attacker's payoffs are unchanged. For the same reason as above, an SSE must incorporate t in the attack set, so that the constraint t ∈ Γ(x) is satisfied. In addition, U_d(x, t) is maximized in the solution, so the SSE is exactly a solution to the optimization in Step 2.
Therefore, an ISE is obtained via polynomially many calls to the computation of an SSE. This completes the proof. ∎
6.2 Dealing with Identical Targets
In the presence of identical targets, it is assumed that, for every inducible element, the attacker chooses the target worst for the defender. We keep Step 1 of the procedure in the proof of Theorem 9 by treating identical targets as one target, so that a target is inducible if at least one target identical to it is inducible (even though this target might not actually be induced). However, when we actually compute the defender's optimal strategy conditioned on a particular inducible target t being attacked, as in Step 2, we need additional constraints requiring that t is worst for the defender among all targets identical to it, i.e., U_d(x, t) ≤ U_d(x, t′) for every t′ identical to t. We convert these constraints to equivalent ones in the form used in Step 2 to finish the reduction.

Observe that identical targets are covered by the same schedules and thus always have the same coverage, so the above constraints involve only the single variable c_t. Thus, the constraints effectively reduce to an inequality of the form α ≤ c_t ≤ β, with two constants α and β. The lower bound α ≤ c_t can be ignored, given that the objective maximizes U_d(x, t), which increases with c_t: if the solution does not satisfy α ≤ c_t, then no feasible solution satisfies it, and we simply skip t in Step 3. The upper bound, c_t ≤ β, can be captured by modifying the attacker's payoff parameters of an arbitrary target t′ that is identical to t, so that the constraint t ∈ Γ(x) becomes equivalent to c_t ≤ β (this constraint was vacuous before the modification, when U_a(x, t) = U_a(x, t′) always held).
6.3 Algorithmic Implementation
As a theoretical result, the above reduction involves repeated calls to an SSE computation and therefore falls short in practical performance. We thus introduce a more concise practical approach to compute an ISE. For ease of exposition, we first limit our scope to games without identical targets; the extension to identical targets is fairly straightforward.
First, the inducibility of a target t can be decided using the following program: t is inducible iff the optimum δ* > 0.

(5)  max_{x, δ}  δ
     s.t.  U_a(x, t) ≥ U_a(x, t′) + δ  for all t′ ≠ t,
           x is a feasible defender mixed strategy.
Solving the above program for each , we obtain the inducible target set . By Proposition 10, the computation of an ISE further converts to computing an SSE of a game restricted to targets in . There is a large body of research on designing algorithms for computing an SSE of security games with various types of schedules, such as ASPEN [Jain et al.2010] and CLASPE [Gan, An, and Vorobeychik2015]. These algorithms can be applied directly. We note that the above approach requires altering schedules in the original game, while all schedules remain the same throughout our theoretical reduction.
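The inducibility check in the spirit of program (5) can be sketched with an off-the-shelf LP solver. Here `P` is the joint-schedule matrix (rows are joint schedules, columns are targets), the decision variables are the mixture over joint schedules plus the margin δ, and both the exact formulation and all names are our own illustration, not the paper's implementation.

```python
import numpy as np
from scipy.optimize import linprog

def inducibility_margin(t, P, A_u, A_c):
    """Max delta such that some mixed strategy over joint schedules makes the
    attacker strictly prefer t over every other target by at least delta.
    Target t is inducible iff the returned margin is > 0."""
    m, n = P.shape
    A_ub, b_ub = [], []
    for tp in range(n):
        if tp == t:
            continue
        # U_a(x, tp) - U_a(x, t) + delta <= 0, written in x and delta
        row = [P[j, tp] * (A_c[tp] - A_u[tp]) - P[j, t] * (A_c[t] - A_u[t])
               for j in range(m)]
        A_ub.append(row + [1.0])
        b_ub.append(A_u[t] - A_u[tp])
    res = linprog(c=[0.0] * m + [-1.0],                 # maximize delta
                  A_ub=A_ub, b_ub=b_ub,
                  A_eq=[[1.0] * m + [0.0]], b_eq=[1.0],  # mixture sums to 1
                  bounds=[(0, 1)] * m + [(None, None)])
    return res.x[-1]
```

For a pair of identical targets the margin is capped at zero (neither can be strictly preferred), while a target the defender can leave relatively unattractive to cover yields a strictly positive margin.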
As implied by Proposition 10, analogous to the computation of the "pessimistic" leader's payoff [von Stengel and Zamir 2010], we can directly compute an SSE of a restricted game whose target set is the set of inducible targets of the original game, and map this SSE to an ISE of the original game.
Proposition 10.
For a security game G, an SSE defender strategy of the restricted game G′ is an ISE strategy in G, where the target set of G′ is the inducible target set of G and the other parameters of G′ are the same as in G.
When identical targets exist, we first enumerate all inducible elements by solving optimization (5) with a slight modification: replace the target t and the utility function with an element E and the element-based utility function defined in (3). An ISE can then be computed with the multi-LP approach [Conitzer and Sandholm 2006], where each LP corresponds to an inducible element E as follows:

(6)  max_x  U_d(x, E)
     s.t.  U_a(x, E) ≥ U_a(x, E′)  for all elements E′,
           x is a feasible defender mixed strategy.
The solution with the highest objective among the multiple LPs is an ISE. Existing algorithms, especially those based on strategy generation techniques, can be adapted to solve (6) with little effort.
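For games without identical targets (every element is a singleton), the per-element LP in (6) reduces to a per-target LP, which can be sketched as follows: for each inducible target t, maximize the defender's utility subject to t being a best response, then keep the best solution across targets. The matrix and payoff conventions follow the earlier sketch; the formulation and names are our own illustration.

```python
import numpy as np
from scipy.optimize import linprog

def best_value_inducing(t, P, A_u, A_c, D_u, D_c):
    """Defender's optimal expected utility subject to t being in the attack
    set. P: joint-schedule matrix (rows = joint schedules, cols = targets)."""
    m, n = P.shape
    A_ub, b_ub = [], []
    for tp in range(n):
        if tp == t:
            continue
        # constraint U_a(x, tp) <= U_a(x, t), written in x
        row = [P[j, tp] * (A_c[tp] - A_u[tp]) - P[j, t] * (A_c[t] - A_u[t])
               for j in range(m)]
        A_ub.append(row)
        b_ub.append(A_u[t] - A_u[tp])
    # maximize D_u[t] + c_t * (D_c[t] - D_u[t])  <=>  minimize its negation
    obj = [-P[j, t] * (D_c[t] - D_u[t]) for j in range(m)]
    res = linprog(obj, A_ub=A_ub, b_ub=b_ub,
                  A_eq=[[1.0] * m], b_eq=[1.0], bounds=[(0, 1)] * m)
    cov_t = float(sum(P[j, t] * res.x[j] for j in range(m)))
    return D_u[t] + cov_t * (D_c[t] - D_u[t])
```

An ISE value is then the maximum of `best_value_inducing(t, ...)` over the inducible targets t, mirroring Steps 2 and 3 of the reduction.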
7 Experimental Evaluation
We evaluate our solution concept and the proposed algorithmic implementation with extensive experiments. All results are obtained on a platform with a 2.60 GHz dual-core CPU and 8.0 GB memory. All linear programs are solved using CPLEX (version 12.4). The random instances are generated as follows: rewards and penalties are integers randomly drawn from fixed positive and negative ranges, respectively. Each schedule is randomly generated to cover a fixed number of targets, and each target is ensured to be covered by at least one schedule. The resources are all homogeneous. Unless otherwise specified, all results are averaged over 100 randomly generated instances.

For the purpose of comparison, we define the over-optimism and suboptimality of the SSE with respect to the utility guarantee.
Definition 11 (Overoptimism and suboptimality).
Let x* be an SSE strategy and t* the corresponding SSE target.

- x* is over-optimistic if U_d(x*, t*) > g(x*);

- x* is suboptimal if g(x*) < max_x g(x).
Inducibility We depict the percentage of inducible targets on instances with 100 targets and 1 resource on the left of Figure 2. The results show that with more schedules and more targets per schedule, the game has more inducible targets. That is because the defender can cover the high-valued targets with enough resources, so that the low-valued targets can be induced to become unique best responses. The important observation here is that the percentage is neither too high nor too low, which indicates that inducibility is not a trivial property.
Scalability We evaluate the scalability of our algorithmic implementation for computing an ISE; the result is shown on the right of Figure 2. The game instances are randomly generated with sizes ranging from 50 to 400 with a step size of 50. We adopt the column generation approach with heuristic bounds for pruning to solve the large-scale LPs [Gan, An, and Vorobeychik 2015]. As a comparison, the scalability of computing an SSE with the same algorithmic framework is also depicted. The result shows that it takes almost the same computational cost to compute an ISE as an SSE, and the implementation can compute an ISE for large-scale instances. Thus, the ISE successfully mitigates the inducibility issue of the SSE without sacrificing the benefit of scalable algorithms for computing the SSE.
Overoptimism and Suboptimality of SSE We examine the overoptimism and suboptimality of SSE. 500 instances are randomly generated, each with 200 targets, 1 resource, and a fixed number of schedules. This setting fits many realistic security domains, such as port protection [Shieh et al.2012], where the Coast Guard has few resources (patrol boats) and limited schedules due to complex geographic and efficiency constraints, and each schedule corresponds to one patrol path visiting several targets. The results are shown in Figure 3, where PeO and PeS denote the percentages of instances with overoptimistic and suboptimal SSE, respectively. Moreover, Figure 3
also shows the comparison between the expected utility of SSE (“SSEu”) and the utility guarantee of SSE (“SSEg”), averaged over instances where SSE is overoptimistic, and similarly the comparison between the average utility guarantees of SSE and ISE (“SSEg” and “ISEg”, respectively) over instances where SSE is suboptimal. The 95% confidence intervals are depicted. The results show that SSE suffers from significant overoptimism and suboptimality, which is highly problematic as explained in the introduction. We also conduct simulations in a large number of different parameter settings with 3 and more resources. Here we list the results on ten settings in the table on the right.
For each of these settings, we randomly generate 50 instances. Significant numbers of cases with overoptimistic and suboptimal SSE are observed for almost every setting. Thus, the aforementioned risk of applying SSE in practice can be a general issue for many security domains and applications, and we argue that ISE should be considered as a “safer” alternative.
8 Conclusion
This paper reveals the significant potential risk of overoptimism of SSE in security games. We propose a new solution concept, ISE, by exploiting the inducible targets. Our theoretical analysis proves the existence of ISE and its optimality in utility guarantee, and our formal comparisons between ISE and SSE emphasize that ISE is a more suitable solution concept in security games. Extensive evaluation shows that SSE is significantly overoptimistic and ISE achieves significantly higher utility guarantee than SSE. We will investigate the inducibility issues in generic games and Bayesian games in future work.
Acknowledgments
This research was supported by MURI Grant W911NF-11-1-0332 and the National Research Foundation, Prime Minister’s Office, Singapore, under its IDM Futures Funding Initiative. Jiarui Gan is supported by the EPSRC International Doctoral Scholars Grant EP/N509711/1. Long Tran-Thanh was supported by the EPSRC-funded project EP/N02026X/.
References
 [An2017] An, B. 2017. Game theoretic analysis of security and sustainability. In IJCAI, 5111–5115.
 [Basilico et al.2017] Basilico, N.; Celli, A.; Nittis, G. D.; and Gatti, N. 2017. Coordinating multiple defensive resources in patrolling games with alarm systems. In AAMAS, 678–686.
 [Conitzer and Sandholm2006] Conitzer, V., and Sandholm, T. 2006. Computing the optimal strategy to commit to. In EC, 82–90.
 [Fang et al.2016] Fang, F.; Nguyen, T. H.; Pickles, R.; Lam, W. Y.; Clements, G. R.; An, B.; Singh, A.; Tambe, M.; and Lemieux, A. 2016. Deploying PAWS: Field optimization of the protection assistant for wildlife security. In IAAI, 3966–3973.
 [Gan, An, and Vorobeychik2015] Gan, J.; An, B.; and Vorobeychik, Y. 2015. Security games with protection externalities. In AAAI, 914–920.
 [Gan et al.2017] Gan, J.; An, B.; Vorobeychik, Y.; and Gauch, B. 2017. Security games on a plane. In AAAI, 530–536.
 [Hohn2013] Hohn, F. E. 2013. Elementary matrix algebra. Courier Corporation.
 [Jain et al.2010] Jain, M.; Kardes, E.; Kiekintveld, C.; Ordóñez, F.; and Tambe, M. 2010. Security games with arbitrary schedules: A branch and price approach. In AAAI, 792–797.
 [Kiekintveld et al.2009] Kiekintveld, C.; Jain, M.; Tsai, J.; Pita, J.; Ordóñez, F.; and Tambe, M. 2009. Computing optimal randomized resource allocations for massive security games. In AAMAS, 689–696.
 [Korzhyk et al.2011] Korzhyk, D.; Yin, Z.; Kiekintveld, C.; Conitzer, V.; and Tambe, M. 2011. Stackelberg vs. Nash in security games: An extended investigation of interchangeability, equivalence, and uniqueness. J. Artif. Intell. Res. 41:297–327.
 [Leitmann1978] Leitmann, G. 1978. On generalized Stackelberg strategies. Journal of Optimization Theory and Applications 26(4):637–643.
 [McCarthy et al.2016] McCarthy, S. M.; Tambe, M.; Kiekintveld, C.; Gore, M. L.; and Killion, A. 2016. Preventing illegal logging: Simultaneous optimization of resource teams and tactics for security. In AAAI, 3880–3886.
 [Nguyen et al.2013] Nguyen, T. H.; Yang, R.; Azaria, A.; Kraus, S.; and Tambe, M. 2013. Analyzing the effectiveness of adversary modeling in security games. In AAAI, 718–724.
 [Okamoto, Hazon, and Sycara2012] Okamoto, S.; Hazon, N.; and Sycara, K. P. 2012. Solving nonzero sum multiagent network flow security games with attack costs. In AAMAS, 879–888.
 [Paruchuri et al.2008] Paruchuri, P.; Pearce, J. P.; Marecki, J.; Tambe, M.; Ordóñez, F.; and Kraus, S. 2008. Playing games for security: An efficient exact algorithm for solving Bayesian Stackelberg games. In AAMAS, 895–902.
 [Pita et al.2008] Pita, J.; Jain, M.; Marecki, J.; Ordóñez, F.; Portway, C.; Tambe, M.; Western, C.; Paruchuri, P.; and Kraus, S. 2008. Deployed ARMOR protection: the application of a game theoretic model for security at the Los Angeles international airport. In AAMAS, 125–132.
 [Pita et al.2009] Pita, J.; Jain, M.; Ordóñez, F.; Tambe, M.; Kraus, S.; and Magori-Cohen, R. 2009. Effective solutions for real-world Stackelberg games: When agents must deal with human uncertainties. In AAMAS, 369–376.
 [Sandholm2015] Sandholm, T. 2015. Solving imperfectinformation games. Science 347(6218):122–123.
 [Shieh et al.2012] Shieh, E.; An, B.; Yang, R.; Tambe, M.; Baldwin, C.; DiRenzo, J.; Maule, B.; and Meyer, G. 2012. PROTECT: A deployed game theoretic system to protect the ports of the United States. In AAMAS, 13–20.

 [Tambe2011] Tambe, M. 2011. Security and Game Theory: Algorithms, Deployed Systems, Lessons Learned. Cambridge University Press.
 [Tsai et al.2009] Tsai, J.; Kiekintveld, C.; Ordonez, F.; Tambe, M.; and Rathi, S. 2009. IRIS: A tool for strategic security allocation in transportation networks. In AAMAS, 37–44.
 [Varakantham, Lau, and Yuan2013] Varakantham, P.; Lau, H. C.; and Yuan, Z. 2013. Scalable randomized patrolling for securing rapid transit networks. In IAAI, 1563–1568.
 [Von Stengel and Zamir2004] Von Stengel, B., and Zamir, S. 2004. Leadership with commitment to mixed strategies. Technical Report LSECDAM200401, CDAM Research Report.
 [von Stengel and Zamir2010] von Stengel, B., and Zamir, S. 2010. Leadership games with convex strategy sets. Games and Economic Behavior 69(2):446–457.
 [Xu et al.2017] Xu, H.; Ford, B. J.; Fang, F.; Dilkina, B.; Plumptre, A. J.; Tambe, M.; Driciru, M.; Wanyama, F.; Rwetsiba, A.; Nsubaga, M.; and Mabonga, J. 2017. Optimal patrol planning for green security games with blackbox attackers. In GameSec, 458–477.
 [Yang et al.2014] Yang, R.; Ford, B. J.; Tambe, M.; and Lemieux, A. 2014. Adaptive resource allocation for wildlife protection against illegal poachers. In AAMAS, 453–460.
 [Yin, An, and Jain2014] Yin, Y.; An, B.; and Jain, M. 2014. Gametheoretic resource allocation for protecting large public events. In AAAI, 826–834.
Appendix A Appendix
a.1 Proof of Lemma 8
W.l.o.g., we assume all payoff parameters are integers encoded in binary. To prove Lemma 8, we need the following preliminary results.
Claim 1. Suppose A is an invertible n x n matrix and b is a vector of size n; all the entries of A and b are integers and have their absolute values bounded by K. Then every component of A^{-1} b can be written as a fraction p/q, with p and q being integers and |p|, |q| <= n! K^n.
Proof.
Let x = A^{-1} b. From Cramer’s rule, we have x_i = det(A_i) / det(A),
where A_i is A with the i-th column replaced by b. Expanding the determinants, we have det(A_i) = sum_sigma (-1)^{N(sigma)} prod_{j=1}^{n} (A_i)_{j, sigma(j)},
where the summation is over all permutations sigma of {1, ..., n}, and N(sigma) denotes the number of inversions of the permutation sigma. It follows that |det(A_i)| <= n! K^n.
The same applies to det(A). ∎
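Claim 1 is easy to check with exact rational arithmetic: every component of A^{-1}b is a ratio of two determinants, each bounded in absolute value by n! K^n when the entries of A and b are integers bounded by K. The sketch below (with an arbitrary 3 x 3 example of ours, entries bounded by K = 7) verifies both the Cramer's-rule solution and the bound.

```python
from fractions import Fraction
from itertools import permutations
from math import factorial

def det_int(M):
    """Exact determinant of an integer matrix via the Leibniz expansion
    (fine for the tiny matrices of this illustration)."""
    n = len(M)
    total = 0
    for perm in permutations(range(n)):
        sign = 1
        for i in range(n):              # count inversions to get the sign
            for j in range(i + 1, n):
                if perm[i] > perm[j]:
                    sign = -sign
        prod = 1
        for i in range(n):
            prod *= M[i][perm[i]]
        total += sign * prod
    return total

def cramer_solve(A, b):
    """Solve Ax = b exactly; component i is det(A_i)/det(A), where A_i is A
    with its i-th column replaced by b."""
    n = len(A)
    d = det_int(A)
    return [Fraction(det_int([row[:i] + [b[r]] + row[i + 1:]
                              for r, row in enumerate(A)]), d)
            for i in range(n)]

A = [[3, -1, 2], [1, 4, 0], [-2, 1, 5]]   # integer entries, |entry| <= K = 7
b = [7, -3, 4]
xs = cramer_solve(A, b)
bound = factorial(3) * 7 ** 3             # n! * K^n with n = 3, K = 7
assert all(abs(x.numerator) <= bound and abs(x.denominator) <= bound
           for x in xs)
```

Since Fraction reduces to lowest terms, the bound on the unreduced determinant ratio carries over to the reduced numerator and denominator.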
Claim 2. Let . Every vertex of , as a vector, can be written in the form with each and being integers bounded by , where is the bound of the payoff parameters.
Proof.
Let , so equivalently, . Consider a vertex of . Since , must be supported on vertices of , say , and can be written as the convex combination , where and . In particular, we pick the support such that no support set of smaller size exists, so that are affinely independent, or, in other words, are linearly independent.
If , then is simply a vertex of and is in as is the convex hull of the pure strategy set . Obviously, is in the fractional form with and bounded by .
It remains to consider the case when . Since , it holds that for all . Particularly, for these inequalities, we pick out those not strictly satisfied by and arrange them in the form (so that ); similarly, those strictly satisfied in the form (so that ). We have , since otherwise for , where is the number of rows of ; so does not have full rank, and there will exist infinitely many satisfying
(7) 
We show that this will lead to a contradiction. Denote and pick one that satisfies Eq. (7). Let and ; and let and . Observe the following:

Given that and , we can have and satisfy the same by choosing an sufficiently close to , so that each one of and defines a convex combination, and both and , as convex combinations of , will be in .

Similarly, when is sufficiently close to we can have and arbitrarily close to , so that and will hold.
Therefore, by choosing an sufficiently close to , we can have and so is ; however, since and , this contradicts that is a vertex of . As a result, , and we can choose linearly independent rows of to form a submatrix ; denote the corresponding rows of by . We have and . Note that as are linearly independent. By Sylvester’s rank inequality [Hohn2013],
so that and is a full-rank matrix. This gives . By Claim 1, all can be written in the form with and bounded by
(note that the entries of and are bounded by ). Moreover, given that and that has components either or , each component of is in the form with and bounded by
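The Sylvester rank inequality invoked above states that rank(AB) >= rank(A) + rank(B) - n for an m x n matrix A and an n x p matrix B. It can be sanity-checked numerically; the function name and the random spot-check are our own illustration.

```python
import numpy as np

def sylvester_gap(A, B):
    """Return (rank(A) + rank(B) - n, rank(AB)) for A of shape (m, n) and
    B of shape (n, p); Sylvester's rank inequality says the first value
    is at most the second."""
    n = A.shape[1]
    assert B.shape[0] == n
    rA = int(np.linalg.matrix_rank(A))
    rB = int(np.linalg.matrix_rank(B))
    rAB = int(np.linalg.matrix_rank(A @ B))
    # Sylvester lower bound, plus the standard upper bound rank(AB) <= min.
    assert rA + rB - n <= rAB <= min(rA, rB)
    return rA + rB - n, rAB

# Spot-check on random integer matrices of assorted shapes.
rng = np.random.default_rng(0)
for _ in range(20):
    m, n, p = rng.integers(1, 6, size=3)
    sylvester_gap(rng.integers(-3, 4, size=(m, n)).astype(float),
                  rng.integers(-3, 4, size=(n, p)).astype(float))
```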
Proof of Lemma 8.
To decide the inducibility of a target is to check if there exists some such that for all . We show that the existence of such a leads to the existence of a such that for all , so that the problem transforms into one of verifying weak satisfaction.
Now suppose that is inducible, and is such that for all . Let . We have and, since is closed, is supported on vertices of , say , in a way such that for some nonnegative values with . Obviously, for all since . Moreover, for each , at least one must strictly satisfy the constraint, as otherwise we would have , contradicting that . By Claim 2, can be written in the form with each and bounded by . It follows that whenever , we have for some ; or, put differently,
where , and . Equivalently,
Now that all the coefficients are integers, the above inequality further implies
so that
Now consider the point . We have as it is in the convex hull of . In addition, for all ,
from which we obtain the following key component: