Practical Scalability for Stackelberg Security Games

11/11/2017 · Arunesh Sinha, et al. · University of Michigan, University of Southern California

Stackelberg Security Games (SSGs) have been widely adopted for modeling adversarial interactions. As the applications of SSGs grow in size, the scalability of equilibrium computation becomes an important research problem. While prior research has made progress on scalability, many real world problems still cannot be solved satisfactorily; these include the deployed Federal Air Marshals (FAMS) application and the threat screening (TSG) problem at airports. Further, these problem domains are inherently limited by NP-hardness results shown in prior literature. We initiate a principled study of approximations in zero-sum SSGs. Our contributions include: (1) a unified model of SSGs called adversarial randomized allocation (ARA) games that allows studying most SSGs in one model, (2) hardness of approximation results for zero-sum ARA, as well as for the sub-problems of allocating federal air marshals (FAMS) and threat screening (TSG) at airports, (3) an approximation framework for zero-sum ARA with provable approximation guarantees, and (4) experiments demonstrating significant scalability of up to 1000x improvement in runtime with an acceptable solution quality loss of about 5%.


1 Introduction

The Stackelberg Security Game (SSG) model (and variants) has been widely adopted in the literature and in practice to model the defender-adversary interaction in various domains [Tambe2011, Jiang et al.2013, Blum, Haghtalab, and Procaccia2014, Guo et al.2016, Blocki et al.2013, Basilico, Gatti, and Amigoni2009, Munoz de Cote et al.2013]. Over time, SSGs have been used to model increasingly large and complex real world problems; hence an important research direction in SSGs is the study of scalable Strong Stackelberg Equilibrium (SSE) computation, both theoretically and empirically. The scalability challenge has led to the development of a number of novel algorithmic techniques that compute the SSE of SSGs (see related work).

However, scalability remains a pertinent challenge across many SSG applications. There are real world problems that even the best known approaches fail to scale up to, such as threat screening games (TSGs) and the Federal Air Marshals (FAMS) domain. The TSG model is used to allocate screening resources to screenees at airports and must solve the problem for every hour (24 times a day). Yet the recent state-of-the-art approach for airport threat screening [Brown et al.2016] (TSG) scales only up to 110 flights per hour, whereas 220 flights can depart per hour from the Atlanta airport [FAA2014]. The FAMS problem is to allocate federal air marshals to US-based flights in order to protect against hijacking attacks. Again, the best optimal solver for FAMS in the literature [Jain et al.2010] solves problems with up to 200 flights (FAMS is an already deployed application), whereas on average 3500 international flights depart from the USA daily [USDOT2016]. Further, these problems are fundamentally limited by the hardness of computing the exact solution.

To overcome the computational hardness, and also to provide practical scalability, we investigate approximation techniques for zero-sum SSGs. Toward that end, our first contribution in this paper is a unified model of SSGs that we name adversarial randomized allocation (ARA) games. ARA captures a large class of SSGs which we call linearizable SSGs (defined later), including TSGs and FAMS. Further, the ARA model provides a unified characterization of the implementability of marginal strategies, which has previously been studied in separate papers in the literature.

Our second contribution is a set of hardness of approximation results for the class of zero-sum ARA problems, and also for sub-classes such as FAMS and TSGs. For ARA, we show that the ARA equilibrium computation problem and the defender best response problem of the given ARA game have the same hardness of approximation, and that in the worst case ARA is not approximable. Further, we show that even the restricted sets of ARA problems given by FAMS and TSGs are hard to approximate to any logarithmic factor.

Our third contribution is a general approximation framework for finding the SSE of zero-sum ARAs. As an application, we demonstrate the use of this framework for FAMS and TSGs. The approximation approach combines techniques from dependent sampling [Tsai et al.2010] with domain-specific heuristics to yield simple-to-implement approximation algorithms. We provide theoretical approximation bounds for both FAMS and TSGs.

Finally, as our fourth contribution, we demonstrate via experiments that we can solve FAMS problems with up to 3500 flights and TSG problems with up to 280 flights, with runtime improvements of up to 1000x. Moreover, the loss for FAMS problems drops below 5% as the number of flights increases, and for TSGs it is less than 1.5% across all cases. Hence, our results provide a practical and simple framework for approximating zero-sum SSGs (more generally, ARAs) with provable approximation guarantees. Moreover, our approach enables solving the real world FAMS and airport screening problems satisfactorily for a US-wide deployment. All full proofs are in the Appendix.

2 Related Work

Two major approaches to scaling up SSGs are incremental strategy generation (ISG) and the use of marginals. ISG uses a master-slave decomposition, with the slave providing a defender or attacker best response [Jain et al.2010]. These approaches are fundamentally limited by the computational complexity of finding an exact solution [Korzhyk, Conitzer, and Parr2010, Xu2016], and thus in some cases ISG has incorporated approximation of defender/attacker best responses [Guo et al.2016, Gan, An, and Vorobeychik2015]. The use of marginals and direct sampling from marginals, while faster, suffers from the issue of non-implementable (invalid) marginal solutions [Kiekintveld et al.2009, Tsai et al.2010]. Fixing the non-implementability again runs into complexity barriers [Brown et al.2016]. Combinations of marginals and ISG [Bošanský et al.2015] and gradient-descent-based approaches [Amin, Singh, and Wellman2016] have also been used. Our study stands in contrast to these approaches as we aim to approximate the SSE rather than compute it exactly, providing a viable alternative to ISG and bypassing the non-implementability of the marginals approach. Another line of work uses regret minimization and endgame solving techniques [Moravčík et al.2017, Brown and Sandholm2017] for approximately solving large-scale sequential zero-sum games. Our game does not have a sequential structure, and the large action space (see the appendix for a calculation for TSGs and FAMS) precludes a standard no-regret learning approach.

Our approximation approach is inspired by randomized rounding (RR) [Raghavan and Thompson1987]. However, distinct from standard RR, our problem has equality constraints. Previous work on RR with equality constraints either addresses only equality constraints [Gandhi et al.2006] or obtains an integral solution from an approximate fractional solution within a polyhedron with integral vertices [Ageev and Sviridenko2004, Chekuri, Vondrák, and Zenklusen2009]. However, our initial fractional solution may not lie in an integral polyhedron, and we have both equality and inequality constraints. Thus, we provide an approach that exploits the disjoint structure of equality constraints in TSGs in order to use previous work on comb sampling [Tsai et al.2010], and then alters the output [Bansal et al.2012] to handle both equality and inequality constraints.

While there exist other hardness of approximation results for other Stackelberg security game like models [Guo et al.2016] that rely on the graph characteristics of those domains, our hardness of approximation results are the first such results for the simple FAMS problem and the recent TSG problem.

3 Model and Notation

We present a general abstract model of adversarial randomized allocation (ARA). ARA captures all linearizable SSGs, defined as those in which the probability of defending a target is linear in the defender's mixed strategy. The ARA game model is a Stackelberg game in which the defender moves first by committing to a randomized allocation and the adversary best responds. We start by presenting the defender's action space. There are defense assets that need to be allocated to objects to be defended. In this model, assets and objects are abstract entities and do not represent actual resources and targets in a security game. We instantiate this abstract model with concrete examples of FAMS and TSGs in the following sub-sections.

Figure 1: Three illustrations: (a) ARA with assets A, B, C and objects O, P, Q with 3 example assignment constraints (shown as dashed lines) with upper bound 1 on the columns. Shown also is an assignment that satisfies these constraints. (b) FAMS problem with 2 flights, 3 schedules and 3 FAMS. S1 and S2 share one flight, and so do S2 and S3. The two assignment constraints (for the two flights) with upper bound 1 are represented by the two dashed lines. Additional constraints are present on each row, shown to the right of the matrix. The attacker chooses a flight to attack, hence the dashed lines also show the index sets of targets. A sample pure strategy fills the column entries. (c) TSG with the two assignment constraints (resource capacity) with upper bound 7 for X-ray and 15 for metal detector (MD) represented by the two dashed lines. Additional equality constraints denoting the number of passengers in each passenger category (a pair of risk level and flight) are present on each column, shown at the bottom of the matrix. An adversary of one risk type can only choose the first column, while an adversary of the other type can choose from the other two columns. Thus, the index sets for targets correspond to columns. A sample pure strategy fills the column entries.

Defender’s randomized allocation of resources: The allocation can be represented as a matrix $M$ with the entry $M_{i,j}$ denoting the allocation of asset $i$ to object $j$, and each $M_{i,j} \ge 0$. There is a set of assignment constraints on the entries of the matrix. Each assignment constraint is characterized by a set $S$ of indexes of the matrix, and the constraint is given by $l_S \le \sum_{(i,j) \in S} M_{i,j} \le u_S$, where $l_S, u_S$ are non-negative integers. We will refer to each assignment constraint as $\langle S, l_S, u_S \rangle$. Also, for the sake of brevity, we denote the vector of all the entries in the matrix as $M$ itself and $\sum_{(i,j) \in S} M_{i,j}$ as $M_S$.

Pure strategies of the defender are integral allocations that respect the assignment constraints, i.e., integral $M$'s such that $l_S \le M_S \le u_S$ for all assignment constraints $\langle S, l_S, u_S \rangle$. See Figure 1 for an illustrative example of the assignment constraints and a valid pure strategy. Let the set of pure strategies be $\mathcal{P}$; we will refer to a single pure strategy as $P$. On the other hand, the space of marginal strategies, denoted $\mathcal{M}$, consists of those $M$'s that satisfy the assignment constraints $l_S \le M_S \le u_S$ for all $S$; note that marginal strategies need not be integral.

Mixed strategies are probability distributions over pure strategies, e.g., probabilities $a_P$ ($\sum_P a_P = 1$, $a_P \ge 0$) over pure strategies $P \in \mathcal{P}$. The expected (marginal) representation of a mixed strategy is $x = \sum_P a_P P$. Thus, the space of mixed strategies is exactly the convex hull of $\mathcal{P}$, denoted as $\mathrm{conv}(\mathcal{P})$. Typically, the space of marginal strategies is larger than $\mathrm{conv}(\mathcal{P})$, i.e., $\mathrm{conv}(\mathcal{P}) \subsetneq \mathcal{M}$; hence not all marginal strategies are implementable as mixed strategies. The conditions under which all marginal strategies are implementable (or not) have an easy interpretation in our model (see the implementability results later in this section).

Adversary’s action: The presence of an adversary sets our model (and SSGs) apart from a randomized allocation problem [Budish et al.2013] and makes ARA a game problem. The attacker's action is to choose a target to attack. In our abstract formulation, a target $t$ is given by a set $T_t$ of indexes of the allocation matrix. In order to capture linearizable SSGs, the probability of successfully defending an attack on target $t$ is $x_t = \sum_{(i,j) \in T_t} w_{i,j} M_{i,j}$, where the $w_{i,j}$'s are given non-negative constants; the constants together with the assignment constraints ensure that $0 \le x_t \le 1$. We assume the total number of targets is polynomial in the size of the allocation matrix. Then, as is standard for SSGs, the defender utility for defender strategy $x$ and attacker choice $t$ is given by the expected value
$$U_d(t, x) = x_t\, U_d^+(t) + (1 - x_t)\, U_d^-(t),$$
where $U_d^+(t)$ (resp. $U_d^-(t)$) is the defender's utility when target $t$ is successfully (resp. unsuccessfully) defended. As we restrict ourselves to zero-sum games, the attacker's utility is the negation of the above.¹

¹We remark that, modeling-wise, the extension to the general-sum case, non-linearity in probabilities, or exponentially many targets is straightforward; here we restrict the model as it suffices for the domains we consider. See the online appendix for the extension.

The problem of Strong Stackelberg equilibrium computation can be stated as: $\max_{a,x} \min_t U_d(t, x)$ subject to $x = \sum_P a_P P$, $\sum_P a_P = 1$, and $a_P \ge 0$, where the last two constraints represent $x \in \mathrm{conv}(\mathcal{P})$. Note that the inputs to the SSE problem are the assignment constraints, and the number of pure strategies can be exponential in this input. Thus, even though the above optimization is an LP, its size can be exponential in the input to the SSE computation problem. However, using the marginal strategies instead of the mixed strategies results in a polynomial-sized LP, which we call $LP_m$:
$$\max_x \min_t U_d(t, x) \quad \text{subject to} \quad l_S \le x_S \le u_S \;\; \forall \langle S, l_S, u_S \rangle, \;\; x \ge 0.$$
But, as stated earlier, $\mathrm{conv}(\mathcal{P}) \subsetneq \mathcal{M}$, and hence the solution to the optimization above may not be implementable as a valid mixed strategy. In our approximation approach we solve the above as the first step, obtaining a marginal solution $x^*$.
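To make $LP_m$ concrete, the following is a minimal sketch using SciPy on a toy instance (one asset, two objects, each column its own target). The data values and names are our own illustrative assumptions, not an example from the paper.

import numpy as np
from scipy.optimize import linprog

# Toy LP_m: variables are the flattened allocation x = (M_{1,1}, M_{1,2})
# plus the game value v. Maximize v subject to
#   v <= x_t * Ud_plus(t) + (1 - x_t) * Ud_minus(t)   for every target t,
#   M_{1,1} + M_{1,2} <= 1                            (the asset is used once),
# where x_t = sum of w * M over the target's index set T_t.
targets = [[0], [1]]                 # T_t: each column is its own target
w = np.ones(2)                       # weights, all 1 (FAMS-like)
ud_plus = np.array([0.0, 0.0])       # defender utility, target defended
ud_minus = np.array([-5.0, -8.0])    # defender utility, target undefended

c = np.array([0.0, 0.0, -1.0])       # minimize -v  <=>  maximize v
A_ub, b_ub = [], []
for t, T in enumerate(targets):      # v - (Ud+ - Ud-) * x_t <= Ud-
    row = np.zeros(3)
    for idx in T:
        row[idx] = -(ud_plus[t] - ud_minus[t]) * w[idx]
    row[2] = 1.0
    A_ub.append(row); b_ub.append(ud_minus[t])
A_ub.append([1.0, 1.0, 0.0]); b_ub.append(1.0)   # assignment constraint

res = linprog(c, A_ub=np.array(A_ub), b_ub=b_ub,
              bounds=[(0, 1), (0, 1), (None, None)])
print("marginal x*:", res.x[:2], "value:", res.x[2])

For this toy instance the LP returns $x^* = (5/13, 8/13)$, the split that equalizes the defender's expected utility across the two targets.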

Bayesian Extension²: We also consider the following simple extension with multiple adversary types, where each adversary type $\lambda$ attacks a set of targets $T^\lambda$ such that $T^\lambda \cap T^{\lambda'} = \emptyset$ for all $\lambda \ne \lambda'$. The adversary is of type $\lambda$ with probability $p_\lambda$ ($\sum_\lambda p_\lambda = 1$). Then, the exact SSE optimization can be written as: $\max_{a,x} \sum_\lambda p_\lambda \min_{t \in T^\lambda} U_d(t, x)$ subject to $x = \sum_P a_P P$, $\sum_P a_P = 1$, and $a_P \ge 0$. A corresponding $LP_m$ can be defined in exactly the same way as for the original ARA. Next, we show how the FAMS and TSG domains are instances of this abstract ARA model and of Bayesian ARA, respectively.

²Typically, player types denote different utilities, but as Harsanyi [Harsanyi1967] originally formulated, types capture any incomplete information including, as in our case, the lack of information about the action space of the adversary. The game is still zero-sum.

Implementability: Viewing the defender's action space as a randomized allocation provides an easy way to characterize the non-implementability of marginal strategies across a wide range of SSGs, in contrast to prior work that has identified non-implementability only for specific cases [Korzhyk, Conitzer, and Parr2010, Letchford and Conitzer2013, Brown et al.2016]. The details of this interpretation can be found in the Appendix.

3.1 FAMS

We model zero-sum FAMS in the ARA model. The FAMS problem is to allocate federal air marshals to flights to and from the US in order to prevent hijacking attacks. The allocation is constrained by the number of air marshals available and the fact that each air marshal must be scheduled on round trips that take them back to their home airport. Thus, the main technical complication arises from the presence of schedules. A schedule is a subset of flights that has to be defended together; e.g., flights f1 and f2 should be defended together as they form a round trip for the air marshal. Air marshals are allocated to schedules, no flight can have more than one air marshal, and some schedules cannot be defended by some air marshals. The adversary attacks a flight.

Then, we capture the FAMS domain in the above model by mapping schedules in FAMS to objects (on columns) and air marshals in FAMS to assets (on rows). See Figure 1 for an illustrative example. The assignment constraints include the constraint $\sum_j M_{i,j} \le 1$ for each resource $i$, which states that every resource can be assigned at most once. If an air marshal $i$ cannot be assigned to schedule $j$, then we add the constraint $M_{i,j} \le 0$. A target in the abstract model maps to a flight $t$ in FAMS, and the set $T_t$ contains the indexes for all schedules that include this flight: $T_t = \{(i,j) \mid \text{flight } t \text{ is in schedule } j\}$. The constraint that a flight cannot have more than one air marshal is captured by adding the target allocation constraint $M_{T_t} \le 1$. The probability of defending a target (flight) is $x_t = \sum_{(i,j) \in T_t} M_{i,j}$; hence the weights $w_{i,j}$ in ARA are all ones.
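As an illustration of this mapping, the snippet below builds the ARA constraint list for a tiny hypothetical FAMS instance; the data structures and names are ours, purely for illustration.

# Tiny FAMS -> ARA mapping sketch: 2 marshals, 3 schedules. The index
# (i, j) stands for marshal i assigned to schedule j.
flights_of_sched = {0: {"f1", "f2"}, 1: {"f2", "f3"}, 2: {"f3"}}
n_marshals, n_scheds = 2, 3

constraints = []  # each entry: (index set S, lower bound l_S, upper bound u_S)
# Each marshal is assigned at most once: sum_j M[i][j] <= 1.
for i in range(n_marshals):
    constraints.append(({(i, j) for j in range(n_scheds)}, 0, 1))
# Each flight t is covered by at most one marshal: M_{T_t} <= 1. The same
# index set T_t also defines target t for the attacker.
for f in ("f1", "f2", "f3"):
    T_f = {(i, j) for i in range(n_marshals) for j in range(n_scheds)
           if f in flights_of_sched[j]}
    constraints.append((T_f, 0, 1))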

3.2 TSG

We model TSGs using the Bayesian formulation of ARA. The TSG problem is how to allocate screening resources to screenees in order to screen optimally, which we elaborate in the context of airline passenger screening. In TSGs, different screening resources, such as X-rays and metal detectors, act in teams to inspect an airline passenger. The possible teams are given. Passengers are further grouped into passenger categories, with a given number $N_c$ of passengers in each category $c$. The allocation is of resource teams to passenger categories. There are resource capacity constraints on each resource's usage (not on teams but on each resource). Further, all passengers need to be screened. Each resource team has an effectiveness of catching the adversary. Observe that, unlike SSGs, the allocation in TSGs is not just binary but can be any positive integer within the constraints. A passenger category is a tuple of risk level $r$ and flight $f$; the adversary's action is to choose the flight, but he is probabilistically assigned his risk level.

Then, we capture the TSG domain in the above abstract model by mapping passenger categories in TSGs to objects (on columns) and resource teams in TSGs to assets (on rows). See Figure 1 for an illustrative example. The capacity constraint for each resource $r$ is captured by specifying the constraint $\langle S_r, 0, u_r \rangle$, where $S_r$ contains all indexes of teams that are formed using the given resource: $S_r = \{(i,c) \mid \text{team } i \text{ is formed using resource } r\}$, with $u_r$ equal to the resource capacity bound for resource $r$. For every passenger category $c$, the constraint $\sum_i M_{i,c} = N_c$ enforces that all passengers are screened. A target in a TSG is simply a passenger category $c$; thus, the set $T_c$ consists of all indexes $(i,c)$ in the given passenger category's column. The probability of detecting an adversary in category $c$ is $x_c = \sum_i E_i M_{i,c} / N_c$, where $E_i \le 1$ is the effectiveness of team $i$; hence the weights are $w_{i,c} = E_i / N_c$, and since $\sum_i M_{i,c} = N_c$, it is easy to check that $x_c \le 1$ for any feasible $M$. The adversary type is the risk level $r$, and each type of adversary can choose a flight $f$, thus choosing a target which is the passenger category $(r, f)$. The probability of the adversary having a particular risk level is given.
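For concreteness, under illustrative numbers (a category of $N_c = 10$ passengers screened by two teams with effectiveness $E_1 = 0.9$ and $E_2 = 0.5$, allocated $M_{1,c} = 4$ and $M_{2,c} = 6$), the detection probability works out as
$$x_c = \frac{E_1 M_{1,c} + E_2 M_{2,c}}{N_c} = \frac{0.9 \cdot 4 + 0.5 \cdot 6}{10} = 0.66.$$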

4 Computational Complexity

In this section, we explore the hardness of approximation for ARAs, FAMS, and TSGs. In prior work on the computational complexity of SSGs, researchers [Xu2016] have focused on the hardness of exact computation, providing general results relating the hardness of the defender best response (DBR) problem (defined below) to the hardness of exact SSE computation. In contrast, we relate the hardness of approximation of the DBR problem to the hardness of approximation of ARAs. We also prove that special cases of ARA such as FAMS and TSGs are hard to approximate.

First, we formally state the equilibrium computation problem in adversarial randomized allocation: given the assets, objects, and assignment constraints of an adversarial randomized allocation problem as input, output the SSE utility and a set of pure strategies and probabilities that represents the SSE mixed strategy. We restrict the size of this support set to be polynomial in the input size. This is natural, since a polynomial time algorithm cannot produce an exponential-size output. Also, as discussed in prior literature [Xu2016], the size of the support set of any mixed strategy need only be one more than the dimension of a pure strategy, which is polynomial in our case.

Next, as defined in prior literature [Xu2016], we state the defender best response (DBR) problem, which will help in understanding the results. The DBR problem can be interpreted as the defender's best response to a given mixed strategy of the adversary. The DBR problem also shows up naturally as the slave problem in column generation based approaches to SSGs. While it is easy to show the hardness of approximation of given DBR problems, the question of how it relates to the hardness of approximation of SSE computation is open.

Definition 1.

The DBR problem is $\max_{P \in \mathcal{P}} d \cdot P$, where $d$ is a vector of positive constants. DBR is a combinatorial problem that takes the assignment constraints as input, and not the set of pure strategies $\mathcal{P}$.

Next, we state the standard definition of approximation.

Definition 2.

An algorithm for a maximization problem is $r$-approximate if it provides a feasible solution with value at least $OPT/r$, where $OPT$ is the exact maximum value.

Note that lower $r$ means a better approximation. Depending on the best $r$ possible, optimization problems are classified into various approximation complexity classes with increasing hardness of approximation in the following order: PTAS, APX, log-APX, and poly-APX. We extensively use the well-known approximation-preserving AP reduction between optimization problems for our results. We do not delve into the formal definitions of the complexity classes or AP reductions here due to lack of space, as these concepts are standard [Ausiello et al.1999]. Our first result shows that ARA's approximation complexity is the same as that of the DBR problem and that, in the worst case, ARA cannot be approximated.

Theorem 1.

The following hardness of approximation results hold for ARA problems:

  • ARA problems cannot be approximated to any bounded factor in polynomial time, unless P = NP.

  • If the DBR problem for a given ARA problem lies in some given approximation class (PTAS, APX, log-APX, poly-APX), then so does the ARA problem.

Proof Sketch.

The first result works by constructing an ARA from an NP-hard unweighted ($d = \mathbf{1}$) DBR problem, which we call DBR-U, such that the feasibility of the constructed ARA solves the DBR-U problem, thereby ruling out any approximation. Such NP-hard unweighted DBR problems exist (for example, for FAMS). The second part of the proof works by constructing an ARA problem with one target and showing that the solution yields an approximate value for the relaxed DBR problem over $\mathrm{conv}(\mathcal{P})$, which we call DBR-C. Moreover, this solution is an expectation over integral points (pure strategies); thus, at least one integral point in the support set output by ARA also provides an approximation for the corresponding combinatorial DBR. ∎

As the above complexity result is a worst-case analysis, one may wonder whether it extends to sub-classes of ARA problems. We show that strong versions of inapproximability also hold for FAMS and TSGs.

Theorem 2.

TSGs cannot be approximated to any logarithmic factor in polynomial time, unless P = NP.

Theorem 3.

FAMS problems cannot be approximated to any logarithmic factor in polynomial time, unless P = NP.

Both proofs use an AP reduction from max independent set. The proof for TSGs follows from the observation that a special case of the TSG problem is the independent set problem itself; thus, any approximation for TSGs provides an equivalent approximation for the max independent set problem. The proof for FAMS is much more involved. It constructs a polynomial number of FAMS instances (with varying numbers of resources) from any given independent set problem. The FAMS instances are such that pure strategies correspond to independent sets, and the multiple instances are constructed such that the optimal exact solution for one of these instances provides a solution for the max independent set problem. Then, it is shown that any approximation for the FAMS problem yields at least one pure strategy (over all the FAMS instances) that provides a comparably good approximation for the max independent set problem, thereby completing the AP reduction. See the supplementary material for details.

5 Approximation approach

Our approach to approximation first solves $LP_m$, which is quite fast in practice (see experiments) and provides an upper bound on the true value of the game. Then, we sample from the marginal solution, but unlike previous work [Tsai et al.2010], we alter the sampled value to ensure that the final pure strategy output is valid. We describe an abstract sampling and alteration approach for ARA in this part, which we instantiate for FAMS and TSGs in the subsequent sub-sections. Recall that a constraint is given by an index set $S$ with bounds $l_S, u_S$, and the constraint is an equality if $l_S = u_S$. For our abstract approach we restrict our attention to ARAs with partitioned equality assignment constraints, which means that the index sets of the equality constraints partition the index set of the allocation matrix. Further, for inequality constraints we assume $l_S = 0$. Call these problems PE0-ARA; this class still includes FAMS and TSGs. For FAMS, which does not have equality constraints, we use a dummy schedule $j_0$ per resource to get the partitioned equalities $M_{i,j_0} + \sum_j M_{i,j} = 1$; $M_{i,j_0} = 1$ denotes that resource $i$ is unallocated. Our abstract approximation approach for PE0-ARA is presented in Algorithm 1.

1 forall S ∈ EqualityConstraints do
2     CombSample(x*, S)
3 DecreaseAllocation(x*)  // fix violated inequality constraints by lowering entries
4 IncreaseAllocation(x*)  // fix violated equality constraints by raising entries
Algorithm 1 Abstract Approximation

The algorithm takes as input the marginal solution $x^*$ from $LP_m$ and produces a pure strategy. The first for loop (lines 1-2) performs comb sampling for each equality constraint $S$ to produce integral values for the variables involved in $S$. Comb sampling was introduced in an earlier paper [Tsai et al.2010]; it provides the guarantee that each $x^*_{i,j}$ is rounded up or down for all $(i,j) \in S$, that the sample has expected value $x^*_{i,j}$ for all $(i,j) \in S$, and that the equality constraint $S$ is still satisfied after the sampling. See Figure 2 for an example. Briefly, comb sampling works by creating $k$ buckets of length one each, where $k = \sum_{(i,j) \in S} f_{i,j}$ and $f_{i,j}$ denotes the fractional part of $x^*_{i,j}$ (the equality constraint makes $k$ an integer). Each of the length-$f_{i,j}$ fractions is packed into the buckets (in any order, and some fractions may have to be split across two buckets); then a number $u \in [0,1]$ is sampled uniformly at random, and for each bucket a mark is put at length $u$. Finally, the entries whose fractions lie on the markers of the buckets are chosen to be rounded up, and all other entries are rounded down.
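Below is a minimal Python sketch of comb sampling for a single equality constraint, written from the description above; the function name and interface are ours, not from [Tsai et al.2010].

import math
import random

def comb_sample(values):
    """Comb sampling for one equality constraint: a sketch.

    `values` are the marginal entries x*_{i,j} in the constraint's index
    set; the equality constraint makes their sum an integer. Returns
    integers, each a rounding (up or down) of its entry, matching each
    entry in expectation and preserving the sum exactly.
    """
    floors = [math.floor(x) for x in values]
    fracs = [x - f for x, f in zip(values, floors)]
    k = round(sum(fracs))            # integral: the sum of `values` is an
    result = list(floors)            # integer by the equality constraint
    if k == 0:
        return result
    # Pack the fractional parts contiguously onto the line [0, k) (k unit
    # buckets) and drop one comb mark at the same random offset u in every
    # bucket; entries whose packed mass covers a mark are rounded up.
    u = random.random()
    marks = [b + u for b in range(k)]      # mark positions on [0, k)
    pos, m = 0.0, 0
    for idx, f in enumerate(fracs):
        lo, hi = pos, pos + f
        if m < k and lo <= marks[m] < hi:  # each interval (length < 1)
            result[idx] += 1               # contains at most one mark
            m += 1
        pos = hi
    return result

For example, comb_sample([0.5, 1.5, 1.0]) returns [1, 1, 1] when the mark falls in the first half of the bucket and [0, 2, 1] otherwise; both outputs sum to 3, and the first entry is rounded up with probability exactly 0.5, matching the example in Figure 2.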

Observe that in expectation the output of comb sampling matches the marginal solution, thus providing the same expected utility as the marginal solution. Recall that this expected utility is an upper bound on the optimal utility. However, the samples from comb sampling may not be valid pure strategies. Thus, in case the output of comb sampling is not already valid, the two abstract methods on lines 3 and 4 modify the sampled strategy by first decreasing the integral values to satisfy the violated inequalities and then increasing the integral values to satisfy the equalities. Such modification of the sampled strategy into a valid strategy is guided by the principle that the change in defender utility between the sampled and the resulting valid strategy should be small, which ensures that the change in expected utility from the marginal solution due to the modification is small. As the expected utility of the marginal solution is an upper bound on the optimal expected utility, this utility-guided modification keeps the output expected utility close to the optimal utility.

These two methods on lines 3 and 4 are instantiated with domain-specific heuristics that implement this principle of utility-guided modification. Below, we show the instantiation for the TSG and FAMS domains. A sample execution of the above approach for TSGs is shown in Figure 2.

5.1 TSG

Figure 2: (Left to right) Sample execution for a TSG. This uses the same TSG example as in Figure 1. The marginal solution is the leftmost matrix, which after comb sampling on each column becomes integral; e.g., 0.5 in the left column is rounded down to 0 and 1.5 is rounded up to 2. Note that the comb sampling output satisfies all equalities but exceeds the resource capacity 7 for the X-ray. Next, allocation values are lowered (shown as a red circle) to satisfy the X-ray capacity, but the equality constraint on the third column becomes violated. Next, allocation values are increased (again a red circle) to fix the equality, and that produces a valid pure strategy.

The heuristics for TSGs are guided by three observations: (1) more effective resources are more constrained in their usage, (2) changing the allocation for a passenger category with a higher number of passengers changes the probability of detecting the adversary by a smaller amount than changing the allocation for a category with fewer passengers, and (3) higher-risk passenger categories typically have fewer passengers.

Recall that for TSGs the inequalities are resource capacity constraints. Thus, fixing violated inequalities requires decreasing allocations, which decreases utility; we wish to keep the utility decrease small, as that ensures the expected utility does not move much further away from the upper-bound marginal expected utility. Our approach for such decreases has the following steps: (a) prioritize fixing the inequalities of the most violated resources first, and (b) for each such inequality, attempt to lower the allocation for passenger categories with higher numbers of passengers. In light of the observations above, this approach aims to keep the change in expected utility small. Specifically, observation 1 makes it likely that the constraints for more effective resources are fixed in step (a). Observation 3 suggests that the changes in step (b) happen for lower-risk passengers. Thus, step (a) aims to keep the allocation of effective resources to high-risk passengers unchanged; this keeps the utility change small, as changing the allocation for high-risk passengers can change the utility by a large amount. Next, by observation 2, step (b) keeps the change in the probability of detecting the adversary low, so that the expected utility change is small. For example, in Figure 2 the inequality fix reduces the allocation for the third passenger category (column), which also has the highest number of passengers (15).

Next, the equalities in TSGs are the constraints $\sum_i M_{i,c} = N_c$ for every passenger category. Fixing equalities requires increasing allocations, which increases utility; we wish to make this utility increase large, as it brings the expected utility closer to the upper-bound marginal expected utility. Here we aim to do so by (a) prioritizing the increase of allocation for categories with fewer people and (b) increasing the allocation of those resources that have the least slack in their resource capacity constraints (low slack could mean higher effectiveness). By observation 1, low slack means the resource could be more effective, and by observation 3, fewer people means higher-risk passengers. This ensures that higher-risk passengers are screened more using more effective resources, thereby raising the utility maximally. For example, in Figure 2 the equality for the third column is fixed by using the only available resource, the MD. A sketch of both alteration steps is given below.
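The sketch below implements the decrease and increase steps under these heuristics; the data structures are illustrative assumptions, not the authors' implementation.

def alter_tsg_sample(alloc, uses, cap, n_pass):
    """Alteration step for a comb-sampled TSG allocation: a sketch of the
    Section 5.1 heuristics with illustrative data structures.
      alloc[(i, c)]: sampled integer count of team i screening category c
      uses[i]:       set of resources consumed by team i
      cap[r]:        capacity bound of resource r
      n_pass[c]:     number of passengers N_c in category c
    Returns the altered allocation, or None if no valid pure strategy was
    reached (a rare case, handled by resampling).
    """
    def load(r):  # current total usage of resource r
        return sum(v for (i, c), v in alloc.items() if r in uses[i])

    # Decrease step: fix the most violated resource capacity first, lowering
    # allocation in the category with the most passengers.
    while True:
        over = {r: load(r) - u for r, u in cap.items() if load(r) > u}
        if not over:
            break
        r_star = max(over, key=over.get)           # most violated resource
        cands = [(i, c) for (i, c), v in alloc.items()
                 if v > 0 and r_star in uses[i]]
        i, c = max(cands, key=lambda ic: n_pass[ic[1]])
        alloc[(i, c)] -= 1

    # Increase step: restore the per-category equalities, handling categories
    # with fewer passengers first and preferring teams with the least slack.
    for c in sorted(n_pass, key=n_pass.get):
        while sum(v for (i, cc), v in alloc.items() if cc == c) < n_pass[c]:
            feasible = [i for (i, cc) in alloc if cc == c and
                        all(load(r) < cap[r] for r in uses[i])]
            if not feasible:
                return None                        # resample and retry
            i = min(feasible,
                    key=lambda i: min(cap[r] - load(r) for r in uses[i]))
            alloc[(i, c)] += 1
    return alloc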

Recall that TSGs differ from FAMS in that the allocation in TSGs can be non-binary. This offers an advantage for TSGs with respect to approximation, as small integral changes in allocation do not change the overall allocation by much (0.5 to 1 is a 50% change in FAMS, but 4.5 to 5 is less than a 10% change in TSGs). Thus, we assume here that the changes due to Algorithm 1 do not reduce the probability of detecting an adversary in any passenger category (from the marginal solution) by more than a factor $\beta$ (i.e., the detection probability stays at least $1/\beta$ times its marginal value), where $\beta \ge 1$ is a constant. This restriction is realistic, as it is very unlikely that any passenger category has few passengers, and we only aim to change the allocation for passenger categories with a higher number of passengers. Hence we prove the following.

Theorem 4.

Assume that Algorithm 1 always successfully outputs a pure strategy and that the change in allocation from the marginal strategy does not reduce the probability of detecting an adversary by more than a factor $\beta$. Then, the approximation approach above with this heuristic provides a $\beta$-approximation for TSGs.

As a remark, the above result does not contradict the inapproximability of TSGs, since it holds only for a restricted set of TSG problems. Also, the approximation for TSGs may sometimes fail to yield a valid pure strategy, as satisfying the equalities may become impossible after certain sequences of allocation decreases. In our experiments we observe that failure to obtain a pure strategy after Algorithm 1 is rare and is easily handled by repeating the approximation approach with a new sample (the approximation runs in milliseconds).

5.2 FAMS

Recall that for FAMS the inequalities are the target allocation constraints $M_{T_t} \le 1$, and fixing violations of these involves decreasing allocations. Our heuristic is simple: variables are set to zero (i.e., decreased) starting from those schedules that contain the largest number of targets with violated target allocation constraints and that do not contain any target whose target allocation constraint is satisfied. We are guaranteed to find a decrease in allocation that satisfies the constraint for a target $t$ without changing the allocation for targets that already satisfy their constraints when the target with the violated constraint (1) belongs to a schedule that exclusively contains that target (the allocation can be decreased without affecting any other constraint) or (2) belongs to only one schedule (all other targets in this schedule will have violated constraints). This approach ensures that we only work to fix the violated constraints and cause a minimal change in utility by leaving the satisfied constraints undisturbed. However, if in fixing a violated target allocation constraint for $t$ it becomes necessary to reduce the allocation for another already-satisfied target constraint, then we sample uniformly from the schedules that $t$ belongs to in order to choose the allocation to reduce. We do this until all inequality constraints are satisfied. A sketch of this decrease step appears below.
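Below is a sketch of this decrease step; again, the data structures and names are illustrative assumptions, not the authors' implementation.

import random

def fix_fams_sample(alloc, scheds_of_flight, flights_of_sched):
    """Sketch of the Section 5.2 decrease step for a comb-sampled FAMS
    allocation.
      alloc[(i, j)]:        1 if marshal i was sampled onto schedule j, else 0
      scheds_of_flight[t]:  schedules containing flight t
      flights_of_sched[j]:  flights covered by schedule j
    """
    def cov(t):  # number of marshals currently covering flight t
        return sum(v for (i, j), v in alloc.items()
                   if j in scheds_of_flight[t])

    while True:
        bad = {t for t in scheds_of_flight if cov(t) > 1}
        if not bad:
            return alloc  # all target allocation constraints satisfied
        # Candidates to zero out: sampled assignments on schedules that
        # touch at least one violated flight.
        cands = [(i, j) for (i, j), v in alloc.items()
                 if v == 1 and any(t in bad for t in flights_of_sched[j])]

        def score(ij):
            ts = flights_of_sched[ij[1]]
            if any(t not in bad and cov(t) == 1 for t in ts):
                return -1  # zeroing this would disturb a satisfied flight
            return sum(1 for t in ts if t in bad)

        best = max(cands, key=score)
        if score(best) < 0:
            # Forced to disturb a satisfied flight: sample uniformly at
            # random among the candidates, as described in Section 5.2.
            best = random.choice(cands)
        alloc[best] = 0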

Then, we do nothing to fix the equality constraints: since we have only decreased allocations, if any (dummy-augmented) equality is not satisfied we can always set the dummy entry $M_{i,j_0}$ to one. Also, observe that since we only ever decrease allocations, we always find a pure strategy for any sample from Algorithm 1 (unlike TSGs). We prove the following:

Theorem 5.

Let $n_t$ be the number of targets that share a schedule with target $t$, and let $n = \max_t n_t$. The approximation approach above with this heuristic provides a $2^{n+1}$-approximation for FAMS.

Next, we show experimental results that reveal the average-case loss of our approximation approach (as opposed to the worst-case approximation guarantees in this section).

6 Experimental Results

Figure 3: ASPEN and RAND comparison: (a) runtime (log-scale time); (b) solution quality.
Figure 4: MGA and RAND comparison: (a) runtime (log-scale time); (b) solution quality.

Our set of experiments provides a comprehensive analysis of our randomized approach, which we name RAND. We compare RAND to the best known solver for TSGs, called MGA; MGA [Brown et al.2016] has previously been shown to outperform column generation based approaches by a large margin. For the FAMS problem, the best known solver in the literature for the general-sum case is ASPEN [Jain et al.2010], a column generation based branch-and-price approach. Through private communication with the company (Avata Intelligence) managing the FAMS software, we know the FAMS problem is solved as a zero-sum problem for scalability and takes hours to complete. For our zero-sum case we implemented a plain column generation (CG) solver for FAMS, since branch and price is overkill for the zero-sum case. All results in this section are averaged over 30 randomly generated game instances. All game instances fix the utility of a successfully defended target and randomly select the utility of an undefended target from a range of integers. The utility for RAND is computed by generating 1000 pure strategies and taking their average as an estimate of the defender mixed strategy. All experiments were run on a system with a Xeon 2.6 GHz processor and 4GB RAM.

For FAMS, we vary the number of flights, keeping the number of resources fixed at 10, the number of schedules fixed at 1000, and 5 targets per schedule. The runtime in log scale is shown in Figure 3. CG hits the 60-minute cut-off at 700 flights, whereas the runtime for RAND is much lower, at only a few seconds. Next, we report the solution quality of RAND by comparing with the solution from CG. It can be seen that the solution gets better with increasing flights, going from a 19% loss at 300 flights to a 5% loss at 900 flights. Thus, the numbers show that we obtain large speed-ups of up to a factor of 1000x and are still able to extract 95% of the utility at higher numbers of flights, where the CG approach starts taking a huge amount of time.

For TSGs, we used six passenger risk levels, eight screening resource types, and 20 screening team types. We vary the number of flights, and we also randomly sample the team structure (how teams are formed from resources) for each of the 30 runs. The results in Figure 4 show the runtime (in log scale) and defender utility values varying with the number of flights (on the x-axis). As can be seen, MGA only scales up to 110 flights before hitting the cut-off of 60 minutes, while RAND takes only 10 seconds for 110 flights. Also, the solution quality loss for RAND has a maximum average loss of 1.49%. Thus, we obtain at least 360x speed-ups with very minor loss.

Finally, we test the scalability of RAND for FAMS and TSGs, shown in Figure 5. As can be seen, the runtime for RAND is low even with the highest number of flights we tested: 280 for TSGs and 3500 for FAMS. The maximum runtime for FAMS was under 5 seconds; for TSGs, the maximum runtime was under 25 seconds. Overall, these results demonstrate the scalability of our approach with minor solution quality loss on average. Importantly, the scalability results show that RAND can solve the FAMS and TSG problems at the sizes required for a US-wide deployment, and they also provide evidence that further scalability with increasing numbers of flights is possible. This also provides evidence that approximation is a viable approach to solving zero-sum SSGs at a large scale.

Figure 5: Scalability of RAND: (a) runtime for FAMS; (b) runtime for TSG.

7 Conclusion

We studied approximations in SSGs both theoretically and practically. In fact, while approximation means settling for less than the whole loaf (the optimal solution), we practically obtain up to 97% of the loaf. We obtain 1000x speedups with losses within 5%. Thus, approximations open the door to solving very large scale zero-sum SSGs, and we believe that we have laid fertile ground for future research in approximating large scale zero-sum security games. Our approach also provides an avenue to solve the real world FAMS and airport screening problems, increasing the scope of applicability of this already deployed application (in the case of FAMS) and of applications under test (airport screening).

References

  • [Ageev and Sviridenko2004] Ageev, A. A., and Sviridenko, M. I. 2004. Pipage rounding: A new method of constructing algorithms with proven performance guarantee. Journal of Combinatorial Optimization 8(3):307–328.
  • [Amin, Singh, and Wellman2016] Amin, K.; Singh, S.; and Wellman, M. 2016. Gradient methods for Stackelberg security games. In Conference on Uncertainty in Artificial Intelligence.
  • [Ausiello et al.1999] Ausiello, G.; Protasi, M.; Marchetti-Spaccamela, A.; Gambosi, G.; Crescenzi, P.; and Kann, V. 1999. Complexity and Approximation: Combinatorial Optimization Problems and Their Approximability Properties.
  • [Bansal et al.2012] Bansal, N.; Korula, N.; Nagarajan, V.; and Srinivasan, A. 2012. Solving packing integer programs via randomized rounding with alterations. Theory of Computing 8(1):533–565.
  • [Basilico, Gatti, and Amigoni2009] Basilico, N.; Gatti, N.; and Amigoni, F. 2009. Leader-follower strategies for robotic patrolling in environments with arbitrary topologies. In AAMAS, 57–64.
  • [Blocki et al.2013] Blocki, J.; Christin, N.; Datta, A.; Procaccia, A.; and Sinha, A. 2013. Audit games. In IJCAI.
  • [Blum, Haghtalab, and Procaccia2014] Blum, A.; Haghtalab, N.; and Procaccia, A. D. 2014. Learning optimal commitment to overcome insecurity. In NIPS, 1826–1834.
  • [Bošanský et al.2015] Bošanský, B.; Jiang, A. X.; Tambe, M.; and Kiekintveld, C. 2015. Combining compact representation and incremental generation in large games with sequential strategies. In AAAI.
  • [Brown and Sandholm2017] Brown, N., and Sandholm, T. 2017. Safe and nested subgame solving for imperfect-information games. arXiv preprint arXiv:1705.02955.
  • [Brown et al.2016] Brown, M.; Sinha, A.; Schlenker, A.; and Tambe, M. 2016. One size does not fit all: A game-theoretic approach for dynamically and effectively screening for threats. In AAAI.
  • [Budish et al.2013] Budish, E.; Che, Y.-K.; Kojima, F.; and Milgrom, P. 2013. Designing random allocation mechanisms: Theory and applications. The American Economic Review 103(2):585–623.
  • [Chekuri, Vondrák, and Zenklusen2009] Chekuri, C.; Vondrák, J.; and Zenklusen, R. 2009. Dependent randomized rounding for matroid polytopes and applications. arXiv preprint arXiv:0909.4348.
  • [FAA2014] FAA. 2014. Airport capacity profiles. https://www.faa.gov/airports/planning_capacity/profiles/media/Airport-Capacity-Profiles-2014.pdf. Accessed: 2017-02-15.
  • [Gan, An, and Vorobeychik2015] Gan, J.; An, B.; and Vorobeychik, Y. 2015. Security games with protection externalities. In AAAI.
  • [Gandhi et al.2006] Gandhi, R.; Khuller, S.; Parthasarathy, S.; and Srinivasan, A. 2006. Dependent rounding and its applications to approximation algorithms. Journal of the ACM (JACM) 53(3):324–360.
  • [Guo et al.2016] Guo, Q.; An, B.; Vorobeychik, Y.; Tran-Thanh, L.; Gan, J.; and Miao, C. 2016. Coalitional security games. In AAMAS.
  • [Harsanyi1967] Harsanyi, J. 1967. Games with incomplete information played by "Bayesian" players, I–III: Part I. The basic model. Management Science 14(3):159–182.
  • [Jain et al.2010] Jain, M.; Kardeş, E.; Kiekintveld, C.; Tambe, M.; and Ordóñez, F. 2010. Security games with arbitrary schedules: a branch and price approach. In AAAI, 792–797.
  • [Jiang et al.2013] Jiang, A. X.; Procaccia, A. D.; Qian, Y.; Shah, N.; and Tambe, M. 2013. Defender (mis) coordination in security games. AAAI.
  • [Kiekintveld et al.2009] Kiekintveld, C.; Jain, M.; Tsai, J.; Pita, J.; Ordóñez, F.; and Tambe, M. 2009. Computing optimal randomized resource allocations for massive security games. In AAMAS, 689–696.
  • [Korzhyk, Conitzer, and Parr2010] Korzhyk, D.; Conitzer, V.; and Parr, R. 2010. Complexity of computing optimal Stackelberg strategies in security resource allocation games. In AAAI.
  • [Letchford and Conitzer2013] Letchford, J., and Conitzer, V. 2013. Solving security games on graphs via marginal probabilities. In AAAI.
  • [Moravčík et al.2017] Moravčík, M.; Schmid, M.; Burch, N.; Lisý, V.; Morrill, D.; Bard, N.; Davis, T.; Waugh, K.; Johanson, M.; and Bowling, M. 2017. Deepstack: Expert-level artificial intelligence in heads-up no-limit poker. Science.
  • [Munoz de Cote et al.2013] Munoz de Cote, E.; Stranders, R.; Basilico, N.; Gatti, N.; and Jennings, N. 2013. Introducing alarms in adversarial patrolling games: Extended abstract. In AAMAS.
  • [Raghavan and Thompson1987] Raghavan, P., and Thompson, C. D. 1987. Randomized rounding: a technique for provably good algorithms and algorithmic proofs. Combinatorica 7(4):365–374.
  • [Tambe2011] Tambe, M. 2011. Security and Game Theory: Algorithms, Deployed Systems, Lessons Learned. Cambridge University Press.
  • [Tsai et al.2010] Tsai, J.; Yin, Z.; Kwak, J.-y.; Kempe, D.; Kiekintveld, C.; and Tambe, M. 2010. Urban security: Game-theoretic resource allocation in networked physical domains. In AAAI.
  • [USDOT2016] USDOT. 2016. Bureau of transportation statistics. https://www.transtats.bts.gov/Data_Elements.aspx?Data=2. Accessed: 2017-02-15.
  • [Xu2016] Xu, H. 2016. The mysteries of security games: Equilibrium computation becomes combinatorial algorithm design. In Proceedings of the 17th ACM Conference on Economics and Computation (ACM-EC).

Appendix A Appendix

A.1 ARA complex model

It is easy to change the probability of defense to $x_t = f\big(\sum_{(i,j) \in T_t} w_{i,j} M_{i,j}\big)$ so that the probability is non-linear (as given by $f$) in the entries. Also, increasing the number of targets to be exponentially large is clearly allowed, as targets can be arbitrary subsets of the indexes. A general-sum variant can also be stated by specifying the adversary utilities in terms of $x_t$. Of course, as in SSGs, the general-sum case cannot be written as an LP but can be written as multiple optimizations or as a MILP.

A.2 Implementability

Viewing SSGs as ARAs provides an easy way of determining implementability using results from randomized allocation [Budish et al.2013]. First, we define bi-hierarchical assignment constraints as those that can be partitioned into two sets $H_1, H_2$ such that for any two constraints $S, S'$ in the same partition ($H_1$ or $H_2$) it is the case that either $S \subseteq S'$, or $S' \subseteq S$, or $S \cap S' = \emptyset$. Then, we obtain the following sufficiency result.

Proposition 1.

All marginal strategies are implementable, or more formally $\mathrm{conv}(\mathcal{P}) = \mathcal{M}$, if the assignment constraints are bi-hierarchical.

Proof.

The proof is just an application of Theorem 1 in [Budish et al.2013]. ∎

In contrast to prior work that has identified non-implementability for specific cases [Korzhyk, Conitzer, and Parr2010, Letchford and Conitzer2013, Brown et al.2016], this provides an easy way to characterize non-implementability across a wide range of SSGs. As Figure 1 reveals, both FAMS and TSGs have non-implementable marginals due to overlapping constraints. Next, as defined in [Budish et al.2013], canonical assignment constraints are those that impose constraints on all rows and columns of the matrix (with possibly additional constraints), using which we obtain the following necessity result.

Proposition 2.

Given canonical assignment constraints, if all marginal strategies are implementable then the assignment constraints are bi-hierarchical.

Proof.

The proof is just an application of Theorem 2 in [Budish et al.2013]. ∎

A.3 Proofs

Proof of Theorem 1:

Proof.

For the first part, given an NP-hard instance of DBR-U (the unweighted DBR problem with $d = \mathbf{1}$, in its decision version), we construct an ARA instance such that the feasibility problem for that ARA instance solves the DBR-U decision problem. Thus, as feasibility is NP-hard, there exists no approximation. First, note that since the ARA problem is very general, there exist DBR-U problems that are NP-hard; for example, the DBR-U problem for FAMS has been shown to be NP-hard [Xu2016]. Given the hard DBR-U problem, form an ARA problem by adding the assignment constraint $\sum_{i,j} M_{i,j} \ge k$. Also, let there be only one target $t$ with $T_t$ containing all indexes, so that the objective becomes $\max x_t$ instead of $\max \min_t$, and all constraints in the optimization are just the marginal-space constraints plus the added constraint. Now, any solution of this optimization gives a feasible point as a convex combination $\sum_P a_P P$ of integral points, and any integral point in the support answers the question of whether there exists a solution of the DBR-U optimization problem with value at least $k$. Next, a binary search on $k$ can answer the decision version of DBR-U. Thus, deciding the existence of any feasible solution for the constructed ARA is NP-hard, and hence no approximation is possible.

For the second part, we present an AP (approximation-preserving) reduction whose problem mapping does not depend on the approximation ratio; such a reduction preserves membership in PTAS, APX, log-APX, and poly-APX [Ausiello et al.1999]. Given any DBR problem $\max_{P \in \mathcal{P}} d \cdot P$, we construct an ARA problem with one target $t$ such that $T_t$ contains all indexes. Choose the weights $w_{i,j}$ proportional to $d_{i,j}$, scaled down so that $x_t \le 1$ over the marginal polytope. Observe that such a scaling factor is computable efficiently; thus, the ARA is well-defined. Then, since there is just one target, the ARA optimization is the same (up to scaling) as the relaxed problem DBR-C: $\max_{x \in \mathrm{conv}(\mathcal{P})} d \cdot x$. Suppose we can solve this problem with an $r$-approximation, with the solution mixed strategy being $x = \sum_k a_k P_k$ for some pure strategies $P_k$. Since the objective is linear and the extreme points of $\mathrm{conv}(\mathcal{P})$ are the pure strategies, the optimal values of DBR and DBR-C coincide; call this value $OPT$. The solution $x$ provides value $d \cdot x \ge OPT/r$. Further, as the objective is linear in $x$ and $x = \sum_k a_k P_k$, there must exist a $k$ such that $d \cdot P_k \ge d \cdot x \ge OPT/r$. Thus, $P_k$ provides an $r$-approximation for DBR. Since the number of pure strategies in the support of $x$ is polynomial, $P_k$ can be found in polynomial time by a linear search. ∎

Proof of Theorem 2:

Proof.

Given an independent set problem on a graph with $n$ vertices, we construct a TSG with $n+1$ team types, where each of the first $n$ team types corresponds to a vertex. Team $n+1$ is special in the sense that it does not correspond to any vertex; it is made up of just one resource with a very large resource capacity. Construct just one passenger category with $N$ passengers. Since there is just one passenger category (and target), we write $M_i$ for the matrix entries instead of $M_{i,c}$. Choose the utilities so that the defender's objective equals the detection probability, and set the efficiencies to 1 for all teams except team $n+1$, whose efficiency is 0. Then, the objective of the integer LP is $e \cdot M / N$, where $e$ is a vector whose first $n$ components are 1 and whose last component is 0.

Next, introduce a resource for every edge $(u,v)$, shared by the teams of $u$ and $v$, with resource capacity 1. This provides the inequality $M_u + M_v \le 1$ for every edge. Also, we have $M_i \ge 0$ (vertices with no edges can be placed in any independent set, so assume every vertex has an edge; the edge constraints then force $M_u \in \{0, 1\}$ for every vertex team). Inspection of every passenger provides the constraint $\sum_i M_i = N$. Treating $M_{n+1}$ as a slack variable, we can see that the constraint $\sum_i M_i = N$ and the capacity constraint of team $n+1$'s resource are redundant. For the left-over constraints $M_u + M_v \le 1$, we can easily check that any valid integral assignment (pure strategy), restricted to the vertex teams, is an independent set. Moreover, the objective tries to maximize the size of this independent set. The optimal value of this optimization over $\mathrm{conv}(\mathcal{P})$ is attained at an extreme point, which is integral and corresponds to a maximum independent set of value $OPT$. Thus, suppose we have an $r$-approximate solution $x = \sum_k a_k P_k$ to the SSE problem, with value at least $OPT/r$. As the objective is linear in $x$, there must exist a $k$ such that $e \cdot P_k \ge e \cdot x \ge OPT/r$. Thus, $P_k$ provides an $r$-approximation for maximum independent set. Since the number of pure strategies in the support of $x$ is polynomial, $P_k$ can be found in polynomial time by a linear search. ∎

Proof of Theorem 3:

Proof.

We provide an AP reduction from independent set. As max independent set is poly-APX-complete, this rules out any logarithmic factor approximation.

Given an independent set maximization problem on a graph with $n$ vertices and $m$ edges, construct the following family of FAMS problems, one for each number of resources $k \in \{1, \ldots, n\}$. All resources can be assigned to any schedule. Construct schedules $s_1, \ldots, s_n$ corresponding to the vertices. Construct a target $t_{u,v}$ corresponding to every edge $(u,v)$ such that $t_{u,v} \in s_u$ and $t_{u,v} \in s_v$. All $t_{u,v}$'s have the same value whether defended or undefended, and that value is 0; thus, these targets do not need to be covered, but they impose the constraint that $s_u$ and $s_v$ cannot be simultaneously defended. Thus, it is clear that any allocation of resources to $s_1, \ldots, s_n$ corresponds to an independent set. Next, consider additional valuable targets and expand the vertex schedules so that each $s_u$ contains two of them. Further, add more singleton schedules, each containing one valuable target. All valuable targets provide a positive value when defended and a lower value otherwise; thus, the expected utility of defending a valuable target is increasing in its coverage.

First, assume we have a polynomial-time algorithm that approximately computes the SSE with approximation factor $r$ ($r \ge 1$). We run this algorithm on every instance in the family, which is again polynomial time overall, and the overall output size is also polynomial. For the given max independent set problem, let the optimal solution be $I^*$ with $K = |I^*|$. Observe that for the instance whose number of resources matches $K$, all valuable targets can be covered by covering the schedules with targets in $I^*$ (each covering two valuable targets) and using the remaining resources to cover the remaining valuable targets through the singleton schedules; this determines the optimal SSE utility for that instance. Also note that for every instance there is always a trivial allocation of the resources to the singleton schedules such that the coverage of every valuable target is equal (this is deducible as the allocation to singleton schedules is unconstrained and can be implemented in polynomial time by the Birkhoff-von Neumann result as applied in [Korzhyk, Conitzer, and Parr2010]); this trivial allocation provides a baseline utility.

Also, the following result will be useful: given approximation factor $r$ for the instance whose number of resources matches $K$, one of the pure strategies output for this case will have at least $\sigma$ schedules among $s_1, \ldots, s_n$ covered, where $\sigma$ approaches $K$ as $r$ approaches 1. Before proving this, note that by definition $\sigma \le K$.

To prove the result in the last paragraph, consider the contrapositive: suppose all output pure strategies cover fewer than $\sigma$ schedules among $s_1, \ldots, s_n$. Then in each pure strategy some valuable targets are not covered (since 2 valuable targets are covered by each covered vertex schedule and the rest of the resources can cover only 1 valuable target each). Then the coverage of the least-covered valuable target in the mixed strategy formed using such pure strategies is bounded away from 1, and the utility for this least-covered target is correspondingly bounded. The overall utility can be no higher than the utility for any single target; hence the output utility is bounded in terms of $\sigma$, whereas the optimal utility corresponds to covering all valuable targets using the $K$ schedules of $I^*$. Thus, by the definition of the approximation ratio $r$, we obtain an inequality relating $\sigma$ and $K$; rearranging it contradicts the definition of $\sigma$, hence a contradiction.

The above result also shows that at least one output pure strategy yields an independent set of size at least $\sigma$. As $K$ is the maximum size of an independent set, this pure strategy achieves an approximation ratio for the max independent set problem comparable to $r$. Thus, given an $r$-approximation for the SSE, we obtain a comparable approximation for max independent set, and hence we have an AP reduction. ∎

Proof of Theorem 5:

Proof.

Consider the event that a target $t$ has an infeasible assignment after the comb sampling (i.e., more than one marshal is assigned to $t$); call this event $E_t$. Let $C_i$ be the event that resource $i$ covers target $t$. Then $P(E_t) = P(\text{at least two of the } C_i \text{'s occur})$. From the guarantees of comb sampling we know that $P(C_i) = x_{i,t}$, where $x_{i,t}$ is the marginal allocation of resource $i$ to schedules containing $t$, and that $\sum_i x_{i,t} = x_t \le 1$. Also, by comb sampling, the events $C_i$ and $C_{i'}$ are independent for any $i \ne i'$, since sampling is done separately for each resource's (row) equality constraint. Thus, a union bound over pairs gives
$$P(E_t) \le \sum_{i < i'} x_{i,t}\, x_{i',t} = \frac{1}{2}\Big(\big(\textstyle\sum_i x_{i,t}\big)^2 - \textstyle\sum_i x_{i,t}^2\Big).$$

Let $s = \sum_i x_{i,t}$. Considering the fact that $s \le 1$, we get
$$P(E_t) \le \frac{1}{2}\Big(s^2 - \sum_i x_{i,t}^2\Big) \le \frac{1}{2}\Big(1 - \sum_i x_{i,t}^2\Big),$$
where the last inequality is due to the fact that $s \le 1$.

Next, we know from the standard sum-of-squares inequality that $\sum_i x_{i,t}^2 \ge s^2/m$ for $m$ resources. Thus, we get $P(E_t) \le \frac{1}{2} s^2 (1 - 1/m)$. The RHS is maximized when $s = 1$; thus $P(E_t) \le \frac{1}{2}(1 - 1/m) < \frac{1}{2}$. Also, then $P(\neg E_t) > \frac{1}{2}$.

Now consider the coverage of target $t$: $x_t = \sum_i x_{i,t}$. According to our algorithm, the allocation for target $t$ continues to remain if its allocation is already feasible after comb sampling and no fix for a neighboring target reduces it (and we always obtain a pure strategy). Target $t$ shares schedules with $n_t$ other targets, and thus in the worst case its allocation may be reduced once for each of these $n_t$ targets, each with probability at most $1/2$. We do a worst-case analysis and assume that no resource is allocated to a target whenever the sampled allocation is infeasible for that target. Thus, let $Z_t$ denote the indicator random variable that target $t$ is covered in the output. Combining the events above, $P(Z_t = 1) \ge x_t \cdot P(\neg E_t) \cdot (1/2)^{n_t} \ge x_t (1/2)^{n_t+1}$. As the utilities are linear in the coverage, the utility for $t$ is at least $(1/2)^{n_t+1}$ times $U_d(t, x^*)$, the utility under the marginal $x^*$. Thus, if $t^*$ is the choice of the adversary under the marginal, we know that $U_d(t^*, x^*)$ is the lowest utility for the defender over all targets $t$ and is an upper bound on the optimal SSE utility. Hence, we can conclude that the utility of the approximation is at least $OPT/2^{n+1}$, i.e., a $2^{n+1}$-approximation. ∎

Proof of Theorem 4:

Proof.

The main assumption in the proof is that the steps after comb sampling change the probability of detecting an adversary in any passenger category by at most a factor $\beta$. Also, by the assumption of the theorem, Algorithm 1 never fails, so the change in utility for any passenger category is at most a factor of $\beta$. By reasoning similar to that for FAMS, we conclude that this provides a $\beta$-approximation. ∎