A Game Model of Search and Pursuit

11/23/2018
by Steve Alpern, et al.
Warwick Business School

Shmuel Gal and Jerome Casas have recently introduced a game theoretic model that combines search and pursuit by a predator for a prey animal. The prey (hider) can hide in a finite number of locations. The predator (searcher) can inspect any k of these locations. If the prey is not in any of these, the prey wins. If the prey is found at an inspected location, a pursuit begins which is successful for the predator with a known capture probability which depends on the location. We modify the problem so that each location takes a certain time to inspect and the predator has total inspection time k. We also consider a repeated game model where the capture probabilities only become known to the players over time, as each successful escape from a location lowers its perceived capture probability.


1 Introduction

Traditionally, search games and pursuit games have been studied by different people, using different techniques. Pursuit games are usually of perfect information and are solved in pure strategies using techniques involving differential equations. Search games, on the other hand, typically require mixed strategies. The first attempt to combine these games was the elegant paper of Gal and Casas (2014). In their model, a hider (a prey animal in their biological setting) begins the game by choosing among a finite set of locations in which to hide. The searcher (a predator) then searches (or inspects) k of these locations, where k is a parameter representing the time or energy available to the searcher. If the hiding location is not among those inspected, the hider wins the game. If the searcher does inspect the location containing the hider, then a pursuit game ensues. Each location has its own capture probability, known to both players, which represents how difficult the pursuit game is for the searcher. If the searcher-predator successfully pursues and captures the hider-prey, the searcher is said to win the game. This is a simple but useful model that encompasses both the search and the pursuit portions of the predator-prey interaction.

This paper has two parts. In the first part, we relax the assumption of Gal and Casas that all locations are equally easy to search. We give each location its own search time and we give the searcher a total search time. Thus he can inspect any set of locations whose individual search times sum to no more than the searcher's total search time, a measure of his resources or energy (or perhaps the length of daylight hours, if he is a day predator). We consider two scenarios. In the first, there are an odd number of hiding locations which, as their indices increase, take longer to search and have lower probabilities of successful pursuit. In the second, we consider that there are many hiding locations, but they come in only two types, identifiable to the players. Locations within a type have the same search time and the same capture probability. There may be any number of locations of each type.

The second part of the paper relaxes the assumption that the players know the capture probability of every location precisely. Rather, we assume that a distribution of capture probabilities is known. The players can learn these probabilities more precisely by repeated play of the game. We analyze a simple model with only two locations and two periods, where one location may be searched in each period. While simple, this model shows how the knowledge that the capture probabilities will be updated in the second period (lowered at a location where there was a successful escape) affects the optimal play of the game.

2 Literature Review

An important contribution of the paper of Gal and Casas discussed in the Introduction is the determination of a threshold with respect to the number k of locations which the searcher can inspect. If k is sufficiently high, for example if he can inspect all locations, then the hider adopts the pure strategy of choosing the location for which the probability of successful pursuit is the smallest. On the other hand, if k is below this threshold, the hider mixes his location so that the probability of being at a location multiplied by its capture probability (the desirability of inspecting such a location) is constant over all locations.

The paper of Gal and Casas (2014) requires that the searcher knows his resource level (total search time). In a related but not identical model, Lin and Singham (2016) show that sometimes the optimal searcher strategy does not depend on this resource level.

Alpern, Gal, and Casas (2015) extended the Gal-Casas model by allowing repeated play in the case where the hider is found but the pursuit at his hiding location is not successful for the searcher (pursuer). They found that the hider should choose his location more randomly when the pursuing searcher is more persistent.

More recently, Lidbetter and Hellerstein (2017) introduced an algorithm similar to fictitious play, in which the searcher recursively updates his optimal strategy after learning the opponent's response. They apply this technique to games similar to those we consider here. Their algorithm is likely to prove a powerful technique for solving otherwise intractable search games.

More generally, search games are discussed in Alpern and Gal (2006) and search and pursuit problems related to robotics are categorized and discussed in Chung, Hollinger and Isler (2011).

3 Single Period Game with General Search Times

Consider a game where the searcher wishes to find the hider at one of n locations and then attempt to pursue and capture it, within a limited amount of resources denoted by T. Each location i has two associated parameters: a search time t_i required to search the location, and a capture probability p_i, the probability that if the hider is found at location i the searcher's pursuit will be successful. Both t and p are known to the searcher and the hider.

The game, where t = (t_1, ..., t_n) and p = (p_1, ..., p_n) represent the time and capture vectors, is played as follows. The hider picks a location i in which to hide. The searcher can then inspect search locations in any order, as long as their total search time does not exceed T. The searcher wins (payoff = 1) if he finds and then captures the hider; otherwise the hider wins (payoff = 0). We can say that this game is a constant-sum game where the value is the probability that the predator wins with given total search time T.
A mixed strategy for the hider is a distribution vector h = (h_1, ..., h_n), where h_i >= 0 is the probability of hiding at location i and h_1 + ... + h_n = 1.

A pure strategy for the searcher is a set S of locations which can be searched in total time T. His pure strategy set consists of all sets

    S, a subset of {1, ..., n}, with sum_{i in S} t_i <= T.

The statement above simply states that a searcher can inspect any set of locations for which the total search time does not exceed his maximum search time T. A mixed search strategy is a probabilistic choice of these sets.

The payoff from the perspective of the maximizing searcher is given by

    P(h, S) = sum_{i in S} h_i p_i.

As part of the analysis of the game, we may wish to consider the best response problem faced by a searcher who knows the distribution h of the hider. The "benefit" of searching each location i is given by h_i p_i, the probability that he finds and then captures the hider (prey) there. Thus when h is known, the problem for the searcher essentially is to choose the set S of locations which maximizes P(h, S). This is a classic Knapsack problem from the Operations Research literature. The objects to be put into the knapsack are the locations 1, ..., n. Each has a 'weight' t_i and a benefit h_i p_i. He wants to fill the knapsack with as much total benefit as possible, subject to a total weight restriction of T.
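For the small games considered in this paper, the knapsack best response can be checked by brute-force enumeration. The sketch below is ours (the function name and argument order are assumptions, and a dynamic program would replace the enumeration for larger n):

```python
from itertools import combinations

def best_response(t, p, h, T):
    """Best response of the searcher to a known hiding distribution h.

    Brute-force 0/1 knapsack: choose the set S of locations (0-indexed)
    with total search time at most T maximizing the 'benefit'
    sum of h[i] * p[i], the probability of finding and then capturing
    the hider at location i.
    """
    n = len(t)
    best_set, best_val = set(), 0.0
    for r in range(n + 1):
        for S in combinations(range(n), r):
            if sum(t[i] for i in S) <= T:
                val = sum(h[i] * p[i] for i in S)
                if val > best_val:
                    best_set, best_val = set(S), val
    return best_set, best_val
```

For instance, with t = (1, 2, 3, 4, 5), a uniform hiding distribution and T = 5, the best response is the cheapest pair of high-benefit locations that still fits in the time budget.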

The knapsack approach illustrates a simple domination argument: the searcher should never leave enough room (time) in his knapsack to put in another object. We write this simple observation as follows.

Lemma 1

Fix the total search time T. The set S is weakly dominated by the set S ∪ {j} if there is a location j not in S with t_j + sum_{i in S} t_i <= T.

Proof. If the hider's location i is in both S and S ∪ {j}, or is in neither, then P(h, S) = P(h, S ∪ {j}). If i = j, then P(h, S ∪ {j}) = P(h, S) + h_j p_j >= P(h, S).
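By Lemma 1, only "maximal" feasible sets (those to which no further location can be added within the time budget) need be retained when building a game matrix. A small helper, ours rather than the paper's, that enumerates them:

```python
from itertools import combinations

def maximal_sets(t, T):
    """All feasible search sets (total time <= T, locations 0-indexed)
    to which no further location can be added without exceeding T.
    By Lemma 1 every other feasible set is weakly dominated."""
    n = len(t)
    feasible = [set(S) for r in range(n + 1)
                for S in combinations(range(n), r)
                if sum(t[i] for i in S) <= T]
    out = []
    for S in feasible:
        time = sum(t[i] for i in S)
        if all(time + t[j] > T for j in range(n) if j not in S):
            out.append(S)
    return out
```

For search times (1, 2, 3, 4, 5) and T = 5 this yields exactly five undominated sets, matching the five rows of the reduced matrix used later in Section 3.3.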

3.1 An example

To illustrate the general game we consider an example with four locations, whose respective capture probabilities are given by p = (.1, .2, .15, .4). The search times and the searcher's total search time T are such that he can search any single location, or exactly one pair of locations, the pair {2, 3}. The singleton sets {2} and {3} are both dominated by {2, 3}. Thus the associated reduced matrix game is simply

               hider's location
  search set    1     2     3     4
  {1}          .1     0     0     0
  {4}           0     0     0    .4
  {2,3}         0    .2    .15    0
Solving the matrix game shows that the prey hides in the four locations with probabilities (12/23, 0, 8/23, 3/23), while the searcher inspects {1} with probability 12/23, {4} with probability 3/23 and {2,3} with probability 8/23. The value of the game, that is, the probability that the predator-searcher finds and captures the prey-hider, is 6/115, approximately 0.052. Our approach in this paper is not to solve games in this numerical fashion, but rather to give general solutions for certain classes of games, as Gal and Casas did for the games with constant search times.
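Small games like this one can be solved numerically as linear programs. A sketch of a generic solver for the maximizing (searcher) side, assuming scipy is available:

```python
import numpy as np
from scipy.optimize import linprog

def solve_zero_sum(A):
    """Solve a zero-sum matrix game for the row (maximizing) player.

    A[i, j] is the payoff to the row player when row i and column j
    are played.  Returns (value, row mixed strategy)."""
    A = np.asarray(A, dtype=float)
    m, n = A.shape
    # Variables: x_1..x_m (row mixed strategy) and v (game value).
    # Maximize v, i.e. minimize -v.
    c = np.zeros(m + 1)
    c[-1] = -1.0
    # For every column j:  sum_i x_i A[i, j] >= v   <=>   -A^T x + v <= 0.
    A_ub = np.hstack([-A.T, np.ones((n, 1))])
    b_ub = np.zeros(n)
    # Probabilities sum to one.
    A_eq = np.zeros((1, m + 1))
    A_eq[0, :m] = 1.0
    b_eq = np.array([1.0])
    bounds = [(0, None)] * m + [(None, None)]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    assert res.success
    return res.x[-1], res.x[:m]
```

Applied to the reduced 3 x 4 matrix above, it should return a value of 6/115, approximately 0.052.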

3.2 The game with constant search times

Choosing all the search times the same, say t_i = 1 for all i, we may restrict the total search time to an integer k, the number of locations the searcher can inspect. This is the original game introduced and solved by Gal and Casas (2014). Since the t_i are the same, we may order the locations by their capture probabilities, either increasing or decreasing. Here we use the increasing order of the original paper, p_1 <= p_2 <= ... <= p_n. Clearly if k is small the hider will make sure that all the locations are equally good for the searcher (h_i p_i constant), and if k = n the hider knows he will be found, so he will choose the location with the smallest capture probability (here location 1). The nice result says that there is a threshold value for k which divides the optimal hiding strategies into two extreme types. (The search strategies use concepts not relevant to this paper.)

Proposition 2 (Gal-Casas)

Consider the game where t_i = 1 for all i and the locations are ordered so that p_1 <= p_2 <= ... <= p_n. Define the threshold k* = p_1 (1/p_1 + ... + 1/p_n). The value of this game is given by V = min( p_1, k / (1/p_1 + ... + 1/p_n) ). If k < k*, then the unique optimal hiding distribution is h_i = (1/p_i) / (1/p_1 + ... + 1/p_n). If k > k*, then the unique optimal hiding strategy is to hide at location 1.
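Proposition 2 can be transcribed directly into code. In the sketch below, the threshold k* = p_1 (1/p_1 + ... + 1/p_n) and the value formula are our reading of the proposition from the discussion above, so treat the function as a reconstruction rather than the authors' exact statement:

```python
def gal_casas(p, k):
    """Value and optimal hiding distribution for the constant-search-time
    game of Proposition 2.  p is the list of capture probabilities and
    k the number of locations the searcher may inspect."""
    p = sorted(p)                       # order so p[0] <= ... <= p[-1]
    s = sum(1.0 / pi for pi in p)       # sum of reciprocals
    k_star = p[0] * s                   # threshold on k
    if k < k_star:
        value = k / s
        h = [(1.0 / pi) / s for pi in p]    # h_i proportional to 1/p_i
    else:
        value = p[0]
        h = [1.0] + [0.0] * (len(p) - 1)    # hide where escape is easiest
    return value, h
```

For example, with p = (.2, .5) and k = 1 the hider mixes 5/7, 2/7 and the value is 1/7, while with k = 2 the hider is certainly found and settles for the value p_1 = .2.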

3.3 The game with t_i = i, p_i decreasing, n odd.

We now consider games with t_i = i and p_i decreasing. In some sense locations with higher indices are better for the hider, in that they take up more search time and have a lower capture probability. Indeed if the searcher has enough resource to search all the locations (T >= 1 + 2 + ... + n) then of course the hider should simply hide at location n and keep the value down to p_n. In fact this solution obtains for much smaller values of T, as can be seen below in Table 1, where for n = 5 a total search time of T = 10 (rather than 15) suffices. Note that if T < t_n = n the hider can win simply by hiding at location n, which takes time n to search. We give a complete solution for the smallest nontrivial amount of resources (total search time) of T = n.

Proposition 3

Consider the game where t_i = i, the p_i are decreasing in i, n is odd, and T = n. Write m = (n+1)/2 and V = 1 / (1/p_m + 1/p_{m+1} + ... + 1/p_n). Then

  1. An optimal strategy for the searcher is to choose the set S_j = {j, n-j} with probability V/p_{n-j} for j = 1, ..., (n-1)/2, and the set S_0 = {n} with probability V/p_n.

  2. An optimal strategy for the hider is to choose location i with probability V/p_i for i = m, ..., n, and not to choose locations 1, ..., m-1 at all.

  3. The value of the game is V.

Proof. Suppose the searcher adopts the strategy suggested above. Any location that the hider chooses belongs to exactly one of the sets of the form S_j = {j, n-j} for j = 1, ..., (n-1)/2, or to the set S_0 = {n}. Since p_j >= p_{n-j} for j <= (n-1)/2 (the p_i are decreasing), within any set S_j the hider is better off choosing location n-j. In this case he is found with probability V/p_{n-j} and hence he is captured with probability at least (V/p_{n-j}) p_{n-j} = V.

Suppose the hider adopts the hiding distribution suggested above. Note that no pure search strategy can inspect more than one of the locations m, ..., n, since any two of these take total search time at least m + (m+1) = n + 2 > T. Suppose that location i >= m is inspected. Then the probability that the searcher finds and captures the hider is given by h_i p_i = V. It follows that V is the value of the game.
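The proof can be sanity-checked numerically. The script below instantiates Proposition 3 with the parameters of the example used later in this section (t_i = i, n = 5, p = (.5, .4, .3, .2, .1), T = n) and verifies, in exact rational arithmetic, that the proposed hider distribution makes every recommended search set win with probability exactly V:

```python
from fractions import Fraction as F

# Parameters of the running example: t_i = i, n = 5, T = n.
p = {1: F(1, 2), 2: F(2, 5), 3: F(3, 10), 4: F(1, 5), 5: F(1, 10)}
n = 5
m = (n + 1) // 2

# Value V = 1 / (sum of 1/p_i over the upper half of the locations).
V = 1 / sum(1 / p[i] for i in range(m, n + 1))

# Hider: h_i = V / p_i on the upper half, 0 elsewhere.
h = {i: (V / p[i] if i >= m else F(0)) for i in range(1, n + 1)}
assert sum(h.values()) == 1

# Searcher: the sets {j, n-j} for j = 1..(n-1)/2, plus {n}.
sets = [{j, n - j} for j in range(1, m)] + [{n}]

# Against the hider's distribution, every recommended set wins
# with probability exactly V.
for S in sets:
    assert sum(h[i] * p[i] for i in S) == V

print(V)  # 3/55
```

The printed value 3/55 matches the value of the reduced matrix game computed below for T = 5.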

Corollary 4

Assuming the p_i are strictly decreasing in i, the hider strategy given above is uniquely optimal, but the searcher strategy is not.

Proof. Let h be a hiding distribution different from the one above. Group the locations into the pairs {j, n-j} for j = 1, ..., (n-1)/2, together with {n}. Since the group quotas V/p_{n-j} (and V/p_n) sum to exactly 1, for some j we must have h_j + h_{n-j} >= V/p_{n-j}, with either strict inequality or with h_j > 0 (otherwise h would coincide with the distribution above, whose total probability is exactly 1). Against such a distribution, suppose that the searcher inspects the two locations j and n-j. Then the probability that the searcher wins is given by

    h_j p_j + h_{n-j} p_{n-j} >= (h_j + h_{n-j}) p_{n-j},

with strict inequality when h_j > 0, because p_j > p_{n-j}. But by our previous estimate h_j + h_{n-j} >= V/p_{n-j},

this means the searcher wins with probability strictly greater than V, and hence h is not optimal.

Next, consider the searcher strategy which gives the same probability as above to all the sets S_j for j >= 2 and to S_0 = {n}, but gives some of the probability assigned to S_1 = {1, n-1} to the singleton set {n-1} (which is feasible, since t_{n-1} = n-1 <= T). Let's say the probability of {n-1} is a small positive number ε. The total probability of inspecting location n-1 has not changed. The probability of inspecting location 1 has gone down by ε. So the only way the new searcher strategy could fail to be optimal is potentially when the hider chooses location 1. In this case the probability that the searcher wins is given by (V/p_{n-1} - ε) p_1.

Comparing this to the value of the game, we consider the difference

    (V/p_{n-1} - ε) p_1 - V = V (p_1 - p_{n-1}) / p_{n-1} - ε p_1.

Since p_1 > p_{n-1}, the first term on the right is positive, so the difference will be positive for sufficiently small positive ε.

We will now consider an example to show how the solution changes as T goes up from the solved case of T = n to the level where the hider just goes to location n. To determine that level we use the following idea.

Proposition 5

The game with locations 1, ..., n and total search time T has value p_n if and only if the value of the game with the last location removed and resources reduced by t_n (that is, locations 1, ..., n-1 and total search time T - t_n) is at least p_n.

Proof. Suppose the value is p_n. Every search set used with positive probability must include location n, otherwise simply hiding there implies a value below p_n. So the remaining part of every search set has total search time at most T - t_n. With this amount of resources, the searcher must find and capture the hider in the first n-1 locations with probability at least p_n, which is what is stated in the Proposition. Otherwise, the searcher will either have to leave location n unsearched with positive probability (which gives a value below p_n), or search the remaining locations without enough resources to ensure p_n.

Example: Consider the example where n = 5 and t_i = i, with p = (.5, .4, .3, .2, .1). Here T = 10. The game on locations 1 to 4 with total search time 5 has value at least .1 because of the equiprobable search strategy over {1,4} and {2,3}. Here each location in the new game is inspected with the same probability 1/2, and consequently the best the hider can do is to hide in the best location 4, and then the searcher wins with probability p_4 / 2 = .1. It follows from Proposition 5 that the original game has its maximal possible value of .1 (the hider can always cap the value at p_5 = .1 by hiding at location 5).

To illustrate Proposition 3 and Corollary 4, we consider the above game where n = 5, t_i = i, p = (.5, .4, .3, .2, .1) and T = n = 5. The game matrix, excluding dominated search strategies, is given by

               hider's location
  search set    1     2     3     4     5
  {5}           0     0     0     0    .1
  {1,4}        .5     0     0    .2     0
  {2,3}         0    .4    .3     0     0
  {1,3}        .5     0    .3     0     0
  {1,2}        .5    .4     0     0     0

The unique solution for the optimal hiding distribution is h = (0, 0, 2/11, 3/11, 6/11) and the value is 3/55. The optimal search strategy mentioned in Proposition 3 is to play {2,3}, {1,4} and {5} with respective probabilities 2/11, 3/11 and 6/11. Another optimal strategy is to play {5} and {1,4} with the same probabilities, but to play {2,3} and {1,3} with probabilities 2/11 - ε and ε, for any sufficiently small ε > 0. It is of interest to see how the solution of the game changes when T increases from 5 to higher values. We know that we need go no higher than T = 10, by Proposition 5, because in the game on locations 1 to 4 with total search time 5 the searcher can inspect {1,4} with probability 1/2 and {2,3} with probability 1/2 to ensure winning with probability at least .2/2 = .1.

So we know the solution of the game for T = 5 and T = 10. The following table gives the value of the game and the unique optimal hiding distribution for these and intermediate values of T. (The optimal search strategies are varied and we don't list them, though they are easily calculated.)

Table 1.

  T     Value     h_1    h_2    h_3    h_4    h_5
  5     3/55      0      0      2/11   3/11   6/11
  6     3/55      0      0      2/11   3/11   6/11
  7     1/15      0      0      0      1/3    2/3
  8     1/15      0      0      0      1/3    2/3
  9     18/185    0      3/37   4/37   6/37   24/37
  10    1/10      0      0      0      0      1

We know that the value must be nondecreasing in T, but we see that it is not strictly increasing. Roughly speaking (but not precisely), the hider restricts himself to fewer and better locations as T increases, staying always at the best location 5 for T = 10. However, there is the anomalous distribution for T = 9, which includes sometimes hiding at location 2.

3.4 Game with two types of locations

Suppose there are two types of locations (hiding places). Type 1 takes time 1 (this is a normalization) to search, while type 2 takes time a to search, with a being an integer. Now let type 1 locations have capture probability p_1, while type 2 locations have capture probability p_2. Moreover, suppose there are n_1 locations of type 1 and n_2 locations of type 2. The searcher has total search time T. To simplify our results we assume that T is small enough such that T <= n_1 (the searcher can restrict all his searches to type 1 locations) and T <= a n_2 (he can also restrict all his searches to type 2 locations).

Let m = floor(T/a) be the maximum number of type 2 locations that can be searched. The searcher's pure strategies are to search j type 2 locations (and hence T - a j locations of type 1), for j = 0, 1, ..., m. Since all locations of a given type are essentially the same, the decision for the hider is simply the probability q with which to hide at a randomly chosen location of type 1 (and hence hide at a randomly chosen location of type 2 with probability 1 - q).

Then the probability that the searcher wins the game is given by

    P(q, j) = (T - a j) (q / n_1) p_1 + j ((1 - q) / n_2) p_2.

This will be independent of the searcher's strategy j if

    a (q / n_1) p_1 = ((1 - q) / n_2) p_2,  that is,  q = q* = n_1 p_2 / (n_1 p_2 + a n_2 p_1).

For q = q*, the capture probability is given by

    P(q*, j) = T p_1 p_2 / (n_1 p_2 + a n_2 p_1).

By playing q*, the hider ensures that the capture probability (payoff) does not exceed V = T p_1 p_2 / (n_1 p_2 + a n_2 p_1).

We now consider how to optimize the searcher's strategy. Suppose the searcher searches j locations of type 2 with probability r_j, j = 0, 1, ..., m. If the hider is at a type 2 location, then he is captured with probability

    (mu / n_2) p_2,  where  mu = sum_j j r_j

is the mean number of searches at type 2 locations. Similarly, if the hider is at a type 1 location, the hider is captured with probability

    ((T - a mu) / n_1) p_1.

It follows that the capture probability will be the same for hiding at either location type if we have

    mu = mu* = T n_2 p_1 / (n_1 p_2 + a n_2 p_1).

So for any probability distribution (r_j) over the pure strategies j = 0, 1, ..., m

with mean mu*, the probability of capturing a hider located either at a type 1 or a type 2 location is given by

    (mu* / n_2) p_2 = T p_1 p_2 / (n_1 p_2 + a n_2 p_1) = V.

To summarize, we have shown the following.

Proposition 6

Suppose all the hiding locations are of two types: n_1 locations of type 1 with search time 1 and capture probability p_1; n_2 locations of type 2 with search time a and capture probability p_2. Suppose n_1 and n_2 are large enough so the Searcher can do all his searching at a single location type, that is, T <= n_1 and T <= a n_2. Then the unique optimal strategy for the hider is to locate in a random type 1 location with probability q* = n_1 p_2 / (n_1 p_2 + a n_2 p_1) and in a random type 2 location with probability 1 - q*. Note that this is independent of T. A strategy for the Searcher which inspects j locations of type 2 (and thus T - a j of type 1) with probability r_j is optimal if and only if the mean number mu = sum_j j r_j of type 2 locations inspected is given by mu = T n_2 p_1 / (n_1 p_2 + a n_2 p_1). If this number is an integer, then the Searcher has an optimal pure strategy. The value of the game is given by V = T p_1 p_2 / (n_1 p_2 + a n_2 p_1).
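Proposition 6 is easy to compute with. The closed forms below, all sharing the common denominator n_1 p_2 + a n_2 p_1, are our reconstruction from the equalization arguments above, so treat the function as a sketch:

```python
from fractions import Fraction as F

def two_type_game(n1, n2, p1, p2, a, T):
    """Solution of the two-type search-pursuit game (Proposition 6).

    Type 1: n1 locations, search time 1, capture probability p1.
    Type 2: n2 locations, search time a, capture probability p2.
    Requires T <= n1 and T <= a * n2, so the searcher could spend
    all of his time T on either type alone.
    Returns (q, mu, value): the hider's probability of a type-1
    location, the searcher's required mean number of type-2
    inspections, and the value of the game."""
    assert T <= n1 and T <= a * n2
    D = n1 * p2 + a * n2 * p1      # common denominator
    q = n1 * p2 / D                # hider: prob. of a type-1 location
    mu = T * n2 * p1 / D           # searcher: mean # of type-2 searches
    value = T * p1 * p2 / D
    return q, mu, value
```

For instance, with n_1 = 10, n_2 = 4, p_1 = 1/2, p_2 = 1/4, a = 2 and T = 6, the searcher's winning probability (T - a j) q p_1 / n_1 + j (1 - q) p_2 / n_2 works out to the same value 3/26 for every pure strategy j = 0, ..., 3, confirming the equalization.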

4 Game Where Capture Probabilities are Unknown But Learned

In this section we determine how the players can learn the values of the capture probabilities over time, starting with some a priori values and lowering these at locations from which there have been successful escapes. This of course requires that the game is repeated. Here we consider the simplest model, just two rounds. So after a successful escape in the second round, we consider that the hider-prey has won the game (payoff 0). More rounds of repeated play are considered in Alpern, Gal, and Casas (2015), but learning is not considered there.

To introduce the complication of learning the escape probabilities, we further simplify the model to two hiding locations, one of which may be searched in each of the two rounds. If the Hider is found at location i, he is captured with probability c_i (and escapes with the complementary probability 1 - c_i). There are two rounds. If the Hider is not found (the Searcher looks in the wrong location) in either round, he wins and the payoff is 0. If the Hider is found and captured in either round, the Searcher wins and the payoff is 1. If the Hider is found but escapes in the first round, the game is played one more time and both players remember which location the Hider escaped from. If the Hider escapes in the second (final) round, he wins and the payoff is 0.

The novel feature here is that the capture probabilities must be learned over time. At each location, the capture probability is chosen by Nature before the start of the game, independently with probability 1/2 of being c_h (the high probability) and probability 1/2 of being c_l (the low probability), with c_l < c_h. In the biological scenario, this may reflect the general distribution of locations in a larger region, from which it is easy or hard to escape. A more general distribution is possible within our model, but this two-point distribution is very easy to understand. If there is an escape from location i in the first round, then in the second round the probability that the capture probability at i is c_h goes down (to some value less than 1/2). This is a type of Bayesian learning, which only takes place after an escape, and only at the location of the escape.

4.1 Normal form of the two-period learning game

We use the normal form approach, rather than a repeated game approach. A strategy for either player says where he will search/hide in the two periods (assuming the game goes to the second period). Due to the symmetry of the two locations, both players can do no better than to choose their first period search or hide locations at random. If there is a successful escape from that location, they can either locate in the same location (strategy s) or the other location (strategy d). Thus the players have two strategies: s = (random, same) and d = (random, different). This gives a simple two by two matrix game. In this subsection we calculate its normal form; in the next subsection we present the game solution.

First we compute the payoff for the strategy pair (s, s). Half the time the players go to different locations in the first period, in which case the hider wins and the payoff is 0. So we ignore this, put in a factor of 1/2, and assume they go to the same location in the first period. There is only one location to consider; suppose it has capture probability c. Then, as they both go back to this location in the second period if the hider escapes in the first period, the expected payoff (given that they met) is

    f(c) = c + (1 - c) c.   (1)

Since c takes the values c_l and c_h equiprobably, we have

    a(s, s) = (1/2) (1/2) ( f(c_l) + f(c_h) ) = (1/4) ( f(c_l) + f(c_h) ).   (2)

It is worth noting two special cases. If both escape probabilities are 1 (escape is certain, so c_l = c_h = 0), then the hider always wins and the payoff is 0. If both escape probabilities are 0 (c_l = c_h = 1), then the searcher wins if and only if they both choose the same location, which has probability 1/2.

Next we consider the strategy pair (d, d). Here we can assume they both go to location 1 in the first period (hence we add the factor of 1/2) and location 2 in the second period. The capture probabilities at the ordered locations 1 and 2 can be any of the four equiprobable pairs (c_l, c_l), (c_h, c_h), (c_l, c_h), (c_h, c_l). The first two are straightforward, as each is the same as going to the same location twice, already calculated in (1). We list the payoffs for the four ordered pairs below, where f is given in (1):

    f(c_l),   f(c_h),   c_l + (1 - c_l) c_h,   c_h + (1 - c_h) c_l.

Taking the average of these four values (together with the factor of 1/2) gives

    a(d, d) = (1/8) ( f(c_l) + f(c_h) + 2 (c_l + c_h) - 2 c_l c_h ).   (3)

Now consider the strategy pairs (s, d) and (d, s). If the players go to different locations in the first period, the game ends with payoff 0. So again, we put in the factor of 1/2 and assume they go to the same location in the first period. This means that if an escape happens in the first period, the players part ways and the hider wins (payoff 0) in the second period. So the probability the searcher wins is

    a(s, d) = a(d, s) = (1/2) (c_l + c_h)/2 = (c_l + c_h)/4.   (4)

Thus, we have completed the necessary calculations, and the game matrix for the strategy pairs (s, s), (s, d), (d, s), (d, d), with the Searcher as the maximizer, is

    A = | a(s,s)  a(s,d) |
        | a(d,s)  a(d,d) |.

To solve this game, we begin with the game matrix A from Section 4.1. Subtracting the constant a(s,d) = (c_l + c_h)/4 from every entry lowers the value by that constant without changing the optimal strategies, and leaves the diagonal matrix

    | alpha    0   |
    |   0    beta  |,   where  alpha = a(s,s) - a(s,d)  and  beta = a(d,d) - a(s,d).

The value of this reduced game follows from the simple formula for the value of diagonal matrix games, alpha beta / (alpha + beta). We also know that in this diagonal game, players adopt each strategy with a probability inversely proportional to its diagonal element,

    P(s) = beta / (alpha + beta),   P(d) = alpha / (alpha + beta).   (5)

Hence the value of our game matrix is given by

    V = (c_l + c_h)/4 + alpha beta / (alpha + beta).

We can now see that, as expected, a successful escape from a location makes that location more attractive to the hider as a future hiding place. This is confirmed in the following.

Proposition 7

In the learning game with c_l < c_h, after a successful escape both players should go back to the same location with probability greater than 1/2.

Proof. Let alpha and beta denote, as above, the diagonal elements of the reduced game. Using f(c) = 2c - c^2, we have

    beta - alpha = (1/8) ( f(c_l) + f(c_h) - 2 c_l c_h ) - (1/4) ( f(c_l) + f(c_h) - c_l - c_h )
                 = (1/8) ( 2 (c_l + c_h) - f(c_l) - f(c_h) - 2 c_l c_h )
                 = (1/8) ( c_l^2 + c_h^2 - 2 c_l c_h ) = (1/8) (c_h - c_l)^2 > 0.

This means that beta > alpha. Hence by the observation (5) the strategy s should be played with a higher probability than d (probability beta / (alpha + beta)), in particular with probability more than 1/2.

4.2 An example with c_l = 1/3 and c_h = 2/3

A simple example is when the low escape probability is 1/3 and the high escape probability is 2/3, so that the capture probabilities are c_l = 1/3 and c_h = 2/3. This gives the matrix A as

    A = | 13/36   1/4 |
        |  1/4    3/8 |

with value 1/4 + 1/17 = 21/68, and where each of the players optimally plays s with probability 9/17 and d with probability 8/17.

Suppose there is an escape in the first period at, say, location 1. Then in the second period the Hider goes to location 1 with probability 9/17. Since the subjective probability of capture at location 2, from the point of view of either player, remains unchanged at (1/3 + 2/3)/2 = 1/2, this corresponds to a certain subjective capture probability x at location 1, that is, to the second-period matrix

    | x    0  |
    | 0   1/2 |.
We then have that

This corresponds to the probability of escape probability of , where

Thus, based on the escape at location 1 in the first period, the subjective probability that the escape probability there is 1/3 has gone up from the initial value of 1/2 to the higher value of 2/3.
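The whole example can be reproduced in a few lines of exact arithmetic (a sketch; the variable names are ours):

```python
from fractions import Fraction as F

cl, ch = F(1, 3), F(2, 3)        # low/high capture probabilities

def f(c):
    # Win probability when both players return after an escape: (1).
    return c + (1 - c) * c

# Normal-form entries, each carrying the factor 1/2 for the players
# meeting in the first period.
a_ss = F(1, 4) * (f(cl) + f(ch))
a_dd = F(1, 8) * (f(cl) + f(ch) + (cl + (1 - cl) * ch) + (ch + (1 - ch) * cl))
a_sd = (cl + ch) / 4

# Subtract the constant a_sd to get the diagonal game diag(alpha, beta).
alpha, beta = a_ss - a_sd, a_dd - a_sd
value = a_sd + alpha * beta / (alpha + beta)
prob_same = beta / (alpha + beta)    # prob. of returning to the same location

print(value, prob_same)  # 21/68 9/17

# Bayesian update after an escape at a location: prior 1/2 on the
# capture probability being low; an escape is likelier at a low-capture
# location, so its posterior probability rises.
post_low = (1 - cl) * F(1, 2) / ((1 - cl) * F(1, 2) + (1 - ch) * F(1, 2))
print(post_low)  # 2/3
```

The direct Bayes computation at the end reproduces the posterior 2/3 obtained above from the equilibrium hiding probability 9/17, confirming that the game-theoretic and Bayesian views of the update agree in this example.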

5 Summary

The breakthrough paper of Gal and Casas (2014) gave us a model in which both the search and pursuit elements of predator-prey interactions could be modeled together in a single game. In that paper the capture probabilities depended on the hiding location, but the time required to search a location was assumed to be constant. In the first part of this paper, we drop that simplifying assumption. We first consider a scenario in which the search times increase with the location index while the capture probabilities decrease. We solve this game for the smallest nontrivial total search time of the searcher. We then consider a scenario where there are many hiding locations but they come in only two types. Locations of each type are identical in that they have the same search times and the same capture probabilities. We solve the resulting search-pursuit game.

In the second part of the paper we deal with the question of how the players (searcher-predator and hider-prey) learn the capture probabilities of the different locations over time. We adopt a simple Bayesian approach. After a successful escape from a given location, both players update their subjective probabilities that it is a location with low or high capture probability; the probability that it is low obviously increases. In the game formulation, the players incorporate into their plan the knowledge that if there is an escape, then that location becomes more favorable to the hider in the next period.

While this paper gives a formal mathematical analysis of some search-pursuit problems, we believe it will also have applications in actual predator-prey scenarios.

References