We consider the classic problem of locating a public facility on a real line or an interval, a canonical problem in mechanism design without money. In the standard version of this problem, there are $n$ agents, denoted by the set $N = \{1, \dots, n\}$, and each agent $i$ has a preferred location $x_i$ for the public facility. The cost of agent $i$ for a facility located at $y$ is given by $|y - x_i|$, the distance from the facility to the agent's ideal location, and the task in general is to locate a facility that minimizes some objective function. The most commonly considered objective functions are a) the sum of the agents' costs and b) the maximum cost of an agent. In the mechanism design version of the problem, the main question is whether the objective under consideration can be implemented, either optimally or approximately, in (weakly) dominant strategies.
While the standard version of the problem has received much attention, with several different variants being extensively studied, such as extensions to multiple facilities (e.g., procaccia13,lu10) and alternative objective functions (e.g., feldman13,cai16), the common assumption in this literature is that the agents are always precisely aware of their preferred locations on the real line (or the concerned metric space, depending on the variant being considered). However, this might not always be the case, and it is possible that the agents do not currently have accurate information on their ideal locations, or their preferences in general. To illustrate this, imagine a simple scenario where a city wants to build a school on a particular street (which we assume for simplicity is just a line) and aims to build it at a location that minimizes the maximum distance any of its residents has to travel to reach the school. While each of the residents is able to specify which block they would like the school to be located at, some of them are unable to precisely pinpoint where on the block they would like it because, for example, they do not currently have access to information (like infrastructure data) to better inform themselves, or they are simply unwilling to put in the cognitive effort to refine their preferences further. Therefore, instead of giving a specific location $x$, they end up giving an interval $[a, b]$, intending to say "I know that I prefer the school to be built between the points $a$ and $b$, but I am not exactly sure where I want it."
The scenario described above is precisely the one we are concerned with in this paper. That is, in contrast to the standard setting of the facility location problem, we consider the setting in which the agents are uncertain (or partially informed) about their own true locations, and the only information agent $i$ has is that her preferred location $x_i \in K_i$, where $K_i$ is an interval of length at most $\delta$ for some parameter $\delta$ which models the amount of inaccuracy in the agents' reports. Now, given such partially informed agents, our task is to look at the problem from the perspective of a designer whose goal is to design "robust" mechanisms in this setting. By "robust" we mean that, for a given performance measure and when considering implementation under an appropriate solution concept, the mechanism should provide good guarantees with respect to this measure for all the possible underlying unknown true locations of the agents. The performance measure we use here is based on the minimax regret solution criterion, which, informally, for a given objective function $O$, asks for an outcome that has the "best worst case", or one that induces the least amount of regret after one realizes the true input (we refer the reader to Appendix A.1 for a discussion on the choice of regret as the performance measure). More formally, if $X$ denotes the set of all points where a facility can be located and
$\mathcal{I}(K) = K_1 \times \dots \times K_n$ denotes the set of all the possible vectors $x = (x_1, \dots, x_n)$ that correspond to the true ideal locations of the agents, then the minimax optimal solution, $y^*$, for some objective function $O$ (like the sum of costs or the maximum cost) is given by
$$y^* \in \operatorname*{arg\,min}_{y \in X} \; \max_{x \in \mathcal{I}(K)} \Big( O(y, x) - \min_{y' \in X} O(y', x) \Big),$$
where $O(y, x)$ denotes the value of $O$ when evaluated with respect to $x$ and a point $y$.
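To make the minimax regret criterion concrete, here is a small brute-force sketch in Python. All names are ours, and we make two simplifying assumptions: the domain is discretized into a grid, and the worst-case true profiles can be taken at interval endpoints (an assumption consistent with the endpoint-based formulas derived later for the average cost).

```python
import itertools

def minimax_regret_point(intervals, objective, grid_steps=100, B=1.0):
    """Brute-force a minimax-regret facility location on a grid over [0, B].

    intervals: list of (a_i, b_i) candidate intervals, one per agent.
    objective: function (y, xs) -> cost of locating at y for true profile xs.
    Worst-case true profiles are approximated by interval endpoints.
    """
    grid = [B * t / grid_steps for t in range(grid_steps + 1)]
    # Candidate worst-case profiles: every combination of interval endpoints.
    profiles = list(itertools.product(*[(a, b) for a, b in intervals]))

    def max_regret(y):
        # Regret of y on profile xs: objective at y minus the best achievable.
        return max(objective(y, xs) - min(objective(z, xs) for z in grid)
                   for xs in profiles)

    return min(grid, key=max_regret)

def avg_cost(y, xs):
    return sum(abs(y - x) for x in xs) / len(xs)
```

For instance, with two agents whose intervals sit at the two ends of $[0,1]$, any point between the intervals has (essentially) zero maximum regret under the average cost objective, so the returned point lies between them.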
Thus, our aim is to design mechanisms that approximately implement the optimal minimax value (i.e., the maximum regret attained at $y^*$) w.r.t. two objective functions—average cost and maximum cost—and under two solution concepts—very weak dominance and minimax dominance—that naturally extend to our setting (see Section 2 for definitions). In particular, we focus on deterministic and anonymous mechanisms that additively approximate the optimal minimax value, and our results are summarized in Table 1.
Before we move on to the rest of the paper, we anticipate that a reader might have some questions, especially w.r.t. our choice of performance measure and our decision to use additive as opposed to multiplicative approximations. We try to preemptively address these briefly in the section below.
1.1 Some Q & A.
Why regret? We argue below why this is a good measure by considering some alternatives.
Why not bound the ratio of the objective values of a) the outcome that is returned by the mechanism and b) the optimal outcome for that input? This, for instance, is the approach taken by chiesa12. In our case this is not a good measure because one can quickly see that this ratio is unbounded in the worst case.
Why not find a bound $\alpha$ such that, for every possible true profile $x$, the objective value of the mechanism's outcome is within $\alpha$ of that of the optimal solution associated with $x$? This, for instance, is the approach taken by chiesa14. Technically, this is essentially what we are doing when using max. regret. However, using regret is more informative, because a statement of the form "the maximum regret of the chosen point is at most $\alpha$" conveys two things: a) for any profile of intervals there is at least one point whose maximum regret is at most $\alpha$ (i.e., it gives us a sense of what is achievable at all—which in turn can be thought of as a natural lower bound), and b) the point chosen by the mechanism is at most $\alpha$-far from the optimal objective value for every possible true profile. Hence, to convey these, we employ the notion of regret. We refer the reader to Appendix A.1 for a slightly more elaborate discussion.
Why additive approximations? We use additive as opposed to multiplicative approximations because one can see that, when using the latter w.r.t. the max. cost objective function, both of the solution concepts that we consider in this paper—which we believe are natural ones to consider in this setting—do not provide any insight into the problem, as there are no bounded mechanisms. Again, we refer the reader to Appendix A.2 for a more elaborate discussion.
1.2 Related work
There are two broad lines of research that are related to the topic of this paper. The first is, naturally, the extensive literature that focuses on designing mechanisms in the context of the facility location problem and the second is the work done in mechanism design which considers settings where the agents do not completely specify their preferences. Below, beginning with the latter, we describe some of the papers that are most relevant to our work.
Designing mechanisms with incomplete preferences. A disproportionate amount of the work in mechanism design considers settings where the agents have complete information about their preferences. However, as one might expect, the issue of agents not specifying their complete preferences has been considered in the mechanism design literature, and the papers most relevant to this paper are the series of papers by chiesa12,chiesa14,chiesa15 and the works of hyafil07a,hyafil07. Below we briefly discuss each of them.
The series of papers by chiesa12,chiesa14,chiesa15 considers settings where the agents are uncertain about their own types, and they look at this model in the context of auctions. In particular, in their setting the only information an agent has about her valuation is that it is contained in a set of possible valuations. (chiesa14 argue that their model is equivalent to the Knightian uncertainty model that has received much attention in decision theory; see the related works section in [chiesa14] and the references therein. However, here we do not use the term Knightian uncertainty, but instead just say that the agents are partially informed. This is because the notion we use here, which we believe is the natural one to consider in the context of our problem, is less general than the notion of Knightian uncertainty.) Under this setting, chiesa12 look at single-item auctions and provide several results on the fraction of maximum social welfare that can be achieved under implementation in very weakly dominant and undominated strategies; subsequently, chiesa14 study the performance of VCG mechanisms in the context of combinatorial auctions when the agents are uncertain about their own types, under undominated and regret-minimizing strategies; and finally, chiesa15 analyze the Vickrey mechanism in the context of multi-unit auctions, again when the agents are uncertain about their types, and in this case they essentially show that it achieves near-optimal performance (in terms of social welfare) under implementation in undominated strategies. The partial information model that we use in this paper is inspired by this series of papers. In particular, our prior-free and absolute worst-case approach under partial information is similar to the one taken by chiesa12,chiesa14,chiesa15 (although such absolute worst-case approaches are not uncommon and have been previously considered in many different settings).
However, our work is also different from theirs in that, unlike auctions, the problem we consider falls within the domain of mechanism design without money and so their results do not carry over to our setting.
The other set of papers that are most relevant to the broad theme here is the work of hyafil07a,hyafil07 who considered the problem of designing mechanisms that have to make decisions using partial type information. Their focus is again on contexts where payments are allowed and in hyafil07a they mainly show that a class of mechanisms based on the minimax regret solution criterion achieves approximate efficiency under approximate dominant strategy implementation. In hyafil07 their focus is on automated mechanism design within the same framework. While the overall theme in both their works is similar to ours, i.e., to look at issues that arise when mechanisms only have access to partial information, the questions they are concerned with and the model used are different. For instance, in the context of the models used, whereas in ours and Chiesa et al.’s models the agents do not know their true types and are therefore providing partial inputs, to the best of our understanding, the assumption in the works of hyafil07a,hyafil07 is that the mechanism has access to partial types, but agents are aware of their true type. This subtle change in turn leads to the focus being on solution concepts that are different from ours.
In addition to the papers mentioned above, note that another way to model uncertain agents is to assume that each of them has a probability distribution which gives the probability of a point being their ideal location. For instance, this is the model used by feige11 in the context of task scheduling. However, in our model the agents do not have any more information than that they are within some interval, which, we emphasize, is not equivalent to assuming that, for a given agent, every point in its interval is equally likely to be its true ideal location.
Related work on the facility location problem. Starting with the work of moulin80 there has been a flurry of research looking at designing strategyproof mechanisms (i.e., mechanisms where it is a (weakly) dominant strategy for an agent to reveal her true preferences) for the facility location problem. These can be broadly divided into two branches. The first one consists of work, e.g., moulin80,barbera94,schummer02,masso11,dokow12, that focuses on characterizing the class of strategyproof mechanisms in different settings (see barbera01 and [Chapter 10]nisan07 for a survey on some of these results). The second branch consists of more recent papers which fall under the broad umbrella of approximate mechanism design without money, initially advocated by procaccia13, that focus on looking at how well a strategyproof mechanism can perform under different objective functions procaccia13,lu10,feldman13,fotakis16,feigenbaum16. Our paper, which tries to understand the performance of mechanisms under different solution concepts and objective functions when the agents are partially informed about their own locations, falls under this branch of the literature.
Recall that in the standard (mechanism design) version of the facility location problem there are $n$ agents, denoted by the set $N = \{1, \dots, n\}$, and each agent $i$ has a true preferred location $x_i \in [0, B]$, for some fixed constant $B$. (We often omit the term "preferred" and instead just say that $x_i$ is agent $i$'s location. Also, note that here we make the assumption that the domain under consideration is bounded instead of assuming that the agents can be anywhere on the real line; this is necessary only because we are focusing on additive approximations instead of the usual multiplicative approximations. For a slightly more elaborate explanation, see the introduction section of the paper by golomb17.) A vector $x = (x_1, \dots, x_n)$, where $x_i \in [0, B]$, is referred to as a location profile, and the cost of agent $i$ for a facility located at $y$ is given by $|y - x_i|$ (or equivalently, their utility is $-|y - x_i|$), the distance from the facility to the agent's location; this utility function is equivalent to the notion of symmetric single-peaked preferences that is often used in the economics literature (see, e.g., [masso11]). In general, the task in the facility location problem is to design mechanisms—which are, informally, functions that map location profiles to a point (or a distribution over points) in $[0, B]$—that (approximately) implement the outcome associated with a particular objective function.
In the version of the problem that we are considering, each agent $i$, although they have a true location $x_i$, is currently unaware of it and instead only knows an interval $K_i = [a_i, b_i] \subseteq [0, B]$ such that $x_i \in K_i$. The interval $K_i$ is referred to as the candidate locations of agent $i$, and we use $\mathcal{K}_i$ to denote the set of all possible candidate locations of agent $i$ (succinctly referred to as the set of candidate locations). Now, given a profile of candidate locations $\mathcal{K} = (K_1, \dots, K_n)$, we have the following definition.
Definition 1 ($\delta$-uncertain-facility-location-game).
For all $\delta \ge 0$, $n \ge 1$, and $B > 0$, a profile of candidate locations $\mathcal{K} = (K_1, \dots, K_n)$ is said to induce a $\delta$-uncertain-facility-location-game if, for each $i \in N$, $K_i = [a_i, b_i] \subseteq [0, B]$ and $b_i - a_i \le \delta$ (or in words, for each agent $i$, their candidate locations form an interval of length at most $\delta$).
Remark: We refer to $\delta$ as the inaccuracy parameter. In general, when proving lower bounds we assume that the designer knows $\delta$, as this only makes our results stronger, whereas for positive results we explicitly state what the designer knows about $\delta$. Additionally, note that in the definition above if $\delta = 0$, then we have the standard facility location setting where the candidate locations of every agent are just single points in $[0, B]$. For a given profile of candidate locations $\mathcal{K}$, we say that "the reports are exact" when, for each agent $i$, $K_i$ is a single point and not an interval.
2.1 Mechanisms, solution concepts, and implementation
A (deterministic) mechanism $M = (A, f)$ in our setting consists of an action space $A = A_1 \times \dots \times A_n$, where $A_i$ is the action space associated with agent $i$, and an outcome function $f$ which maps a profile of actions to an outcome in $[0, B]$ (i.e., $f \colon A \to [0, B]$). A mechanism is said to be direct if, for all $i \in N$, $A_i = \mathcal{K}_i$, where $\mathcal{K}_i$ is the set of all possible candidate locations of agent $i$. For every agent $i$, a strategy is a function $s_i \colon \mathcal{K}_i \to A_i$, and we use $S_i$ and $\Delta(S_i)$ to respectively denote the set of all pure and mixed strategies of agent $i$.
Since the outcome of a mechanism needs to be achieved in equilibrium, it remains to be defined what equilibrium solution concepts we consider in this paper. Below we define, in the order of their relative strengths, the two solution concepts that we use here. We note that the first (very weak dominance) was also used by chiesa12.
Definition 2 (very weak dominance).
In a mechanism $M = (A, f)$, an agent $i$ with candidate locations $K_i$ has a very weakly dominant strategy $s_i \in S_i$ if, for all $s'_i \in S_i$, all $a_{-i} \in A_{-i}$, and all $x_i \in K_i$,
$$\big| f(s_i(K_i), a_{-i}) - x_i \big| \;\le\; \big| f(s'_i(K_i), a_{-i}) - x_i \big|.$$
In words, the above definition says that for an agent $i$ with candidate locations $K_i$, it is always best to play the strategy $s_i$, irrespective of the actions of the other players and irrespective of which of the points in $K_i$ is her true location.
Definition 3 (minimax dominance).
In a mechanism $M = (A, f)$, an agent $i$ with candidate locations $K_i$ has a minimax dominant strategy $s_i \in S_i$ if, for all $s'_i \in S_i$ and all $a_{-i} \in A_{-i}$,
$$\max_{x_i \in K_i} \Big( \big| f(s_i(K_i), a_{-i}) - x_i \big| - \min_{a'_i \in A_i} \big| f(a'_i, a_{-i}) - x_i \big| \Big) \;\le\; \max_{x_i \in K_i} \Big( \big| f(s'_i(K_i), a_{-i}) - x_i \big| - \min_{a'_i \in A_i} \big| f(a'_i, a_{-i}) - x_i \big| \Big).$$
Before we explain what the definition above implies, let $y = f(s_i(K_i), a_{-i})$ be the outcome of the mechanism when agent $i$ plays strategy $s_i$ and all the other agents play some $a_{-i}$. Now, let us consider the term
$$\max_{x_i \in K_i} \Big( \big| y - x_i \big| - \min_{a'_i \in A_i} \big| f(a'_i, a_{-i}) - x_i \big| \Big),$$
which calculates agent $i$'s maximum regret (i.e., the absolute worst-case loss agent $i$ will experience if and when she realizes her true location from among her candidate locations) for playing $s_i$ and hence getting the output $y$. Then, what the above definition implies is that for a regret-minimizing agent $i$ with candidate locations $K_i$, it is always best to play $s_i$, irrespective of the actions of the other players, as any other strategy results in an outcome with respect to which agent $i$ experiences at least as much maximum regret as she experiences with $y$.
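For intuition, this agent-side maximum regret can be computed exactly for distance costs, since the regret is piecewise linear in the unknown true location; the following sketch (our own illustrative names, not part of the paper's formal development) evaluates it at the finitely many breakpoints:

```python
def agent_max_regret(y, interval, reachable):
    """Maximum regret of an agent with candidate interval [a, b] for outcome y.

    reachable: the outcomes the agent could induce by deviating, with the
    other reports held fixed.  For a true location x, the regret is
    |y - x| minus the best cost among reachable outcomes; this is piecewise
    linear in x, so its maximum over [a, b] is attained at an interval
    endpoint or at a breakpoint (a reachable outcome, or a midpoint of two
    consecutive reachable outcomes).
    """
    a, b = interval
    pts = sorted(reachable)
    candidates = {a, b}
    candidates.update(z for z in pts if a <= z <= b)
    candidates.update((u + v) / 2 for u, v in zip(pts, pts[1:])
                      if a <= (u + v) / 2 <= b)
    best = lambda x: min(abs(z - x) for z in pts)
    return max(abs(y - x) - best(x) for x in candidates)
```

For example, if the agent's interval is all of $[0,1]$, the outcome is $0$, and the agent could also have induced the outcome $1$, her maximum regret is $1$ (realized when her true location turns out to be $1$).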
Remark: Note that both of the solution concepts defined above can be seen as natural extensions to our setting of the classical weak dominance notion (i.e., the one used in the usual mechanism design setting where the agents know their types exactly). That is, for all $i \in N$, if $K_i$ is a single point, then both of them collapse to the classical weak dominance notion.
As stated in the introduction, given a profile of candidate locations $\mathcal{K} = (K_1, \dots, K_n)$, we want the mechanism to "perform well" against all the possible underlying true locations of the agents, i.e., with respect to all the location profiles $x = (x_1, \dots, x_n)$ where $x_i \in K_i$. Hence, for a given objective function $O$, we aim to design mechanisms that achieve a good approximation of the optimal minimax value, which, for a profile $\mathcal{K}$, is denoted by $\mathrm{OMV}_O(\mathcal{K})$ and is defined as
$$\mathrm{OMV}_O(\mathcal{K}) = \min_{y \in [0, B]} \mathrm{MR}_O(y, \mathcal{K}),$$
where, for a point $y \in [0, B]$, if $O(y, x)$ denotes the value of the function $O$ when evaluated with respect to the vector $x$ and the point $y$, then the maximum regret $\mathrm{MR}_O(y, \mathcal{K})$ associated with $y$ for the instance $\mathcal{K}$ is defined as
$$\mathrm{MR}_O(y, \mathcal{K}) = \max_{x \in K_1 \times \dots \times K_n} \Big( O(y, x) - \min_{y' \in [0, B]} O(y', x) \Big).$$
Throughout, we refer to the point $y^* \in \operatorname*{arg\,min}_{y \in [0, B]} \mathrm{MR}_O(y, \mathcal{K})$ as the optimal minimax solution for the instance $\mathcal{K}$.
Finally, now that we have our performance measure, we define implementation in very weakly dominant and minimax dominant strategies.
Definition 4 (Implementation in very weakly dominant (minimax dominant) strategies).
For a $\delta$-uncertain-facility-location-game, we say that a mechanism $M = (A, f)$ implements $\alpha$-$O$, for some $\alpha \ge 0$ and some objective function $O$, in very weakly dominant (minimax dominant) strategies, if for some $s = (s_1, \dots, s_n)$, where $s_i$ is a very weakly dominant (minimax dominant) strategy for agent $i$ with candidate locations $K_i$, the maximum regret of the induced outcome is within $\alpha$ of the optimal minimax value, i.e., for every profile $\mathcal{K}$,
$$\mathrm{MR}_O\big(f(s(\mathcal{K})), \mathcal{K}\big) \le \mathrm{OMV}_O(\mathcal{K}) + \alpha.$$
3 Implementing the average cost objective
In this section we consider the objective of locating a facility so as to minimize the average cost (sometimes succinctly referred to as avgCost and written as AC). While the standard objective in the facility location setting is to minimize the sum of costs, here, as in the work of golomb17, we use the average cost: since we are approximating additively, it is easy to see that in many cases a deviation from the optimal solution would otherwise introduce a factor of order $n$ into the approximation bound. Hence, to avoid this, and to make comparisons with our second objective function, the maximum cost, easier, we use the average cost.
In the standard setting where the agents know their true locations, the average cost of locating a facility at a point $y$ is defined as $\frac{1}{n}\sum_{i \in N} |y - x_i|$, where $x_i$ is the location of agent $i$. Designing even optimal strategyproof/truthful mechanisms in this case is easy, since one can quickly see that the optimal location for the facility is the median of $x_1, \dots, x_n$ and returning it is strategyproof. Note that, for some $k \ge 1$, when $n = 2k - 1$, the median is unique and is the $k$-th largest element. However, when $n = 2k$, the "median" can be defined as any point between (and including) the $k$-th and $(k+1)$-th largest numbers. As a matter of convention, here we consider the $k$-th element to be the median. Hence, throughout, we always write that the median element is the $m$-th element, where $m = \lceil n/2 \rceil$.
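As a quick illustration of this convention, a minimal sketch of the standard median mechanism for exact reports (illustrative code, taking the $\lceil n/2 \rceil$-th smallest report, i.e., the left median when $n$ is even):

```python
def median_mechanism(locations):
    """Standard strategyproof facility location for exact reports.

    Returns the m-th smallest report with m = ceil(n/2), i.e., the left
    median when the number of agents is even.
    """
    xs = sorted(locations)
    m = (len(xs) + 1) // 2  # ceil(n/2), 1-indexed
    return xs[m - 1]
```

Reporting truthfully is weakly dominant here: moving a report across the median only pushes the outcome further from the deviating agent.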
In contrast to the standard setting, for some $\delta > 0$ and a corresponding $\delta$-uncertain-facility-location-game, even computing the minimax optimal solution for the average cost objective (see Equation 4) is non-trivial, let alone seeing if it can be implemented with any of the solution concepts discussed in Section 2.1. Therefore, we start by stating some properties of the minimax optimal solution that will be useful when designing mechanisms. A complete discussion on how to find the minimax optimal solution when using the average cost objective, as well as the proofs for the lemmas stated in the next section, are in Appendix D.
3.1 Properties of the minimax optimal solution for avgCost
Given the candidate locations $K_i = [a_i, b_i]$ for all $i \in N$, where, for some $\delta \ge 0$, $b_i - a_i \le \delta$, consider the left endpoints associated with all the agents, i.e., the set $\{a_1, \dots, a_n\}$. We denote the sorted order of these points as $a_{(1)} \le \dots \le a_{(n)}$ (throughout, by sorted order we mean sorted in non-decreasing order). Similarly, we denote the sorted order of the right endpoints, i.e., the points in the set $\{b_1, \dots, b_n\}$, as $b_{(1)} \le \dots \le b_{(n)}$. Next, we state the following lemma, which gives a succinct formula for the maximum regret associated with a point $y$ (i.e., the quantity defined in Equation 3, instantiated with the average cost objective). As stated above, all the proofs for the lemmas in this section appear in Appendix D.
Given a point $y$, the maximum regret associated with $y$ for the average cost objective can be written as the maximum of two terms:
one term computed from $y$ and the sorted right endpoints $b_{(j)}, \dots, b_{(n)}$, where $j$ is the smallest index satisfying the corresponding threshold condition with respect to $y$, and
one term computed from $y$ and the sorted left endpoints $a_{(1)}, \dots, a_{(j')}$, where $j'$ is the largest index satisfying the analogous condition.
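Any closed-form expression of this kind can be sanity-checked by brute force: assuming (consistently with the lemma's dependence on the endpoints only) that worst-case true profiles place each agent at an endpoint of her interval, and using the fact that a median minimizes the average cost for exact locations, the maximum regret of a point $y$ can be computed as follows (illustrative code, our own names):

```python
import itertools

def avg_cost(y, xs):
    return sum(abs(y - x) for x in xs) / len(xs)

def max_regret_avg_cost(y, intervals):
    """Brute-force maximum regret of y for the average cost objective.

    intervals: list of (a_i, b_i) candidate intervals.  Worst-case true
    profiles are taken at interval endpoints (2^n profiles), so this is
    only feasible for small n, but it is useful for validating formulas.
    """
    worst = 0.0
    for xs in itertools.product(*[(a, b) for a, b in intervals]):
        med = sorted(xs)[(len(xs) - 1) // 2]  # a median minimizes avg cost
        worst = max(worst, avg_cost(y, xs) - avg_cost(med, xs))
    return worst
```

For a single agent with interval $[0, 1]$, the midpoint $y = 1/2$ has maximum regret $1/2$, attained when the true location turns out to be either endpoint.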
Our next lemma states that the minimax optimal solution, $y^*$, associated with the avgCost objective function always lies in an interval determined by the sorted left and right endpoints of the agents' reports.
If $y^*$ is the minimax optimal solution associated with the avgCost objective function, then $y^*$ lies in this interval.
Equipped with these properties, we are now ready to talk about implementation using the solution concepts defined in Section 2.1.
3.2 Implementation in very weakly dominant strategies
As discussed in Section 2.1, the strongest solution concept that we consider is very weak dominance, where for an agent $i$ with candidate locations $K_i$, a strategy $s_i$ is very weakly dominant if it is always best for $i$ to play $s_i$, irrespective of the actions of the other players and irrespective of which of the points in $K_i$ is her true location. While it is indeed a natural solution concept which extends the classical notion of weak dominance, we will see below, in Theorem 3, that it is too strong, as no deterministic mechanism can achieve an approximation bound better than $B/2$. This in turn implies that, among deterministic mechanisms, the naive mechanism which always, irrespective of the reports of the agents, outputs the point $B/2$ is the best one can do.
Given a $\delta > 0$, let $M$ be a deterministic mechanism that implements $\alpha$-AC in very weakly dominant strategies for a $\delta$-uncertain-facility-location-game. Then, $\alpha \ge B/2$.
Let us assume for the sake of contradiction that $\alpha < B/2$ for some mechanism $M$. First, note that here we can restrict ourselves to direct mechanisms, since Chiesa et al. showed that the revelation principle holds with respect to this solution concept [Lemma A.2]chiesa14. So, now, let us consider a scenario where the profile of true candidate locations of the agents is and let . Since reporting the true candidate locations is a very weakly dominant strategy in , this implies that for an agent and for all ,
where for some , .
Next, consider the profile of true candidate locations . Then, again, using the fact that reporting the truth is a very weakly dominant strategy, we have that for agent and for all ,
Now, let us consider , where and , and let . By repeatedly using the observation made above, we have that for , , , and ,
Next, it is easy to see that the minimax optimal solution associated with the profile is , whereas for the profile it is . Also, from Equation 8 we know that outputs the same point for both these profiles. So, if we assume without loss of generality that , this implies that for ,
This in turn contradicts our assumption that . ∎
Although one could argue that this result is somewhat expected, given that Chiesa et al. observed similarly poor performance for implementation in very weakly dominant strategies in the context of single-item auctions [Theorem 1]chiesa12, we believe that it is still interesting: not only do we observe a similar result in a setting that is considerably different from theirs, but this observation also reinforces their view that one would likely have to look beyond very weakly dominant strategies in settings like ours. This brings us to our next section, where we consider an alternative, weaker, but natural extension of the classical notion of weakly dominant strategies.
3.3 Implementation in minimax dominant strategies
In this section we move our focus to implementation in minimax dominant strategies and explore whether, by using this weaker solution concept, one can obtain a better approximation bound than the one obtained in the previous section. To this end, we first present a general result that applies to all mechanisms in our setting that are anonymous and minimax dominant, showing in particular that any such mechanism cannot be onto. Following this, we look at non-onto mechanisms, and here we provide a mechanism that achieves a much better approximation bound than the one we observed when considering implementation in very weakly dominant strategies.
Remark: Note that in this section we focus only on direct mechanisms. This is without loss of generality because, like in the case with very weakly dominant strategies, it turns out that the revelation principle holds in our setting for minimax dominant strategies. A proof of the same can be found in Appendix B.
Given a , let be a deterministic mechanism that is anonymous and minimax dominant for a -uncertain-facility-location-game. Then, cannot be onto.
Suppose this were not the case and there existed a deterministic mechanism $M$ that is anonymous, minimax dominant, and onto. First, note that if we restrict ourselves to profiles where every agent's report is a single point (instead of an interval as in our setting), then $M$ must have fixed points $y_1, \dots, y_{n-1}$ such that for any profile of single reports $x = (x_1, \dots, x_n)$, $M(x) = \mathrm{median}(x_1, \dots, x_n, y_1, \dots, y_{n-1})$. This is so because, given the fact that $M$ is anonymous, onto, and minimax dominant in our setting, when restricted to the setting where reports are single points, $M$ is strategyproof, anonymous, and onto, and hence we know from the characterization result of [Corollary 2]masso11 that every such mechanism must have fixed points as above, where $x_i$ is the most preferred alternative of agent $i$ (i.e., agent $i$'s peak; since the utility of agent $i$ for an alternative $y$ is defined as $-|y - x_i|$, the preferences of agent $i$ are symmetric single-peaked with peak $x_i$). (The original statement of [Corollary 2]masso11 concerns mechanisms that are anonymous, strategyproof, and efficient; however, it is known that for strategyproof mechanisms in (symmetric) single-peaked domains, efficiency is equivalent to being onto (see, e.g., [nisan07, Lemma 10.1] for a proof). It is also worth noting that this characterization of anonymous, strategyproof, and onto mechanisms under symmetric single-peaked preferences coincides with Moulin's characterization of such mechanisms on the general single-peaked domain [moulin80, Theorem 1].)
Now, given the observation above, for , let us consider the smallest such that (if there is no such define if and otherwise) and consider the following input profile
where , , and .
In the profile , let and denote the agents who report and , respectively, and let . First, note that if denotes the profile where agent reports instead of and every other agent reports as in , then . Also, if denotes the profile where agent reports instead of and every other agent reports as in , then . Now, since and , this implies that , for otherwise agent could deviate from by reporting instead (it is easy to see that this reduces agent 's maximum regret), thus violating the fact that is minimax dominant. Given this, consider the profile which is the same as except for the fact that agent reports instead of . By again using the same line of reasoning as in the case of , it is easy to see that . However, this in turn implies that agent can deviate from to , thus again violating the fact that is minimax dominant. ∎
Given the fact that we cannot have an anonymous, minimax dominant, and onto mechanism, the natural question is whether we can find non-onto mechanisms that perform well. We answer this question in the next section.
3.3.1 Non-onto mechanisms
In this section we consider non-onto mechanisms. We first show a positive result by presenting an anonymous mechanism that implements a good additive approximation of avgCost in minimax dominant strategies. Following this, we present a conditional lower bound which shows that one cannot do much better when considering mechanisms that have a finite range.
An anonymous and minimax dominant mechanism.
Consider the $\delta$-equispaced-median mechanism defined in Algorithm 1, which can be thought of as an extension of the standard median mechanism. The key assumption in this mechanism is that the designer knows a $\delta$ such that any agent's candidate locations form an interval of length at most $\delta$. Given this $\delta$, the key idea is to divide the interval $[0, B]$ into a set of "grid points" and then map every profile of reports to one of these points, while at the same time ensuring that the mapping is minimax dominant. In particular, in the case of the $\delta$-equispaced-median mechanism, when $\delta > 0$, its range is restricted to a finite set of equispaced points in $[0, B]$ whose spacing is determined by $\delta$.
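The grid construction can be pictured with the following sketch; the spacing of exactly $\delta$ and the left tie-breaking are illustrative choices for exposition, with the precise grid and rounding as specified in Algorithm 1:

```python
def grid_points(B, delta):
    """Equispaced grid on [0, B]; spacing of exactly delta is illustrative."""
    k = int(B / delta)
    pts = [j * delta for j in range(k + 1)]
    if pts[-1] < B:  # make sure the right end of the domain is covered
        pts.append(B)
    return pts

def nearest_grid_point(x, pts):
    """Snap a report to the closest grid point, breaking ties to the left."""
    return min(pts, key=lambda g: (abs(g - x), g))
```

Restricting the range to such a grid is what allows the mechanism to be minimax dominant despite the agents' uncertainty: within a grid cell, small changes in a report cannot move the outcome.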
Below we first prove a lemma showing that the $\delta$-equispaced-median mechanism is minimax dominant. Subsequently, we use this to prove our main theorem, which establishes the approximation guarantee that the $\delta$-equispaced-median mechanism achieves for avgCost in minimax dominant strategies.
Given a $\delta > 0$ and for every agent $i$ in a $\delta$-uncertain-facility-location-game, reporting the candidate locations $K_i$ is a minimax dominant strategy for agent $i$ in the $\delta$-equispaced-median mechanism.
Let us fix an agent $i$ and let $K_i = [a_i, b_i]$ be her candidate locations. Also, let $a_{-i}$ be some arbitrary profile of reports of the other agents. We need to show that it is minimax dominant for agent $i$ to report $K_i$ in the $\delta$-equispaced-median mechanism (denoted by $M$ from now on), and for this we broadly consider the following two cases. Intuitively, in both cases what we try to argue is that for an agent $i$ with candidate locations $K_i$, the grid point associated with agent $i$ in the mechanism is in fact the agent's "best alternative" among the alternatives in the mechanism's range (see line 3 in Algorithm 1).
Case 1: . In this case, we show that it is a very weakly dominant strategy for agent to report . To see this, let be the output of when agent reports and be the point that is closest to in (with ties broken in favour of the point which is to the left of ). From line 8 in the mechanism, we know that . Now, if either or , then any report that changes the median will only result in the output being further away from agent . And if , then since we choose to be the point that is closest to in , it is clear that it is very weakly dominant for the agent to report . Hence, from both the cases above, we have our claim.
Case 2: . Let be the output of when agent reports , and let and be the points that are closest (with ties being broken in favour of points in in both cases) to and , respectively. From the mechanism we can see that . Next, let us first consider the case when or . In both these cases, given the fact that and are the points closest to and , respectively, has to be outside . And so, if this is the case, then if agent misreports and the output changes to some or , in both cases, it is easy to see that the maximum regret associated with is greater than the one associated with . Hence, the only case where an agent can successfully misreport is if . So, we focus on this scenario below.
Considering the scenario when p_1 ≤ x ≤ p_2, first note that the interval [p_1, p_2] can have at most three points that are also in S (this is proved in Claim 4, which is in Appendix C). So, given this, let us now consider the following cases based on the number of such points.
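The counting fact invoked here is easy to verify numerically. The sketch below works in integer units and assumes, purely for illustration, a grid spacing of δ/2, under which any interval of length at most δ contains at most three grid points:

```python
# Sanity check of the 'at most three grid points' observation, assuming
# (for this illustration only) a grid spacing of delta/2.
def grid_points_in(a, b, spacing):
    """Number of multiples of `spacing` lying in the interval [a, b]."""
    first = -(-a // spacing)   # ceil(a / spacing)
    last = b // spacing        # floor(b / spacing)
    return max(0, last - first + 1)

delta, spacing = 10, 5
for a in range(0, 200):
    for length in range(0, delta + 1):
        assert grid_points_in(a, a + length, spacing) <= 3

# The bound is tight: [0, 10] contains the grid points 0, 5, 10.
assert grid_points_in(0, 10, spacing) == 3
```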
|[p_1, p_2] ∩ S| = 1. Since p_1, p_2 ∈ [p_1, p_2] ∩ S and the output of M is always a point in S, this implies that x = p_1 = p_2. Therefore, in this case, if agent i misreports, then she only experiences a greater maximum regret, as the resulting output would be outside [p_1, p_2], and we know from our discussion above that these points have a greater maximum regret than a point in [p_1, p_2].
|[p_1, p_2] ∩ S| = 2. First, note that since p_1 ∈ S and the only other point in [p_1, p_2] that is also in S is p_2, if agent i misreports and the new outcome is neither p_1 nor p_2, then she can only increase her maximum regret (because the new outcome will be outside [p_1, p_2]). Therefore, we only need to consider the case when a misreport moves the outcome from one of these two points to the other, and here we consider the following sub-cases.
p_i = p_1. From the mechanism we know that this happens only when either both p_1 and p_2 are in [a_i, b_i] (lines 16-17) or when the condition in lines 11-12 holds. Now, since we know that every point outside [p_1, p_2] is worse in terms of maximum regret than p_1 or p_2, we only need to consider the case when agent i misreports in such a way that it results in the new outcome being equal to p_2. And below we show that under both the conditions stated above (lines 11-12 and 16-17, both of which result in p_i being defined as being equal to p_1 in the mechanism) the maximum regret associated with p_2 is at least as much as the one associated with p_1.
To see this, consider the profile where agent i reports the single point p_1 instead of K_i and all the other agents’ reports are the same as in K. Let x_1 be the output of M for this profile. Note that since the point associated with agent i in this profile is again p_1, the outcome in this profile is the same as x, i.e., x_1 = x. Similarly, let x_2 be the outcome when agent i reports the single point p_2 instead of K_i. Since the point associated with agent i in this profile is p_2, one can see that x_1 ≤ x_2. Now, if x_1 = x_2, then mr_i(x_2) = mr_i(x_1), where mr_i(y) is the maximum regret associated with the point y for agent i (see Equation 1 for the definition), and so deviating is certainly no better than reporting K_i. Therefore, we can ignore this case and instead assume that x_2 = p_2. So, considering this, since for the maximum regret calculations only the endpoints a_i and b_i matter (this is proved in Claim 5), we have that
mr_i(x_2) = mr_i(p_2)   (since x_2 = p_2 and only the endpoints a_i and b_i matter)
          ≥ mr_i(p_1)   (depending on the case, use either the fact that the condition in lines 11-12 holds or that both p_1 and p_2 are in [a_i, b_i])
          = mr_i(x_1)   (since x_1 = x and x = p_1)
Hence, we see that in this case agent i cannot gain by misreporting.
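The “best alternative” intuition behind these regret comparisons admits a simple closed form: over an interval [a, b], the maximum regret of a point x equals (b − a)/2 + |x − (a + b)/2|, so among grid points the one nearest the interval’s midpoint is (weakly) best. A numerical sketch (the grid spacing and interval values are arbitrary):

```python
# mr(x) is the maximum regret of locating the facility at x for an agent
# with candidate locations [a, b]; endpoints suffice, per Claim 5.
def mr(x, a, b):
    return max(abs(x - a), abs(x - b))

a, b = 0.32, 0.40
mid, half = (a + b) / 2, (b - a) / 2
grid = [k * 0.05 for k in range(21)]  # a 0.05-spaced grid on [0, 1]

# V-shaped closed form: mr(x) = (b - a)/2 + |x - midpoint|.
for x in grid:
    assert abs(mr(x, a, b) - (half + abs(x - mid))) < 1e-12

# Hence the grid point nearest the midpoint minimizes mr over the grid.
best = min(grid, key=lambda x: mr(x, a, b))
assert abs(best - 0.35) < 1e-12
```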
p_i = p_2. We can show that mr_i(p_1) ≥ mr_i(p_2) in this case by proceeding similarly as in the case above.
|[p_1, p_2] ∩ S| = 3. Let p_1 < q < p_2 be the three points in [p_1, p_2] ∩ S. Since the length of [a_i, b_i] is at most δ, note that both p_1 and p_2 cannot be in the interval [a_i, b_i]. So below we assume without loss of generality that p_1 ∉ [a_i, b_i]. From the mechanism we know that in this case p_i = q (line 20). Also, as in the cases above, note that if agent i misreports and the new outcome is some x' < p_1 or x' > p_2, then she can only increase her maximum regret (because, again, the new outcome will be outside [p_1, p_2]). Therefore, the only case we need to consider is if x = q and the misreport changes the outcome to p_1 or p_2, and for this case we show that both p_1 and p_2 have a maximum regret that is at least as much as that associated with q (we do not need to consider points outside [p_1, p_2] since we know that these points have a worse maximum regret than any of the points in [p_1, p_2]). Note that if this is true, then we are done as this shows that agent i cannot benefit from misreporting.
To see why the claim is true, consider the profile where agent i reports the single point p_1 instead of K_i and all the other agents’ reports are the same as in K. Let x_1 be the outcome of M for this profile. Similarly, let x_2 be the outcome when agent i reports the single point p_2 instead of K_i. Note that x_1 ≤ q and q ≤ x_2. Now, since, again, for the maximum regret calculations only the endpoints a_i and b_i matter (this is proved in Claim 5, which is in Appendix C), we have that
mr_i(p_1) ≥ mr_i(x_1)   (since q is the point in S closest to K_i and using very weak dominance for single reports)
          ≥ mr_i(q)     (since x_1 ≤ q and x = q, by very weak dominance for single reports)
Similarly, we can show that mr_i(p_2) ≥ mr_i(q). Hence, agent i will not derive any benefit from misreporting her candidate locations.
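The endpoint fact leaned on throughout (Claim 5) can be sanity-checked numerically: each candidate location p ∈ [a_i, b_i] contributes regret |x − p| to the outcome x, and the maximum over the interval is attained at a_i or b_i. A small sketch with generic values:

```python
# Check that max_{p in [a, b]} |x - p| is attained at an endpoint, so the
# maximum regret reduces to mr(x) = max(|x - a|, |x - b|).
def mr(x, a, b):
    return max(abs(x - a), abs(x - b))

for a, b, x in [(0.3, 0.4, 0.35), (0.3, 0.4, 0.0), (0.3, 0.4, 1.0)]:
    # Sample the interval densely; no interior point beats the endpoints.
    samples = [a + (b - a) * t / 100 for t in range(101)]
    assert all(abs(x - p) <= mr(x, a, b) + 1e-12 for p in samples)
```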
Finally, combining all the cases above, we have that the δ-equispaced-median mechanism is minimax dominant. This concludes the proof of our lemma. ∎
Given the lemma above, we can now prove the following theorem.
Given a δ > 0, the δ-equispaced-median mechanism is anonymous and implements an approximately optimal solution in minimax dominant strategies for a δ-uncertain-facility-location-game.
From Lemma 5 we know that the mechanism is minimax dominant. Therefore, the only thing left to show is that it achieves the stated approximation bound. In order to do this, consider an arbitrary profile of candidate locations K = (K_1, …, K_n), where K_j = [a_j, b_j], and let a_(1) ≤ … ≤ a_(n) and b_(1) ≤ … ≤ b_(n) denote the sorted order of the left endpoints (i.e., the a_j’s) and the right endpoints (i.e.,