1 Introduction
We consider Facility Location games, where facilities are placed on the real line based on the preferences of strategic agents. Such problems are motivated by natural scenarios in Social Choice, where a local authority plans to build a fixed number of public facilities in an area (see e.g., [40]). The choice of the locations is based on the preferences of local people, or agents. Each agent reports her ideal location, and the local authority applies a (deterministic or randomized) mechanism that maps the agents’ preferences to facility locations.
Each agent evaluates the mechanism's outcome according to her connection cost, i.e., the distance of her ideal location to the nearest facility. The agents seek to minimize their connection cost and may misreport their ideal locations in an attempt to manipulate the mechanism. Therefore, the mechanism should be strategyproof, i.e., it should ensure that no agent can benefit from misreporting her location, or even group strategyproof, i.e., resistant to coalitional manipulations. The local authority's objective is to minimize the social cost, namely the sum of agent connection costs. In addition to allocating the facilities in an incentive-compatible way, which is formalized by (group) strategyproofness, the mechanism should result in a socially desirable outcome, which is quantified by the mechanism's approximation ratio to the optimal social cost.
Since Procaccia and Tennenholtz [42] initiated the research agenda of approximate mechanism design without money, Facility Location has served as the benchmark problem in the area, and its approximability by deterministic or randomized strategyproof mechanisms has been studied extensively in virtually all possible variants and generalizations. For instance, previous work has considered multiple facilities on the line (see e.g., [27, 29, 32, 37, 41]) and in general metric spaces (see e.g., [26, 36]), different objectives (e.g., social cost, maximum cost, the norm of agent connection costs [23, 29, 42]), restricted metric spaces more general than the line (cycle, plane, trees, see e.g., [2, 17, 25, 31, 39]), facilities that serve different purposes (see e.g., [34, 35, 48]), and different notions of private information about the agent preferences that should be declared to the mechanism (see e.g., [16, 21, 38] and the references therein).
Due to the significant research interest in the topic, the fundamental and most basic question of approximating the optimal social cost by strategyproof mechanisms for Facility Location on the line is relatively well understood. For a single facility (k = 1), placing the facility at the median location is group strategyproof and optimizes the social cost. For two facilities (k = 2), the best possible approximation ratio is n − 2, and it is achieved by a natural group strategyproof mechanism that places the facilities at the leftmost and the rightmost location [27, 42]. However, for three or more facilities (k ≥ 3), there do not exist any deterministic anonymous strategyproof mechanisms for Facility Location with a bounded (in terms of n and k) approximation ratio [27]. (A mechanism is anonymous if its outcome depends only on the agent locations, not on their identities.) On the positive side, there is a randomized anonymous group strategyproof mechanism with an approximation ratio of n [29]; the result of [29] applies to the more general setting where the agent connection cost is a non-decreasing concave function of the distance to the nearest facility (see also Section 1.1 for a selective list of additional references).
Perturbation Stability in Facility Location Games. Our work aims to circumvent the strong impossibility result of [27] and is motivated by the recent success in the design of polynomial-time exact algorithms for perturbation stable clustering instances (see e.g., [3, 9, 10, 11, 44, 45]). An instance of a clustering problem, like Facility Location (a.k.a. k-median in the optimization and approximation algorithms literature), is γ-perturbation stable (or simply, γ-stable), for some γ ≥ 1, if the optimal clustering is not affected by scaling down any subset of the entries of the distance matrix by a factor at most γ. Perturbation stability was introduced by Bilu and Linial [12] and Awasthi, Blum and Sheffet [7] (and has motivated a significant volume of follow-up work since then, see e.g., [3, 9, 11, 45] and the references therein) in an attempt to obtain a theoretical understanding of the superior practical performance of relatively simple clustering algorithms for well-known hard clustering problems (such as Facility Location in general metric spaces). Intuitively, the optimal clusters of a stable instance are somehow well separated, and thus relatively easy to identify (see also the main properties of stable instances in Section 3). As a result, natural extensions of simple algorithms, like single-linkage (a.k.a. Kruskal's algorithm), can recover the optimal clustering of γ-stable instances in polynomial time, provided that γ ≥ 2 [3], and standard approaches, like dynamic programming (resp. local search), work in almost linear time for sufficiently large γ [1].
In this work, we investigate whether restricting our attention to stable instances allows for improved strategyproof mechanisms with bounded (and ideally, constant) approximation guarantees for Facility Location on the line, with k ≥ 3 facilities. We note that the impossibility results of [27] crucially depend on the fact that the clustering (and the subsequent facility placement) produced by any deterministic mechanism with a bounded approximation ratio must be sensitive to location misreports by certain agents (see also Section 6). Hence, it is very natural to investigate whether the restriction to stable instances allows for some nontrivial approximation guarantees by deterministic or randomized strategyproof mechanisms for Facility Location on the line.
To study the question above, we adapt to the real line the stricter notion of metric stability [3], where the definition also requires that the distances form a metric after the perturbation. (The notion of metric stability is “stricter” than standard stability in the sense that the former excludes some perturbations allowed by the latter. Hence, the class of metric stable instances includes the class of stable instances. More generally, the stricter a notion of stability is, the larger the class of instances qualified as stable, and the more general the positive results that one gets. Similarly, for any γ' ≤ γ, the class of (metric) γ'-stable instances includes the class of (metric) γ-stable instances. Hence, a smaller value of γ makes a positive result stronger and more general.) In our notion of linear stability, the instances should retain their linear structure after a perturbation. Hence, a γ-perturbation of a linear Facility Location instance is obtained by moving any subset of pairs of consecutive agent locations closer to each other by a factor at most γ. We say that a Facility Location instance is γ-stable, if the original instance and any γ-perturbation of it admit the same unique optimal clustering; as for the optimal centers, in case of ties, the center of an optimal cluster is determined by a fixed deterministic tie-breaking rule, e.g., the center is always the left median point of the cluster (see also Definition 1).
Interestingly, for γ sufficiently large, γ-stable instances of Facility Location have additional structure that one could exploit towards the design of strategyproof mechanisms with good approximation guarantees (see also Section 3). E.g., each agent location is more than γ − 1 times closer to the nearest facility than to any location in a different cluster (Proposition 1). Moreover, for γ large enough, the distance between any two consecutive clusters is larger than their diameters (Lemma 1).
From a conceptual viewpoint, our work is motivated by a reasoning very similar to that discussed by Bilu, Daniely, Linial and Saks [13] and summarized in “clustering is hard only when it doesn’t matter” by Roughgarden [46]. In a nutshell, we expect that when public facilities (such as schools, libraries, hospitals, representatives) are to be allocated to some communities (e.g., cities, villages or neighborhoods, as represented by the locations of agents on the real line), the communities are already well formed, relatively easy to identify and difficult to radically reshape by small distance perturbations or agent location misreports. Moreover, in natural practical applications of Facility Location games, agents tend to misreport “locally” (i.e., they tend to declare a different ideal location in their neighborhood, trying to manipulate the location of the local facility), which usually does not affect the cluster formation. In practice, this happens because the agents do not have enough knowledge about locations in other neighborhoods, and because “large non-local” misreports are usually easy to identify by combining publicly available information about the agents (e.g., occupation, address, habits, lifestyle). Hence, we believe that the class of γ-stable instances, especially for relatively small values of γ, provides a reasonably accurate abstraction of the instances of Facility Location games that a mechanism designer is more likely to deal with in practice. Thus, we feel that our work takes a small first step towards justifying that (not only clustering but also) strategyproof facility location is hard only when it doesn’t matter.
Contributions and Techniques. Our conceptual contribution is that we initiate the study of efficient (wrt. their approximation ratio for the social cost) strategyproof mechanisms for the large and natural class of γ-stable instances of Facility Location on the line. Our technical contribution is that we show the existence of deterministic (resp. randomized) strategyproof mechanisms with a bounded (resp. constant) approximation ratio for γ-stable instances, for γ sufficiently large, and any number of facilities k. Moreover, we show that the optimal solution is strategyproof for such stable instances, if the optimal clustering does not include any singleton clusters (which is likely to be the case in virtually all practical applications). To provide evidence that the restriction to stable instances does not make the problem trivial, we strengthen the impossibility result of Fotakis and Tzamos [27], so that it applies to γ-stable instances, for sufficiently small γ. Specifically, we show that for any k ≥ 3 and any such γ, there do not exist any deterministic anonymous strategyproof mechanisms for Facility Location on γ-stable instances with a bounded (in terms of n and k) approximation ratio.
At the conceptual level, we interpret the stability assumption as a prior on the class of true instances that the mechanism should be able to deal with. Namely, we assume that the mechanism only has to deal with stable true instances, a restriction motivated by (and fully consistent with) how the stability assumption is used in the literature on efficient algorithms for stable clustering (see e.g., [3, 9, 11, 12], where the algorithms are analyzed for stable instances only). More specifically, our mechanisms expect as input a declared instance such that in the optimal clustering, the distance between any two consecutive clusters is sufficiently larger than the diameters of the two clusters (a.k.a. the cluster-separation property, see Lemma 1). This condition is necessary (but not sufficient) for stability and can be easily checked. If the declared instance does not satisfy the cluster-separation property, our mechanisms do not allocate any facilities. Otherwise, our mechanisms allocate facilities (even if the instance is not stable). We prove that for all stable true instances (with the exact stability factor depending on the mechanism), if agents can only deviate so that the declared instance satisfies the cluster-separation property (and does not have singleton clusters, for the optimal mechanism), our mechanisms are strategyproof and achieve the desired approximation guarantee. Hence, if we restrict ourselves to stable true instances and to agent deviations that do not obviously violate stability, our mechanisms should only deal with stable declared instances, due to strategyproofness. On the other hand, if non-stable true instances may occur, the mechanisms cannot distinguish between a stable true instance and a declared instance that appears to be stable, but is obtained from a non-stable instance through location misreports.
The restriction that the agents of a stable instance are only allowed to deviate so that the declared instance satisfies the cluster-separation property (and does not have any singleton clusters, for the optimal mechanism) bears a strong conceptual resemblance to the notion of strategyproof mechanisms with local verification (see e.g., [4, 6, 14, 15, 28, 30, 33]), where the set of each agent's allowable deviations is restricted to a so-called correspondence set, which typically depends on the agent's true type, but not on the types of the other agents. Instead of restricting the correspondence set of each individual agent independently, we impose a structural condition on the entire declared instance, which restricts the set of the agents' allowable deviations, but in a global and observable sense. As a result, we can actually implement our notion of verification, by checking some simple properties of the declared instance, instead of just assuming that any deviation outside an agent's correspondence set will be caught and penalized (which is the standard approach in mechanisms with local verification [4, 14, 15], but see e.g., [6, 26] for noticeable exceptions).
On the technical side, we start, in Section 3, with some useful properties of stable instances of Facility Location on the line. Among others, we show (i) the cluster-separation property (Lemma 1), which states that in any γ-stable instance, the distance between any two consecutive clusters is larger than their diameters by a factor increasing with γ; and (ii) the so-called no direct improvement from singleton deviations property (Lemma 2), i.e., that in any stable instance, no agent who deviates to a location that becomes a singleton cluster in the optimal clustering of the resulting instance can improve her connection cost through the facility of that singleton cluster.
In Section 4, we show that for stable instances whose optimal clustering does not include any singleton clusters, the optimal solution is strategyproof (Theorem 4.1). For the analysis, we observe that since placing the facility at the median location of any fixed cluster is strategyproof, a misreport cannot be profitable for an agent, unless it results in a different optimal clustering. The key step is to show that for stable instances without singleton clusters, a profitable misreport cannot change the optimal clustering, unless the instance obtained from the misreport violates the clusterseparation property. To the best of our knowledge, the idea of penalizing (and thus, essentially forbidding) a whole class of potentially profitable misreports by identifying how they affect a key structural property of the original instance, which becomes possible due to our restriction to stable instances, has not been used before in the design of strategyproof mechanisms for Facility Location (see also the discussion above about resemblance to mechanisms with verification).
We should also motivate our restriction to stable instances without singleton clusters in their optimal clustering. So, let us consider the rightmost agent of an optimal cluster C_j in a stable instance x. No matter the stability factor γ, it is possible that this agent performs a so-called singleton deviation. Namely, she deviates to a remote location y (potentially very far away from any location in x), which becomes a singleton cluster in the optimal clustering of the resulting instance. Such a singleton deviation might cause cluster C_j to merge with (possibly part of) the next cluster, which in turn, might bring the median of the new cluster much closer to the deviating agent's true location (see also Fig. 1 in Section 3). It is not hard to see that if we stick to the optimal solution, where the facilities are located at the median of each optimal cluster, there are stable instances, with arbitrarily large stability factor γ, where some agents can deviate to a remote location and gain, by becoming singleton clusters, while maintaining the desirable stability factor of the declared instance (see also Fig. 1).
To deal with singleton deviations, we should place the facility either at a location close to an extreme one, as we do in Section 5 with the AlmostRightmost mechanism, or at a random location, as we do in Section 7 with the Random mechanism. (Another natural way to deal efficiently with singleton deviations is through some means of location verification, such as winner-imposing verification [26] or symmetric verification [28, 30]. Adding e.g., winner-imposing verification to the optimal mechanism, discussed in Section 4, results in a strategyproof mechanism for stable instances whose optimal clustering may include singleton clusters.) More specifically, in Section 5, we show that the AlmostRightmost mechanism, which places the facility of any non-singleton optimal cluster at the location of the second rightmost agent, is strategyproof for stable instances of Facility Location (even if their optimal clustering includes singleton clusters) and achieves a bounded approximation ratio (Theorem 5.1). Moreover, in Section 7, we show that the Random mechanism, which places the facility of any optimal cluster at a location chosen uniformly at random, is strategyproof for stable instances (again, even if their optimal clustering includes singleton clusters) and achieves a constant approximation ratio (Theorem 7.1).
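To make the two placement rules concrete, the following Python sketch shows the facility-placement step of both mechanisms, applied to an already-computed optimal clustering (represented as a list of sorted agent-location lists, ordered left to right). The function names are ours, the sketch omits the cluster-separation check that the full mechanisms perform, and for the Random mechanism we assume the facility is drawn uniformly over the agent locations of the cluster:

```python
import random

def almost_rightmost(clusters):
    """One facility per cluster: the second-rightmost agent location of each
    non-singleton cluster, and the unique agent location of a singleton."""
    return [c[-2] if len(c) >= 2 else c[0] for c in clusters]

def random_facilities(clusters, rng=random.Random(0)):
    """One facility per cluster, at an agent location chosen uniformly at random."""
    return [rng.choice(c) for c in clusters]
```

For example, `almost_rightmost([[0.0, 1.0, 2.0], [7.0]])` places facilities at `1.0` and `7.0`.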
To obtain a deeper understanding of the challenges behind the design of strategyproof mechanisms for stable instances of Facility Location on the line, we strengthen the impossibility result of [27, Theorem 3.7] so that it applies to γ-stable instances, for sufficiently small γ (Section 6). Through a careful analysis of the image sets of deterministic strategyproof mechanisms, we show that for any k ≥ 3 and any such γ, there do not exist any deterministic anonymous strategyproof mechanisms with a bounded approximation ratio for γ-stable instances of Facility Location on the line (Theorem 6.1). The proof of Theorem 6.1 requires additional ideas and extreme care (and some novelty) in the agent deviations, so as to only consider stable instances, compared with the proof of [27, Theorem 3.7]. Interestingly, singleton deviations play a crucial role in the proof of Theorem 6.1.
1.1 Other Related Work
Approximate mechanism design without money for variants and generalizations of Facility Location games on the line has been a very active and productive area of research in the last decade.
Previous work has shown that deterministic strategyproof mechanisms can achieve a bounded approximation ratio for Facility Location on the line only if we have at most two facilities [27, 42]. Notably, stable (called well-separated in [27]) instances with k + 1 agents play a key role in the proof of inapproximability of Facility Location by deterministic anonymous strategyproof mechanisms [27, Theorem 3.7]. On the other hand, randomized mechanisms are known to achieve a better approximation ratio for two facilities [37], a constant approximation ratio for instances with n = k + 1 agents [19, 29], and an approximation ratio of n for any k [29]. Fotakis and Tzamos [26] considered winner-imposing randomized mechanisms that achieve an approximation ratio of 4k for Facility Location in general metric spaces. In fact, the approximation ratio can be further improved using the analysis of [5].
For the objective of maximum agent cost, Alon et al. [2] almost completely characterized the approximation ratios achievable by randomized and deterministic strategyproof mechanisms for Facility Location in general metrics and rings. Fotakis and Tzamos [29] presented a constant-approximate randomized group strategyproof mechanism for Facility Location on the line and the maximum cost objective. For Facility Location on the line and the objective of minimizing the sum of squares of the agent connection costs, Feldman and Wilf [23] proved that the best approximation ratio is 1.5 for randomized and 2 for deterministic mechanisms. Golomb and Tzamos [32] presented tight (resp. almost tight) additive approximation guarantees for locating a single (resp. multiple) facilities on the line, for the objectives of the maximum cost and the social cost.
Regarding the application of perturbation stability, we follow the approach of beyond worst-case analysis (see e.g., [44, 45]), where researchers seek a theoretical understanding of the superior practical performance of certain algorithms by formally analyzing them on practically relevant instances. The beyond worst-case approach is not new for Algorithmic Mechanism Design. Bayesian analysis, where the bidder valuations are drawn as independent samples from a distribution known to the mechanism, is standard in revenue maximization when we allocate private goods (see e.g., [43]) and has led to many strong and elegant results for social welfare maximization in combinatorial auctions by truthful posted price mechanisms (see e.g., [18, 22]). However, in this work, instead of assuming (similarly to Bayesian analysis) that the mechanism designer has a relatively accurate knowledge of the distribution of agent locations on the line (and using e.g., an appropriately optimized percentile mechanism [49]), we employ a deterministic restriction on the class of instances (namely, perturbation stability), and investigate if deterministic (resp. randomized) strategyproof mechanisms with a bounded (resp. constant) approximation ratio are possible for locating any number k of facilities on such instances. To the best of our knowledge, the only previous work where the notion of perturbation stability is applied to Algorithmic Mechanism Design (to combinatorial auctions, in particular) is [24] (but see also [8, 20], where the similar-in-spirit assumption of endowed valuations was applied to combinatorial markets).
2 Notation, Definitions and Preliminaries
We let [n] = {1, …, n}. For any x, y ∈ ℝ, we let d(x, y) = |x − y| be the distance of locations x and y on the real line. For a tuple x = (x_1, …, x_n), we let x_{-i} denote the tuple x without coordinate x_i. For a nonempty set S of indices, we let x_S = (x_i)_{i∈S} and x_{-S} = (x_i)_{i∉S}. We write (x_{-i}, a) to denote the tuple x with a in place of x_i, (x_{-{i,j}}, a, b) to denote the tuple x with a in place of x_i and b in place of x_j, and so on. For a random variable X, E[X] denotes the expectation of X. For an event E in a sample space, Pr[E] denotes the probability that E occurs.

Instances. We consider Facility Location with k facilities and n agents on the real line. We let N = [n] be the set of agents. Each agent i ∈ N resides at a location x_i ∈ ℝ, which is i's private information. We usually refer to a locations profile x = (x_1, …, x_n), with x_1 ≤ ⋯ ≤ x_n, as an instance. Slightly abusing the notation, we use x_i to refer both to agent i's location and sometimes to the agent (i.e., the strategic entity) herself.
Mechanisms. A deterministic mechanism M for Facility Location maps an instance x to a tuple M(x) = (c_1, …, c_k), with c_1 ≤ ⋯ ≤ c_k, of facility locations. We let M(x) denote the outcome of M in instance x, and let M_j(x) denote c_j, i.e., the j-th smallest coordinate in M(x). We write y ∈ M(x) to denote that M places a facility at location y. A randomized mechanism maps an instance x to a probability distribution over tuples (c_1, …, c_k).

Connection Cost and Social Cost. Given a tuple c = (c_1, …, c_k), c_1 ≤ ⋯ ≤ c_k, of facility locations, the connection cost of agent i wrt. c, denoted cost(x_i, c), is cost(x_i, c) = min_{1 ≤ j ≤ k} d(x_i, c_j). Given a deterministic mechanism M and an instance x, cost(x_i, M(x)) denotes the connection cost of agent i wrt. the outcome of M on x. If M is a randomized mechanism, the expected connection cost of agent i is E_{c ∼ M(x)}[cost(x_i, c)]. The social cost of a facility locations profile c for an instance x is SC(c, x) = Σ_{i ∈ N} cost(x_i, c). The social cost of a deterministic mechanism M for an instance x is SC(M(x), x). The expected social cost of a randomized mechanism M on instance x is E_{c ∼ M(x)}[SC(c, x)].
The optimal social cost for an instance x is SC*(x) = min_c SC(c, x). For Facility Location on the line, the optimal social cost (and a corresponding optimal facility locations profile) can be computed in polynomial time by standard dynamic programming.
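For intuition, the dynamic program can be sketched as follows in Python. This is a straightforward O(k·n²) variant of the standard approach (the paper's exact running-time bound may rely on a faster formulation); the left-median tie-breaking matches the convention used later in the text:

```python
def cluster_cost(xs, i, j):
    """Cost of serving the sorted locations xs[i:j] by a single facility
    placed at their (left) median, which minimizes the sum of distances."""
    pts = xs[i:j]
    m = pts[(len(pts) - 1) // 2]  # left median
    return sum(abs(x - m) for x in pts)

def optimal_social_cost(xs, k):
    """dp[j][t] = optimal cost of serving the j leftmost locations with t facilities."""
    xs = sorted(xs)
    n = len(xs)
    INF = float("inf")
    dp = [[INF] * (k + 1) for _ in range(n + 1)]
    dp[0][0] = 0
    for t in range(1, k + 1):
        for j in range(1, n + 1):
            for i in range(j):  # xs[i:j] forms the t-th (rightmost) cluster
                if dp[i][t - 1] < INF:
                    dp[j][t] = min(dp[j][t], dp[i][t - 1] + cluster_cost(xs, i, j))
    return dp[n][k]
```

For instance, `optimal_social_cost([0, 1, 10, 11], 2)` returns `2`, obtained by the clusters {0, 1} and {10, 11}.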
Approximation Ratio. A mechanism M has an approximation ratio of ρ ≥ 1, if SC(M(x), x) ≤ ρ · SC*(x) for any instance x. We say that the approximation ratio ρ is bounded, if ρ is bounded from above either by a constant or by a (computable) function of n and k.
Strategyproofness. A deterministic mechanism M is strategyproof, if no agent can benefit from misreporting her location. Formally, M is strategyproof, if for all location profiles x, every agent i, and all locations y, cost(x_i, M(x)) ≤ cost(x_i, M(x_{-i}, y)). Similarly, a randomized mechanism M is strategyproof (in expectation), if for all location profiles x, every agent i, and all locations y, E[cost(x_i, M(x))] ≤ E[cost(x_i, M(x_{-i}, y))].
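As a sanity check of the definition, one can probe a deterministic mechanism for profitable deviations over a finite grid of candidate misreports. The helper below is our own illustration (a heuristic: finding no deviation on a grid does not prove strategyproofness):

```python
def profitable_deviation_exists(mechanism, xs, candidate_locations):
    """Brute-force check over a finite grid of misreports: returns True iff some
    agent has a declaration that strictly lowers her true connection cost."""
    def cost(x, facilities):
        return min(abs(x - f) for f in facilities)

    for i, true_loc in enumerate(xs):
        truthful_cost = cost(true_loc, mechanism(list(xs)))
        for a in candidate_locations:
            declared = list(xs[:i]) + [a] + list(xs[i + 1:])
            if cost(true_loc, mechanism(declared)) < truthful_cost - 1e-12:
                return True
    return False
```

For example, the single-facility median mechanism admits no profitable deviation on such a grid, while placing the facility at the average of the reports does.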
Clusterings. A clustering (or k-clustering, if k is not clear from the context) of an instance x is any partitioning C = (C_1, …, C_k) of x into k sets of consecutive agent locations. We index clusters from left to right, i.e., C_1 consists of the |C_1| leftmost locations, C_2 of the next |C_2| locations, and so on. We refer to a cluster that includes only one agent (i.e., with |C_j| = 1) as a singleton cluster. We sometimes write C(x) to highlight that we consider C as a clustering of instance x.
Two clusters C_j and C'_j are identical, denoted C_j ≡ C'_j, if they include exactly the same locations. Two clusterings C and C' of an instance x are the same, if C_j ≡ C'_j for all j ∈ [k]. Abusing the notation, we say that a clustering C of an instance x is identical to a clustering C' of a perturbation x' of x (see also Definition 1), if C_j and C'_j consist of the locations of the same agents, for all j ∈ [k].
We let l_j and r_j denote the leftmost and the rightmost agent of each cluster C_j. Under this notation, C_j ⊆ [l_j, r_j], for all j ∈ [k]. Exploiting the linearity of instances, we extend this notation to refer to other agents by their relative location in each cluster. Namely, l'_j (resp. r'_j) is the second agent from the left (resp. right) of cluster C_j. The diameter of a cluster C_j is D(C_j) = d(l_j, r_j). The distance of clusters C_j and C_{j'} is d(C_j, C_{j'}) = min_{x ∈ C_j, y ∈ C_{j'}} d(x, y), i.e., the minimum distance between a location in C_j and a location in C_{j'}.
A facility locations (or centers) profile c = (c_1, …, c_k) induces a clustering of an instance x by assigning each agent / location to the cluster of the facility closest to it. Formally, for each j ∈ [k], C_j = { x_i : cost(x_i, c) = d(x_i, c_j) } (with ties broken in favor of the smaller index j). The optimal clustering of an instance x is the clustering of x induced by the facility locations profile with minimum social cost.
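The induced clustering can be computed directly from the definition; here is a minimal Python sketch (tie-breaking toward the lower-indexed facility is our assumption, matching the convention stated above):

```python
def induced_clustering(xs, centers):
    """Partition the locations xs into one (possibly empty) cluster per center,
    assigning each location to its nearest center (ties to the smaller index).
    On the line, sorting xs first yields clusters of consecutive locations."""
    clusters = [[] for _ in centers]
    for x in sorted(xs):
        j = min(range(len(centers)), key=lambda t: (abs(x - centers[t]), t))
        clusters[j].append(x)
    return clusters
```

For example, `induced_clustering([10, 11, 0, 1], [0, 11])` yields `[[0, 1], [10, 11]]`.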
The social cost of a clustering C induced by a facility locations profile c on an instance x is simply SC(c, x), i.e., the social cost of c for x. We sometimes refer to the social cost of a clustering C for an instance x, without any explicit reference to the corresponding facility locations profile. Then, we refer to the social cost SC(C, x) = Σ_{j ∈ [k]} Σ_{x_i ∈ C_j} d(x_i, m_j), where each facility m_j is located at the median location of C_j (the left median location of C_j, if |C_j| is even).
We often consider certain structural changes in a clustering due to agent deviations. Let C be a clustering of an instance x which, due to an agent deviation, changes to a different clustering C'. We say that cluster C_j is split when C changes to C', if not all agents in C_j are served by the same facility in C'. We say that C_j is merged in C', if all agents in C_j are served by the same facility, but this facility also serves some agents not in C_j.
3 Perturbation Stability on the Line: Definition and Properties
Next, we introduce the notion of (linear) γ-stability and prove some useful properties of γ-stable instances of Facility Location, which are repeatedly used in the analysis of our mechanisms.
Definition 1 (Perturbation and Stability)
Let x = (x_1, …, x_n), with x_1 ≤ ⋯ ≤ x_n, be a locations profile. A locations profile x' is a γ-perturbation of x, for some γ ≥ 1, if x'_1 = x_1 and, for every i ∈ {2, …, n}, (x_i − x_{i−1})/γ ≤ x'_i − x'_{i−1} ≤ x_i − x_{i−1}. A Facility Location instance x is γ-perturbation stable (or simply, γ-stable), if x has a unique optimal clustering C and every γ-perturbation x' of x has the same unique optimal clustering C.
Namely, a γ-perturbation of an instance is obtained by moving any subset of pairs of consecutive locations closer to each other by a factor at most γ. A Facility Location instance x is γ-stable, if x and any γ-perturbation x' of x admit the same unique optimal clustering (where clustering identity for x and x' is understood as explained in Section 2). We consistently select the optimal center of each optimal cluster C_j with an even number of points to be the left median point of C_j.
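Under the assumption that a γ-perturbation shrinks each consecutive gap by an individual factor in [1, γ] while keeping the leftmost location fixed, a perturbation can be generated as in the Python sketch below (Definition 1 remains the authoritative statement; the function name is ours):

```python
def perturb(xs, shrink):
    """xs: sorted agent locations; shrink[i] in [1, gamma] is the factor by
    which the gap between xs[i] and xs[i+1] is divided. Returns the perturbed
    (still sorted) locations profile, anchored at xs[0]."""
    ys = [xs[0]]
    for gap, s in zip((b - a for a, b in zip(xs, xs[1:])), shrink):
        ys.append(ys[-1] + gap / s)
    return ys
```

For example, `perturb([0.0, 2.0, 6.0], [2.0, 1.0])` halves the first gap and keeps the second, giving `[0.0, 1.0, 5.0]`.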
Our notion of linear perturbation stability naturally adapts the notion of metric perturbation stability [3, Definition 2.5] to the line. We note that the class of γ-stable linear instances, according to Definition 1, is at least as large as the class of metric γ-stable linear instances, according to [3, Definition 2.5]. Similarly to [3, Theorem 3.1] (see also [46, Lemma 7.1] and [7, Corollary 2.3]), we can show that for all γ ≥ 1, every γ-stable instance x, which admits an optimal clustering C with optimal centers c_1, …, c_k, satisfies the following γ-center proximity property: for all cluster pairs C_j and C_{j'}, with j ≠ j', and all locations x_i ∈ C_j, d(x_i, c_{j'}) > γ · d(x_i, c_j).
We repeatedly use the following immediate consequence of center proximity (see also [46, Lemma 7.2]). The proof generalizes the proof of [46, Lemma 7.2] to any γ ≥ 1.
Proposition 1
Let γ ≥ 1 and let x be any γ-stable instance, with unique optimal clustering C and optimal centers c_1, …, c_k. Then, for all clusters C_j and C_{j'}, with j ≠ j', and all locations x_i ∈ C_j and y ∈ C_{j'}, d(x_i, y) > (γ − 1) · d(x_i, c_j).
The following observation, which allows us to treat stability factors multiplicatively, is an immediate consequence of Definition 1.
Observation 1
Every γ-perturbation followed by a γ'-perturbation of a locations profile can be implemented by a (γγ')-perturbation, and vice versa. Hence, a γ-stable instance remains (γ/γ')-stable after a γ'-perturbation, with 1 ≤ γ' ≤ γ, is applied to it.
We next show that for γ large enough, the optimal clusters of a γ-stable instance are well separated, in the sense that the distance between two consecutive clusters is larger than their diameters.
Lemma 1 (ClusterSeparation Property)
For any γ-stable instance on the line with optimal clustering C, and all consecutive clusters C_j and C_{j+1}, the distance d(C_j, C_{j+1}) exceeds max{D(C_j), D(C_{j+1})} by a factor that grows linearly with γ.
The cluster-separation property of Lemma 1 was first obtained in [1] as a consequence of center proximity. For completeness, in Section 0.A, we present a different proof that exploits the linear structure of the instance. For γ large enough, we get that:
Corollary 1
Let γ be sufficiently large and let x be any γ-stable instance with unique optimal clustering C. Then, for all consecutive clusters C_j and C_{j+1}, d(C_j, C_{j+1}) > max{D(C_j), D(C_{j+1})}.
The following is an immediate consequence of the clusterseparation property in Lemma 1.
Observation 2
Let x be a Facility Location instance with a clustering C such that for any two clusters C_j and C_{j'}, d(C_j, C_{j'}) > max{D(C_j), D(C_{j'})}. Then, if in the optimal clustering of x there is a facility at the location of some x_i ∈ C_j, no agent in a different cluster C_{j'} is served by the facility at x_i.
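The cluster-separation check that our mechanisms perform on the declared instance can be implemented in a few lines of Python. The separation factor is left as a parameter, since the exact threshold depends on the stability factor γ assumed by each mechanism; the function name is ours:

```python
def satisfies_cluster_separation(clusters, factor=1.0):
    """clusters: list of sorted location lists, ordered left to right.
    True iff d(C_j, C_{j+1}) > factor * max(D(C_j), D(C_{j+1})) for all
    consecutive cluster pairs (diameter D = rightmost - leftmost)."""
    for A, B in zip(clusters, clusters[1:]):
        gap = B[0] - A[-1]
        if gap <= factor * max(A[-1] - A[0], B[-1] - B[0]):
            return False
    return True
```

A mechanism built on this check would refuse to allocate facilities whenever the function returns False, as described in Section 1.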
Next, we establish the so-called no direct improvement from singleton deviations property, used to show the strategyproofness of the AlmostRightmost and Random mechanisms. Namely, we show that in any stable instance, no agent deviating to a location that becomes a singleton cluster in the optimal clustering of the resulting instance can improve her connection cost through the facility of that singleton cluster. The proof is deferred to Appendix 0.B.
Lemma 2
Let x be a γ-stable instance, with γ large enough, with optimal clustering C and cluster centers c = (c_1, …, c_k), and let i be an agent and y a location such that {y} is a singleton cluster in the optimal clustering of the resulting instance x' = (x_{-i}, y). Then, d(x_i, y) ≥ cost(x_i, c).
The following shows that in a γ-stable instance x, with γ large enough, an agent cannot form a singleton cluster, unless she deviates by a distance larger than the diameter of her cluster in x's optimal clustering.
Lemma 3
Let x be any γ-stable instance, with γ large enough, and with optimal clustering C. Let i be any agent and y any location such that {y} is a singleton cluster in the optimal clustering of the instance x' = (x_{-i}, y), where i has deviated to y. Then, d(x_i, y) > D(C_j), where C_j is the cluster of x_i in C.
Proof (sketch)
Initially, we show that a clustering of the instance x' = (x_{-i}, y) cannot be optimal and contain {y} as a singleton cluster, unless some agents of different clusters of C are clustered together in it. To this end, we use the lower bound on the distance between different clusters of γ-stable instances shown in Lemma 1. Then, using stability arguments, i.e., that the optimal clustering should not change even when we decrease the distances between consecutive agents by a factor of γ, we show that the agents clustered together with agents from a different cluster of C experience an increase in cost that exceeds the additional cost of serving location y by the facility of agent i's cluster in C. Hence, retaining the clustering C and serving location y from that facility would have a smaller cost than the supposedly optimal clustering, a contradiction. The complete proof follows by a careful case analysis and can be found in Appendix 0.C. ∎
4 The Optimal Solution is Strategyproof for Stable Instances
We next show that the Optimal mechanism, which allocates the facilities optimally, is strategyproof for stable instances of Facility Location whose optimal clustering does not include any singleton clusters. More specifically, in this section, we analyze Mechanism 1.
In general, due to the incentive compatibility of the median location in a single cluster, a deviation can be profitable only if it results in a clustering different from the optimal clustering of the true instance. If the stability factor $\gamma$ is sufficiently large, stability implies that the optimal clusters are so well separated that any attempt to alter the optimal clustering (without introducing singleton clusters and without violating the cluster-separation property, which is necessary for stability) results in an increased cost for the deviating agent. We should highlight that Mechanism 1 may also "serve" non-stable instances that satisfy the cluster-separation property. We next prove that the mechanism is strategyproof if the true instance is stable and its optimal clustering does not include any singleton clusters, as long as the agent deviations do not introduce any singleton clusters and do not result in instances that violate the cluster-separation property (i.e., they are served by the mechanism).
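Since Mechanism 1 is only referenced here, the following minimal Python sketch (all function names are ours, and the cluster-separation check that gates which instances are served is elided) illustrates its core computation: on the line, the clusters of an optimal solution are contiguous intervals of the sorted locations, so an optimal $k$-median clustering can be found by dynamic programming, after which a facility is placed at the median of each cluster.

```python
def cluster_cost(xs, i, j):
    # Cost of serving the sorted locations xs[i:j] by a single facility,
    # which is optimally placed at their (upper) median.
    pts = xs[i:j]
    m = pts[len(pts) // 2]
    return sum(abs(x - m) for x in pts)

def optimal_clustering(xs, k):
    # O(n^2 k) dynamic program over contiguous clusters:
    # best[j][f] = min cost of serving the first j locations with f facilities.
    xs = sorted(xs)
    n = len(xs)
    INF = float("inf")
    best = [[INF] * (k + 1) for _ in range(n + 1)]
    cut = [[0] * (k + 1) for _ in range(n + 1)]
    best[0][0] = 0.0
    for j in range(1, n + 1):
        for f in range(1, k + 1):
            for i in range(f - 1, j):
                c = best[i][f - 1] + cluster_cost(xs, i, j)
                if c < best[j][f]:
                    best[j][f], cut[j][f] = c, i
    # Recover the optimal partition from the stored cut points.
    clusters, j = [], n
    for f in range(k, 0, -1):
        i = cut[j][f]
        clusters.append(xs[i:j])
        j = i
    return list(reversed(clusters)), best[n][k]

def optimal_mechanism(xs, k):
    # Place one facility at the median of each optimal cluster; the actual
    # Mechanism 1 additionally declines instances whose optimal clustering
    # violates the cluster-separation property.
    clusters, _ = optimal_clustering(xs, k)
    return [c[len(c) // 2] for c in clusters]
```

On the instance $(0, 1, 2, 10, 11, 12)$ with $k = 2$, the sketch recovers the two well-separated clusters and places facilities at their medians $1$ and $11$.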
Theorem 4.1
The Optimal mechanism applied to stable instances of Facility Location without singleton clusters in their optimal clustering is strategyproof and minimizes the social cost.
Proof
We first recall some of the notation about clusterings, introduced in Section 2. Specifically, for a clustering $\vec{C} = (C_1, \ldots, C_k)$ of an instance $\vec{x}$ with centers $c_1, \ldots, c_k$, the cost of an agent (or a location) is her distance to the center of her cluster (resp. to the nearest center). The cost of a set of agents in a clustering is the total cost of its agents. Finally, the cost of an instance in a clustering is the total cost of all its agents. This general notation allows us to refer to the cost of the same clustering for different instances. I.e., if $\vec{C}'$ is the optimal clustering of $\vec{x}'$, then the cost of an instance $\vec{x}$ in the clustering $\vec{C}'$ is computed by selecting the same centers as in the clustering of $\vec{x}'$.
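As a concrete reading of this notation (a sketch with names of our choosing), the cost of an instance in a fixed clustering keeps each agent attached to the center of her own cluster, even when another center happens to be closer:

```python
def cost_in_clustering(clusters, centers):
    # Cost of an instance in a *fixed* clustering: each agent pays the
    # distance to the center of her own cluster, not the nearest center.
    return sum(abs(x - c)
               for cluster, c in zip(clusters, centers)
               for x in cluster)
```

For example, evaluating some clustering $\vec{C}'$ on a different instance amounts to grouping that instance's agents by $\vec{C}'$'s clusters and calling the function with $\vec{C}'$'s centers.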
The fact that whenever Optimal outputs facilities, they minimize the social cost is straightforward. So, we only need to establish strategyproofness. To this end, we show the following: Let $\vec{x}$ be any perturbation stable Facility Location instance with optimal clustering $\vec{C}$. For any agent $i$ and any location $y$, let $\vec{C}'$ be the optimal clustering of the instance $\vec{x}' = (\vec{x}_{-i}, y)$ resulting from the deviation of $i$ from $x_i$ to $y$. Then, if $y$ does not form a singleton cluster in $\vec{C}'$, either the cost of $\vec{C}$ for $\vec{x}'$ is at most the cost of $\vec{C}'$, or there is a pair of clusters for which the cluster-separation property is violated.
So, we let agent $i$ deviate to a location $y$, resulting in $\vec{x}' = (\vec{x}_{-i}, y)$ with optimal clustering $\vec{C}'$. Since $\{y\}$ is not a singleton cluster, $y$ is clustered together with agents belonging to one or two clusters of $\vec{C}$, say either to cluster $C_j$ or to clusters $C_j$ and $C_{j+1}$. By the optimality of $\vec{C}$ and $\vec{C}'$, the number of facilities serving these agents in $\vec{C}'$ is no less than the number of facilities serving them in $\vec{C}$. Hence, there is at least one facility in either $C_j$ or $C_{j+1}$.
Wlog., suppose that a facility is allocated to an agent of $C_j$ in $\vec{C}'$. By Corollary 1 and Observation 2, no agent outside $C_j$ is served by a facility in $C_j$ in $\vec{C}'$. Thus we get the following cases about what happens with the optimal clustering $\vec{C}'$ of instance $\vec{x}'$:
Case 1: $y$ is not allocated a facility in $\vec{C}'$. This can happen in one of two ways:

Case 1a: $y$ is clustered together with some agents from a cluster $C_j$, and no facility placed in $C_j$ serves agents outside $C_j$ in $\vec{C}'$.

Case 1b: $y$ is clustered together with some agents from a cluster $C_j$, and at least one of the facilities placed in $C_j$ serves agents outside $C_j$ in $\vec{C}'$.

Case 2: $y$ is allocated a facility in $\vec{C}'$. This can happen in one of two ways:

Case 2a: $y$'s facility only serves agents that belong in $C_j$ (by optimality, the facility must be placed at the median location of the new cluster).

Case 2b: In $\vec{C}'$, $y$'s facility serves agents that belong in both $C_j$ and $C_{j+1}$.
We next show that, for the resulting instance $\vec{x}'$, the cost of the original clustering $\vec{C}$ is less than the cost of the clustering $\vec{C}'$. Hence, mechanism Optimal would also select clustering $\vec{C}$ for $\vec{x}'$, which would make $i$'s deviation to $y$ nonprofitable. Since $i$'s deviation to $y$ is profitable, her cost in $\vec{C}'$ is smaller than her cost in $\vec{C}$. Hence, it suffices to show that:
(1)  
We first consider Case 1a and Case 2a, i.e., the cases where $\vec{C}'$ allocates facilities to agents of $C_j$ that serve only agents in $C_j \cup \{y\}$. Note that in Case 2a, the facility can also be located outside of $C_j$ and serve only $y$. We can treat this case as Case 1a, since it is equivalent to placing the facility inside $C_j$ and serving $y$ from there.
In Case 1a and Case 2a, we note that (1) holds if the clustering $\vec{C}'$ allocates a single facility to the agents in $C_j$, because that facility is allocated to the median of the new cluster, whereas in $\vec{C}$ the facility of $C_j$ is at the median of $C_j$, which is optimal for $C_j$. So, we focus on the most interesting case where the agents in $C_j$ are allocated at least two facilities. We observe that (1) follows from:
(2)  
(3) 
To establish (2) and (3), we first consider the valid perturbation of the original instance $\vec{x}$ where all distances between consecutive agent pairs to the left of $C_j$ and between consecutive agent pairs to the right of $C_j$ are scaled down by a factor of $\gamma$. By stability, the clustering $\vec{C}$ remains the unique optimal clustering for the perturbed instance. Moreover, since the agents in $C_j$ are not served by a facility outside $C_j$ in either clustering, and since all distances outside $C_j$ are scaled down by $\gamma$, while all distances within $C_j$ remain the same, the costs of the clusterings $\vec{C}$ and $\vec{C}'$ for the perturbed instance can be related to their costs for $\vec{x}$. Using the optimality of $\vec{C}$ for the perturbed instance, we obtain:
(4)  
(5) 
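The perturbation used in this argument can be made concrete. Below is a minimal sketch (function name ours), under the assumption from Section 2 that a valid $\gamma$-perturbation may scale consecutive inter-agent distances down by a factor of at most $\gamma$; here we shrink exactly the gaps that are not strictly inside the cluster, given as the index range [lo, hi) of the sorted locations:

```python
def perturb_outside(xs, lo, hi, gamma):
    # Valid gamma-perturbation: scale down by 1/gamma every consecutive gap
    # that is not strictly inside the cluster xs[lo:hi]; gaps between
    # consecutive agents of the cluster are kept unchanged.
    ys = [xs[0]]
    for i in range(len(xs) - 1):
        gap = xs[i + 1] - xs[i]
        if not (lo <= i and i + 1 <= hi - 1):
            gap /= gamma
        ys.append(ys[-1] + gap)
    return ys
```

By $\gamma$-stability, the optimal clustering of the original instance remains the unique optimal clustering of the perturbed one, which is what relates the costs of $\vec{C}$ and $\vec{C}'$ in the inequalities above.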
Moreover, if $C_j$ is served by at least two facilities in $\vec{C}'$, the facility serving $y$ (and some agents of $C_j$) is placed at the median location of the cluster of $\vec{C}'$ that contains $y$. Wlog., we assume that $y$ lies to the left of the median of $C_j$. Then, the decrease in the cost of the agents of $C_j$ to the left of its median due to the additional facility is the same in the original and the perturbed instance, and it bounds from below the total decrease in the cost of $C_j$ due to the additional facility. Hence,
(6) 
We conclude Case 1a and Case 2a by observing that (2) follows directly from (6) and (4).
Finally, we study Case 1b and Case 2b, i.e., the cases where some agents of $C_j$ are clustered together with agents of other clusters in $\vec{C}'$. Let $C'_1$ and $C'_2$ denote the clusters of $\vec{C}'$ that include all agents of $C_j$. By hypothesis, at least one of $C'_1$ and $C'_2$ contains an agent outside $C_j$. Suppose this is true for the cluster $C'_1$. Then, $C'_1$ has a large diameter, since by Corollary 1, the distance of any agent outside $C_j$ to the nearest agent in $C_j$ is larger than $C_j$'s diameter. But since both $C'_1$ and $C'_2$ contain agents of $C_j$, the distance between them is at most $C_j$'s diameter. Therefore, the cluster-separation property is violated. Hence the resulting instance is not stable and Mechanism 1 does not allocate any facilities for it. ∎
5 A Deterministic Mechanism Resistant to Singleton Deviations
Next, we present a deterministic strategyproof mechanism for stable instances of Facility Location whose optimal clustering may include singleton clusters. To make singleton-cluster deviations nonprofitable, cluster merging has to be discouraged by the facility allocation rule. So, we allocate facilities near the edge of each optimal cluster, ending up with a significantly larger approximation ratio and a requirement for larger stability in order to achieve strategyproofness. Specifically, we now need to ensure that no agent can form a singleton cluster close enough to her original location to profit. Moreover, since agents can now gain by splitting their (true) optimal cluster, we need to ensure that such deviations are either nonprofitable or violate the cluster-separation property.