Constant Approximation for k-Median and k-Means with Outliers via Iterative Rounding

11/03/2017 · by Ravishankar Krishnaswamy, et al. · Carnegie Mellon University · University at Buffalo

In this paper, we present a novel iterative rounding framework for many clustering problems. This leads to an (α_1 + ϵ ≈ 7.08 + ϵ)-approximation algorithm for k-median with outliers, greatly improving upon the large implicit constant approximation ratio of Chen [Chen, SODA 08]. For k-means with outliers, we give an (α_2 + ϵ ≈ 53 + ϵ)-approximation, which is the first O(1)-approximation for this problem. The iterative algorithm framework is very versatile; we show how it can be used to give α_1- and (α_1 + ϵ)-approximation algorithms for the matroid median and knapsack median problems respectively, improving upon the previous best approximation ratios of 8 [Swamy, ACM Trans. Algorithms] and 17.46 [Byrka et al., ESA 2015]. The natural LP relaxation for the k-median/k-means with outliers problem has an unbounded integrality gap. In spite of this negative result, our iterative rounding framework shows that we can round an LP solution to an almost-integral solution of small cost, in which we have at most two fractionally open facilities. Thus, the LP integrality gap arises due to the gap between almost-integral and fully-integral solutions. Then, using a pre-processing procedure, we show how to convert an almost-integral solution to a fully-integral solution losing only a constant factor in the approximation ratio. By further using a sparsification technique, the additive factor loss incurred by the conversion can be reduced to any ϵ > 0.


1 Introduction

Clustering is a fundamental task studied by several scientific communities (such as operations research, biology, and of course computer science and machine learning) due to the diversity of its applications. Of the many ways in which the task of clustering can be formalized, the k-median and k-means problems are arguably the most commonly studied methods by these different communities, perhaps owing to the simplicity of their problem descriptions. In the k-median (resp. k-means) problem, we are given a set C of clients, a set F of potential facility locations, and a metric space (F ∪ C, d), and the goal is to choose a subset S ⊆ F of cardinality at most k so as to minimize ∑_{j∈C} d(j, S) (resp. ∑_{j∈C} d²(j, S)), where d(j, S) := min_{i∈S} d(j, i). Both problems are NP-hard in the worst case, and this has led to a long line of research on obtaining efficient approximation algorithms. For the k-median problem, the current best known factors are an upper bound of 2.675 + ϵ [10], and a lower bound of 1 + 2/e ≈ 1.736 [27]. The k-means problem, while originally not as well studied in the theoretical community, is now increasingly gaining attention due to its importance in machine learning and data analysis. The best known approximations are 9 + ϵ [1] and 6.357 + ϵ [1] for general and Euclidean metrics respectively, while the lower bounds are 1 + 8/e and 1.0013 [32] for general and Euclidean metrics respectively. Both problems admit PTASs [2, 21, 19] on fixed-dimensional Euclidean metrics.

Despite their simplicity and elegance, a significant shortcoming these formulations face in real-world data sets is that they are not robust to noisy points: a few outliers can completely change the cost as well as the structure of solutions. To overcome this shortcoming, Charikar et al. [12] introduced the robust k-median (RkMed) problem (also called k-median with outliers), which we now define.

Definition 1.1 (The Robust k-Median and k-Means Problems).

The input to the Robust k-Median (RkMed) problem is a set C of clients, a set F of facility locations, a metric space (F ∪ C, d), and integers k and m. The objective is to choose a subset S ⊆ F of cardinality at most k, and a subset C* ⊆ C of cardinality at least m, such that the total cost ∑_{j∈C*} d(j, S) is minimized. In the Robust k-Means (RkMeans) problem, we have the same input and the goal is to minimize ∑_{j∈C*} d²(j, S).
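To make the objective concrete, here is a minimal Python sketch of the RkMed/RkMeans cost of a candidate facility set S; it uses the fact (used throughout the paper as well) that once S is fixed, the optimal C* is simply the m clients closest to S. All names here (robust_cost, the exponent argument p) are ours, not the paper's.

import math

def dist(a, b):
    # Euclidean distance in the plane; stands in for a general metric d
    return math.hypot(a[0] - b[0], a[1] - b[1])

def robust_cost(clients, S, m, p=1):
    # cost of serving the m cheapest clients from S (p=1: RkMed, p=2: RkMeans)
    costs = sorted(min(dist(j, i) for i in S) ** p for j in clients)
    return sum(costs[:m])

# one far-away outlier is ignored when m = len(clients) - 1
clients = [(0, 0), (1, 0), (0, 1), (100, 100)]
print(robust_cost(clients, S=[(0, 0)], m=3))  # 2.0, the outlier is excluded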

The problem is not just interesting from the clustering point of view. In fact, such a joint view of clustering and removing outliers has been observed to be more effective [15, 39] even for the sole task of outlier detection, a very important problem in the real world. Due to these use cases, there has been much recent work [15, 23, 40] in the applied community on these problems. However, their inherent complexity from the theoretical side is much less understood. For RkMed, Charikar et al. [12] give an algorithm that violates the number of outliers by a factor of 1 + ϵ, and has cost at most 4(1 + 1/ϵ) times the optimal cost. Chen [16] subsequently showed a pure O(1)-approximation without violating k or the number of outliers. However, Chen’s algorithm is fairly complicated, and the (unspecified) approximation factor is rather large. For the RkMeans problem, very recently, Gupta et al. [23] gave a bi-criteria O(1)-approximation violating the number of outliers, and Friggstad et al. [20] give bi-criteria algorithms that open (1 + ϵ)k facilities, and have an approximation factor of 1 + ϵ on Euclidean and doubling metrics, and O(1) on general metrics.

1.1 Our Results

In this paper, we present a simple and generic framework for solving clustering problems with outliers, which substantially improves over the existing approximation factors for RkMed and RkMeans.

Theorem 1.2.

For any ϵ > 0, we have an (α_1 + ϵ ≈ 7.08 + ϵ)-approximation algorithm for RkMed, in running time n^{poly(1/ϵ)}.

Theorem 1.3.

For any ϵ > 0, we have an (α_2 + ϵ ≈ 53 + ϵ)-approximation algorithm for RkMeans, in running time n^{poly(1/ϵ)}.

In fact, our framework can also be used to improve upon the best known approximation factors for two other basic problems in clustering, namely the matroid median (MatMed) and the knapsack median (KnapMed) problems. As in k-median, in both problems we are given C, F, and a metric d over F ∪ C, and the goal is to select a set S ⊆ F so as to minimize ∑_{j∈C} d(j, S). In MatMed, we require S to be an independent set of a given matroid. In KnapMed, we are given a vector w ∈ R_{≥0}^F and a bound B, and we require ∑_{i∈S} w_i ≤ B. The previous best algorithms had factors 8 [42] and 17.46 [9] respectively.

Theorem 1.4.

For any ϵ > 0, we have an efficient α_1-approximation algorithm for MatMed, and an (α_1 + ϵ)-approximation for KnapMed with running time n^{poly(1/ϵ)}.

1.2 Our techniques

For clarity in presentation, we largely focus on RkMed in this section. Like many provable algorithms for clustering problems, our starting point is the natural LP relaxation for this problem. However, unlike the vanilla k-median problem, this LP has an unbounded integrality gap (see Section 3.1) for RkMed. Nevertheless, we can show that all is not lost due to this gap. Indeed, one of our main technical contributions is in developing an iterative algorithm which can round the natural LP to compute an almost-integral solution, i.e., one with at most two fractionally open facilities. By increasing the two possible fractional values to 1, we can obtain a solution with at most k + 1 open facilities and cost bounded by α_1 times the optimal LP cost. So, the natural LP is good if we are satisfied with such a pseudo-solution (indeed, a pseudo-approximation with k + 1 open facilities is also implicitly given in [16], albeit with a much larger constant as a bound on the approximation ratio). So the unbounded integrality gap essentially arises as the gap between almost-integral solutions and fully-integral ones. In what follows, we first highlight how we overcome this gap, and then give an overview of the rounding algorithm.

1.2.1 Overcoming Gaps Between Almost-Integral and Integral Solutions

In the following, let y_i denote the extent to which facility i is open in a fractional solution. Note that once the y variables are fixed, the fractional client assignments can be done in a greedy manner until a total of m clients are connected fractionally. Now, assume that we have an almost-integral solution with two strictly fractional values y_{i′} and y_{i″}. To round it to a fully-integral one, we need to increase one of y_{i′}, y_{i″} to 1, and decrease the other to 0, in a direction that maintains the number of connected clients. Each of the two operations will incur a cost that may be unbounded compared to the LP cost, and they lead to the two types of gap instances described in Section 3. We handle these two types separately.
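The greedy fractional assignment mentioned above is easy to make precise: with y fixed, each client can buy connection "slices" to facilities (a slice to facility i has capacity y_i, and each client holds at most one unit in total), and taking the globally cheapest slices until m units of coverage are bought is optimal, since the feasible assignments form a polymatroid. A small Python sketch, with illustrative names only:

def greedy_assignment(dists, y, m):
    # dists[j][i]: distance from client j to facility i; y[i]: opening extent.
    # Returns the cost and per-client coverage of the cheapest fractional
    # assignment with total coverage m, x_ij <= y_i and sum_i x_ij <= 1.
    slices = sorted((d, j, y[i]) for j, dj in enumerate(dists)
                    for i, d in enumerate(dj))
    need, cost = float(m), 0.0
    cov = [0.0] * len(dists)
    for d, j, cap in slices:
        take = min(cap, 1.0 - cov[j], need)
        cov[j] += take
        cost += d * take
        need -= take
        if need <= 1e-12:
            break
    return cost, cov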

The first type of instances, corresponding to the “increasing” operation, arises because the cost of the clients connected to the facility increased to 1 can be very large compared to the LP cost. We handle this by adding so-called “star constraints” to the natural LP relaxation, which explicitly enforce a good upper bound on the total connection cost to each facility. The second type of instances, corresponding to the “decreasing” operation, arises because there could be many clients very near the facility being set to 0, all of which now incur a large connection cost. We handle this by a preprocessing step, where we derive an upper bound R_j on the connection distance of each client j, which ensures that in any small neighborhood, the number of clients which are allowed to have large connection cost is small. The main challenge is in obtaining good bounds such that there exists a near-optimal solution respecting these bounds.

The above techniques are sufficient to give an O(1)-approximation for RkMed and RkMeans. Using an additional sparsification technique, we can reduce the gap between an almost-integral solution and an integral one to an additive factor of ϵ·OPT. Thus, our final approximation ratio is essentially the factor for rounding an LP solution to an almost-integral one. The sparsification technique was used by Li and Svensson [33] and by Byrka et al. [9] for the k-median and knapsack median problems respectively. The details of how we apply the technique differ from those in [33, 9], but the essential ideas are the same. We guess balls of clients that incur a large cost in the optimum solution (“dense balls”), remove these clients, pre-open a set of facilities, and focus on the residual instance. We show that the gap between almost-integral and fully-integral solutions on the residual instance (where there are no dense balls) is at most an additive ϵ·OPT.

1.2.2 Iterative Rounding to Obtain Almost-Integral Solutions

We now highlight the main ideas behind our iterative rounding framework. At each step, we maintain a partition of C into C_full, the set of clients which need to be fully connected, and C_part, the set of partially connected clients. Initially, C_full = ∅ and C_part = C. For each client j, we also have a set F_j denoting the set of facilities j may connect to, and a “radius” R_j which is the maximum connection cost for j; these can be obtained from the initial LP solution. Since the client assignments are easy to do once the y variables are fixed, we just focus on rounding the y variables. In addition to C_full and C_part, we also maintain a subset C* of full clients such that a) their balls F_j are disjoint, and b) every other full client j is “close” (within O(1)·R_j) to the set C*. Finally, for each full client j, we maintain an inner ball B_j which only includes facilities in F_j at distance at most R_j/ζ from j, for some constant ζ > 1.

We then define the following core constraints: (i) y(F_j) = 1 for every j ∈ C*, where y(F′) := ∑_{i∈F′} y_i, (ii) y(F) ≤ k, and (iii) the total number of connected clients is at least m. As for non-core constraints, we define y(F_j) ≤ 1 for all partial clients j, and y(B_j) ≤ 1 for all full clients j. These constraints define our polytope P.

Then, in each iteration, we update y to be a vertex point of P that minimizes a linear function measuring the fractional connection cost, with the radius R_j serving as a proxy for the cost of each full client j. Now, if none of the non-core constraints is tight, then this vertex point is defined by a laminar family of equalities along with a total coverage constraint, which is almost-integral, and so we output this. Otherwise, some non-core constraint is tight, and we make the following updates and proceed to the next iteration. Indeed, if y(F_j) = 1 for some partial client j, we make it a full client and update C_full, C_part and C* accordingly. If y(B_j) = 1 for some full client j, we update its F_j to be B_j and its R_j to be R_j/ζ. In each iteration of this rounding, the cost of the LP solution is non-increasing (since the sets F_j and the radii R_j are non-increasing), and at the end, we relate the total connection cost of the final solution to the LP objective using the property that every full client j is within O(1)·R_j from some client in C*.
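The following toy Python sketch (using numpy/scipy) illustrates only the control flow of this loop: solve for a vertex of the current polytope, promote partial clients whose non-core constraint y(F_j) ≤ 1 has become tight, and stop once no non-core constraint is tight. It deliberately omits the C*/inner-ball bookkeeping and the radius shrinking, fixes each F_j to a static ball, and is not the paper's exact LP; all parameter choices are illustrative.

import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
nF, nC, k, m = 6, 12, 3, 6
fac, cli = rng.random((nF, 2)), rng.random((nC, 2))
d = np.linalg.norm(cli[:, None, :] - fac[None, :, :], axis=2)
F = [np.flatnonzero(d[j] <= np.sort(d[j])[3]) for j in range(nC)]  # static F_j

full, part = set(), set(range(nC))
while True:
    c = np.zeros(nF)                            # proxy for the connection cost
    for j in range(nC):
        c[F[j]] += d[j, F[j]]
    A, b = [np.ones(nF)], [k]                   # y(F) <= k             (core)
    row = np.zeros(nF)
    for j in part:
        row[F[j]] -= 1
    A.append(row); b.append(-(m - len(full)))   # coverage              (core)
    for j in part:                              # y(F_j) <= 1       (non-core)
        r = np.zeros(nF); r[F[j]] = 1; A.append(r); b.append(1)
    for j in full:                              # y(F_j) >= 1           (core)
        r = np.zeros(nF); r[F[j]] = -1; A.append(r); b.append(-1)
    res = linprog(c, A_ub=np.array(A), b_ub=np.array(b),
                  bounds=[(0, 1)] * nF, method="highs-ds")  # simplex: vertex
    y = res.x
    tight = [j for j in part if y[F[j]].sum() >= 1 - 1e-9]
    if not tight:
        break                                   # no tight non-core constraint
    for j in tight:
        part.remove(j); full.add(j)
print("fractional coordinates:", int(np.sum((y > 1e-6) & (y < 1 - 1e-6))))

In the paper, the disjointness of the balls of C* is what guarantees that a vertex with no tight non-core constraint is defined by a laminar system plus one coverage constraint, and hence has at most two fractional coordinates; the toy above merely shows where that case distinction sits in the loop.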

The iterative rounding framework is versatile, as we can simply customize the core constraints. For example, to handle the matroid median and knapsack median problems, we can remove the coverage constraint and add appropriate side constraints. In matroid median, we require y to be in the given matroid polytope, while in knapsack median, we add the knapsack constraint. For matroid median, the polytope defined by the core constraints is already integral, and this leads to our α_1-approximation. For knapsack median, the vertex solution will be almost-integral, and again by using the sparsification ideas we can get our (α_1 + ϵ)-approximation. The whole algorithm can be easily adapted to RkMeans to give an (α_2 + ϵ)-approximation.

1.3 Related Work

The k-median and the related uncapacitated facility location (UFL) problem are two of the most classic problems studied in approximation algorithms. There is a long line of research on the two problems [35, 41, 25, 17, 28, 13, 26, 27, 37, 8, 11, 5], and almost all major techniques for approximation algorithms have been applied to them (see the book of Williamson and Shmoys [43]). The input of UFL is similar to that of k-median, except that we do not have an upper bound on the number of open facilities; instead, each potential facility i is given an opening cost f_i. The goal is to minimize the sum of connection costs and facility opening costs. For this problem, the current best approximation ratio is 1.488 due to Li [34], and there is a hardness of 1.463 [22]. For the outlier version of UFL, there is a 3-approximation due to [12]. This suggests that the outlier version of UFL is easier than that of k-median, mainly because the constraints imposed by facility costs are soft ones, while the requirement of opening at most k facilities is a hard one.

The k-means problem in Euclidean space is the clustering problem used the most in practice, and the most popular algorithm for it is Lloyd’s algorithm [36] (also called “the k-means algorithm”). However, in general, this algorithm has no worst-case guarantee and can also have super-polynomial running time. There has been a large body of work on bridging this gap, for example, by considering variants of Lloyd’s algorithm [4], by bounding its smoothed complexity [3], or by only focusing on instances that are “stable/clusterable” [31, 6, 38, 7, 18].

The MatMed problem was first studied by Krishnaswamy et al. [29] as a generalization of the red-blue median problem [24]; they gave an O(1)-approximation for the problem. Subsequently, there has been work [14, 42] on improving the approximation factor, and currently the best upper bound is an 8-approximation by Swamy [42]. As for KnapMed, the first constant-factor approximation was given by Kumar [30]. The approximation factor was subsequently improved by [14, 42], and the current best algorithm is a 17.46-approximation [9].

1.4 Paper Outline

We define some useful notation in Section 2. Then in Section 3, we explain our preprocessing procedure for overcoming the LP integrality gaps for RkMed/RkMeans. Then in Section 4, we give our main iterative rounding framework, which obtains good almost-integral solutions. Section 5 will show how to convert an almost-integral solution to an integral one, losing only an additive factor of ϵ·OPT. Finally, in Sections 7 and 6, we present our α_1- and (α_1 + ϵ)-approximation algorithms for MatMed and KnapMed respectively.

Getting a pseudo-approximation: if one is only interested in getting a pseudo-approximation for RkMed/RkMeans, i.e., an O(1)-approximation with k + 1 open facilities, then our algorithm can be greatly simplified. In particular, the preprocessing step and the conversion step in Sections 3 and 5 are not needed, and the proofs in Section 4 can be greatly simplified. Such readers can directly jump to Section 4 after reading the description of the natural LP relaxation in Section 3. In Section 4 we use the term pseudo-approximation setting to denote the setting in which we only need a pseudo-approximation.

2 Preliminaries

To obtain a unified framework for both RkMed and RkMeans, we often consider an instance which could be an instance of either problem, and let a parameter p denote the particular problem: p = 1 for RkMed instances and p = 2 for RkMeans instances, so that connection costs are distances raised to the power p. Because of the preprocessing steps, our algorithms deal with extended RkMed/RkMeans instances, denoted as (F, C, d, k, m, S_0). Here F, C, d, k and m are defined as before, and S_0 ⊆ F is a set of pre-opened facilities that feasible solutions must contain. We assume that d(i, i′) > 0 for all distinct i, i′ ∈ F, since otherwise we can simply remove one of the two collocated facilities from F; however, we allow many clients to be collocated, and they can be collocated with one facility.

We use a pair (S, C*) to denote a solution to an (extended) RkMed or RkMeans instance, where S is the set of open facilities and C* is the set of connected clients in the solution, and each j ∈ C* is connected to its nearest facility in S. Note that given S, the optimum C* can be found easily in a greedy manner (take the m clients closest to S), and thus sometimes we only use S to denote a solution to an (extended) RkMed/RkMeans instance. Given a set S of facilities, it will be useful in the analysis to track the nearest facility for every point and the corresponding distance. To this end, we define the nearest-facility-vector-pair for S to be (σ, ℓ), where σ_q := arg min_{i∈S} d(q, i) and ℓ_q := d(q, σ_q) for every q ∈ F ∪ C. Though we only connect clients to S, the distance from a facility to S will be useful in our analysis.

We use y (and its variants) to denote a vector in [0, 1]^F. For any F′ ⊆ F, we define y(F′) := ∑_{i∈F′} y_i (same for the variants of y). Finally, given a subset P ⊆ F ∪ C, a point q ∈ F ∪ C and a radius r ≥ 0, we let Ball_P(q, r) := {x ∈ P : d(q, x) ≤ r} denote the set of points in P with distance at most r from q.
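The notation above is simple enough to pin down in a few lines of Python; these helpers (with our own names, not the paper's) are reused in the sketches later in this section.

def nearest_facility_pair(points, S, d):
    # the nearest-facility-vector-pair (sigma, ell) for a facility set S
    sigma = {q: min(S, key=lambda i: d(q, i)) for q in points}
    ell = {q: d(q, sigma[q]) for q in points}
    return sigma, ell

def best_clients(C, S, m, d, p=1):
    # optimum C* for a fixed S: the m clients with the cheapest d(j, S)^p
    return set(sorted(C, key=lambda j: min(d(j, i) for i in S) ** p)[:m])

def ball(P, q, r, d):
    # Ball_P(q, r): the set of points of P within distance r of q
    return {x for x in P if d(q, x) <= r}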

3 Preprocessing the RkMed/RkMeans Instance

In this section, we motivate and describe our pre-processing step for the RkMed/RkMeans problem, which is used to reduce the integrality gap of the natural LP relaxation. First we recap the LP relaxation and explain two different gap examples which, in turn, illustrate two different reasons why this integrality gap arises. Subsequently, we describe our ideas to overcome these two sources of badness.

3.1 Natural LP and Gap Examples

The natural LP relaxation for the RkMed/RkMeans problem is as follows.

minimize ∑_{i∈F, j∈C} x_{ij}·d^p(i, j)    (LP_nat)
subject to:  x_{ij} ≤ y_i for all i ∈ F, j ∈ C;   ∑_{i∈F} x_{ij} ≤ 1 for all j ∈ C;
             ∑_{i∈F} y_i ≤ k;   ∑_{i∈F, j∈C} x_{ij} ≥ m;   x, y ≥ 0.

In the corresponding integer program, y_i ∈ {0, 1} indicates whether a facility i is open or not, and x_{ij} ∈ {0, 1} indicates whether a client j is connected to a facility i or not. The objective function to minimize is the total connection cost, and all the above constraints are valid. In the LP relaxation, we only require all the variables to be non-negative.
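The relaxation can be transcribed directly into an LP solver; the sketch below uses scipy.optimize.linprog, with variables ordered as all x_{ij} followed by all y_i. This is only the relaxation (LP_nat), not any rounding step, and the function name and layout are ours.

import numpy as np
from scipy.optimize import linprog

def solve_lp_nat(d, k, m, p=1):
    # d[j, i]: distance between client j and facility i
    nC, nF = d.shape
    nx = nC * nF
    c = np.concatenate([(d ** p).ravel(), np.zeros(nF)])
    A, b = [], []
    for j in range(nC):
        for i in range(nF):                      # x_ij <= y_i
            row = np.zeros(nx + nF)
            row[j * nF + i] = 1; row[nx + i] = -1
            A.append(row); b.append(0)
    for j in range(nC):                          # sum_i x_ij <= 1
        row = np.zeros(nx + nF); row[j * nF:(j + 1) * nF] = 1
        A.append(row); b.append(1)
    row = np.zeros(nx + nF); row[nx:] = 1
    A.append(row); b.append(k)                   # sum_i y_i <= k
    row = np.zeros(nx + nF); row[:nx] = -1
    A.append(row); b.append(-m)                  # sum_ij x_ij >= m
    res = linprog(c, A_ub=np.array(A), b_ub=np.array(b),
                  bounds=[(0, None)] * (nx + nF), method="highs")
    return res.fun, res.x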

Notice that once the y values are fixed, the optimal choice of the x variables can be determined by a simple greedy allocation, and so we simply state the values of the y variables in the following gap examples. For simplicity, we focus on the RkMed problem when discussing the gap instances.

Figure 1: Two gap instances.

For instance (a), the optimum solution has cost m, while an LP solution that opens i_1 to extent 1 − ε and i_2 to extent ε (for a suitable small ε) has cost at most 2. For instance (b), the optimum solution has cost t + 1, while an LP solution that opens i_1 fully and opens i_2, i_3 to extents 1 − 1/T and 1/T has cost at most 2.

Gap Example 1. Instance (a) in Figure 1 contains two separate clusters in the metric: in the first cluster, there is a facility i_1 collocated with m − 1 clients, and in the second cluster (very far from the first cluster), there is a facility i_2 at unit distance from each of t = 2m clients. Suppose k = 1, so only one facility may be opened. Clearly, the integral optimal solution is forced to open the center i_2 (opening i_1 leaves only m − 1 clients nearby, and the two clusters are very far apart), thereby incurring a cost of m. However, note that an LP solution can open a 1 − ε fraction of facility i_1 and an ε fraction of facility i_2, for ε = 1/(t − m + 1). This gives a solution with cost tε ≤ 2, and connecting (m − 1)(1 − ε) + tε = m clients. Hence the integrality gap is Ω(m), which is unbounded as m increases.

Gap Example 2. Instance (b) contains 3 facilities i_1, i_2 and i_3: i_1 and i_2 are at distance 1 from each other and are collocated with t clients each, while i_3 is far away from i_1 and i_2 (at distance at least t, say) and has T ≫ t clients at unit distance from it. We need to open k = 2 facilities and connect m = 2t + 1 clients. An integral solution opens i_1 and i_3, connects the t clients at i_1, and connects t + 1 of the clients around i_3, incurring a total cost of t + 1; it is easy to check that no integral solution does better. On the other hand, in an LP solution, we can set y_{i_1} = 1, y_{i_2} = 1 − 1/T and y_{i_3} = 1/T. We fully connect the clients at i_1 and i_2, and connect each client around i_3 to an extent of 1/T. The total number of connected clients is t + t + T·(1/T) = m. Each client at i_2 will be connected to i_1 with fraction 1/T, incurring a total cost of t/T; adding the cost T·(1/T)·1 = 1 of the clients around i_3, the LP cost is at most 2. Thus, the integrality gap is Ω(t), which is again unbounded as t increases.
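The arithmetic in instance (b) can be checked numerically against (LP_nat) using the solve_lp_nat sketch above; a one-dimensional metric suffices, and the sizes below are illustrative.

import numpy as np

t, T, D = 20, 400, 20       # cluster sizes and the "far" distance D >= t
fx = np.array([0.0, 1.0, 1.0 + D])                      # i_1, i_2, i_3
cl = np.concatenate([np.zeros(t), np.ones(t), np.full(T, 2.0 + D)])
d = np.abs(cl[:, None] - fx[None, :])
lp_cost, _ = solve_lp_nat(d, k=2, m=2 * t + 1)
print(lp_cost)   # about 1 + t/T = 1.05, vs. integral optimum t + 1 = 21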

3.2 Preprocessing

In both the examples in Section 3.1, the unbounded integrality gap essentially arises from needing to round an almost-integral solution into a truly integral one. This is no coincidence, as we show in Section 4 that we can indeed obtain an almost-integral solution with cost at most O(1) times the optimal LP cost. Let us now examine the difficulty in rounding these last two fractional variables. As in both instances in Figure 1, suppose i and i′ are the two facilities with fractional y values, and let their values be 1 − ε and ε respectively, where ε is sub-constant. A natural idea is to increase one of these variables to 1 and decrease the other to 0. Suppose that, in order to maintain the number of connected clients, we are forced to increase y_{i′} to 1 and decrease y_i to 0 (as in the two gap instances). Then two costs, corresponding to the two gap instances detailed above, will be incurred by this operation. The first cost is incurred by increasing y_{i′} to 1. Some clients were partially connected to i′ in the almost-integral solution. In order to satisfy the coverage requirement, most of these clients need to be fully connected to i′ in the integral solution. The instance in Figure 1(a) gives an example where this incurred cost is large compared to the optimal LP value. The second cost is incurred by decreasing y_i to 0. Some clients were fully connected in the almost-integral solution, each being connected to i with fraction 1 − ε and to some other facility, say i″, to extent ε. Then, to satisfy the coverage requirement of these clients, we need to connect them fully to facilities other than i in the integral solution, and this cost could be large (see Figure 1(b)).

Preprocessing Idea 1: Bounding Star Cost. The first kind of bad instances described above is fairly straightforward to handle. Indeed, when increasing y_{i′} from ε to 1, the incurred cost is at most the cost of the “star” associated with i′, i.e., the total connection cost of clients which are fractionally connected to i′. We know the cost of such a star in the optimum solution is at most OPT, and thus we can add the following constraints to overcome this type of bad instances: ∑_{j∈C} x_{ij}·d^p(i, j) ≤ U for every i ∈ F, where U is our guess of OPT.

In fact, we can bring down this additive error to ρ·OPT (where ρ → 0 as ϵ → 0) by guessing the set of centers corresponding to the expensive stars (whose connection cost exceeds ρ·OPT) in the optimum solution and pre-opening them. More importantly, we can strengthen (LP_nat) by enforcing the constraints bounding the connection cost to the remaining facilities i: ∑_{j∈C} x_{ij}·d^p(i, j) ≤ ρ·U.

Preprocessing Idea 2: Bounding Backup Cost. Let us now explain how we handle the large cost incurred when we decrease y_i to 0, as in Figure 1(b). Note that in the optimal solution, for any r > 0, the number of clients whose connection cost is at least r is at most OPT/r. In particular, in the example in Figure 1(b), if OPT were indeed O(1), then the number of clients at i_2 with connection cost at least 1 could be at most O(1). To this end, suppose we are able to guess this small set of clients located at i_2, and disallow the remaining collocated clients from connecting to facilities which are at distance 1 or greater. Then we could set x_{ij} = 0 for all disallowed connections, which in turn will make the LP infeasible when our guess for OPT is O(1) for the bad instance from Figure 1(b). While it is difficult to get such good estimates on the disallowed connections of clients on a global scale, we show that we can indeed make such restrictions locally, which is our second pre-processing idea. We show that we can efficiently obtain a vector R of upper bounds on connection costs of clients which satisfies the following two properties: (i) for any radius r, the number of clients in any ball of comparable radius whose R_j values exceed r is at most roughly ρ·U/r^p, and (ii) there exists a near-optimal solution that respects these upper bounds on client connections. This will then help us bound the re-assignment cost incurred by decreasing y_i to 0 (i.e., shutting down facility i).
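A common way to sketch such restrictions on top of the relaxation is to price out the disallowed pairs rather than delete variables: if the LP is then forced to buy a priced-out connection, its objective jumps far above the guess U, which flags the guess as inconsistent. A minimal version on top of the solve_lp_nat layout (the helper name is ours):

import numpy as np

def apply_radius_bounds(d, R, big=1e9):
    # connections with d(j, i) > R_j become useless to any near-optimal solution
    d2 = d.astype(float).copy()
    d2[d2 > np.asarray(R)[:, None]] = big
    return d2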

Preprocessing Idea 3: Sparsification. To reduce the additive factor we lose for handling the second type of bad instances to any small constant ϵ, we use the so-called sparsification technique. Now, suppose we had the situation that, in the optimum solution, the total connection cost of the clients in any small ball is miraculously at most ρ·OPT for some small ρ. Informally, we call such instances sparse instances. Then, we can bound the total increase in cost from re-assigning these clients when shutting down one facility by O(ρ)·OPT. Indeed, our third pre-processing idea is precisely to correctly guess such “dense” regions (the clients and the facility they connect to in an optimum solution) so that the remaining clients are locally sparse. We note that this is similar to ideas employed in previous algorithms for k-median and KnapMed.

Motivated by these ideas, we now define the notion of sparse extended instances for RkMed/RkMeans.

Definition 3.1 (Sparse Instances).

Suppose we are given an extended RkMed/RkMeans instance I = (F, C, d, k, m, S_0), and parameters ρ ∈ (0, 1) and δ ∈ (0, 1/2). Let (S, C*) be a solution to I with cost at most U, and let (σ, ℓ) be the nearest-facility-vector-pair for S. Then we say I is (ρ, δ)-sparse w.r.t. the solution (S, C*) if

  1. for every i ∈ S ∖ S_0, we have ∑_{j∈C*: σ_j = i} d^p(j, i) ≤ ρ·U,

  2. for every q ∈ F ∪ C, we have ∑_{j ∈ Ball_{C*}(q, δ·ℓ_q)} d^p(j, σ_j) ≤ ρ·U.

Property i requires that the cost incurred by each facility i ∈ S ∖ S_0 in the solution is at most ρ·U. Property ii defines the sparsity requirement: roughly speaking, for every point q, in the solution (S, C*), the total connection cost of the clients of C* near q should be at most ρ·U.
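Under the reconstruction of Definition 3.1 stated above, sparsity is directly checkable given the solution; the following Python checker (names ours, and only as faithful as that reconstruction) makes the two properties concrete.

def is_sparse(points, C_star, S, S0, d, U, rho, delta, p=1):
    sig = {q: min(S, key=lambda i: d(q, i)) for q in points}
    ell = {q: d(q, sig[q]) for q in points}
    star = {}
    for j in C_star:                      # star costs of the open facilities
        star[sig[j]] = star.get(sig[j], 0.0) + ell[j] ** p
    if any(cost > rho * U for i, cost in star.items() if i not in S0):
        return False                      # Property i fails
    for q in points:                      # Property ii: no dense ball
        mass = sum(ell[j] ** p for j in C_star if d(q, j) <= delta * ell[q])
        if mass > rho * U:
            return False
    return True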

We next show in the following theorem that we can effectively reduce a general RkMed/RkMeans instance to a sparse extended instance with only a small loss in the approximation ratio.

Theorem 3.2.

Suppose we are given an RkMed/RkMeans instance I = (F, C, d, k, m), parameters ρ ∈ (0, 1) and δ ∈ (0, 1/2), and an upper bound U on the cost of the optimal solution (S°, C°) to I (which is not given to us). Then there is an n^{O(1/ρ)}-time algorithm that outputs n^{O(1/ρ)} many extended RkMed/RkMeans instances such that one of the instances in the set, say I′, has the form I′ = (F, C′, d, k, m′, S_0) and satisfies:

  1. I′ is (ρ, δ)-sparse w.r.t. the solution induced by (S°, C°) on I′,

  2. the cost of this induced solution on I′, plus the cost of reconnecting the removed clients C ∖ C′ to the pre-opened facilities in S_0, is at most (1 + O(δ))·U.

Before we give the proof, we remark that Property ii means that the cost of the solution for the residual sparse instance plus the approximate cost of reconnecting the removed clients to the guessed facilities is upper bounded by (1 + O(δ))·U.

Proof.

Let us first assume we know the optimum solution (S°, C°) to the instance I, and we will dispense with this assumption later. Let (σ, ℓ) be the nearest-facility-vector-pair for S°. We initialize C′ = C and S_0 = ∅, and construct our instance iteratively until it becomes (ρ, δ)-sparse w.r.t. the induced solution. Our instance is always defined as I′ = (F, C′, d, k, m′, S_0), where m′ = m − |C ∖ C′| accounts for the removed clients.

Indeed, to satisfy Property i, we add to S_0 the set of facilities i ∈ S° whose star is expensive, i.e., with ∑_{j∈C°: σ_j = i} d^p(j, i) > ρ·U. There are at most 1/ρ many such facilities. After this, we have the star-cost bound for every i ∈ S° ∖ S_0. This will always be satisfied since we shall only add facilities to S_0.

To guarantee Property ii, we run the following iterative procedure: while there exists a point q ∈ F ∪ C such that ∑_{j ∈ Ball_{C′}(q, δ·ℓ_q)} d^p(j, σ_j) > ρ·U, we update C′ ← C′ ∖ Ball_{C′}(q, δ·ℓ_q) and S_0 ← S_0 ∪ {σ_q}. By the triangle inequality, each client j removed from C′ has d(j, σ_q) ≤ (1 + δ)·ℓ_q, and also ℓ_q ≤ ℓ_j/(1 − δ), so j can be reconnected to the pre-opened facility σ_q at a cost close to its cost in the optimum solution. Moreover, each iteration removes a set of clients whose total connection cost in the optimum solution is more than ρ·U, and the total cost over all clients is at most U. Thus, the procedure will terminate in less than 1/ρ iterations. After this procedure, the dense-ball condition fails for every q. That is, Property ii of sparsity holds for the instance I′ w.r.t. the induced solution. Thus Property i of the theorem holds.

Now we prove Property ii. Summing over all removal iterations, the total cost of reconnecting the removed clients to the pre-opened facilities in S_0 is at most a 1 + O(δ) factor times their total connection cost in (S°, C°); the key inequality uses the fact that, as argued above, every removed client j satisfies d(j, S_0) ≤ (1 + δ)/(1 − δ)·ℓ_j. Adding the cost of the remaining clients, which is unchanged, Property ii holds.

Since we do not know the optimum solution for I, we cannot directly apply the above procedure. However, note that C′ is obtained from C by removing at most 1/ρ balls of clients, S_0 contains at most 2/ρ facilities, and each removed ball and each pre-opened facility has only polynomially many possibilities. Thus, there are at most n^{O(1/ρ)} possible instances I′, and we simply output all of them. ∎
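Assuming the optimum is known, the two steps of the proof translate into a short procedure (using the helpers nearest_facility_pair and ball from Section 2); the actual algorithm replaces the known optimum by enumeration over all n^{O(1/ρ)} guesses. Here C plays the role of the connected clients of the optimum, and all names are ours.

def sparsify(C, points, S, d, U, rho, delta, p=1):
    sigma, ell = nearest_facility_pair(points, S, d)
    S0 = {i for i in S if sum(ell[j] ** p for j in C
                              if sigma[j] == i) > rho * U}  # expensive stars
    C_res = set(C)
    while True:                                             # dense balls
        q = next((q for q in points if sum(ell[j] ** p for j in
                  ball(C_res, q, delta * ell[q], d)) > rho * U), None)
        if q is None:
            return C_res, S0
        C_res -= ball(C_res, q, delta * ell[q], d)
        S0.add(sigma[q])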

We next show how, for (ρ, δ)-sparse instances, we can effectively compute good upper bounds on the maximum connection cost of each client. The notion of sparse instances is only needed to obtain this vector of R_j's and will not be needed after the proof of this theorem.

Theorem 3.3.

Let I = (F, C, d, k, m, S_0) be a (ρ, δ)-sparse instance w.r.t. some solution (S, C*) of I, for some ρ ∈ (0, 1) and δ ∈ (0, 1/2). Let U be an upper bound on the cost of (S, C*) on I. Then given U, we can efficiently find a vector R ∈ R_{≥0}^C, such that:

  1. for every q ∈ F ∪ C and r > 0, we have

    |{j ∈ Ball_C(q, δ·r) : R_j ≥ r}| ≤ ρ·U / r^p.    (1)
  2. there exists a solution to I of cost at most (1 + O(δ))·U where if j is connected to i then d(j, i) ≤ R_j; moreover, the total cost of clients connected to any facility i ∈ S ∖ S_0 in this solution is at most O(ρ·U).

Property i says that in a ball of radius δ·r, the number of clients with R_j ≥ r is at most ρ·U/r^p. The first part of Property ii says that there is a near-optimal solution which respects these upper bounds on the connection distances of all clients, and the second part ensures that all the star costs for facilities in S ∖ S_0 are still bounded in this near-optimal solution (akin to Property i of sparse instances).

Proof of Theorem 3.3.

We will now show that the following algorithm efficiently finds a vector R, such that the following properties hold.

  1. for every q ∈ F ∪ C and r > 0, we have

    |{j ∈ Ball_C(q, δ·r) : R_j ≥ r}| ≤ ρ·U / r^p.    (2)
  2. there exists a solution to I of cost at most (1 + O(δ))·U where if j is connected to i then d(j, i) ≤ (1 + O(δ))·R_j; moreover, the total cost of clients connected to any facility i ∈ S ∖ S_0 in this solution is at most O(ρ·U).

The proof of the theorem then follows by scaling the vector R up by a factor of 1 + O(δ), which weakens (1) by at most a constant factor.

1: R_j ← 0 for every j ∈ C
2: for every r ∈ {d(j, i) : j ∈ C, i ∈ F}, in decreasing order do
3:     for every j ∈ C such that R_j = 0 do
4:         if letting R_j = r will not violate (2) for this r and any q then
5:             R_j ← r
Algorithm 1 Construct R
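Under the same reconstruction, Algorithm 1 is short enough to state as runnable Python: each still-unassigned client receives the largest candidate radius that keeps every ball sparse in the sense of (2). The function name and the sentinel R_j = 0 for "unassigned" are ours.

def construct_R(C, F, points, d, U, rho, delta, p=1):
    R = {j: 0.0 for j in C}
    radii = sorted({d(j, i) for j in C for i in F if d(j, i) > 0},
                   reverse=True)
    for r in radii:
        bound = rho * U / r ** p
        for j in (j2 for j2 in C if R[j2] == 0.0):
            # would setting R_j = r overfill some ball of radius delta*r?
            ok = all(sum(1 for j2 in C if d(q, j2) <= delta * r
                         and (R[j2] >= r or j2 == j)) <= bound
                     for q in points if d(q, j) <= delta * r)
            if ok:
                R[j] = r
    return R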

Consider Algorithm 1, which constructs the vector R. Property i (in the form (2)) holds at the beginning of the procedure, since all clients have R_j = 0. We show that the property is always maintained as the algorithm proceeds. To this end, focus on the operation in which we let R_j = r in Step 5.

  • This will not violate (2) for r and any q, as guaranteed by the condition in Step 4.

  • It will not violate (2) for any r′ > r and any q, since on the left side of (2) we only count clients with R_{j′} ≥ r′ > r, and setting R_j to r does not affect the counting.

  • Focus on some r′ < r and any q. Firstly, note that |{j′ ∈ Ball_C(q, δ·r′) : R_{j′} ≥ r′}| ≤ |{j′ ∈ Ball_C(q, δ·r) : R_{j′} ≥ r′}| = |{j′ ∈ Ball_C(q, δ·r) : R_{j′} ≥ r}|. The first inequality holds because we consider a bigger ball on the RHS, and the following equality holds since at this moment any j′ has either R_{j′} = 0 or R_{j′} ≥ r, which implies that R_{j′} ≥ r′ if and only if R_{j′} ≥ r. Now, since (2) holds for r and q, we have |{j′ ∈ Ball_C(q, δ·r) : R_{j′} ≥ r}| ≤ ρ·U/r^p ≤ ρ·U/(r′)^p. Thus, (2) also holds for this r′ and q.

This finishes the proof of Property i. Now consider Property ii. Let (σ, ℓ) be the nearest-facility-vector-pair for S. Notice that the cost of (S, C*) is at most U, as specified in the theorem statement. We build our desired solution incrementally using a one-to-one mapping φ : C* → C, such that the final set of clients which will satisfy Property ii will be φ(C*). Initially, let φ(j) = j for every j ∈ C*. To compute the final mapping φ, we apply the following procedure. At a high level, if a client j has connection cost ℓ_j exceeding its bound R_{φ(j)}, then there is a nearby ball with many clients whose R value is at least ℓ_j. We show that, since the instance is sparse, at least one of them is not yet mapped in φ and has a larger radius bound, and so we set φ(j) to it.

Formally, for every client j ∈ C*, in non-decreasing order of ℓ_j, if ℓ_j > R_{φ(j)}, we update φ(j) to be a client j′ ∈ C such that

  1. (A): d(j, j′) ≤ O(δ)·ℓ_j, and

  2. (B): R_{j′} ≥ ℓ_j, and j′ is not currently in the image of φ.

Assuming we can find such a j′ in every iteration, at the end of the procedure we have that φ remains one-to-one. As for connections, for every j ∈ C*, let us connect φ(j) to σ_j. Then, we have d(φ(j), σ_j) ≤ d(φ(j), j) + d(j, σ_j) ≤ (1 + O(δ))·ℓ_j. Now it is easy to see that Property ii is satisfied. Indeed, the cost of the solution is clearly at most (1 + O(δ))·U, and each connection satisfies d(φ(j), σ_j) ≤ (1 + O(δ))·ℓ_j ≤ (1 + O(δ))·R_{φ(j)}. Moreover, for every facility i ∈ S ∖ S_0, the total cost of the clients connected to i is at most a 1 + O(δ) factor times the cost of the star of i in (S, C*), which is at most O(ρ·U).

Thus, our goal becomes to prove that in every iteration we can always find a j′ satisfying properties (A) and (B) above. To this end, suppose we have succeeded in all previous iterations and now we are considering a client j with ℓ_j > R_{φ(j)} = R_j; we thus need to update φ(j). Notice that at this time we have φ(j) = j, and every previously considered client has an ℓ-value at most ℓ_j, by the order in which we consider clients and the success of previous iterations. Now, focus on the iteration r = ℓ_j in Algorithm 1 (note that ℓ_j = d(j, σ_j) is indeed one of the candidate radii). The fact that R_j < r means that j was not assigned a radius before or during iteration r. Thus, there must be some point q with d(q, j) ≤ δ·r, such that the set Z := {j″ ∈ Ball_C(q, δ·r) : R_{j″} ≥ r} has cardinality strictly more than ρ·U/r^p − 1. We claim that this set has a free client which j can map to. Indeed, if there exists j′ ∈ Z that is not in the image of φ, then we can set φ(j) = j′. This satisfies property (A), since d(j, j′) ≤ d(j, q) + d(q, j′) ≤ 2δ·r = O(δ)·ℓ_j, and property (B) trivially, from the way Z is defined.

We now show there exists such a free j′, by appealing to the sparsity of the instance. Indeed, suppose for the sake of contradiction that Z is fully contained in the image of φ. Now, because every client mapped so far has been moved a distance at most O(δ) times its ℓ-value, and these ℓ-values are at most ℓ_j = r at this time, we get by the triangle inequality that every client of C* mapped into Z has distance at most O(δ)·r from q. Thus, a ball of radius O(δ)·r around q contains more than ρ·U/r^p clients of C*, whose total connection cost in (S, C*) can be argued to exceed ρ·U, contradicting Property ii of sparsity. ∎