 # Estimating the Margin of Victory of an Election using Sampling

The margin of victory of an election is a useful measure to capture the robustness of an election outcome. It also plays a crucial role in determining the sample size of various algorithms in post-election audits, polling, etc. In this work, we present efficient sampling-based algorithms for estimating the margin of victory of elections. More formally, we introduce the (c, ε, δ)–Margin of Victory problem, where, given an election E on n voters, the goal is to estimate the margin of victory MoV(E) of E within an additive error of c·MoV(E) + εn. We study the (c, ε, δ)–Margin of Victory problem for many commonly used voting rules including scoring rules, approval, Bucklin, maximin, and Copeland^α. We observe that efficient sampling-based algorithms may exist even for voting rules for which computing the margin of victory is NP-hard, as is the case for the maximin and Copeland^α voting rules.


## 1 Introduction

In many real-life applications, there is often a need for a set of agents to agree upon a common decision although they may have different preferences over the available candidates. A natural approach used in these situations is voting. Some prominent examples of the use of voting rules in the context of multiagent systems include collaborative filtering [Pennock et al., 2000] and personalized product selection [Lu and Boutilier, 2011].

In a typical voting scenario, we have a set of votes, each of which is a complete ranking over a set of candidates. We also have a function, called a voting rule, that takes as input a set of votes and outputs a candidate as the winner. A set of votes over a set of candidates, along with a voting rule, is called an election, and the winner is called the outcome of the election.

Given an election, one may like to know how robust the election outcome is with respect to changes in the votes [Shiryaev et al., 2013, Caragiannis et al., 2014, Regenwetter et al., 2006]. One way to capture the robustness of an election outcome is to compute the minimum number of votes that must be changed to change the outcome. This idea is captured precisely by the notion of margin of victory: the margin of victory of an election is the smallest number of votes that need to be changed to change the election outcome. In this sense, an election outcome is considered robust if the margin of victory is large.
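To make this definition concrete, the following brute-force sketch (our own illustration, not from the paper; function names are ours) computes the winning set and the margin of victory of a toy plurality election by trying every way of replacing votes. The enumeration is only feasible for very small elections.

```python
from itertools import combinations, permutations, product

def plurality_winners(votes, candidates):
    """Winning set under plurality: candidates with the most first positions."""
    scores = {c: 0 for c in candidates}
    for v in votes:
        scores[v[0]] += 1
    top = max(scores.values())
    return {c for c in candidates if scores[c] == top}

def margin_of_victory(votes, candidates):
    """Smallest number of votes whose replacement changes the winning set
    (brute force over all replacements; toy elections only)."""
    original = plurality_winners(votes, candidates)
    rankings = list(permutations(candidates))
    for k in range(1, len(votes) + 1):
        for idxs in combinations(range(len(votes)), k):
            for repl in product(rankings, repeat=k):
                modified = list(votes)
                for i, r in zip(idxs, repl):
                    modified[i] = r
                if plurality_winners(modified, candidates) != original:
                    return k
    return len(votes)
```

Note that a change producing a tie already changes the winning set, which is why three unanimous votes for a candidate have margin of victory two rather than one.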

### 1.1 Motivation

In addition to being interesting purely for theoretical reasons, the margin of victory of an election plays a crucial role in many practical applications. One such example is post-election audits: methods that observe a certain number of votes (often selected randomly) after an election to detect an incorrect outcome. There can be a variety of reasons for an incorrect election outcome, for example, software or hardware bugs in voting machines [Norden and Law, 2007], machine output errors, the use of various clip-on devices that can tamper with the memory of the voting machine [Wolchok et al., 2010], and human errors in counting votes. Post-election audits have now become common practice for detecting problems in electronic voting machines in many countries, for example, the US. As a matter of fact, at least thirty states in the US had reported such problems by 2007 [Norden and Law, 2007]. Most often, the auditing process involves manually observing some sampled votes. Researchers have subsequently proposed various risk-limiting auditing methodologies that not only minimize the cost of manual checking, but also limit the risk of making a human error, by sampling as few votes as possible [Stark, 2008a, Stark, 2008b, Stark, 2009, Sarwate et al., 2011]. The sample size in a risk-limiting audit critically depends on the margin of victory of the election.

Another very important application where the margin of victory plays an important role is polling. In polling, the pollster samples a certain number of votes from the population and predicts the outcome of the underlying election based on the outcome of the election restricted to the sampled votes. One of the most fundamental questions in polling is: how many votes should be sampled? The number of samples used in an algorithm is called the sample complexity of that algorithm. It turns out that the sample complexity of polling, too, crucially depends on the margin of victory of the election from which the pollster is sampling [Canetti et al., 1995, Dey and Bhattacharyya, 2015]. As the above discussion shows, computing the margin of victory of an election is often a necessary task in many practical applications. However, one cannot observe all the votes in many applications, including the ones discussed above. For example, in a survey or poll, one cannot first observe all the votes to compute the margin of victory and then sample a few votes based on the computed margin of victory. Hence, one often needs a “good enough” estimate of the margin of victory obtained by observing only a few votes. We, in this work, address precisely this problem: estimating the margin of victory of an election by sampling as few votes as possible.

### 1.2 Our Contributions

Let n be the number of votes, m the number of candidates, and r any voting rule. We introduce and study the following computational problem in this paper (throughout this section we use standard terminology from voting theory; for formal definitions, refer to Section 2):

###### Definition 1.

((c, ε, δ)–Margin of Victory ((c, ε, δ)–MoV))
Given an r-election E, determine M_r(E), the margin of victory of E with respect to r, within an additive error of at most c·M_r(E) + εn, with probability at least 1 − δ. The probability is taken over the internal coin tosses of the algorithm.

We call the parameter c in Definition 1 the approximation factor of the problem. The notion of approximation in Definition 1 is a hybrid of what are classically known as additive and multiplicative approximations (see [Vazirani, 2001]). However, Corollary 1 shows that there does not exist any estimator whose sample complexity is independent of n and whose additive error does not scale with n. Again, we cannot hope for an estimator with sample complexity independent of n that guarantees a good multiplicative approximation ratio, since there exist elections whose margin of victory is only one. This justifies the problem formulation in Definition 1.

Our goal here is to solve the (c, ε, δ)–MoV problem with as few sampled votes as possible. Our main technical contribution is a set of efficient sampling-based polynomial time randomized algorithms that solve the (c, ε, δ)–MoV problem for various voting rules. Each sample reveals the entire preference order of the sampled vote. The specific contributions of this paper are summarized in Table 1.

Table 1 shows a practically appealing positive result: the sample complexity of all the algorithms presented here is independent of the number of voters. We also present lower bounds on the sample complexity of the (c, ε, δ)–MoV problem for all the common voting rules, which match the upper bounds when we have a constant number of candidates. Moreover, the lower and upper bounds on the sample complexity match exactly for the k-approval voting rule, irrespective of the number of candidates, when k is a constant. The specific contributions of this paper are as follows.

• We show a sample complexity lower bound of Ω(((1 − c)²/ε²) ln(1/δ)) for the (c, ε, δ)–MoV problem for all the commonly used voting rules, where c ∈ [0, 1) (Theorem 2 and Corollary 1).

• We show a sample complexity upper bound of O((1/ε²) ln(m/δ)) for the (1/3, ε, δ)–MoV problem for arbitrary scoring rules (Theorem 3). However, for a special class of scoring rules, namely the k-approval voting rules, we have a sample complexity upper bound of O((1/ε²) ln(k/δ)) for the (0, ε, δ)–MoV problem (Theorem 4).

One key finding of our work is that there may exist efficient sampling-based polynomial time algorithms for estimating the margin of victory even when computing the margin of victory exactly is NP-hard for a voting rule [Xia, 2012], as is the case for the maximin and Copeland^α voting rules.

### 1.3 Related Work and Discussion

Magrino et al. [Magrino et al., 2011] present approximation algorithms to compute the margin of victory for the instant runoff voting (IRV) rule. Cary [Cary, 2011] provides algorithms to estimate the margin of victory of an IRV election. Xia [Xia, 2012] presents polynomial time algorithms for computing the margin of victory of an election for various voting rules, for example the scoring rules, and proves intractability results for several other voting rules, for example the maximin and Copeland^α voting rules. Endriss and Leite [Endriss and Leite, 2014] study the complexity of exact variants of the margin of victory problem for the Schulze, Cup, and Copeland voting rules. However, all the existing algorithms to either compute or estimate the margin of victory need to observe all the votes, which defeats the purpose in many applications, including the ones discussed in Section 1.1. We, in this work, show that the margin of victory of many common voting rules can be estimated quite accurately by sampling only a few votes. Moreover, the accuracy of our estimation algorithms is good enough for many practical scenarios. For example, Table 1 shows that it is enough to select only O((1/ε²) ln(1/δ)) many votes uniformly at random to estimate the margin of victory of a plurality election within an error of εn with probability at least 1 − δ, where n is the number of votes. We note that in all the sampling based applications discussed in Section 1.1, the sample size is inversely related to the margin of victory [Canetti et al., 1995], and thus it is enough to estimate the margin of victory accurately.

The margin of victory problem is the same as the optimization version of the destructive bribery problem introduced by Faliszewski et al. [Faliszewski et al., 2006, Faliszewski et al., 2009]. However, to the best of our knowledge, there is no prior work on estimating the cost of bribery by sampling votes.

Organization. We formally introduce the terminology in Section 2; we present our sample complexity lower bounds in Section 3; we present our polynomial time sampling based algorithms in Section 4; finally, we conclude in Section 5.

## 2 Preliminaries

Let V = {v₁, …, v_n} be the set of all votes and C = {c₁, …, c_m} the set of all candidates. If not mentioned otherwise, m and n denote the number of candidates and the number of voters respectively. Each vote is a complete order over the candidates in C. For example, for the candidate set {a, b}, the vote a ≻ b means that the vote prefers a to b. We denote the set of all complete orders over C by L(C). Hence, L(C)^n denotes the set of all n-voters' preference profiles (v₁, …, v_n). A map r : L(C)^n → 2^C ∖ {∅} is called a voting rule. Given a vote profile V, we call the candidates in the set r(V) the winners. The pair (V, r) is called an r-election if the voting rule used is r.

Examples of some common voting rules are as follows.

Positional scoring rules: A collection of score vectors (one m-dimensional vector α = (α₁, α₂, …, α_m) for each m), with α₁ ≥ α₂ ≥ ⋯ ≥ α_m and α₁ > α_m, naturally defines a voting rule: a candidate gets score α_i from a vote if it is placed at the i-th position in that vote, and the score of a candidate is the sum of the scores it receives from all the votes. The winners are the candidates with the maximum score. Scoring rules remain unchanged if we multiply every α_i by any constant λ > 0 and/or add any constant μ. Hence, we assume without loss of generality that every score vector is normalized, that is, α_m = 0 (and thus α₁ > 0).

The vector α that is 1 in the first k coordinates and 0 elsewhere gives the k-approval voting rule; 1-approval is called the plurality voting rule. The score vector (m − 1, m − 2, …, 1, 0) gives the Borda voting rule.
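The scoring rules just described are easy to evaluate in code. In the sketch below (ours, not the paper's), one generic function computes the winning set for any score vector, and the three example vectors instantiate plurality, 2-approval, and Borda on the same small profile.

```python
def scoring_rule_winners(votes, score_vector):
    """Winners under the positional scoring rule given by score_vector."""
    scores = {}
    for v in votes:
        for pos, cand in enumerate(v):
            scores[cand] = scores.get(cand, 0) + score_vector[pos]
    top = max(scores.values())
    return {c for c, s in scores.items() if s == top}

m = 3
plurality = [1] + [0] * (m - 1)        # 1-approval
two_approval = [1, 1] + [0] * (m - 2)  # k-approval with k = 2
borda = list(range(m - 1, -1, -1))     # (m-1, m-2, ..., 0)

votes = [('a', 'b', 'c'), ('a', 'c', 'b'), ('b', 'c', 'a'), ('c', 'b', 'a')]
```

On this profile the three rules elect different winning sets, which illustrates why the margin of victory is rule-specific.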

Approval: In approval voting, each vote approves a subset of candidates. The winners are the candidates which are approved by the maximum number of votes.

Bucklin: A candidate x's Bucklin score is the minimum number ℓ such that at least half of the votes rank x within their top ℓ positions. The winners are the candidates with the lowest Bucklin score.

Maximin: Given an election E and any two candidates x and y, the quantity D_E(x, y) is defined as N_E(x, y) − N_E(y, x), where N_E(x, y) is the number of votes which prefer x to y. The maximin score of a candidate x is min_{y∈C∖{x}} D_E(x, y). The winners are the candidates with the maximum maximin score.

Copeland^α: The Copeland^α score of a candidate x is |{y ∈ C∖{x} : D_E(x, y) > 0}| + α·|{y ∈ C∖{x} : D_E(x, y) = 0}|, where α ∈ [0, 1]. The winners are the candidates with the maximum Copeland^α score.
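The pairwise quantities behind the maximin and Copeland^α rules can be computed directly from the definitions. A minimal sketch (ours; D_E is realized as a dictionary of pairwise margins):

```python
def pairwise_defeats(votes, candidates):
    """D_E as a dict: D[(x, y)] = #votes preferring x to y minus #votes preferring y to x."""
    n = len(votes)
    D = {}
    for x in candidates:
        for y in candidates:
            if x == y:
                continue
            n_xy = sum(1 for v in votes if v.index(x) < v.index(y))
            D[(x, y)] = n_xy - (n - n_xy)
    return D

def maximin_scores(votes, candidates):
    """Maximin score: worst pairwise margin of each candidate."""
    D = pairwise_defeats(votes, candidates)
    return {x: min(D[(x, y)] for y in candidates if y != x) for x in candidates}

def copeland_scores(votes, candidates, alpha=0.5):
    """Copeland^alpha score: one point per pairwise win, alpha per pairwise tie."""
    D = pairwise_defeats(votes, candidates)
    return {x: sum(1 if D[(x, y)] > 0 else (alpha if D[(x, y)] == 0 else 0)
                   for y in candidates if y != x)
            for x in candidates}
```

On a three-candidate Condorcet cycle every candidate has the same maximin and Copeland^α score, so all candidates win under both rules.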

For score based voting rules (all the voting rules mentioned above are score based), we denote the score of any candidate x by s(x). Given a positive integer t, we denote the set {1, 2, …, t} by [t]. The notion of margin of victory of an election is defined as follows.

###### Definition 2.

(Margin of Victory (MoV))
Given an election E = (V, r), the margin of victory of E, denoted by M_r(E), is the minimum number of votes that must be changed to change the winning set r(V).

### 2.1 Chernoff Bound

We repeatedly use the following concentration inequality:

###### Theorem 1.

Let X₁, X₂, …, X_ℓ be a sequence of ℓ independent random variables in [0, 1] (not necessarily identical). Let S = Σ_{i=1}^{ℓ} X_i and let μ = E[S]. Then, for any δ ∈ (0, 1):

 Pr[|S − μ| ≥ δμ] < 2·exp(−δ²μ/3)
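As a quick sanity check (our own illustration, not part of the paper), one can compare the bound of Theorem 1 against a Monte Carlo estimate of the deviation probability for a sum of Bernoulli variables:

```python
import math
import random

def chernoff_bound(mu, dev):
    """Right-hand side of Theorem 1: 2 * exp(-dev^2 * mu / 3)."""
    return 2 * math.exp(-dev ** 2 * mu / 3)

def empirical_tail(n, p, dev, trials=2000, seed=0):
    """Monte Carlo estimate of Pr[|S - mu| >= dev*mu] for S a sum of n
    independent Bernoulli(p) variables."""
    rng = random.Random(seed)
    mu = n * p
    bad = 0
    for _ in range(trials):
        s = sum(rng.random() < p for _ in range(n))
        if abs(s - mu) >= dev * mu:
            bad += 1
    return bad / trials
```

For n = 500 fair coins and dev = 0.2, the bound evaluates to about 0.07 while the empirical tail is essentially zero; the bound is loose but, crucially for the analyses below, it decays exponentially in μ.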

## 3 Sample Complexity Lower Bounds

Our lower bounds for the sample complexity of the (c, ε, δ)–MoV problem are derived from the information-theoretic lower bound for distinguishing two distributions. We start with the following basic observation. Let X₁ be a random variable taking value 1 with probability (1/2) + ε and 0 with probability (1/2) − ε, and let X₂ be a random variable taking value 1 with probability (1/2) − ε and 0 with probability (1/2) + ε. Then it is well known that every algorithm needs at least Ω((1/ε²) ln(1/δ)) many samples to distinguish between X₁ and X₂ with probability of making an error being at most δ [Canetti et al., 1995]. Immediately, we have:

###### Theorem 2.

The sample complexity of the (c, ε, δ)–MoV problem for the plurality voting rule is at least Ω(((1 − c)²/ε²) ln(1/δ)), for any c ∈ [0, 1).

Proof: Let ε′ = 3ε/(1 − c). Consider two vote distributions X₁ and X₂, each over the candidate set {x, y}. In X₁, exactly a (1/2 + ε′) fraction of the voters prefer x to y, and thus the margin of victory is ε′n. In X₂, exactly half of the voters prefer x to y, and thus the margin of victory is one. Any (c, ε, δ)–MoV algorithm A for the plurality voting rule gives us a distinguisher between X₁ and X₂ with probability of error at most δ. This is so because, if the input to A is X₁ then the output of A is less than 2εn with probability at most δ (since (1 − c)ε′n − εn = 2εn), whereas, if the input to A is X₂ then the output of A is more than 2εn with probability at most δ, since n can be arbitrarily large. Distinguishing X₁ from X₂ requires Ω((1/ε′²) ln(1/δ)) = Ω(((1 − c)²/ε²) ln(1/δ)) many samples, and we get the result.

Theorem 2 immediately gives the following corollary.

###### Corollary 1.

For any c ∈ [0, 1), every (c, ε, δ)–MoV algorithm needs at least Ω(((1 − c)²/ε²) ln(1/δ)) many samples for every voting rule which reduces to the plurality rule for elections with two candidates. In particular, the lower bound holds for the scoring rules, approval, Bucklin, maximin, and Copeland^α voting rules.

We note that the lower bound results in Theorem 2 and Corollary 1 do not assume anything about the sampling strategy or the computational complexity of the estimator.

## 4 Sampling Based Algorithms

A natural approach for estimating the margin of victory of an election efficiently is to compute the margin of victory of a suitably small number of sampled votes. Certainly, it is not immediate that samples chosen uniformly at random preserve the value of the margin of victory of the original election within some desired factor. Although it may be possible to formulate clever sampling strategies that tie into the margin of victory structure of the election, we will show that uniformly chosen samples are good enough to design algorithms for estimating the margin of victory for the voting rules studied here. Our proposal has the advantage that the sampling component of our algorithms is always easy to implement, and further, there is no compromise on the bounds in the sense that they are optimal for any constant number of candidates.

Because our samples are chosen uniformly at random, our analysis relies only on the fact that a sufficiently large sample of votes has been drawn. Our algorithms involve computing a quantity (which depends on the voting rule under consideration) based on the sampled votes, which we argue to be a suitable estimate of the margin of victory of the original election. This quantity is not necessarily the margin of victory of the sampled votes. For scoring rules, for instance, we use the sampled votes to estimate candidate scores, and we use the difference between the top two candidate scores (suitably scaled) as the margin of victory estimate. We also establish a relationship between scores and values of the margin of victory to achieve the desired bounds on the estimate. The overall strategy is in a similar spirit for other voting rules as well, although the exact estimates may be different. We now turn to a more detailed description, although some proofs are omitted due to lack of space.

### 4.1 Scoring Rules and Approval Voting Rule

We begin with the class of scoring rules. Interestingly, the margin of victory of any scoring rule based election can be estimated quite accurately by sampling only O((1/ε²) ln(m/δ)) many votes. An important thing to note is that the sample complexity upper bound is independent of the score vector. Before embarking on the proof of this general result, we prove a structural lemma which will be used crucially in the subsequent proof.

###### Lemma 1.

Let α = (α₁, α₂, …, α_m) be any normalized score vector (hence, α₁ > 0). If w and z are the candidates that receive the highest and second highest score respectively in an α–scoring rule election instance E, then,

 α₁(M_α(E) − 1) ≤ s(w) − s(z) ≤ 2α₁·M_α(E)

Proof: Write M for M_α(E). We claim that there must be at least M − 1 many votes where w is preferred over z. Indeed, otherwise we could swap w and z in all the votes where w is preferred over z; this would make z win the election although we would have changed at most M − 1 votes only, contradicting the definition of the margin of victory (see Definition 2). Let v be a vote where w is preferred over z, and let α_i and α_j be the scores received by the candidates w and z respectively from v. We replace v by the vote obtained from v by moving z to the first position and w to the last position. This vote change reduces the value of s(w) − s(z) by (α_i − α_m) + (α₁ − α_j), which is at least α₁ (recall that α_m = 0 and α_i ≥ α_j). Performing M − 1 such changes keeps w a winner, and hence α₁(M − 1) ≤ s(w) − s(z). On the other hand, each vote change reduces the value of s(w) − s(z) by at most 2α₁, since 0 ≤ α_i ≤ α₁ for every i, and M vote changes suffice to change the winning set. Hence, s(w) − s(z) ≤ 2α₁·M_α(E).

With Lemma 1 at hand, we show our estimation algorithm for the scoring rules next.

###### Theorem 3.

There is a polynomial time (1/3, ε, δ)–MoV algorithm for the scoring rules with sample complexity O((1/ε²) ln(m/δ)).

Proof: Let α = (α₁, …, α_m) be any arbitrary normalized score vector and E an election instance with vote set V. We sample ℓ votes (the value of ℓ will be chosen later) uniformly at random from V, with replacement. For a candidate x, define a random variable X_i(x) = α_j/α₁ if x is placed at the j-th position in the i-th sample vote; note that X_i(x) ∈ [0, 1]. Define ¯s(x) = (α₁n/ℓ)·Σ_{i=1}^{ℓ} X_i(x), the estimate of s(x), the score of x. Also define ε′ = 3ε/4. Now, using the Chernoff bound (Theorem 1), we have the following.

We now use the union bound to get the following.

 Pr[∃x ∈ C, |¯s(x) − s(x)| > α₁ε′n] ≤ 2m·exp(−ε′²ℓ/3) (1)

Define ¯M = (¯s(¯w) − ¯s(¯z))/(1.5α₁), the estimate of the margin of victory of the election E (and thus the output of the algorithm), where ¯w and ¯z are the candidates with the highest and second highest estimated scores ¯s respectively. We claim that, if |¯s(x) − s(x)| ≤ α₁ε′n for every candidate x ∈ C, then |¯M − M_α(E)| ≤ (1/3)M_α(E) + εn. This can be shown as follows.

 ¯M − M_α(E) = (¯s(¯w) − ¯s(¯z))/(1.5α₁) − M_α(E) ≤ (s(w) − s(z))/(1.5α₁) + (2ε′n)/1.5 − M_α(E) ≤ (1/3)M_α(E) + εn

The second inequality follows from the fact that ¯s(¯w) − ¯s(¯z) ≤ s(w) − s(z) + 2α₁ε′n whenever |¯s(x) − s(x)| ≤ α₁ε′n for every x ∈ C. The third inequality follows from Lemma 1. Similarly, we bound M_α(E) − ¯M as follows.

 M_α(E) − ¯M = M_α(E) − (¯s(¯w) − ¯s(¯z))/(1.5α₁) ≤ M_α(E) − (s(w) − s(z))/(1.5α₁) + (2ε′n)/1.5 ≤ (1/3)M_α(E) + εn

This proves the claim. Now we bound the success probability of the algorithm as follows. Let A be the event that |¯s(x) − s(x)| ≤ α₁ε′n for every x ∈ C.

 Pr[|¯M − M_α(E)| ≤ (1/3)M_α(E) + εn] ≥ Pr[|¯M − M_α(E)| ≤ (1/3)M_α(E) + εn | A]·Pr[A] = Pr[A] ≥ 1 − 2m·exp(−ε′²ℓ/3)

The equality follows from the claim above, since conditioned on A the stated bound always holds, and the last inequality follows from inequality (1). Now, by choosing ℓ = (3/ε′²) ln(2m/δ) = (16/(3ε²)) ln(2m/δ), we get a (1/3, ε, δ)–MoV algorithm for the scoring rules.
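The estimator in the proof above is straightforward to implement. The sketch below (ours, not the paper's pseudocode; the function name is an assumption) follows the proof's constants: it samples ℓ = ⌈(3/ε′²) ln(2m/δ)⌉ votes with ε′ = 3ε/4, rescales the sampled scores to the full electorate, and divides the top-two gap by 1.5α₁.

```python
import math
import random

def estimate_scoring_mov(votes, score_vector, eps, delta, seed=0):
    """Sketch of the Theorem 3 estimator: sample votes with replacement,
    estimate scores, output (s_bar(top) - s_bar(second)) / (1.5 * alpha_1)."""
    rng = random.Random(seed)
    n, m = len(votes), len(score_vector)
    alpha1 = score_vector[0]
    eps_prime = 3 * eps / 4
    l = math.ceil((3 / eps_prime ** 2) * math.log(2 * m / delta))
    totals = {}
    for _ in range(l):
        v = votes[rng.randrange(n)]
        for pos, cand in enumerate(v):
            totals[cand] = totals.get(cand, 0.0) + score_vector[pos]
    s_bar = {c: t * n / l for c, t in totals.items()}  # rescale to n votes
    first, second = sorted(s_bar.values(), reverse=True)[:2]
    return (first - second) / (1.5 * alpha1)
```

On a plurality election with 80 votes for one candidate and 20 for another (true margin of victory 30), the estimate concentrates around 40, well within the (1/3)·M + εn guarantee.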

Now we show an algorithm for the (0, ε, δ)–MoV problem for the k-approval voting rule which not only provides a more accurate (purely additive) estimate of the margin of victory, but also has a lower sample complexity. The following lemmas will be used subsequently.

###### Lemma 2.

Let E be an arbitrary instance of a k-approval election. If w and z are the candidates that receive the highest and second highest score respectively in E, then,

 2(M_k-approval(E) − 1) < s(w) − s(z) ≤ 2M_k-approval(E)

Proof: Write M for M_k-approval(E). We call a vote favorable if w appears within the top k positions and z does not appear within the top k positions in it. We claim that the number of favorable votes must be at least M − 1. Indeed, otherwise we could swap the positions of w and z in all the favorable votes while keeping everything else fixed; this would make the score of z at least as much as the score of w while changing fewer than M votes, which contradicts the fact that the margin of victory is M. Now, notice that the score of z must remain less than the score of w even if we swap the positions of w and z in M − 1 many favorable votes, since the margin of victory is M. Each such vote change increases the score of z by one and reduces the score of w by one. Hence, 2(M − 1) < s(w) − s(z). Again, since the margin of victory is M, there exist a candidate x other than w and M many votes in E which can be modified such that x becomes a winner of the modified election. Each vote change can reduce the score of w by at most one and increase the score of x by at most one. Hence, s(w) − s(x) ≤ 2M, and thus s(w) − s(z) ≤ 2M since s(z) ≥ s(x).

###### Lemma 3.

Let f : (0, ∞) → ℝ be the function defined by f(x) = e^(−λ/x), for some constant λ > 0. Then,

 f(x) + f(y) ≤ f(x + y), for all x, y > 0 with λ/(x + y) > 2

Proof: For the function f, we have the following.

 f(x) = e^(−λ/x) ⇒ f′(x) = (λ/x²)e^(−λ/x) ⇒ f″(x) = (λ²/x⁴)e^(−λ/x) − (2λ/x³)e^(−λ/x)

Hence, for every z > 0 with λ/z > 2, we have f″(z) > 0, and thus f′ is increasing on this range. Assume, without loss of generality, x ≤ y. This implies the following for an infinitesimally small positive δ.

 f′(x) ≤ f′(y) ⇒ (f(x) − f(x − δ))/δ ≤ (f(y + δ) − f(y))/δ ⇒ f(x) + f(y) ≤ f(x − δ) + f(y + δ) ⇒ f(x) + f(y) ≤ f(x + y)

The last step follows by repeating the argument until the first argument reaches 0, and using lim_{z→0⁺} f(z) = 0.

With Lemmas 2 and 3 at hand, we now describe our margin of victory estimator.

###### Theorem 4.

There is a polynomial time (0, ε, δ)–MoV algorithm for the k-approval voting rule whose sample complexity is O((1/ε²) ln(k/δ)).

Proof: Let E be an arbitrary k-approval election with vote set V. We sample ℓ votes uniformly at random from V with replacement. For a candidate x, define a random variable X_i(x) which takes value 1 if x appears among the top k candidates in the i-th sample vote, and 0 otherwise. Define ¯s(x) = (n/ℓ)·Σ_{i=1}^{ℓ} X_i(x), the estimate of the score of the candidate x, and let s(x) be the actual score of x. Also define ε′ = ε/2. Then, by the Chernoff bound (Theorem 1), we have:

 Pr[|¯s(x) − s(x)| > ε′n] ≤ 2·exp(−ε′²ℓn/(3s(x)))

Now, we apply the union bound to get the following.

 Pr[∃x ∈ C, |¯s(x) − s(x)| > ε′n] ≤ Σ_{x∈C} 2·exp(−ε′²ℓn/(3s(x))) ≤ 2k·exp(−ε′²ℓ/3) (2)

The second inequality follows from Lemma 3: the expression Σ_{x∈C} exp(−ε′²ℓn/(3s(x))) is maximized, subject to the constraints 0 ≤ s(x) ≤ n for every x ∈ C and Σ_{x∈C} s(x) = kn, when s(x) = n for every x in some subset of k candidates and s(x) = 0 for the remaining candidates.

Now, to estimate the margin of victory of the given election E, let ¯w and ¯z be the candidates with the maximum and second maximum estimated scores respectively; that is, ¯s(¯w) ≥ ¯s(¯z) ≥ ¯s(x) for every candidate x ∈ C∖{¯w, ¯z}. We define ¯M = (¯s(¯w) − ¯s(¯z))/2, the estimate of the margin of victory of the election E (and thus the output of the algorithm). Let A be the event that |¯s(x) − s(x)| ≤ ε′n for every candidate x ∈ C. We bound the success probability of the algorithm as follows.

 Pr[|¯M − M_k-approval(E)| ≤ εn] ≥ Pr[|¯M − M_k-approval(E)| ≤ εn | A]·Pr[A] = Pr[A] ≥ 1 − 2k·exp(−ε′²ℓ/3)

The second equality follows from Lemma 2 and an argument analogous to the proof of Theorem 3. The third inequality follows from inequality (2). Now, by choosing ℓ = (3/ε′²) ln(2k/δ) = (12/ε²) ln(2k/δ), we get a (0, ε, δ)–MoV algorithm.
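A sketch of the k-approval estimator from the proof above (ours, not the paper's pseudocode): sampled approval counts are rescaled by n/ℓ, and the top-two gap is halved as suggested by Lemma 2.

```python
import math
import random

def estimate_kapproval_mov(votes, k, eps, delta, seed=0):
    """Sketch of the Theorem 4 estimator: estimate the k-approval scores
    from a uniform sample and halve the top-two gap (cf. Lemma 2)."""
    rng = random.Random(seed)
    n = len(votes)
    eps_prime = eps / 2
    l = math.ceil((3 / eps_prime ** 2) * math.log(2 * k / delta))
    counts = {c: 0 for c in votes[0]}
    for _ in range(l):
        v = votes[rng.randrange(n)]
        for cand in v[:k]:  # k-approval: only the top k positions score
            counts[cand] += 1
    s_bar = {c: cnt * n / l for c, cnt in counts.items()}
    first, second = sorted(s_bar.values(), reverse=True)[:2]
    return (first - second) / 2
```

On the same 80-versus-20 plurality election as before (true margin of victory 30, k = 1), the estimate concentrates tightly around 30, illustrating the purely additive εn guarantee.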

Note that the sample complexity upper bound matches the lower bound proved in Corollary 1 for the k-approval voting rule when k is a constant, irrespective of the number of candidates. For the approval voting rule, we have the following result.

###### Theorem 5.

There is a polynomial time (0, ε, δ)–MoV algorithm for the approval voting rule with sample complexity O((1/ε²) ln(m/δ)).

Proof sketch: We estimate the approval score of every candidate within an additive error of ε′n = εn/2 by sampling O((1/ε²) ln(m/δ)) many votes uniformly at random with replacement; the result follows from arguments analogous to the proofs of Lemma 2 and Theorem 4.

### 4.2 Bucklin Voting Rule

Now we consider the Bucklin voting rule. Given an election E, a candidate x, and an integer ℓ, we denote by n_ℓ(x) the number of votes in E in which x appears within the top ℓ positions. We prove useful bounds on the margin of victory of any Bucklin election in Lemma 4.

###### Lemma 4.

Let E be an arbitrary instance of a Bucklin election and w be the winner of E. Let us define a quantity Δ(E) as follows.

 Δ(E) := min { n_ℓ(w) − n_ℓ(x) + 1 : ℓ ∈ [m − 1], n_ℓ(w) > n/2, x ∈ C∖{w}, n_ℓ(x) ≤ n/2 }

Then,

 Δ(E)/2 ≤ M_Bucklin(E) ≤ Δ(E)

Proof: Pick any ℓ ∈ [m − 1] and any x ∈ C∖{w} such that n_ℓ(w) > n/2 and n_ℓ(x) ≤ n/2. By changing n_ℓ(w) − ⌊n/2⌋ many votes, we can ensure that w is not placed within the top ℓ positions in more than ⌊n/2⌋ votes: choose n_ℓ(w) − ⌊n/2⌋ many votes where w appears within the top ℓ positions and swap w with the candidates placed at the last position in those votes. Similarly, by changing ⌊n/2⌋ + 1 − n_ℓ(x) many votes, we can ensure that x is placed within the top ℓ positions in more than ⌊n/2⌋ votes. Hence, by changing at most n_ℓ(w) − n_ℓ(x) + 1 many votes, we can make w not win the election. Since ℓ and x were picked arbitrarily, we have M_Bucklin(E) ≤ Δ(E).

For the other inequality, since the margin of victory is M_Bucklin(E), there exist an ℓ ∈ [m − 1], a candidate x ∈ C∖{w}, and M_Bucklin(E) many votes in E such that those votes can be changed so that, in the modified election, w is not placed within the top ℓ positions in more than ⌊n/2⌋ votes and x is placed within the top ℓ positions in more than ⌊n/2⌋ votes. Each vote change moves each candidate into or out of the top ℓ positions of at most one vote. Hence, we have the following.

 M_Bucklin(E) ≥ n_ℓ(w) − ⌊n/2⌋,  M_Bucklin(E) ≥ ⌊n/2⌋ + 1 − n_ℓ(x)
 ⇒ 2·M_Bucklin(E) ≥ n_ℓ(w) − n_ℓ(x) + 1 ≥ Δ(E) ⇒ M_Bucklin(E) ≥ Δ(E)/2

Notice that, given an election E, Δ(E) can be computed in a polynomial amount of time. Lemma 4 leads us to the following theorem.

###### Theorem 6.

There is a polynomial time (1/3, ε, δ)–MoV algorithm for the Bucklin voting rule with sample complexity O((1/ε²) ln(m/δ)).

Proof sketch: Similar to the proof of Theorem 4, we estimate, for every candidate x and every integer ℓ ∈ [m − 1], the number of votes n_ℓ(x) in which x appears within the top ℓ positions, within an additive error of ε′n. Next, we compute an estimate ¯Δ(E) of Δ(E) from the sampled votes and output (2/3)·¯Δ(E) as the estimate of the margin of victory. Using Lemma 4, we can argue the rest of the proof in a way that is analogous to the proofs of Theorems 3 and 4.
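The quantity Δ(E) of Lemma 4 is directly computable from a full profile. A minimal sketch (ours; helper names are assumptions, and ties in the winner determination are ignored):

```python
def n_top(votes, x, l):
    """n_l(x): number of votes ranking candidate x within the top l positions."""
    return sum(1 for v in votes if v.index(x) < l)

def bucklin_winner(votes, candidates):
    """Candidate reaching a strict majority within the fewest positions."""
    n = len(votes)
    best, best_level = None, len(candidates) + 1
    for x in candidates:
        for l in range(1, len(candidates) + 1):
            if n_top(votes, x, l) > n / 2:
                if l < best_level:
                    best, best_level = x, l
                break
    return best

def bucklin_delta(votes, candidates):
    """Delta(E) from Lemma 4: min over admissible (l, x) of n_l(w) - n_l(x) + 1."""
    n, m = len(votes), len(candidates)
    w = bucklin_winner(votes, candidates)
    return min(n_top(votes, w, l) - n_top(votes, x, l) + 1
               for l in range(1, m)
               for x in candidates if x != w
               if n_top(votes, w, l) > n / 2 and n_top(votes, x, l) <= n / 2)
```

On the 5-vote example below Δ(E) = 2, so Lemma 4 brackets the Bucklin margin of victory between 1 and 2; the true margin of victory here is 1.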

### 4.3 Maximin Voting Rule

Next, we show the result for the maximin voting rule.

###### Lemma 5.

Let E be any instance of a maximin election. If w and z are the candidates that receive the highest and second highest maximin score respectively in E, then,

 2M_maximin(E) ≤ s(w) − s(z) ≤ 4M_maximin(E)

Proof: Each vote change can increase the maximin score of z by at most two and decrease the maximin score of w by at most two. Hence, we have s(w) − s(z) ≤ 4M_maximin(E). Let x be a candidate that minimizes D_E(w, x), that is, s(w) = D_E(w, x). Let v be a vote in which w is preferred over x. We replace v by the vote obtained from v by exchanging the positions of w and x. This vote change reduces the score of w by two and does not reduce the score of z. Hence, 2M_maximin(E) ≤ s(w) − s(z).

###### Theorem 7.

There is a polynomial time (1/3, ε, δ)–MoV algorithm for the maximin voting rule with sample complexity O((1/ε²) ln(m/δ)).

Proof sketch: Let E be an instance of a maximin election with vote set V. Let x and y be any two candidates. We sample ℓ votes uniformly at random from the set of all votes, with replacement, and define:

 X_i(x, y) = 1 if x ≻ y in the i-th sample vote, and X_i(x, y) = −1 otherwise

Define ¯D_E(x, y) = (n/ℓ)·Σ_{i=1}^{ℓ} X_i(x, y), the estimate of D_E(x, y). By using the Chernoff bound and the union bound, we have the following.

 Pr[∃x, y ∈ C, |¯D_E(x, y) − D_E(x, y)| > εn] ≤ 2m²·exp(−ε²ℓ/3)

We define ¯M = (¯s(¯w) − ¯s(¯z))/3, the estimate of the margin of victory of E, where ¯s(x) = min_{y∈C∖{x}} ¯D_E(x, y) is the estimated maximin score of x, and ¯w and ¯z are the candidates with the highest and second highest estimated maximin scores respectively. Now, using Lemma 5, we can complete the rest of the proof in a way that is analogous to the proof of Theorem 3.
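A sketch of the maximin estimator (ours, not the paper's pseudocode): all pairwise margins are estimated from one common sample, turned into estimated maximin scores, and the top-two gap is divided by the factor 3 suggested by the bracket of Lemma 5.

```python
import math
import random

def estimate_maximin_mov(votes, candidates, eps, delta, seed=0):
    """Sketch of the Theorem 7 estimator: estimate the pairwise margins
    D_E(x, y) from a uniform sample, form estimated maximin scores, and
    scale the top-two gap by 1/3 (cf. Lemma 5)."""
    rng = random.Random(seed)
    n, m = len(votes), len(candidates)
    l = math.ceil((3 / eps ** 2) * math.log(2 * m * m / delta))
    sample = [votes[rng.randrange(n)] for _ in range(l)]
    s_bar = {}
    for x in candidates:
        # Estimated maximin score: worst estimated pairwise margin of x.
        s_bar[x] = min(
            (2 * sum(1 for v in sample if v.index(x) < v.index(y)) - l) * n / l
            for y in candidates if y != x)
    first, second = sorted(s_bar.values(), reverse=True)[:2]
    return (first - second) / 3
```

On a 100-vote example where the true top-two maximin gap is 120 (so Lemma 5 places the margin of victory between 30 and 60), the output concentrates around 40.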

### 4.4 Copelandα Voting Rule

Now we present our result for the Copeland^α voting rule. Xia [Xia, 2012] introduced a quantity called the relative margin of victory (see Section 5.1 in [Xia, 2012]), which is a crucial ingredient in our algorithm for the Copeland^α voting rule. Given an election E, a candidate x, and an integer t (which may also be negative), s′_t(V, x) is defined as follows.

 s′_t(V, x) = |{y ∈ C : y ≠ x, D_E(y, x) < 2t}| + α·|{y ∈ C : y ≠ x, D_E(y, x) = 2t}|

For every two distinct candidates x and y, the relative margin of victory between x and y, denoted by RM(x, y), is defined as the minimum integer t such that s′_{−t}(V, x) ≤ s′_t(V, y). Let w be the winner of the election E. We define the quantity Γ(E) to be min_{x∈C∖{w}} RM(w, x). Notice that, given an election E, Γ(E) can be computed in a polynomial amount of time. Now we have the following lemma.

###### Lemma 6.

Γ(E)/2 ≤ M_Copeland^α(E) ≤ K·Γ(E), where K = O(log m).

Proof: Follows from Theorem 11 in [Xia, 2012].
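The quantity s′_t(V, x) displayed above can be computed directly from the pairwise margins. A minimal sketch (ours; the election is small enough to recompute D_E exactly):

```python
def s_prime(votes, candidates, x, t, alpha=0.5):
    """s'_t(V, x): count of rivals y with D_E(y, x) < 2t, plus alpha for each
    rival with D_E(y, x) = 2t exactly. For t = 0 this is the Copeland^alpha
    score of x."""
    n = len(votes)
    strict, ties = 0, 0
    for y in candidates:
        if y == x:
            continue
        n_yx = sum(1 for v in votes if v.index(y) < v.index(x))
        d_yx = n_yx - (n - n_yx)  # D_E(y, x)
        if d_yx < 2 * t:
            strict += 1
        elif d_yx == 2 * t:
            ties += 1
    return strict + alpha * ties
```

Increasing t shifts every pairwise contest against x in x's favor, so s′_t(V, x) is non-decreasing in t; this monotonicity is what makes the relative margin of victory well defined.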

###### Theorem 8.

For the Copeland^α voting rule, there is a polynomial time (1 − O(1/log m), ε, δ)–MoV algorithm whose sample complexity is O((1/ε²) ln(m/δ)).

Proof: Let E be an instance of a Copeland^α election with vote set V. For every pair of candidates x, y ∈ C, we compute ¯D_E(x, y), an estimate of D_E(x, y), within an additive error of ε′n, where ε′ = ε/4. This can be achieved with an error probability at most δ by sampling O((1/ε²) ln(m/δ)) many votes uniformly at random with replacement (the argument is the same as in the proof of Theorem 3). We define ¯s′_t(V, x) analogously to s′_t(V, x), with ¯D_E in place of D_E, and define ¯RM(x, y) between x and y to be the minimum integer t such that ¯s′_{−t}(V, x) ≤ ¯s′_t(V, y). Let ¯w be the winner of the sampled election, w the winner of E, ¯z := argmin_{x∈C∖{¯w}} ¯RM(¯w, x), and z := argmin_{x∈C∖{w}} RM(w, x), so that Γ(E) = RM(w, z). Since ¯D_E approximates D_E within an additive error of ε′n, we have the following for every pair of candidates x, y and every integer t.

 s′_{t−ε′n}(V, x) ≤ ¯s′_t(V, x) ≤ s′_{t+ε′n}(V, x)
 RM(x, y) − 2ε′n ≤ ¯RM(x, y) ≤ RM(x, y) + 2ε′n (3)

Define ¯Γ(E) := ¯RM(¯w, ¯z) to be the estimate of Γ(E). We show the following claim.

###### Claim 1.

With the above definitions of ¯w and ¯z, we have the following.

 Γ(E)−4ε′n≤¯Γ(E)≤Γ(E)+4ε′n

Proof: Below, we show the upper bound for ¯Γ(E).

 ¯Γ(E) = ¯RM(¯w, ¯z) ≤ ¯RM(w, ¯z) + 2ε′n ≤ ¯RM(w, z) + 2ε′n ≤ RM(w, z) + 4ε′n = Γ(E) + 4ε′n

The second inequality follows from the fact that ¯RM(¯w, ·) is an approximation of ¯RM(w, ·) within 2ε′n. The third inequality follows from the definition of ¯z, and the fourth inequality uses inequality (3). Now, we show the lower bound for ¯Γ(E).

 ¯Γ(E) = ¯RM(¯w, ¯z) ≥ ¯RM(w, ¯z) − 2ε′n ≥ RM(w, ¯z) − 4ε′n ≥ RM(w, z) − 4ε′n = Γ(E) − 4ε′n

The third inequality follows from inequality (3), and the fourth inequality follows from the definition of Γ(E).

We define ¯M, the estimate of M_Copeland^α(E), to be (2K/(2K + 1))·¯Γ(E), where K = O(log m) is the factor from Lemma 6. The following argument shows that ¯M is a (1 − O(1/log m), ε, δ)-estimate of M_Copeland^α(E).

 ¯M−MCopelandα(E)