1 Introduction
A common and natural way to aggregate preferences of agents is through an election. In a typical election, we have a set of candidates and a set of voters, and each voter reports his ranking of the candidates in the form of a vote. A voting rule selects one candidate as the winner once all voters provide their votes. Determining the winner of an election is one of the most fundamental problems in social choice theory.
We consider elections held in an online setting where voters vote in arbitrary order, and we would like to find the winner at any point in time. A very natural scenario where this occurs is an election conducted over the Internet. For instance, websites often ask for rankings of restaurants in a city and would like to keep track of the “best” restaurant according to some fixed voting rule. Traditionally, social choice theory addresses settings where the number of candidates is much smaller than the number of voters. However, we now often have situations where both the candidate set and voter set are very large. For example, the votes may be the result of high-frequency measurements made by sensors in a network [26], and a voting rule could be used to aggregate the measurements (as argued in [7]). Also, in online participatory democracy systems, such as [wid, syn], the number of candidates can be as large as the number of voters. The naïve way to conduct an online election is to store all the vote counts in a database and recompute the winner whenever it is needed. The space complexity of this approach becomes infeasible if the number of candidates or the number of votes is too large. Can we do better? Is it possible to compress the votes into a short summary that still allows for efficient recovery of the winner?
This question can be naturally formulated in the data stream model [20, 3]. Votes are interpreted as items in a data stream, and the goal is to devise an algorithm with minimum space requirement to determine the election winner. In the simplest setting of the plurality voting rule, where each vote is simply an approval for a single candidate and the winner is the candidate approved by the most voters, our problem is closely related to the classic problem of finding heavy hitters [9, 12] in a stream. For other popular voting rules, such as Borda, Bucklin, or Condorcet consistent voting rules, the questions become somewhat different.
Regardless of the voting rule, if the goal is to recover only the winner and the stream of votes is arbitrary, then it becomes essentially impossible to do anything better than the above-mentioned naïve solution (even when the algorithm is allowed to be randomized). Although we prove this formally, the reason should be intuitively clear: the winner may be winning by a very tiny margin, thereby making every vote significant to the final outcome. We therefore consider a natural relaxation of the winner determination problem, where the algorithm is allowed to output any candidate who could have been the winner, according to the voting rule under consideration, after a change of at most εn of the n votes. We call such a candidate an ε-winner; similar notions were introduced in [36, 25]. Note that if the winner wins by a margin of victory [36] of more than εn, there is a unique ε-winner.
In this work, we study streaming algorithms to solve the (ε, δ)-winner determination problem, i.e., the task of determining, with probability at least 1 − δ, an ε-winner of any given vote stream according to popular voting rules. Our algorithms are necessarily randomized.
1.1 Our Contributions
We initiate the study of streaming algorithms for the (ε, δ)-Winner Determination problem with respect to various voting rules. The results for the (ε, δ)-Winner Determination problem, when both ε and δ are positive, are summarized in Table 1. (When ε or δ equals 0, we prove that the space requirements are much larger.)
We also exhibit algorithms, having space complexity nearly the same as in Table 1, for the more general sliding window model, introduced by Datar et al. in [13]. In this setting, we want to find an ε-winner with respect to the most recent votes in the stream (the current window), clearly a very well motivated scenario in online elections.
1.2 Related Work
1.2.1 Social Choice
To the best of our knowledge, our work is the first to systematically study the approximate winner determination problem in the data stream model. A conceptually related work is that of Conitzer and Sandholm [10], who study the communication complexity of common voting rules. They consider a set of parties, each of whom knows only their own vote but, through a communication protocol, would like to compute the winner according to a specific voting rule. Observe that a streaming algorithm for exact winner determination immediately implies a one-way communication protocol in which each party transmits a number of bits equal to the algorithm's space usage: each party can input its vote into the stream and then communicate the memory contents of the streaming algorithm to the next party. However, it turns out that their results only imply weak lower bounds for the space complexity of streaming algorithms. Moreover, [10] does not study determination of ε-winners. The communication complexity of voting rules was also highlighted by Caragiannis and Procaccia in [7].
In a recent work, we [15] studied the problem of determining election winners from a random sample of the vote distribution. Since we can randomly sample from a stream of votes using a small amount of extra storage, the bounds from [15] are also useful in the streaming context. In that work, the goal was to find the winner, who was assumed to have a margin of victory [36] of at least εn, but the same arguments also work for finding ε-winners.
1.2.2 Streaming
The field of streaming algorithms has been the subject of intense research over the past two decades in both the algorithms and database communities. The theoretical foundations for the area were laid by [20, 3]. A stream is a sequence of data items drawn from a universe U, such that on each pass through the stream, the items are read once in that order. The frequency vector associated with the stream records, for every u ∈ U, the number of times u occurs as an item in the stream. In this definition, the stream is insertion-only; more generally, in the turnstile model, items can both be inserted and deleted from the stream, in which case the frequency vector maintains the cumulative count of each element of U. General surveys of the area can be found in [32, 33].

Algorithms for the insertion-only case were discovered before the formulation of the data streaming model. Consider the point-query problem: for a stream of N items from a universe of size m and a parameter ε, the goal is to output, for any item u, an estimate f̂(u) of its frequency f(u) such that |f̂(u) − f(u)| ≤ εN. Misra and Gries [30] gave an elegant yet simple deterministic algorithm for this problem using small space. (Their algorithm can be viewed as a generalization of the Boyer-Moore [5, 17] majority algorithm; it was also rediscovered 20 years later by [14, 22].) Since, to find an ε-winner for the plurality voting rule, it is enough to solve the point-query problem and output the candidate with maximum estimated frequency, Misra-Gries automatically implies a space bound for plurality. We use sampling to improve the dependence on the stream length and prove tightness of the resulting bounds. Our algorithms for many of the other voting rules are also based on the Misra-Gries algorithm. We note that in place of Misra-Gries, there are several other deterministic algorithms which could have been used, such as Lossy Counting [27] and Space Saving [28], but they would not change the asymptotic space complexity bounds. A thorough overview of the point query, or frequency estimation, problem can be found in [11].

For the more general turnstile model, the point-query problem is that of finding, for every u, an estimate f̂(u) within an additive error of ε times the ℓ₁-norm of the frequency vector. The best result for this problem is the randomized count-min sketch due to Cormode and Muthukrishnan [12]. The space bound was proved to be essentially tight by Jowhari et al. in [21]. In our context, the stream is a sequence of votes; so, our problems are mostly, just by definition, insertion-only. However, the count-min sketch becomes useful in our applications (i) if voters can issue retractions of their votes, and (ii) to maintain counts of random samples drawn from streams of unknown length.
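To make the count-min idea concrete, here is a minimal didactic sketch in Python. The class name and parameters are our choices, and Python's built-in `hash` stands in for the pairwise-independent hash families a real implementation would use; this is a simplified illustration, not the original data structure.

```python
import random

class CountMinSketch:
    """Simplified count-min sketch: each of `depth` rows hashes items into
    `width` counters; a point query returns the minimum counter over the rows.
    For insertion-only streams every row overestimates, so the minimum is an
    overestimate with bounded additive error."""

    def __init__(self, width, depth, seed=0):
        rng = random.Random(seed)
        self.width, self.depth = width, depth
        self.table = [[0] * width for _ in range(depth)]
        # One (a, b) pair per row; a real implementation would draw from a
        # pairwise-independent hash family instead of using Python's hash().
        self.hashes = [(rng.randrange(1, 10**9), rng.randrange(10**9))
                       for _ in range(depth)]

    def _cols(self, item):
        h = hash(item)
        for a, b in self.hashes:
            yield (a * h + b) % self.width

    def update(self, item, delta=1):
        # delta may be negative in the turnstile model (e.g. a vote retraction).
        for row, col in enumerate(self._cols(item)):
            self.table[row][col] += delta

    def estimate(self, item):
        return min(self.table[row][col]
                   for row, col in enumerate(self._cols(item)))
```

The turnstile `update` with a negative `delta` is exactly what supports vote retractions mentioned above.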
1.3 Technical Overview
Upper Bounds.
The streaming algorithms that achieve the upper bounds shown in Table 1 are obtained by applying frequency estimation algorithms, such as Misra-Gries or the count-min sketch, appropriately on a subsampled stream. The number of samples needed to obtain ε-winners for the various voting rules was previously analyzed in [15].
Lower Bounds.
Our main technical novelty is in the proofs of the lower bounds for the ε-winner determination problem. Usually, in the “heavy hitters” problem in the algorithms literature, the task is roughly to determine the set of all items with frequency above a threshold. Since there can be many such items, a space lower bound proportional to their number immediately follows simply from the size of the output. In contrast, we wish to determine only one winner, so that only about log m bits are needed to output the result. In order to obtain stronger lower bounds, we need to resort to other techniques. Moreover, note that our lower bounds are in the insertion-only stream model, whereas previous lower bounds for frequency estimation problems are usually for the more general turnstile model.
We prove these bounds through new reductions from fundamental problems in communication complexity. To give a flavor of the reductions, let us sketch the proof for the plurality voting rule. Consider each additive term separately in the lower bound.


The term depending on n alone: Suppose Alice has a number x ∈ {1, …, n} and Bob a number y ∈ {1, …, n}, and Bob wishes to know whether x > y through a protocol where communication is one-way from Alice to Bob. It is known [34, 29] that Alice is required to send Ω(log n) bits to Bob. We can reduce this problem to finding a winner in a plurality election among two candidates by having Alice push x approvals for the first candidate into the stream and Bob pushing y approvals for the second candidate; the lower bound follows.

The term depending on the number of candidates m (for suitable ε): Consider the Indexing problem over an arbitrary alphabet [a]: Alice has a vector x ∈ [a]^t and Bob an index j ∈ [t], and Bob wants to find x_j through a one-way protocol from Alice to Bob. Ergün et al. [16], extending [29]'s proof for the binary case, show that Alice needs to send Ω(t log a) bits. We reduce Indexing to ε-winner determination for a plurality election. Let the candidate set correspond to the alphabet. Alice (given her input x) pushes votes into the stream whose counts encode her vector and sends over the memory content of the streaming algorithm to Bob, who (given his input j) pushes further votes arranged so that the candidate corresponding to x_j becomes the unique ε-winner of this plurality election. Using [16]'s lower bound on the communication complexity of the Indexing problem yields our result.

The remaining term (for suitable ε): Suppose Alice has a vector x and Bob a vector y, and Bob wants to find the index of the maximum entry of x + y (assume the maximum is unique) through a one-way protocol. We show, by reducing from the Augmented Indexing problem [16, 29], that Alice needs to send many bits to Bob. Alice imagines her vector x as being the vote count of a plurality election among the candidates, streams in the corresponding votes and runs the streaming algorithm for the problem, and passes the memory content to Bob, who also streams in the votes corresponding to his vector y. The maximum entry in x + y corresponds to a candidate winning by a margin of at least εn, hence yielding the lower bound.
2 Preliminaries
2.1 Voting and Voting Rules
Let V be the set of all voters and C the set of all candidates. If not mentioned otherwise, V, C, n, and m denote the set of voters, the set of candidates, the number of voters, and the number of candidates, respectively. Each voter's vote is a complete order over the candidate set C. For example, for two candidates a and b, a ≻ b means that the voter prefers a to b. We denote the set of all complete orders over C by L(C). Hence, L(C)^n denotes the set of all n-voter preference profiles. A map r : L(C)^n → 2^C \ {∅} is called a voting rule. Given a vote profile ≻ ∈ L(C)^n, we call the candidates in r(≻) the winners. Given an election, we can construct a weighted graph G, called the weighted majority graph, from the votes. The set of vertices in G is the set of candidates. For any two candidates x and y, the weight on the edge (x, y) is D(x, y) = N(x, y) − N(y, x), where N(x, y) (respectively N(y, x)) is the number of voters who prefer x to y (respectively y to x). A candidate x is called the Condorcet winner in an election if D(x, y) > 0 for every other candidate y. A voting rule is called Condorcet consistent if it selects the Condorcet winner as the winner of the election whenever one exists. Some examples of common voting rules are:


Positional scoring rules: A collection of m-dimensional vectors (α_1, α_2, …, α_m) with α_1 ≥ α_2 ≥ ⋯ ≥ α_m and α_1 > α_m naturally defines a voting rule: a candidate gets a score of α_i from a vote if it is placed at the i-th position in that vote. The score of a candidate is the sum of the scores it receives from all the votes. The winners are the candidates with maximum score. The vector that is 1 in the first k coordinates and 0 in the other coordinates gives the k-approval voting rule. The vector that is 0 in the last k coordinates and 1 in the other coordinates is called the k-veto voting rule. Observe that the score of a candidate in the k-approval (respectively k-veto) voting rule is the number of approvals (respectively vetoes) that the candidate receives. 1-approval is called the plurality voting rule, and 1-veto is simply called the veto voting rule. The score vector (m − 1, m − 2, …, 1, 0) gives the Borda rule.

Generalized plurality: In generalized plurality voting, each voter approves or disapproves exactly one candidate. The score of a candidate is the number of approvals it receives minus the number of disapprovals it receives. The candidates with the highest score are the winners. We introduce this rule and consider it particularly interesting in an online setting where every voter either likes or dislikes an item; hence each vote is either an approval or a disapproval of a candidate.

Approval: In approval voting, each voter approves a subset of candidates. The winners are the candidates which are approved by the maximum number of voters.

Maximin: The maximin score of a candidate x is min_{y ≠ x} D(x, y). The winners are the candidates with maximum maximin score.

Copeland: The Copeland score of a candidate x is |{y ≠ x : D(x, y) > 0}|, the number of other candidates it beats in pairwise elections. The winners are the candidates with maximum Copeland score.

Bucklin: A candidate x's Bucklin score is the minimum number k such that more than half of the voters rank x in their first k positions. The winners are the candidates with lowest Bucklin score.

Plurality with runoff: The top two candidates according to plurality score are selected first. The pairwise winner between these two candidates is then selected as the winner of the election. This rule is often simply called the runoff voting rule.
Among the above, only the maximin and Copeland rules are Condorcet consistent.
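As a concrete illustration of the definitions above, the following sketch computes winners under plurality, Borda, and the Condorcet criterion from a list of ranked votes. The function names are ours, and this is offline computation on a stored profile, not a streaming algorithm.

```python
from collections import Counter

def plurality_winners(votes):
    # votes: list of rankings (lists of candidates, most preferred first)
    scores = Counter(v[0] for v in votes)
    top = max(scores.values())
    return {c for c, s in scores.items() if s == top}

def borda_winners(votes):
    m = len(votes[0])
    scores = Counter()
    for v in votes:
        for pos, c in enumerate(v):
            scores[c] += m - 1 - pos      # score vector (m-1, m-2, ..., 0)
    top = max(scores.values())
    return {c for c, s in scores.items() if s == top}

def condorcet_winner(votes):
    candidates = votes[0]
    # D(x, y) = #votes preferring x to y minus #votes preferring y to x
    def D(x, y):
        return sum(1 if v.index(x) < v.index(y) else -1 for v in votes)
    for x in candidates:
        if all(D(x, y) > 0 for y in candidates if y != x):
            return x
    return None  # no Condorcet winner exists
```

Note that `condorcet_winner` may return `None`, reflecting the fact that a Condorcet winner need not exist.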
2.2 Model of Input Data
In the basic model, the input data is an insertion-only stream of elements from some universe U. In the context of voting in an online scenario, the natural model of input data is the insertion-only streaming model over the universe L(C) of all possible votes. The basic model can be generalized to the more sophisticated sliding window model, where the only active items are the most recently arrived ones, up to some window-size parameter. In this work, we focus on winner determination algorithms for insertion-only streams of votes in both the basic and the sliding window models. The basic input model can also be generalized to another input model, called the turnstile model, in which every element of the stream corresponds to either a unit increment or a unit decrement of the frequency of some element of U. We will use the turnstile streaming model (over a different universe) only to design efficient winner determination algorithms for insertion-only streams of votes. We note that our algorithms make only one pass over the input data; such one-pass algorithms are also called streaming algorithms.
2.3 Communication Complexity
We will use lower bounds on the communication complexity of certain functions to prove space complexity lower bounds for our problems. The communication complexity of a function measures the number of bits that need to be exchanged between two players to compute the function when its input is split among the two players [37]. In the more restrictive one-way communication model, the first player sends only one message to the second player, and the second player outputs the result. A protocol is a method that the players follow to compute certain functions of their input. Protocols can be randomized; in that case, the protocol needs to output correctly with probability at least 1 − δ, for some parameter δ (the probability is taken over the random coin tosses of the protocol). The randomized one-way communication complexity of a function f with error probability δ is denoted by R_δ^{1-way}(f). Classically, the first player is named Alice and the second player is named Bob, and we follow the same convention here. [24] is a standard reference for communication complexity.
2.4 Chernoff Bound
We will use the following concentration inequality:
Theorem 1.
Let X_1, X_2, …, X_ℓ be ℓ independent random variables taking values in [0, 1] (not necessarily identical). Let X = Σ_{i=1}^{ℓ} X_i and let μ = E[X]. Then, for any λ > 0:
Pr[|X − μ| ≥ λℓ] ≤ 2 exp(−2λ²ℓ) and Pr[|X − μ| ≥ λμ] ≤ 2 exp(−λ²μ/3).
The first inequality is called an additive bound and the second multiplicative.
2.5 Problem Definition
The basic winner determination problem is defined as follows.
Definition 1.
(Winner Determination)
Given an n-voter voting profile ≻ over a set C of candidates and a voting rule r, determine the winners r(≻).
We show a strong space complexity lower bound for the Winner Determination problem for the plurality voting rule in Theorem 12. To overcome this theoretical bottleneck, we focus on determining an approximate winner of an election. Below we define the notion of approximate winner, which we call an ε-winner.
Definition 2.
(ε-winner)
Given an n-voter voting profile ≻ over a set C of candidates and a voting rule r, a candidate w ∈ C is called an ε-winner if w can be made a winner by changing at most εn votes in ≻.
Notice that there always exists an ε-winner in every election, since a winner is also an ε-winner. We show that finding even an ε-winner deterministically requires large space when the number of votes is large [see Theorem 14]. However, we design space-efficient randomized algorithms which output an ε-winner of an election with probability at least 1 − δ. The problem that we study here is called the (ε, δ)-Winner Determination problem and is defined as follows.
Definition 3.
((ε, δ)-Winner Determination)
Given an n-voter voting profile ≻ over a set of candidates and a voting rule r, determine an ε-winner with probability at least 1 − δ. (The probability is taken over the internal coin tosses of the algorithm.)
3 Upper Bounds
In this section, we present the algorithms for the (ε, δ)-Winner Determination problem for various voting rules. Before embarking on specific algorithms, we first prove a few supporting results that will be used crucially in our algorithms later. We begin with the following space-efficient algorithm for choosing an item with a prescribed small probability.
Observation 1.
There is an algorithm for choosing an item with probability 1/n that uses O(log log n) bits of memory and a fair coin as its only source of randomness.
Proof.
First let us assume, for simplicity, that n is a power of 2. We toss a fair coin log₂ n times and choose the item, say x, only if the coin comes up heads every time. Hence the probability that the item gets chosen is 1/n. We need O(log log n) bits of space for the log₂ n tosses (to keep track of the number of times we have tossed the coin so far). If n is not a power of 2, then we toss the fair coin ⌈log₂ n⌉ times and choose the item only if the coin comes up heads in all the tosses, conditioned on some event E. The event E consists of exactly n of the possible outcomes, including the all-heads outcome (if the observed outcome falls outside E, we simply repeat). ∎
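A sketch of this procedure might look as follows. The helper name is ours, and the retry loop is our rendering of the conditioning on the event E described above; the toss counter is conceptually the only state the streaming algorithm keeps.

```python
import math
import random

def select_with_prob_one_over_n(n, coin):
    """Return True with probability exactly 1/n, using only fair coin flips.
    `coin()` returns 0 or 1.  The counter of tosses is the only state needed
    (conceptually O(log log n) bits).  Helper name is ours."""
    k = math.ceil(math.log2(n))
    while True:
        outcome = 0
        for _ in range(k):               # k fair coin tosses, read as an integer
            outcome = 2 * outcome + coin()
        # Condition on the event E = {0, 1, ..., n-1}: exactly n of the 2^k
        # equally likely outcomes, including the all-heads outcome 0.
        if outcome < n:
            return outcome == 0          # "all heads" within E: probability 1/n
        # Otherwise the outcome fell outside E; repeat.
```

When n is a power of 2 the retry branch is never taken, matching the simple case of the proof.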
We remark that Observation 1 is tight in terms of space complexity. We state the claim formally below, as it may be interesting in its own right.
Proposition 1.
Any algorithm that chooses an item from a set with probability p, for some 0 < p ≤ 1/2, using a fair coin as its only source of randomness, must use Ω(log log(1/p)) bits of memory.
Proof.
The algorithm tosses the fair coin some number of times (the number of tosses may depend on the outcomes of the previous tosses) and finally picks an item from the set. Consider a run R of the algorithm in which it chooses the item, say x, with the smallest number of coin tosses; say it tosses the coin ℓ times in this run R. This means that in any other run of the algorithm where the item x is chosen, the algorithm must toss the coin at least ℓ times. Let the outcome of the coin tosses in R be b_1 b_2 ⋯ b_ℓ. Let M_i be the memory content of the algorithm immediately after it tosses the coin for the i-th time, for 1 ≤ i ≤ ℓ, in the run R. First notice that if ℓ < log₂(1/p), then the probability with which the item x is chosen is more than p, which would be a contradiction. Hence, ℓ ≥ log₂(1/p). Now we claim that all the M_i's must be different. Indeed, otherwise assume M_i = M_j for some i < j. Then the algorithm also chooses the item x after tossing the coin ℓ − (j − i) times (which is strictly less than ℓ) when the outcome of the coin tosses is b_1 ⋯ b_i b_{j+1} ⋯ b_ℓ. This contradicts the assumption that the run we started with chooses the item x with the smallest number of coin tosses. Since the M_i's are all distinct, the algorithm has at least ℓ ≥ log₂(1/p) distinct memory states, and hence must use at least log₂ ℓ = Ω(log log(1/p)) bits of memory. ∎
An essential ingredient in our algorithms is calculating the approximate frequencies of all the elements of a universe in an input data stream. The following result (due to [30]) provides a space-efficient algorithm for that job.
Theorem 2.
Given an insertion-only stream of length n over a universe of size m, there is a deterministic one-pass algorithm to find the frequencies of all the items in the stream within an additive approximation of εn using O(min{(1/ε)(log m + log n), m log n}) bits of memory, for every ε > 0.
Proof.
The O((1/ε)(log m + log n))-space algorithm is due to [30]. On the other hand, notice that with space O(m log n), we can exactly count the frequency of every element, even in the turnstile model, by simply keeping an array of length m (indexed by the ids of the elements from the universe), each entry of which is capable of storing integers up to n. ∎
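For reference, here is a minimal rendition of the Misra-Gries algorithm from [30] with k counters. With k counters, every item's estimate undercounts its true frequency by at most N/(k + 1), where N is the stream length; the variable names are ours.

```python
def misra_gries(stream, k):
    """Misra-Gries frequency estimation with k counters.  The returned dict
    gives, for each tracked item, an estimate that undercounts the true
    frequency by at most N/(k+1); untracked items get estimate 0."""
    counters = {}
    for item in stream:
        if item in counters:
            counters[item] += 1
        elif len(counters) < k:
            counters[item] = 1
        else:
            # Decrement all counters; drop those that hit zero.
            for key in list(counters):
                counters[key] -= 1
                if counters[key] == 0:
                    del counters[key]
    return counters  # estimate(item) = counters.get(item, 0)
```

The space usage is k counters plus k item ids, which is the source of the (1/ε)(log m + log n) bound when k is set to about 1/ε.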
We now describe streaming algorithms for the (ε, δ)-Winner Determination problem for various voting rules. The general idea is to sample a certain number of votes uniformly at random from the stream of votes using the algorithm of Observation 1 and to generate another stream of elements over some different universe. The number of votes sampled and the universe of the generated stream depend on the specific voting rule under consideration. After that, we approximately calculate the frequencies of the elements in the generated stream using Theorem 2. For simplicity, we assume that the number of votes is known in advance up to a constant factor (only to be able to apply Observation 1). We will see in Section 3.1 how to get rid of this assumption without affecting the space complexity of any of the algorithms much. We begin with the k-approval and k-veto voting rules below.
Theorem 3.
Assume that the number of votes is known in advance up to a constant factor. Then there is a space-efficient one-pass algorithm for the (ε, δ)-Winner Determination problem for the k-approval voting rule, and likewise one for the k-veto voting rule.
Proof.
Let us first consider the k-approval voting rule. We pick each vote in the stream with some probability p (the value of p will be decided later), independently of other votes. Suppose we sample ℓ votes; let X be the set of votes sampled. From X, we generate a stream S over the universe C as follows: for each sampled vote, we add to S the k candidates appearing in its first k positions. We know that there is an ℓ (and thus a corresponding p) which ensures that, for every candidate, the suitably scaled k-approval score in X is close enough to its k-approval score in the input stream of votes, with probability at least 1 − δ [15]. Now we count the frequency of every candidate in S within a small additive approximation, and the result follows from Theorem 2 (notice that the length of the stream S is kℓ).
For the k-veto voting rule, we approximately calculate the number of vetoes that every candidate gets using the same technique as above; the corresponding bound on the number of samples ℓ from [15] implies the result. ∎
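The overall sample-then-count pipeline can be sketched as follows for the plurality case (k = 1). The sample-size expression below is a generic Chernoff-plus-union-bound choice, not the optimized bound from [15], and the signature assumes the number of votes n and candidates m are known in advance, as in the theorem.

```python
import math
import random
from collections import Counter

def approx_plurality_winner(vote_stream, n, m, eps, delta, rng=None):
    """Single-pass sampling sketch for an eps-winner under plurality.
    Assumes n (number of votes) and m (number of candidates) are known in
    advance.  The sample-size expression is a generic Chernoff-plus-union-
    bound choice, not the optimized bound of [15]."""
    rng = rng or random.Random(0)
    ell = math.ceil((2.0 / eps**2) * math.log(2.0 * m / delta))  # target sample size
    p = min(1.0, ell / n)
    counts = Counter()
    for vote in vote_stream:      # one pass; vote[0] is the top (approved) candidate
        if rng.random() < p:      # keep each vote independently with probability p
            counts[vote[0]] += 1
    # With high probability, the plurality winner of the sample is an eps-winner.
    return max(counts, key=counts.get) if counts else None
```

In the actual algorithm the sampled top choices would be fed to Misra-Gries (Theorem 2) rather than counted exactly, which is what removes the dependence on the number of distinct candidates in the sample.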
By similar techniques, we have the following algorithm for the generalized plurality rule.
Theorem 4.
Assume that the number of votes is known in advance up to a constant factor. Then there is a space-efficient one-pass algorithm for the (ε, δ)-Winner Determination problem for the generalized plurality voting rule.
Proof.
We sample votes uniformly at random from the input stream of votes using the technique from the proof of Theorem 3. For every candidate, we count both the number of approvals and the number of disapprovals that it receives within an additive approximation that suffices to identify an ε-winner. The space complexity now follows from Theorem 2. ∎
We generalize Theorem 3 to the class of scoring rules next. We need the following result, due to [15], in the subsequent proof.
Lemma 1.
Let α be an arbitrary score vector, w the winner of an n-voter election under α, and x any candidate which is not an ε-winner. Then the α-score of w exceeds the α-score of x by a margin growing linearly in εn.
With Lemma 1 at hand, we now present the algorithm for the scoring rules.
Theorem 5.
Assume that the number of votes is known in advance up to a constant factor. Let α be a score vector with α_i ∈ [0, 1] for every i. Then there is a space-efficient one-pass algorithm for the (ε, δ)-Winner Determination problem for the scoring rule α.
Proof.
Let α = (α_1, …, α_m) be an arbitrary score vector. We define α′_i = α_i/α_1 (which lies in [0, 1]) for every i. Since scoring rules remain the same when every entry of the score vector is multiplied by a positive constant, the score vectors α and α′ correspond to the same voting rule. We pick each vote in the stream with probability p (the value of p will be decided later), independently of other votes. Suppose we sample ℓ votes; let X be the set of votes sampled. From each sampled vote, we pick the candidate at position i with probability proportional to α′_i and add it to a stream U. For every candidate x, let s(x) be the α′-score of x in the input stream of votes and ŝ(x) the suitably scaled α′-score of x in the sampled votes X. We know that there exists an ℓ (and thus a corresponding p) which ensures that, for every candidate x, ŝ(x) is close to s(x) with probability at least 1 − δ [15]. Let s̃(x) be the suitably scaled frequency of the candidate x in the stream U. We now prove the following claim, from which the result follows immediately.
Claim 1.
With probability at least 1 − δ, the estimate s̃(x) is within a small additive error of s(x) for every candidate x simultaneously.
Proof.
For every candidate x and every sampled vote index i, define a random variable that is 1 if the candidate picked from the i-th sampled vote is x, and 0 otherwise. Then s̃(x) is a suitably scaled sum of these random variables, and its expectation is ŝ(x). Using the Chernoff bound from Theorem 1, the probability that s̃(x) deviates from ŝ(x) by more than the allowed additive error is exponentially small in ℓ; here we use the fact that every entry of the normalized score vector α′ is at most 1. We then take a union bound over all m candidates, and the claim follows for an appropriate choice of ℓ. ∎
We estimate the frequency of every candidate in U within a small additive approximation and output the candidate with maximum estimated frequency as the winner of the election. This candidate is an ε-winner (by Lemma 1) with probability at least 1 − δ (by Claim 1). The space complexity of this algorithm follows from Theorem 2 (since the length of U is ℓ) and Observation 1. ∎
We present next the streaming algorithm for the approval voting rule. It is again obtained by running a frequency estimation algorithm on samples from a stream.
Theorem 6.
Assume that the number of votes is known in advance up to a constant factor. Then there is a space-efficient one-pass algorithm for the (ε, δ)-Winner Determination problem for the approval voting rule.
Proof.
We sample votes using the algorithm described in Observation 1 and the technique described in the proof of Theorem 5. The total number of approvals in those sampled votes is at most mℓ, and we estimate the number of approvals that every candidate receives within a small additive approximation. The result now follows from the upper bound on ℓ [15] and Theorem 2. ∎
Now we move on to the maximin, Copeland, Bucklin, and plurality with runoff voting rules. We provide two algorithms for these voting rules, which trade off between the number of candidates m and the approximation factor ε. The algorithm in Theorem 7 below, which has better space complexity when m is small compared to 1/ε, simply stores all the sampled votes.
Theorem 7.
Assume that the number of votes is known in advance up to a constant factor. Then there is a space-efficient one-pass algorithm for the (ε, δ)-Winner Determination problem for the maximin, Bucklin, and plurality with runoff voting rules, and another for the Copeland voting rule.
Proof.
We sample votes as in the proof of Theorem 5 and store the sampled votes explicitly; each sampled vote takes O(m log m) bits to store. Once the stream ends, we compute a winner of the sampled election offline; by the sample complexity bounds of [15], this winner is an ε-winner of the input election with probability at least 1 − δ. ∎
Next we consider the case when m is large compared to 1/ε.
Theorem 8.
Assume that the number of votes is known in advance up to a constant factor. Then there is a space-efficient one-pass algorithm for the (ε, δ)-Winner Determination problem for the maximin, Copeland, Bucklin, and plurality with runoff voting rules.
Proof.
For each voting rule mentioned in the statement, we sample ℓ votes uniformly at random from the input stream of votes, using the algorithm of Observation 1 and the technique used in the proof of Theorem 5; let X be the set of sampled votes. From X, we generate another stream Y of elements belonging to a different universe U (which depends on the voting rule under consideration). Finally, we calculate the frequencies of the elements of Y, using Theorem 2, within an additive approximation suited to each rule; the approximation factor for the Copeland voting rule differs from that of the other rules, following [15]. The bounds on ℓ from [15] prove the result once we describe U and Y, which we do below for the individual voting rules. Let a sampled vote be v.


maximin, Copeland: U = C × C. From the vote v, we put into Y the pair (x, y) for every pair of candidates x, y such that v prefers x to y. The frequencies in Y then give estimates of the pairwise counts N(x, y), and hence of D(x, y).

Bucklin: U = C × {1, …, m}. From the vote v, we put into Y the pair (x, i) for every candidate x and every position i such that v ranks x within its first i positions; the frequency of (x, i) then counts the sampled votes ranking x in their first i positions.

plurality with runoff: U = C ∪ (C × C). From the vote v, we put into Y the candidate placed first in v and the pair (x, y) for every pair of candidates x, y such that v prefers x to y. In the plurality with runoff voting rule, we need to estimate the plurality score of every candidate, which we do by estimating the frequencies of the elements of C in Y. We also need to estimate D(x, y) for every pair of candidates, which we do by estimating the frequencies of the elements of the form (x, y). ∎
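To illustrate the pair-stream construction for maximin, here is a sketch that generates the derived stream and computes a maximin winner from it. For clarity the pair frequencies are counted exactly; the streaming algorithm would instead feed the pairs to the estimator of Theorem 2. Function names are ours.

```python
from collections import Counter

def pairwise_stream(sampled_votes):
    """From each sampled ranking, emit (x, y) for every pair of candidates
    with x preferred to y -- the derived stream for maximin/Copeland."""
    for vote in sampled_votes:
        for i, x in enumerate(vote):
            for y in vote[i + 1:]:
                yield (x, y)

def maximin_winner_from_sample(sampled_votes):
    """Compute a maximin winner of the sampled election from the pair stream.
    Pair frequencies are counted exactly here; the streaming algorithm would
    estimate them via Theorem 2 instead."""
    pref = Counter(pairwise_stream(sampled_votes))
    candidates = list(sampled_votes[0])
    D = {(x, y): pref[(x, y)] - pref[(y, x)]
         for x in candidates for y in candidates if x != y}
    maximin = {x: min(D[(x, y)] for y in candidates if y != x)
               for x in candidates}
    return max(maximin, key=maximin.get)
```

Each sampled vote contributes m(m−1)/2 pairs, which is why the derived stream lives over the larger universe C × C.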
3.1 Unknown stream length
Now we consider the case when the number of voters is not known beforehand. The idea is to use reservoir sampling [35] along with approximate counting [31, 18] to pick an element from the stream almost uniformly at random. The following result shows that we can do so in a space-efficient manner.
Theorem 9.
(Theorem 7 of [19]) Given an insertion-only stream of length n (where n is not known to the algorithm beforehand) over a universe of size m, there is a randomized one-pass algorithm, using a small amount of memory, that outputs, with probability at least 1 − δ, the element at a nearly uniformly random position of the stream, for every ε > 0 and δ > 0.
Recall that Theorem 2 only works for insertion-only streams. However, as the stream progresses, the element chosen by Theorem 9 changes; so, we cannot invoke Misra-Gries to do frequency estimation on a set of samples given by Theorem 9. For streams with both insertions and deletions, we have the following result, which is due to the count-min sketch [12].
Theorem 10.
Given a turnstile stream of length n over a universe of size m, there is a randomized one-pass algorithm to find the frequencies of all the items in the stream within an additive approximation of εn with probability at least 1 − δ using O((1/ε) log(m/δ) log n) bits of memory, for every ε > 0 and δ > 0.
Corollary 1.
Assume that the number of votes is not known beforehand. Then there is a one-pass algorithm for the (ε, δ)-Winner Determination problem for the k-approval, k-veto, generalized plurality, approval, maximin, Copeland, Bucklin, and plurality with runoff voting rules that uses only modestly more space than the corresponding algorithm for the case when the number of votes is known beforehand up to a constant factor.
Proof.
We use reservoir sampling with approximate counting from Theorem 9. The resulting stream that we generate has both positive and negative updates (since in reservoir sampling, we sometimes replace an item we previously sampled). We then approximately estimate the frequency of every item in the generated stream using Theorem 10. ∎
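The key point of this proof is that maintaining a reservoir of samples generates a turnstile update stream: a +1 when an item enters the reservoir and a -1 when it is evicted. The sketch below makes those updates explicit; the function name and the use of an exact `Counter` (standing in for the count-min sketch of Theorem 10) are illustrative assumptions.

```python
import random
from collections import Counter

def reservoir_update_stream(stream, k, rng):
    """Reservoir sampling of k items, emitting the turnstile updates that
    a downstream frequency sketch would consume: (+1) when an item is
    sampled, (-1) when a previously sampled item is evicted."""
    reservoir, updates = [], []
    for i, item in enumerate(stream, start=1):
        if len(reservoir) < k:
            reservoir.append(item)
            updates.append((item, +1))
        else:
            j = rng.randrange(i)  # classic reservoir replacement test
            if j < k:
                updates.append((reservoir[j], -1))  # eviction: negative update
                reservoir[j] = item
                updates.append((item, +1))
    return reservoir, updates
```

Feeding `updates` into a turnstile frequency estimator recovers the frequencies of the current sample set, which is exactly how the corollary combines Theorems 9 and 10.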
Again, from Theorems 7 and 9, we get the following result, which provides a better space upper bound than Corollary 1 when the number of candidates is large.
Corollary 2.
Assume that the number of votes is not known beforehand. Then there is a one-pass algorithm for the –Winner Determination problem that uses bits of memory for the maximin, Bucklin, and plurality with runoff voting rules, and bits of memory for the Copeland voting rule.
3.2 Sliding Window Model
Suppose we want to compute a winner of the last votes in an infinite stream of votes, for various voting rules. The following result shows that there is an algorithm, with the same space complexity as in Theorem 9, that samples a vote from the last votes in a stream.
Theorem 11.
([6]) Given an insertion-only stream over a universe of size , there is a randomized one-pass algorithm that outputs, with probability at least , the element at a uniformly random position among the last positions, using bits of memory, for every and .
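One way to sample from a sequence-based sliding window, in the spirit of [6], is priority sampling: assign each arriving item an independent random priority and return the highest-priority item among the last w; only items whose priority beats everything after them need to be stored, so the expected memory is logarithmic in w. This is a sketch of that idea, not necessarily the exact algorithm behind the theorem:

```python
import random

def window_sample(stream, w, rng):
    """Priority sampling for a sliding window of the last w items: each
    item gets an i.i.d. random priority, and the sample is the maximum-
    priority item in the window. Only 'right-to-left maxima' are kept,
    so the expected number of stored items is O(log w)."""
    kept = []  # (position, priority, item); positions increase, priorities decrease
    n = 0
    for item in stream:
        n += 1
        p = rng.random()
        # Items dominated by the new arrival can never be the window maximum.
        while kept and kept[-1][1] <= p:
            kept.pop()
        kept.append((n, p, item))
        # Drop items that have fallen out of the window.
        while kept[0][0] <= n - w:
            kept.pop(0)
    return kept[0][2]  # maximum-priority item among the last w positions
```

Since priorities are i.i.d., the returned item is uniform over the last w positions of the stream.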
4 Lower Bounds
In this section, we prove space complexity lower bounds for the –Winner Determination problem for various voting rules. We reduce certain communication problems to the –Winner Determination problem in order to prove these lower bounds. Let us first introduce those communication problems along with the necessary results.
4.1 Communication Complexity
Definition 4.
(Augmented-indexing)
Let $n$ and $a$ be positive integers. Alice is given a string $x = (x_1, \dots, x_n) \in [a]^n$. Bob is given an integer $i \in [n]$ and $(x_{i+1}, \dots, x_n)$. Bob has to output $x_i$.
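To make the one-way setting concrete, here is the problem stated as code, together with the trivial protocol in which Alice sends her whole string (about n log a bits); the lower bound below says no one-way protocol can do asymptotically better. The function names are purely illustrative.

```python
def augmented_indexing_answer(x, i):
    """The answer Bob must produce in Augmented-indexing: the i-th symbol
    of Alice's string x (0-indexed here for convenience)."""
    return x[i]

def trivial_one_way_protocol(x, i, suffix):
    """Naive protocol: Alice's single message is all of x; Bob, who holds
    the index i and the suffix x[i+1:], simply reads off x[i]."""
    message = x  # Alice -> Bob, n symbols over [a]
    assert message[i + 1:] == suffix  # Bob's side information is consistent
    return message[i]
```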
The following communication complexity lower bound is due to [16], obtained by a simple extension of the arguments of Bar-Yossef et al. [4].
Lemma 2.
for any .
Also, we recall the multiparty version of the set-disjointness problem.
Definition 5.
(Disj)
We have $t$ sets $A_1, \dots, A_t$, each a subset of $[n]$. There are $t$ players, and player $i$ holds the set $A_i$. We are also given the promise that either $A_i \cap A_j = \emptyset$ for every $i \ne j$, or there exists an element $x \in [n]$ such that $x \in A_i$ for every $i \in [t]$ and $A_i \cap A_j = \{x\}$ for every $i \ne j$. The output Disj($A_1, \dots, A_t$) is $1$ if $A_i \cap A_j = \emptyset$ for every $i \ne j$, and $0$ otherwise.
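The promise matters: under it, "pairwise disjoint" is equivalent to "no element common to all sets", which is what makes the problem useful for reductions. A small checker, with hypothetical function names, makes both the output and the promise explicit:

```python
from collections import Counter

def disj(sets):
    """Disj output under the promise: 1 if the sets are pairwise disjoint,
    0 otherwise. Under the promise, pairwise intersection is equivalent to
    a single element lying in all t sets, so testing the common
    intersection suffices."""
    return 1 if not set.intersection(*sets) else 0

def satisfies_promise(sets):
    """Check the unique-intersection promise: either every element lies in
    at most one set, or exactly one element lies in all t sets and every
    other element lies in at most one set."""
    counts = Counter(x for s in sets for x in s)
    shared = [x for x, c in counts.items() if c > 1]
    if not shared:
        return True
    return len(shared) == 1 and counts[shared[0]] == len(sets)
```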
The following communication problem is very useful for us.
Definition 6.
(Max-sum)
Alice is given a string $x = (x_1, \dots, x_n)$ of length $n$ over the universe $[a]$. Bob is given another string $y = (y_1, \dots, y_n)$ of length $n$ over the same universe $[a]$. The strings $x$ and $y$ are such that the index $i$ that maximizes $x_i + y_i$ is unique. Bob has to output the index $i$ which satisfies $x_i + y_i > x_j + y_j$ for every $j \ne i$.
We establish the following one-way communication complexity lower bound for the Max-sum problem by a reduction from the Augmented-indexing problem.
Lemma 4.
, for every .
Proof.
We reduce the Augmented-indexing problem to the Max-sum problem, thereby proving the result. Let the inputs to Alice and Bob in the Augmented-indexing instance be and respectively. The idea is to construct a corresponding instance of the Max-sum problem that outputs if and only if . We achieve this as follows. Alice starts the execution of the Max-sum protocol using the vector , which is defined as follows: the binary representation of is , for every , and is . Bob participates in the Max-sum protocol with the vector , which is defined as follows. Let us define . We define , for every . The binary representation of is . Let us define an integer whose binary representation is . We define to be . First, notice that the output of the Max-sum instance is either or , by the construction of . Now observe that if , then , and thus the output of the Max-sum instance is . On the other hand, if , then , and thus the output of the Max-sum instance is . ∎
Finally, we also consider the Greater-than problem.
Definition 7.
(Greater-than)
Alice is given an integer $x \in [n]$ and Bob is given an integer $y \in [n]$. Bob has to output $1$ if $x > y$ and $0$ otherwise.
The following result is due to [34, 29]. We provide a simple proof of it that seems to be missing in the literature (a similar proof appears in [23], but theirs gives a weaker lower bound).
Lemma 5.
, for every .
Proof.
We reduce the Augmented-indexing problem to the Greater-than problem, thereby proving the result. Alice runs the Greater-than protocol with the input number whose binary representation is . Bob participates in the Greater-than protocol with the input number whose binary representation is . Now if and only if . ∎
4.2 Reductions
4.2.1 The cases and
We begin with the problem of finding the winner (i.e., the 0-winner) of a plurality election. Notice that we can find the winner by exactly computing the plurality score of every candidate, which requires bits of memory. We prove below that, when is much larger than , this space complexity is nearly optimal even if we allow randomization, by a reduction from the Max-sum problem. This strengthens a similar result of Karp et al. [22], which holds only for deterministic algorithms.
Theorem 12.
Any one-pass –Winner Determination algorithm for the plurality and generalized plurality elections must use bits of memory, for any .
Proof.
We prove the result for the –Winner Determination problem for the plurality election; this implies the result for the generalized plurality election, since every plurality election is also a generalized plurality election. Consider the Max-sum problem where Alice is given a string and Bob is given another string . The candidate set of our election is . The votes will be such that the unique winner is the candidate such that , and the winner will be known to Bob; thus Bob can output correctly whenever our –Winner Determination algorithm outputs correctly. Alice generates many plurality votes for the candidate , for every , and then sends the memory content of the algorithm to Bob. Bob resumes the run of the algorithm by generating many plurality votes for the candidate , for every . The plurality score of candidate is , and thus the plurality winner is a candidate such that . Notice that the total number of votes is at most . The result now follows from Lemma 4. ∎
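The reduction can be simulated end to end: Alice contributes one plurality vote for candidate i per unit of x[i], Bob does the same with y, and the plurality winner is exactly the Max-sum answer. The exact `Counter` below stands in for the streaming algorithm's memory, and the function name is an illustrative assumption.

```python
from collections import Counter

def maxsum_via_plurality(x, y):
    """Simulate the reduction from Max-sum to plurality winner
    determination: Alice casts x[i] votes for candidate i, Bob casts y[i]
    more, and the plurality winner is the unique index maximizing
    x[i] + y[i] (unique by the Max-sum promise)."""
    votes = Counter()           # stand-in for the streaming algorithm's state
    for i, xi in enumerate(x):  # Alice's part of the vote stream
        votes[i] += xi
    # ... at this point the memory contents are handed to Bob ...
    for i, yi in enumerate(y):  # Bob's part of the vote stream
        votes[i] += yi
    winner, _ = votes.most_common(1)[0]
    return winner
```

Any space-s streaming winner-determination algorithm thus yields a one-way Max-sum protocol with an s-bit message, which is what transfers the Lemma 4 lower bound.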
For the case when and are comparable, the following result is stronger. We prove it by a reduction from the Disj problem.
Theorem 13.
Any one-pass –Winner Determination algorithm for the plurality and generalized plurality elections must use bits of memory, for any .
Proof.
Suppose we have a one-pass –Winner Determination algorithm for the plurality election that uses bits of memory. We will demonstrate a one-way three-party protocol to compute Disj