Election with Bribed Voter Uncertainty: Hardness and Approximation Algorithm

11/07/2018 ∙ by Lin Chen et al. ∙ The University of Texas at San Antonio

Bribery in elections (and computational social choice in general) is an important problem that has received a considerable amount of attention. In the classic bribery problem, the briber (or attacker) bribes some voters in an attempt to make the briber's designated candidate win an election. In this paper, we introduce a novel variant of the bribery problem, "Election with Bribed Voter Uncertainty," or BVU for short, accommodating the uncertainty that the vote of a bribed voter may or may not be counted. This uncertainty occurs either because a bribed voter may not cast its vote for fear of being caught, or because a bribed voter is indeed caught and its vote is therefore discarded. As a first step towards ultimately understanding and addressing this important problem, we show that it does not admit any multiplicative O(1)-approximation algorithm modulo standard complexity assumptions. We further show that there is an approximation algorithm that returns a solution with an additive-ε error in FPT time for any fixed ε.

I Introduction

In multiagent systems, election (or voting) is an important mechanism for collective decision-making. This importance has led to extensive investigations of various aspects of election. Indeed, the field of computational social choice investigates algorithmic and computational complexity aspects of this mechanism (see, e.g., the book by Brandt et al. (2016)). In this paper, we focus on two important aspects of election that have received an extensive amount of attention but are still not fully understood: uncertainty and bribery.

Uncertainty. Most studies of elections have investigated deterministic models and did not consider uncertainty, which is, however, often encountered in real-world scenarios. There are two exceptions. One exception is the investigation of uncertainty from the perspective of the possible winner. In this perspective, the input is incomplete and the problem is to determine whether it is possible to extend the incomplete input so as to make a designated candidate win or lose. The uncertainty can be caused by voters' incomplete preference lists, as shown by Konczak and Lang (2005); Xia and Conitzer (2011); Betzler and Dorn (2010); Baumeister and Rothe (2012); Betzler et al. (2009). The uncertainty can also be caused by an incomplete set of candidates (e.g., additional candidates may be added), as shown by Chevaleyre et al. (2010); Xia et al. (2011); Baumeister et al. (2011). The other exception is the investigation of uncertainty caused by complete but probabilistic inputs. For example, Wojtas and Faliszewski (2012) introduced an election model in which voters or candidates may have some probability of not showing up, either because the communication network is not reliable or because voters inherently behave as such.

Bribery. Faliszewski et al. (2009a) introduced the bribery problem in which a briber (or attacker) attempts to make a designated candidate win by paying a (monetary) bribe to some voters. Once bribed, a voter will vote for the candidate designated by the attacker. This problem has received a considerable amount of attention; see, e.g., Lin (2010); Brelsford et al. (2008); Xia (2012); Faliszewski et al. (2015, 2011, 2009b); Parkes and Xia (2012). Most studies in this context consider deterministic models, but researchers have started investigating the issue of uncertainty in this context as well. For example, Erdelyi et al. (2014) considered the bribery problem with uncertain voting rules; Mattei et al. (2015) considered the bribery problem with uncertain information; and Erdélyi et al. (2009) considered uncertainty in the lobbying problem, which is related to, but different from, the bribery problem.

New problem: Election with Bribed Voter Uncertainty (BVU). We observe that in the context of the bribery problem, there is an inherent uncertainty that has not been considered in the literature: The vote of a bribed voter may or may not be counted, either because a bribed voter may choose not to cast its vote in fear of being caught, or because a bribed voter is indeed caught and therefore its vote is discarded. In this setting, each voter is associated with a price of being bribed as well as a probability that its vote is not counted upon taking a bribe. The goal of the attacker is to bribe a subset of voters such that the total bribing cost does not exceed a given budget, while the probability that a designated candidate wins the election is maximized.

The importance of understanding bribed voter uncertainty can hardly be overstated. Even with the proliferation of anonymous and unregulated cryptocurrencies (e.g., Bitcoin) that are deemed ideal for bribery purposes, there is still a possibility that a bribe-taking voter is detected (see Goldfeder et al. (2017)). In the United States, telling a voter whom to vote for is a type of voting fraud and may cause the affected votes to be discarded (see Heritage Foundation (2018)), as attested by the case in which the outcome of the Wetumpka City Council District 2 election was reversed after a judge ruled that 8 ballots be thrown out (see Arwood (2017)).

I-A Our Contributions

In this paper we make three main contributions. First, we introduce and initiate the study of the BVU problem, which captures a new form of uncertainty in bribery.

Second, we characterize the hardness of the BVU problem and show that the newly captured uncertainty completely changes the complexity of the bribery problem as follows. In the absence of uncertainty, the bribery problem can be solved by a simple greedy algorithm (as shown by Faliszewski et al. (2009a)). In the presence of uncertainty, assuming P ≠ NP, there is no O(1)-approximation algorithm even if there are only two candidates; assuming W[1] ≠ FPT, there is no O(1)-approximation algorithm that runs in FPT time parameterized by k, which is the difference between the number of votes received by the winner and the number of votes received by the designated candidate in the absence of bribery.

Third, despite the strong hardness result mentioned above, we show the existence of an additive ε-approximation FPT algorithm when the number of candidates is a constant. This means that for an arbitrarily small ε > 0, there is an algorithm that runs in FPT time (parameterized by the parameter mentioned above) and returns an approximate solution with an objective value that is at most ε smaller than the optimal objective value. This result relies on a reduction from the BVU problem to a new variant of the knapsack problem (involving a stochastic objective and multiple cardinality constraints) and an approximation algorithm for this new variant of the knapsack problem (which leverages dynamic programming and a non-trivial application of the Berry–Esseen theorem). Both the proof technique and the new variant of the knapsack problem may be of independent interest.

II Problem Statement and Preliminaries

The basic election model. In the basic election model, there are a set C = {c_1, c_2, …, c_m} of m candidates and a set V = {v_1, v_2, …, v_n} of n voters. Each voter has a preference over the candidates. There is a voting rule according to which a winner is selected. In this paper we focus on the plurality rule with a single winner, namely that every voter votes for its most preferred candidate and the winner is the candidate that receives the highest number of votes.

The classic bribery problem in the basic election model. A voter may be bribed to deviate from its own preference. Suppose each voter v_i has a price q_i. If v_i takes a bribe of amount q_i from the briber (or attacker), then v_i will vote, regardless of v_i's own preference, for the designated candidate of the briber (i.e., the candidate preferred by the briber). The briber has a total bribe budget B. The goal of the briber is to make the designated candidate win the election. The bribery problem has been extensively investigated in the literature; see, for example, Faliszewski et al. (2009a); Lin (2010); Brelsford et al. (2008); Xia (2012); Faliszewski et al. (2015); Parkes and Xia (2012).

BVU (Election with Bribed Voter Uncertainty): A new problem. As discussed before, we introduce and study a novel variant of the classic bribery problem. Suppose voter v_i takes a bribe of amount q_i from the briber. With probability p_i, which is independent of everything else, the vote of v_i goes to the designated candidate and is counted; with probability 1 − p_i, the vote of v_i is not counted (for the two reasons mentioned above), that is, no candidate receives the vote of v_i. Without loss of generality, let c_1 be the winner when there is no bribery and c_m be the briber's designated candidate. Let V_j be the subset of voters that vote for candidate c_j in the absence of bribery; then V_j ∩ V_{j'} = ∅ for any j ≠ j'. Moreover, let k = |V_1| − |V_m|, namely the difference between the number of votes received by the winner and the number of votes received by the designated candidate in the absence of bribery. The BVU problem asks for a subset Γ of voters in V ∖ V_m whose total price is bounded by B such that, if they are bribed, the probability that the designated candidate wins is maximized; note that the voters in V_m do not need to be bribed because they already vote for c_m. More precisely, the BVU problem is formalized as follows.

The (Plurality-)BVU Problem
Input: A set C = {c_1, …, c_m} of candidates, where c_1 is the winner and c_m is the designated candidate in the absence of bribery; a set V = {v_1, …, v_n} of voters with V = V_1 ∪ ⋯ ∪ V_m, where V_j is the subset of voters that vote for c_j in the absence of bribery; a positive integer k = |V_1| − |V_m|; the briber's budget B; each voter v_i is associated with a price q_i for the bribe and a probability p_i with which the vote of the bribed voter v_i goes to the designated candidate and is counted (i.e., 1 − p_i is the probability that the vote of the bribed v_i is not counted).
Output: Find a set Γ of indices such that Σ_{i∈Γ} q_i ≤ B, and the probability that the designated candidate c_m wins is maximized by bribing the voters {v_i : i ∈ Γ}.
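To make the objective concrete, the winning probability of a fixed bribed set can be estimated by simulation. The following is an illustrative sketch only (the function name, data layout, and treatment of ties as losses are our assumptions, not part of the paper):

```python
import random

def estimate_win_prob(kept_votes, bribed_probs, designated,
                      trials=20000, seed=0):
    """Estimate the probability that `designated` wins a single-winner
    plurality election.  `kept_votes` maps each candidate to the votes
    it keeps after the bribed voters have been removed from their
    original candidates; `bribed_probs` lists p_i for the bribed
    voters: with probability p_i the vote is counted for `designated`,
    otherwise it is discarded.  Ties are counted as losses here."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        tally = dict(kept_votes)
        tally[designated] = tally.get(designated, 0) + sum(
            rng.random() < p for p in bribed_probs)
        top = max(tally.values())
        if tally[designated] == top and sum(
                v == top for v in tally.values()) == 1:
            wins += 1
    return wins / trials

# The winner keeps 1 vote after 2 of its voters are bribed with
# p = 0.5 each; the designated candidate starts with 1 vote.  The
# exact winning probability is Pr[at least one vote counted] = 0.75.
est = estimate_win_prob({"c1": 1, "c2": 1}, [0.5, 0.5], "c2")
```

An exact computation is also possible for small instances (see the KU sketch in Section III); the simulation above merely illustrates the stochastic objective.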

Preliminaries.

Let X be a random variable taking non-negative values. Markov's inequality (see, for example, Stein and Shakarchi (2009)) says the following: for any a > 0, it holds that

Pr[X ≥ a] ≤ E[X] / a.   (1)
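As a quick numeric sanity check of Eq. (1) (illustrative only, not part of the paper):

```python
# Markov's inequality: Pr[X >= a] <= E[X]/a for nonnegative X, a > 0.
# Check it on X uniform over {0, 1, 2, 3}.
values = [0, 1, 2, 3]
expectation = sum(values) / len(values)                 # E[X] = 1.5
tail = sum(1 for v in values if v >= 3) / len(values)   # Pr[X >= 3] = 0.25
bound = expectation / 3                                 # 1.5 / 3 = 0.5
assert tail <= bound
```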
Theorem 1 (Berry–Esseen theorem; see Berry (1941)).

Let X_1, …, X_n be independent random variables with E[X_i] = 0, E[X_i^2] = σ_i^2 > 0, and E[|X_i|^3] = ρ_i < ∞. Let

S_n = (X_1 + ⋯ + X_n) / sqrt(σ_1^2 + ⋯ + σ_n^2).

Then, it holds that

sup_{x∈R} |F_n(x) − Φ(x)| ≤ C_0 · (ρ_1 + ⋯ + ρ_n) / (σ_1^2 + ⋯ + σ_n^2)^{3/2},   (2)

where C_0 is a universal constant, F_n is the cumulative distribution function of S_n, Φ is the cumulative distribution function of the standard normal distribution, and Φ(x) = (1/√(2π)) ∫_{−∞}^{x} e^{−u^2/2} du.
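To illustrate the theorem (a hypothetical numeric check, not part of the paper), one can compare the CDF of a standardized binomial sum with the normal CDF; the observed gap is small, in line with the right-hand side of Eq. (2):

```python
import math

def phi(x):
    """CDF of the standard normal distribution, via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

# Sum of n = 200 i.i.d. Bernoulli(0.3) variables, standardized.
n, p = 200, 0.3
mu, sigma = n * p, math.sqrt(n * p * (1 - p))
worst_gap, cdf = 0.0, 0.0
for j in range(n + 1):
    cdf += math.comb(n, j) * p**j * (1 - p)**(n - j)
    worst_gap = max(worst_gap, abs(cdf - phi((j - mu) / sigma)))
# worst_gap is on the order of 1/sigma, as the theorem predicts
```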

The following Proposition 1 is folklore.

Proposition 1.

Let X_1, X_2, Y_1, Y_2 be independent random variables taking values in Z_{≥0} (i.e., non-negative integers) such that for any integer t ≥ 0 and some δ_1, δ_2 ∈ [0, 1] the following hold:

Pr[X_1 ≥ t] ≥ (1 − δ_1) · Pr[Y_1 ≥ t] and Pr[X_2 ≥ t] ≥ (1 − δ_2) · Pr[Y_2 ≥ t].

Then, for any integer t ≥ 0, we have:

Pr[X_1 + X_2 ≥ t] ≥ (1 − δ_1)(1 − δ_2) · Pr[Y_1 + Y_2 ≥ t].

Proof.

For any integer t ≥ 0, we have

Pr[X_1 + X_2 ≥ t] = Σ_{s≥0} Pr[X_2 = s] · Pr[X_1 ≥ t − s] ≥ (1 − δ_1) · Σ_{s≥0} Pr[X_2 = s] · Pr[Y_1 ≥ t − s] = (1 − δ_1) · Pr[Y_1 + X_2 ≥ t],

where the equalities use the independence of X_2 from X_1 and Y_1. Similarly, we can prove that

Pr[Y_1 + X_2 ≥ t] ≥ (1 − δ_2) · Pr[Y_1 + Y_2 ≥ t].

Hence,

Pr[X_1 + X_2 ≥ t] ≥ (1 − δ_1)(1 − δ_2) · Pr[Y_1 + Y_2 ≥ t]. ∎

Proposition 1 can be re-written additively as follows.

Corollary 1.

Let X_1, X_2, Y_1, Y_2 be independent random variables taking values in Z_{≥0} such that for any integer t ≥ 0 and some δ_1, δ_2 ∈ [0, 1], the following hold:

Pr[X_1 ≥ t] ≥ Pr[Y_1 ≥ t] − δ_1 and Pr[X_2 ≥ t] ≥ Pr[Y_2 ≥ t] − δ_2.

Then, we have:

Pr[X_1 + X_2 ≥ t] ≥ Pr[Y_1 + Y_2 ≥ t] − δ_1 − δ_2.

By iteratively applying Corollary 1 to a sequence of independent random variables, we obtain the following corollary that will be used later.

Corollary 2.

Let X_1, …, X_n, Y_1, …, Y_n be independent random variables taking values in Z_{≥0} such that for any integer t ≥ 0 and any 1 ≤ i ≤ n, the following holds:

Pr[X_i ≥ t] ≥ Pr[Y_i ≥ t] − δ_i.

Then, we have:

Pr[X_1 + ⋯ + X_n ≥ t] ≥ Pr[Y_1 + ⋯ + Y_n ≥ t] − (δ_1 + ⋯ + δ_n).

III Hardness of the BVU Problem

We show the hardness of the BVU problem for m = 2. By introducing dummy candidates, each supported only by dummy voters whose prices are higher than the briber's budget (i.e., they cannot be bribed), the hardness result immediately applies to the case of an arbitrary number of candidates.

III-A Hardness Result

The goal of this subsection is to prove the following.

Theorem 2 (Main hardness result).

Assuming W[1] ≠ FPT, there does not exist an O(1)-approximation algorithm for the BVU problem that runs in FPT time parameterized by k, even if m = 2. Moreover, assuming P ≠ NP, there does not exist an O(1)-approximation algorithm for the BVU problem that runs in polynomial time when k is part of the input, even if m = 2.

In order to prove Theorem 2, we leverage the equivalence between the BVU problem with m = 2 and the following Knapsack with Uncertainty (KU) problem.

Knapsack with Uncertainty (KU)
Input: A knapsack of capacity B; a set of n items, where each item i is associated with a size q_i and a profit X_i, which is an independent 0/1 random variable such that Pr[X_i = 1] = p_i and Pr[X_i = 0] = 1 − p_i; a positive integer k.
Output: Find a set Γ of indices such that Σ_{i∈Γ} q_i ≤ B, and Pr[Σ_{i∈Γ} X_i ≥ k + 1 − |Γ|] is maximized.
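For intuition, small KU instances can be solved by brute force. The sketch below is illustrative only; following the reduction of Lemma 1, we assume the success threshold for a chosen set Γ is k + 1 − |Γ|:

```python
from itertools import combinations

def tail_prob(ps, k):
    """Pr[at least k of the independent Bernoulli(p_i) succeed],
    computed by the standard Poisson-binomial dynamic program."""
    dist = [1.0]  # dist[j] = Pr[j successes among items seen so far]
    for p in ps:
        nxt = [0.0] * (len(dist) + 1)
        for j, q in enumerate(dist):
            nxt[j] += q * (1 - p)
            nxt[j + 1] += q * p
        dist = nxt
    return sum(dist[max(k, 0):])

def solve_ku(sizes, probs, capacity, k):
    """Exhaustively search all index sets that fit the capacity and
    return the best achievable success probability."""
    best = 0.0
    for r in range(len(sizes) + 1):
        for s in combinations(range(len(sizes)), r):
            if sum(sizes[i] for i in s) <= capacity:
                best = max(best,
                           tail_prob([probs[i] for i in s], k + 1 - r))
    return best
```

The exhaustive search is exponential, of course; the point of Sections III-IV is precisely that no efficient O(1)-approximation exists, while an additive-ε approximation does.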

Lemma 1.

The BVU problem with m = 2 is equivalent to the KU problem.

Proof of Lemma 1.

Consider the BVU problem with m = 2. Recall that c_1 is the winner in the absence of bribery, c_2 is the designated candidate, k = |V_1| − |V_2|, and the problem is to bribe a set Γ ⊆ V_1 of voters so that the probability that c_2 wins is maximized.

Consider the number of votes received by candidates c_1 and c_2 after the briber bribes the voters in Γ. For a bribed voter v_i ∈ Γ, there are two possibilities:

  • The vote of v_i is counted, meaning that the number of votes received by candidate c_1 decreases by 1 and the number of votes received by candidate c_2 increases by 1.

  • The vote of v_i is not counted, meaning that the number of votes received by c_1 decreases by 1 but the number of votes received by c_2 remains the same.

This means that the number of votes received by candidate c_1 decreases to |V_1| − |Γ|. Hence, for c_2 to win, it needs at least |V_1| − |Γ| + 1 votes. Given that c_2 originally receives |V_2| votes, at least |V_1| − |Γ| + 1 − |V_2| = k + 1 − |Γ| votes from the bribed voters must be counted. Let X_i be a binary random variable indicating whether the vote of v_i is counted; then Pr[X_i = 1] = p_i and Pr[X_i = 0] = 1 − p_i. The probability that at least k + 1 − |Γ| votes of the bribed voters are counted is Pr[Σ_{v_i∈Γ} X_i ≥ k + 1 − |Γ|]. That is, the BVU problem with m = 2 essentially asks for an index set Γ such that Σ_{i∈Γ} q_i ≤ B and Pr[Σ_{i∈Γ} X_i ≥ k + 1 − |Γ|] is maximized. This is exactly the KU problem. ∎

In order to prove Theorem 2, we also need:

Theorem 3.

Assuming W[1] ≠ FPT, there does not exist an O(1)-approximation algorithm for the KU problem that runs in FPT time parameterized by k.

Proof of Theorem 3.

We leverage the k-sum problem, which is known to be W[1]-hard (see Downey and Fellows (1992)), and show a reduction from the k-sum problem to the KU problem. We first review the k-sum problem.

The k-sum Problem
Input: n positive integers a_1, a_2, …, a_n, an integer T, and a positive integer k.
Output: Decide whether or not there exists a subset of k elements whose sum is exactly T.

The rest is to show the following reduction: if there is an O(1)-approximation algorithm that solves the KU problem in f(k)·n^{O(1)} time for some computable function f, then this algorithm can be used to solve the k-sum problem in f(k)·n^{O(1)} time. This contradicts the W[1]-hardness result of the k-sum problem mentioned above (Downey and Fellows (1995)).

The details of the reduction follow. Given an instance of the -sum problem with integers , we construct an instance of the KU problem as follows. Let and . Construct items in the KU problem with and for , where and . Let . We make two claims.

Claim 1.

If the -sum instance admits a solution, then there exists a solution to the KU problem with an objective value at least .

Proof.

Suppose the -sum problem admits a solution . Let be the index set of items in the solution. We observe that

Hence, there exists a solution with an objective value at least . Thus, Claim 1 holds. ∎

Claim 2.

If the -sum instance does not admit a solution, then any solution to the KU problem has an objective value at most .

Proof.

Suppose the -sum problem does not admit a solution. Note that for any solution to the KU problem, we have ; otherwise, leads to

which contradicts that is a solution. We split into two scenarios: or .

  • In the case , Claim 2 holds because

  • In the case , the fact and and implies . Since the -sum problem does not admit a solution, either or . Given that , we have . Then, Claim 2 holds because

Under Claims 1-2, we observe that an O(1)-approximation algorithm for the KU problem can be used to solve the k-sum problem as follows:

  • In the case the -approximation algorithm for the KU problem returns a feasible solution with an objective value that is , the optimal objective value is at most . Claim 1 implies that the -sum instance does not admit a feasible solution.

  • In the case the -approximation algorithm for the KU problem returns a feasible solution with an objective value that is , Claim 2 implies that the -sum instance must admit a feasible solution.

Hence, any -approximation algorithm for solving the KU problem can be used to solve the -sum problem. This completes the proof of Theorem 3. ∎

Corollary 3.

Assuming P ≠ NP, there does not exist an O(1)-approximation algorithm for the KU problem that runs in polynomial time when k is part of the input.

Now we are ready to prove Theorem 2.

Proof of Theorem 2.

Lemma 1 shows that the KU problem is equivalent to the BVU problem with two candidates. The hardness of the KU problem is established by Theorem 3 and Corollary 3. Hence Theorem 2 holds. ∎

IV An Approximation Algorithm in FPT Time

Having shown that the BVU problem is hard, we now present an approximation algorithm for solving it. The algorithm runs in FPT time for any fixed number of candidates m and any small constant ε > 0. In terms of approximation guarantee, our algorithm returns a value that is at least OPT − ε, where OPT is the optimal objective value. Note that the hardness result given by Theorem 2 suggests that an additive approximation algorithm is perhaps the best we can hope for.

IV-A Algorithmic Result

Theorem 4 (Main algorithmic result).

For an arbitrarily small constant ε > 0, there exists an algorithm for the BVU problem which runs in FPT time (parameterized by k, for any constant number of candidates) and returns a solution with an objective value no smaller than OPT − ε, where OPT is the optimal objective value.

In order to prove Theorem 4, we need to design an approximation algorithm for the BVU problem. For this purpose, we define a new variant of the Knapsack problem.

The MKU Problem. The MKU problem deals with items that have deterministic sizes but random profits and involves a stochastic objective function; the goal is to maximize a certain "overflow" probability subject to the knapsack's capacity and cardinality constraints. More specifically, the MKU problem is defined as follows:

Multi-block Knapsack with Uncertainty (MKU)
Input: A knapsack of capacity B; a set of n items, where each item i is associated with a size q_i and a profit X_i, which is an independent 0/1 random variable such that Pr[X_i = 1] = p_i and Pr[X_i = 0] = 1 − p_i; a partition of the items into a constant number of subsets V_1, …, V_{m−1} and a quota t_j for each V_j such that t_1 + ⋯ + t_{m−1} = t for some positive integer t; a positive integer k; a positive index θ.
Output: Find a set Γ of indices such that Σ_{i∈Γ} q_i ≤ B, |Γ ∩ V_j| = t_j for all 1 ≤ j ≤ m − 1 (and thus |Γ| = t), and Pr[Σ_{i∈Γ} X_i ≥ θ] is maximized.

Note that in the preceding definition, we intentionally make the parameters of the MKU problem correspond to the parameters of the BVU problem exactly, because we intend to reduce the amount of notation used in this paper (for better readability). That is, the parameters B, q_i, p_i, the subsets V_j, and the values t_j, k, and θ in the MKU problem carry the same meanings as in the BVU problem. We will use the problem context to distinguish the meanings of these parameters. Because of this, we say an instance of the MKU problem corresponds to an instance of the BVU problem when they have the same set of parameter values.

Now we show that the BVU problem can be solved efficiently by utilizing an algorithm for the MKU problem.

Theorem 5.

Let ε > 0 be an arbitrarily small constant. Denote by OPT_BVU and OPT_MKU the optimal objective values of the BVU problem and the corresponding MKU problem, respectively. A feasible solution to the BVU problem with an objective value at least OPT_BVU − ε can be found in (n + 1)^{m−1} · T_MKU time, where T_MKU is the time for finding a feasible solution to the corresponding MKU problem with an objective value at least OPT_MKU − ε.

Proof.

Let Γ be an arbitrary solution to the BVU problem, and let Γ_j = Γ ∩ V_j. For any v_i ∈ Γ, we define X_i to be a binary random variable indicating whether the vote of v_i is counted if v_i is bribed, i.e., Pr[X_i = 1] = p_i and Pr[X_i = 0] = 1 − p_i.

For v_i ∈ Γ_j with j < m, if v_i is bribed (i.e., v_i ∈ Γ), then there are two scenarios:

  • The bribery succeeds, meaning that the number of votes received by c_j decreases by 1 and the number of votes received by c_m increases by 1.

  • The bribery fails, meaning that the number of votes received by c_j decreases by 1 but the number of votes received by c_m remains unchanged.

Let N_j be the total number of votes received by candidate c_j after the bribery. Then, we have

N_j = |V_j| − |Γ_j| for 1 ≤ j ≤ m − 1, and N_m = |V_m| + Σ_{v_i∈Γ} X_i.

Note that N_j for j < m is a deterministic value, rather than a random variable, because no matter whether the bribery of v_i ∈ Γ_j succeeds or not, c_j always loses the vote of v_i. The probability that the designated candidate c_m becomes the winner is:

Pr[N_m > max_{1≤j≤m−1} N_j].

We observe that this probability depends only on the random sum Σ_{v_i∈Γ} X_i and the deterministic values |Γ_j| for 1 ≤ j ≤ m − 1. For any solution Γ to the BVU problem, we define:

F(Γ) = Pr[N_m > max_{1≤j≤m−1} N_j].

Then our previous arguments show that the BVU problem can be reformulated as follows:

Find Γ ⊆ V ∖ V_m such that Σ_{v_i∈Γ} q_i ≤ B and F(Γ) is maximized.

Recall that candidate c_1 is the winner in the absence of bribery and k = |V_1| − |V_m|. We observe that F(Γ) depends only on the cardinalities t_j = |Γ_j| for 1 ≤ j ≤ m − 1 (which determine the deterministic counts N_j) and on the random variables X_i for v_i ∈ Γ. Since 0 ≤ t_j ≤ n for every j, we can guess the values t_1, …, t_{m−1} of an optimal solution, and hence t = |Γ| = t_1 + ⋯ + t_{m−1}, via at most (n + 1)^{m−1} enumerations. When we guess these values correctly, the BVU problem is equivalent to finding some Γ such that Σ_{v_i∈Γ} q_i ≤ B, |Γ_j| = t_j for every 1 ≤ j ≤ m − 1, and

F(Γ) = Pr[Σ_{v_i∈Γ} X_i ≥ θ], where θ = max_{1≤j≤m−1} (|V_j| − t_j) + 1 − |V_m|,

is maximized; here we use the fact that, once the cardinalities |Γ_j| = t_j are fixed, c_m wins if and only if N_m = |V_m| + Σ_{v_i∈Γ} X_i is strictly larger than N_j = |V_j| − t_j for every 1 ≤ j ≤ m − 1, i.e., if and only if the random sum Σ_{v_i∈Γ} X_i reaches the threshold θ.

Hence, when we guess the values t_1, …, t_{m−1} correctly, the BVU problem is exactly the same as the MKU problem (with capacity B, item sizes q_i, blocks V_1, …, V_{m−1}, quotas t_1, …, t_{m−1}, and index θ), whereas a (near-)optimal solution to the MKU problem implies a (near-)optimal solution to the BVU problem. Since the guessing takes at most (n + 1)^{m−1} enumerations, we can solve the BVU problem by making that many oracle calls to an algorithm that solves the MKU problem. Theorem 5 is proved. ∎
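The guessing step can be sketched as a simple enumeration wrapper. This is a hypothetical illustration: `mku_oracle` stands for any black-box MKU solver and is not a procedure defined in the paper.

```python
from itertools import product

def solve_bvu_by_guessing(num_blocks, n, mku_oracle):
    """Enumerate every guess of the per-block bribe cardinalities
    (t_1, ..., t_{m-1}) and keep the best value returned by the MKU
    oracle; (n + 1)**(m - 1) oracle calls in total."""
    best = 0.0
    for guess in product(range(n + 1), repeat=num_blocks):
        best = max(best, mku_oracle(guess))
    return best
```

For a constant number of candidates m, the enumeration is polynomial in n, matching the oracle-time bound stated in Theorem 5.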

Proof plan for Theorem 4.

Theorem 5 above shows that a (near-optimal) solution of the BVU problem can be found in polynomial oracle-time by utilizing an approximation algorithm for the MKU problem as an oracle. The remaining task is to design an approximation algorithm for the MKU problem; this task is quite involved and leads us to a "divide and conquer" strategy that considers two cases separately (Theorem 6 below).

Now we show that there is an approximation algorithm for solving the MKU problem.

Theorem 6 (algorithm for solving the MKU problem).

For any arbitrarily small constant ε > 0, there exists an algorithm for the MKU problem that runs in FPT time (parameterized by k) and returns a solution with an objective value no smaller than OPT_MKU − ε, where OPT_MKU is the optimal objective value of the MKU problem.

Proof of Theorem 4.

By putting Theorem 5 and Theorem 6 together, we obtain Theorem 4. ∎

IV-B The Proof of Theorem 6

The main difficulty originates from the maximization of a probability involving the sum of random variables, which does not have a simple explicit expression. A natural idea is to approximate the sum of random variables with a Gaussian variable via the Berry–Esseen theorem. However, such an approximation is not always achievable because the condition of the Berry–Esseen theorem does not necessarily hold. Furthermore, even if the Berry–Esseen theorem is applicable, bounding the tail probability of a Gaussian variable together with the set of other constraints required in the MKU problem is still challenging. Figure 1 highlights the proof strategy for overcoming these difficulties.

Fig. 1: Strategy for proving Theorem 6.

Specifically, we partition the set of items into big and small ones based on their probabilities. Then, we differentiate the case in which the optimal solution contains many big items (Case 1), which can be handled easily by using Markov's inequality (Lemma 2), from the case in which the optimal solution does not contain many big items (Case 2), whose treatment is much more complicated and proceeds as follows.

  • First, we apply Corollary 2 to decompose the MKU problem in Case 2 into a series of sub-problems, each of which is a stochastic knapsack problem with one cardinality constraint.

  • Second, for big items (Lemma 3), we round their probabilities down to a small number of distinct values. This allows us to guess the number of big items corresponding to each rounded probability in the optimal solution, leading to the selection of the optimal subset of big items.

  • Third, for small items (Lemma 4), there are two scenarios:

    • In the scenario where the optimal solution does not contain a large volume of small items, we present a dynamic programming algorithm (Lemma 5).

    • In the scenario where the optimal solution contains a large volume of small items, the Berry–Esseen theorem is applicable, and we can use it to transform the problem of maximizing a specific probability into the problem of approximating the sum of moments of the random variables in the optimal solution. Since the moments of a random variable are deterministic, we can leverage techniques for solving the classic knapsack problem (Lemmas 6-8).

Definition 1 (big vs. small items).

Under the assumption that ε is a small constant such that 1/ε is an integer, we say that an item in the MKU problem is big if its probability p_i is at least ε, and is small otherwise.

Lemma 2 (the case where the optimal solution contains many big items).

If , then there is a polynomial-time algorithm that returns a solution to the MKU problem such that

  • ,

  • for ,

  • , and

  • .

Proof.

We first select the big items with the smallest sizes within . Among the remaining items in each , we further select the items of the smallest sizes to make sure that we have selected at least items in each and exactly items in – this can be achieved by a simple greedy strategy: we check whether there is any such that the cardinality constraint is not yet satisfied, and pick the items of smallest size in to ensure the cardinality constraint. Let be the set of items selected in this way; then obviously we have and . Since we always select the smallest items, we have . That is, the first three conditions required by Lemma 2 are satisfied.

In what follows we show the last condition, namely . Let and . By applying Markov's inequality (Eq. (1)), we have

Hence, . This completes the proof of Lemma 2. ∎
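The greedy selection used in the proof above can be sketched as follows. This is an illustrative sketch; the data layout and the quota semantics are our assumptions, not the authors' exact procedure.

```python
def greedy_smallest(items, quotas, total):
    """items  : dict block -> list of item sizes
    quotas : dict block -> minimum number of items to take from it
    total  : exact number of items to take overall
    Take the required smallest items per block first, then fill the
    remaining slots with the globally smallest leftovers.  Returns a
    list of (block, size) pairs, or None if infeasible."""
    chosen, pool = [], []
    for block, sizes in items.items():
        need = quotas.get(block, 0)
        if need > len(sizes):
            return None
        ordered = sorted(sizes)
        chosen += [(block, s) for s in ordered[:need]]
        pool += [(block, s) for s in ordered[need:]]
    if len(chosen) > total:
        return None
    pool.sort(key=lambda pair: pair[1])
    chosen += pool[:total - len(chosen)]
    return chosen if len(chosen) == total else None

# Two blocks, one mandatory item from each, three items in total.
sel = greedy_smallest({"V1": [3, 1, 2], "V2": [5, 4]},
                      {"V1": 1, "V2": 1}, 3)
```

Since only the smallest sizes are ever taken, the total size of the selection is a lower bound on that of any feasible solution with the same cardinalities, which is the property the proof relies on.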

Lemma 3 (dealing with big items in the case where the optimal solution does not contain many big items).

If , then there is an algorithm that runs in time and returns a set of big items such that

  • ,

  • , and

  • for any .

Proof.

We round the probabilities associated with the big items as follows. Let and be the largest integer such that . Let

be the set of rounded probabilities. For each big item, we round its probability down to the nearest value in and denote it by . Note that . Let be the subset of big items such that their associated probabilities are rounded to .

For each , we guess the value of . There are at most different possibilities for these values. Once we guess correctly for each , we select the items that have the smallest sizes in and let denote the set of these items. Set .

Now we prove that the set defined above satisfies Lemma 3. For this purpose, we first observe that always consists of the items with the smallest sizes in , and therefore we have . Then, we compare and for every . Let be an arbitrary one-to-one mapping that maps each item in to a distinct item in for every . We have

In order to show , it suffices to show that

for every with . According to the way we round probabilities, we have , hence and

For , we have and

For , and the above inequality is trivially true.

This completes the proof of Lemma 3. ∎
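The rounding step can be sketched as follows. This is an illustrative sketch; we assume probabilities are rounded down to the geometric grid {1, 1−ε, (1−ε)², …}, which is one standard way to lose only a (1−ε) factor per item while leaving few distinct values to guess over.

```python
import math

def round_down_to_grid(p, eps):
    """Largest value of the form (1 - eps)**t (integer t >= 0) that
    does not exceed p; guarantees (1 - eps) * p <= rounded <= p."""
    t = math.ceil(math.log(p) / math.log(1.0 - eps))
    return (1.0 - eps) ** max(t, 0)

# Every rounded probability stays within a (1 - eps) factor of p.
for p in (1.0, 0.7, 0.31, 0.05):
    r = round_down_to_grid(p, 0.5)
    assert 0.5 * p - 1e-12 <= r <= p + 1e-12
```

Combined with Corollary 2, the per-item multiplicative loss translates into a small additive loss in the overall success probability, which is what the lemma needs.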

Lemma 4 (dealing with small items in the case where the optimal solution does not contain many big items).

There exists an algorithm that runs in time and returns a feasible solution such that

The proof of this lemma needs a sequence of results.

Lemma 5.

For any , there exists an algorithm that runs in time and returns a solution such that , and for every .

Proof of Lemma 5.

We design an algorithm based on dynamic programming. Let . Although we do not know the value of , we know that this value lies in . Therefore, we can guess, via enumerations, the values such that . In the following we provide an algorithm that returns such that