1 Introduction
Voting has received much attention from the AI community recently, mostly due to its suitability for simple and effective decision making. One popular line of research, which originates from Arrow [1], has aimed to characterize voting rules in terms of the social choice axioms they satisfy. Another approach views voting rules as estimators. It assumes that there is an objectively correct choice, a ground truth, and votes are noisy estimates of it. Then, the main criterion for evaluating a voting rule is whether it can identify the ground truth as the outcome when applied to noisy votes.
A typical scenario in studies that follow the second approach employs a hypothetical noise model that uses the ground truth as input and produces random votes. Then, a voting rule is applied to profiles of such random votes and is considered effective if it acts as a maximum likelihood estimator [10, 24] or if it has low sample complexity [8]. As such evaluations are heavily dependent on the specifics of the noise model, relaxed effectiveness requirements, such as accuracy in the limit sought over broad classes of noise models [8], can be more informative.
We restrict our attention to approval voting, where ballots are simply sets of alternatives that are approved by the voters [17]. Furthermore, we consider multiwinner voting rules [13], which determine committees of alternatives as outcomes [14, 2]. In particular, we focus on approval-based counting choice rules (or, simply, ABC rules), which were defined recently by Lackner and Skowron [15]. A famous rule in this category is known as multiwinner approval voting (AV). Each alternative gets a point every time it appears in an approval vote, and the outcome consists of a fixed number of alternatives with the highest scores.
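As a concrete illustration of AV, the following minimal Python sketch (our own; the helper name `av_winners` and the toy profile are not from the paper) scores every committee by summing its members' approval counts and returns the highest-scoring ones.

```python
from itertools import combinations

def av_winners(profile, alternatives, k):
    """All k-sized committees with maximum AV score: each alternative
    earns one point per approval vote containing it, and a committee's
    score is the sum of its members' points."""
    points = {a: sum(1 for vote in profile if a in vote) for a in alternatives}
    best_score, winners = None, []
    for committee in combinations(sorted(alternatives), k):
        score = sum(points[a] for a in committee)
        if best_score is None or score > best_score:
            best_score, winners = score, [set(committee)]
        elif score == best_score:
            winners.append(set(committee))
    return winners

# Five approval votes over four alternatives; committee size 2.
profile = [{"a", "b"}, {"a"}, {"a", "c"}, {"b", "c"}, {"c"}]
print(av_winners(profile, {"a", "b", "c", "d"}, 2))  # unique winner {'a', 'c'}
```

Here "a" and "c" are each approved three times and "b" twice, so the unique winning committee is {"a", "c"}.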
We consider noise models that are particularly tailored to approval votes and committees. These models use a committee as ground truth and produce random sets of alternatives as votes. We construct broad classes of noise models that share a particular structure, parameterized by distance metrics defined over sets of alternatives. In this way, we adapt the approach of Caragiannis et al. [8] for voting rules over rankings to approval-based multiwinner voting.
Figure 1 illustrates our evaluation framework. The noise model, depicted at the left, takes as input the ground truth committee, and its probability distribution over approval votes is consistent with a distance metric $d$. Repeated executions of the noise model produce a profile of random approval votes. The ABC rule (defined using a bivariate function $f$; see Section 2) is then applied to this profile and returns a winning committee. Our requirement for the ABC rule is to be accurate in the limit, not only for a single noise model, but for all models that belong to a sufficiently broad class. The breadth of this class quantifies the robustness of the ABC rule to noise.

The details of our framework are presented in Section 2. Our results indicate that it indeed allows for a classification of ABC rules in terms of their robustness to noise. In particular, we identify (in Section 3) the modal committee rule (MC) as the ultimately robust ABC rule: MC is robust against all kinds of reasonable noise. AV follows in terms of robustness and seems to outperform other known ABC rules (see Section 4). In contrast, the well-known approval Chamberlin-Courant (CC) rule is the least robust. On the other hand, all ABC rules are robust if we restrict noise sufficiently (see Section 5). We conclude with a discussion of open problems in Section 6.
1.1 Further related work
Approval-based multiwinner voting rules have been studied in terms of their computational complexity [3, 22], axiomatic properties [21, 15, 2], as well as their applications [5]. In particular, axiomatic work has focused on two different principles that govern multiwinner rules: diversity and individual excellence. Lackner and Skowron [16] attempt a quantification of how close an approval-based multiwinner voting rule is to these two principles. We remark that the primary focus of the current paper is on individual excellence.
The robustness of approval voting has been previously evaluated against noise models, using either the MLE [20] or the sample complexity [6] approach. These papers assume a ranking of the alternatives as ground truth, generate approval votes that consist of the top alternatives in rankings produced according to the noise model of Mallows [18], and assess how well approval voting recovers the ground truth ranking. We believe that our framework is fairer to approval votes, as recovering an underlying ranking when voters have very limited power to rank is very demanding. The robustness of multiwinner voting against noise has been studied by Procaccia et al. [19].
Additional references related to specific ABC rules are given in the next section. We remark that the modal committee (MC) rule is similar in spirit to the modal ranking rule considered by Caragiannis et al. [7].
2 Preliminaries
Throughout the paper, we denote by $A$ the set of alternatives. We use $m = |A|$ and denote the committee size by $k$. The term committee refers to a set of exactly $k$ alternatives.
Approvalbased multiwinner voting.
An approval vote is simply a subset of the alternatives (of any size). An approvalbased multiwinner voting rule takes as input a profile of approval votes and returns one or more winning committees.
We particularly consider voting rules that belong to the class of approval-based counting choice rules (or, simply, ABC rules), introduced by Lackner and Skowron [15]. Such a rule is defined by a bivariate function $f$, with $f(x,y)$ indicating the nonnegative score a committee gets from an approval vote containing $y$ alternatives, $x$ of which are common with the committee. $f$ is nondecreasing in its first argument. Formally, $f$ is defined on the set of all pairs of possible values of $x$ and $y$, given that the committee is $k$-sized and the vote can be any subset of the $m$ alternatives; i.e., $f$ is defined on the set $\{(x,y) : 0 \le y \le m,\ \max\{0, y-m+k\} \le x \le \min\{y,k\}\}$.
The score of a committee is simply the total score it gets from all approval votes in a profile. Winning committees are those that have maximum score. We extensively use "the ABC rule $f$" to refer to the ABC rule that uses the bivariate function $f$. We denote the score that an ABC rule $f$ assigns to the committee $W$ given a profile $\Pi$ of votes by $\mathrm{sc}_f(W,\Pi)$. With some abuse of notation, we use $\mathrm{sc}_f(W,V)$ to refer to the score $W$ gets from vote $V$. Hence, $\mathrm{sc}_f(W,\Pi) = \sum_{V \in \Pi} \mathrm{sc}_f(W,V)$.
Well-known ABC rules include:

Multiwinner approval voting (AV), which uses the function $f(x,y) = x$.

Approval Chamberlin-Courant (CC), which uses the function $f(x,y) = \min\{1, x\}$. The rule falls within a more general context considered by Chamberlin and Courant [9].

Proportional approval voting (PAV), which uses the function $f(x,y) = \sum_{i=1}^{x} \frac{1}{i}$.
These rules belong to the class of rules that originate from the work of Thiele [23]. A Thiele rule uses a vector $(w_1, w_2, \dots)$ of nonnegative weights to define $f(x,y) = \sum_{i=1}^{x} w_i$. Other known Thiele rules include the Geometric rule [22] and Sainte-Laguë approval voting [16].

A well-known non-Thiele rule is the satisfaction approval voting (SAV) rule, which uses $f(x,y) = x/y$ for $y \ge 1$ and $f(0,0) = 0$ [4]. Let us also introduce the modal committee (MC) rule, which returns the committee that has the maximum number of appearances as an approval vote in the profile. MC is also non-Thiele; it uses $f(k,k) = 1$ and $f(x,y) = 0$ otherwise.
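The counting functions above translate directly into code. The sketch below is our own Python rendering (the identifiers are ours, not the paper's); it assumes the convention that $f$ takes first the number $x$ of alternatives a vote shares with the committee and then the vote size $y$.

```python
def f_av(x, y):          # multiwinner approval voting
    return x

def f_cc(x, y):          # approval Chamberlin-Courant
    return min(1, x)

def f_pav(x, y):         # proportional approval voting
    return sum(1.0 / i for i in range(1, x + 1))

def f_sav(x, y):         # satisfaction approval voting
    return x / y if y >= 1 else 0.0

def make_f_mc(k):        # modal committee: only an exact match scores
    return lambda x, y: 1 if x == k and y == k else 0

def score(f, committee, profile):
    # sc_f(W, Pi): total score committee W gets from all votes in Pi
    return sum(f(len(committee & vote), len(vote)) for vote in profile)
```

For instance, `score(f_av, {"a", "c"}, profile)` sums the approval counts of "a" and "c", while `score(make_f_mc(2), {"a", "c"}, profile)` counts the votes that are exactly {"a", "c"}.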
Noise models.
We employ noise models to generate approval votes, assuming that the ground truth is a committee. Denoting the ground truth by $W$, a noise model $\mathcal{M}$ produces random approval votes according to a particular distribution that defines the probability $\Pr[\mathcal{M}(W) = V]$ to generate the set $V$ when the ground truth is $W$.
Let us give the following noise model $\mathcal{M}_p$ as an example. $\mathcal{M}_p$ uses a parameter $p \in (1/2, 1)$. Given a ground truth committee $W$, $\mathcal{M}_p$ generates a random set $V$ by selecting each alternative of $W$ with probability $p$ and each alternative in $A \setminus W$ with probability $1-p$. Intuitively, the probability that a set will be generated depends on its "distance" from the ground truth: the higher this distance, the smaller this probability. To make this formal, we will need the set difference distance metric $d_{sd}$ defined as $d_{sd}(S,T) = |S \setminus T| + |T \setminus S|$.
Claim 1.
For $p \in (1/2, 1)$, $\Pr[\mathcal{M}_p(W) = V] = p^{m - d_{sd}(V,W)} (1-p)^{d_{sd}(V,W)}$.
So, the probability $\Pr[\mathcal{M}_p(W) = V]$ is decreasing in $d_{sd}(V,W)$. We will consider general noise models $\mathcal{M}$ with $\Pr[\mathcal{M}(W) = V]$ depending on $d(V,W)$, where $d$ is a distance metric defined over subsets of $A$.
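Both the sampling procedure of $\mathcal{M}_p$ and the closed-form probability of Claim 1 are easy to sketch in code (a hypothetical implementation of ours; the function names are not from the paper). A quick sanity check of the closed form is that it sums to 1 over all subsets of $A$.

```python
import random
from itertools import combinations

def sample_vote(ground_truth, alternatives, p, rng=random):
    """One draw from M_p: keep each ground-truth alternative with
    probability p, include each other alternative with probability 1-p."""
    vote = set()
    for a in alternatives:
        keep_prob = p if a in ground_truth else 1 - p
        if rng.random() < keep_prob:
            vote.add(a)
    return vote

def mp_probability(vote, ground_truth, alternatives, p):
    """Pr[M_p(W) = V] = p^(m - d) * (1 - p)^d, where d is the set
    difference distance between V and W (the closed form of Claim 1)."""
    m = len(alternatives)
    d = len(vote ^ ground_truth)   # symmetric difference size
    return p ** (m - d) * (1 - p) ** d
```

Enumerating the $2^m$ subsets and summing `mp_probability` over them returns 1, and for $p > 1/2$ the probability indeed decreases as the distance from the ground truth grows.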
Definition 1.
Let $d$ be a distance metric over sets of alternatives. A noise model $\mathcal{M}$ is called $d$-monotonic if for any two sets $V_1, V_2$, it holds $\Pr[\mathcal{M}(W) = V_1] > \Pr[\mathcal{M}(W) = V_2]$ if and only if $d(V_1, W) < d(V_2, W)$.
Definition 1 implies that $\Pr[\mathcal{M}(W) = V_1] = \Pr[\mathcal{M}(W) = V_2]$ when $d(V_1, W) = d(V_2, W)$.
Besides the set difference metric used by $\mathcal{M}_p$, other well-known distance metrics^1 (see Deza and Deza [11]) are:

^1 Notice that each of the four specific distance metrics defined here depends only on the cardinalities of the sets involved and of their differences; in a sense, these distance metrics are alternative-independent. Our results apply to the most general definition of distance, which can also depend on the actual contents of the sets.

the normalized set difference or Jaccard metric $d_J$, defined as $d_J(S,T) = \frac{|S \setminus T| + |T \setminus S|}{|S \cup T|}$,

the maximum difference or Zelinka metric $d_Z$, defined as $d_Z(S,T) = \max\{|S \setminus T|, |T \setminus S|\}$, and

the normalized maximum difference or Bunke-Shearer metric $d_{BS}$, defined as $d_{BS}(S,T) = \frac{\max\{|S \setminus T|, |T \setminus S|\}}{\max\{|S|, |T|\}}$.
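The four metrics translate directly into code; a minimal sketch (the Python identifiers are our own rendering of the metric names):

```python
def d_sd(s, t):
    """Set difference metric: |s \\ t| + |t \\ s|."""
    return len(s - t) + len(t - s)

def d_jaccard(s, t):
    """Normalized set difference (Jaccard): d_sd divided by |s union t|."""
    union = s | t
    return d_sd(s, t) / len(union) if union else 0.0

def d_zelinka(s, t):
    """Maximum difference (Zelinka): max of the two one-sided differences."""
    return max(len(s - t), len(t - s))

def d_bs(s, t):
    """Normalized maximum difference (Bunke-Shearer):
    Zelinka distance divided by the size of the larger set."""
    return d_zelinka(s, t) / max(len(s), len(t)) if s or t else 0.0
```

For example, for $S = \{1,2\}$ and $T = \{2,3\}$ these give $2$, $2/3$, $1$, and $1/2$, respectively.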
Evaluating ABC rules against noise models.
We aim to evaluate the effectiveness of ABC rules when applied to random profiles generated by large classes of noise models. To this end, we use accuracy in the limit as a measure.
Definition 2 (accuracy in the limit).
An ABC rule $f$ is called accurate in the limit for a noise model $\mathcal{M}$ if, when applied to profiles of $n$ approval votes produced by $\mathcal{M}$ with ground truth $W$, it returns $W$ as the unique winning committee with probability that tends to 1 as $n$ grows.
Then, ABC rules are evaluated in terms of robustness using the next definition.
Definition 3 (robustness).
Let $d$ be a distance metric over sets of alternatives. An ABC rule is monotone robust against $d$ (or $d$-monotone robust) if it is accurate in the limit for all $d$-monotonic noise models.
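Accuracy in the limit can be probed empirically: as the number of votes drawn from $\mathcal{M}_p$ grows, the frequency with which AV recovers the ground truth committee should approach 1. The following self-contained simulation is our own sketch (the alphabetical tie-break in `av_committee` is our choice, not prescribed by the definitions).

```python
import random

def sample_vote(ground_truth, alternatives, p, rng):
    # M_p: keep committee members w.p. p, add outsiders w.p. 1 - p
    return {a for a in alternatives
            if rng.random() < (p if a in ground_truth else 1 - p)}

def av_committee(profile, alternatives, k):
    # top-k alternatives by approval count (ties broken alphabetically)
    points = {a: sum(a in vote for vote in profile) for a in alternatives}
    return set(sorted(alternatives, key=lambda a: (-points[a], a))[:k])

def recovery_rate(n_votes, trials=200, p=0.9, seed=0):
    """Fraction of random profiles on which AV returns the ground truth."""
    rng = random.Random(seed)
    alternatives, truth, k = "abcde", {"a", "b"}, 2
    hits = 0
    for _ in range(trials):
        profile = [sample_vote(truth, alternatives, p, rng)
                   for _ in range(n_votes)]
        hits += av_committee(profile, alternatives, k) == truth
    return hits / trials

# With more votes, AV recovers the ground truth more reliably.
print(recovery_rate(2), recovery_rate(100))
```

With 100 votes at $p = 0.9$, the committee members' expected approval counts (about 90) dominate the outsiders' (about 10), so recovery is essentially certain.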
3 MC is a uniquely robust ABC rule
We begin our technical exposition by identifying the unique ABC rule that is monotone robust against all distance metrics. Our proofs, in the current and subsequent sections, make extensive use of the following lemma. The notation $V \sim \mathcal{M}(W)$ indicates that the random set $V$ is drawn from the noise model $\mathcal{M}$ with ground truth $W$.
Lemma 2.
An ABC rule $f$ is accurate in the limit for a noise model $\mathcal{M}$ if and only if $\mathbb{E}_{V \sim \mathcal{M}(W)}[\mathrm{sc}_f(W,V)] > \mathbb{E}_{V \sim \mathcal{M}(W)}[\mathrm{sc}_f(W',V)]$ for every two different sets of alternatives $W, W'$ with $|W| = |W'| = k$.
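For small instances, the expected-score condition of Lemma 2 can be verified by exact enumeration over all possible votes. Below is a sketch under the noise model $\mathcal{M}_p$, using the closed-form probability of Claim 1 (all names are our own illustration).

```python
from itertools import combinations

def powerset(alternatives):
    items = sorted(alternatives)
    return [set(c) for r in range(len(items) + 1)
            for c in combinations(items, r)]

def expected_score(f, committee, ground_truth, alternatives, p):
    """E_{V ~ M_p(W)} [ sc_f(committee, V) ], computed exactly by
    enumerating every possible vote V with its M_p probability."""
    m = len(alternatives)
    total = 0.0
    for vote in powerset(alternatives):
        d = len(vote ^ ground_truth)        # set difference distance
        prob = p ** (m - d) * (1 - p) ** d  # closed form of Claim 1
        total += prob * f(len(committee & vote), len(vote))
    return total

# Under AV, the ground truth's expected score beats any rival committee's,
# which is exactly Lemma 2's condition for accuracy in the limit.
f_av = lambda x, y: x
A, W = set(range(4)), {0, 1}
assert expected_score(f_av, W, W, A, 0.8) > expected_score(f_av, {2, 3}, W, A, 0.8)
```

For AV the expectation has a simple closed form: each committee member contributes its inclusion probability, so the ground truth's expected score here is $2 \cdot 0.8 = 1.6$.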
We are ready to present our first application of Lemma 2.
Theorem 3.
MC is the only ABC rule that is monotone robust against any distance metric.
Proof.
Let $\mathcal{M}$ be a noise model that is $d$-monotonic for some distance metric $d$. Let $W, W'$ be any two different $k$-sized sets of alternatives. By the definition of MC, we have $\mathbb{E}_{V \sim \mathcal{M}(W)}[\mathrm{sc}_{MC}(W,V)] = \Pr[\mathcal{M}(W) = W] > \Pr[\mathcal{M}(W) = W'] = \mathbb{E}_{V \sim \mathcal{M}(W)}[\mathrm{sc}_{MC}(W',V)]$, where the inequality follows since $d(W,W) = 0 < d(W',W)$ and $\mathcal{M}$ is $d$-monotonic. By Lemma 2, we obtain that MC is monotone robust.
We will now show that MC is the only ABC rule that has this property. Let $f$ be an ABC rule different from MC. This means that there exist integers and with , , and . We will construct a distance metric and a monotonic noise model for which $f$ is not accurate in the limit.

Rename the alternatives of as and let , , and . Notice that, by the definition of , implies that and, equivalently, ; hence, the set is well-defined. Clearly, ; so sets and share at least one alternative.

We define a distance metric between subsets of that has if , , otherwise, and in particular and for every set different from , , or .

We are ready to define the monotonic noise model . For simplicity, we use , , and for every other set different from , , or . For (to be specified shortly), we set , , and .
4 A characterization for AV
In this section, we identify the class of distance metrics against which AV is monotone robust. Before defining this class, let us fix some notation; this will be useful in several proofs.
For a distance metric and a set of alternatives , let be the number of different nonzero values the quantity can take. We denote these different distance values by , , …, . We also use . For and alternatives , we denote by the class of sets of alternatives that contain alternative but not alternative and satisfy .
Definition 4 (majorityconcentricity).
A distance metric is called majority-concentric^2 if for every $k$-sized set of alternatives , it holds for every alternatives and and .

^2 Majority-concentricity is similar in spirit to a property of distance metrics over rankings with the same name in [8].
We are ready to prove our characterization for AV.
Theorem 4.
AV is monotone robust if and only if the distance metric is majorityconcentric.
Proof.
Let be a monotonic noise model for a majority-concentric distance metric . Let and be two different sets with alternatives each. By Lemma 2, in order to show that AV is accurate in the limit for (and, consequently, monotone robust), it suffices to show that .
We will need some additional notation. For , we denote by the class of sets of alternatives that satisfy . For alternatives , we denote by the subclass of consisting of sets of alternatives that include and by the subclass of consisting of sets that do not contain alternative .
To simplify notation, we set . Also, we drop (e.g., we use instead of ) from notation since it is clear from context. We have
(2) 
Now, observe that the probability is the same for all sets . In the following, we use for all , for . Hence, (2) becomes
Similarly, we have
and, by linearity of expectation,
(3) 
Let be a bijection that maps each alternative of to a distinct alternative of . Then, (3) becomes
(4)  
The third equality follows since , , and and for . The first inequality follows since is majority-concentric and since and, thus, all differences in (4) are nonnegative. The last inequality follows after observing that since and for and since . This completes the "if" part of the proof.
Let us now consider a non-majority-concentric distance metric that satisfies for the $k$-sized set of alternatives , some alternatives and , and some . We show the "only if" part of the theorem by constructing a noise model that satisfies for .
Again, we use for every set of alternatives , , and drop from notation. We define the model probabilities so that and . Notice that such a noise model exists for any arbitrarily small . Since there are sets of alternatives and is the probability that returns the ground truth committee, it must be . We now apply equality (4). Observe that, since , . We obtain
Now, observe that for , it holds (the total number of sets of alternatives) and . Also, and . Setting specifically , we obtain that
which is negative for since . The proof of the “only if” part of the theorem now follows by Lemma 2. ∎
It is tempting to conjecture that AV and MC are the only ABC rules that are monotone robust against all majority-concentric distance metrics. However, this is not true, as the next example, which uses a different ABC rule, shows.
Example 1.
Let and . Consider the majority-concentric distance metric and the ABC rule that has , , and otherwise. We will show that is monotone robust against any majority-concentric distance metric . Without loss of generality, let us assume that and . Observe that the quantity is equal to when , when , when , when , and when . Hence, for the monotonic noise model , we have , where , , , and are abbreviations for the probabilities for , , , and , respectively.
In order to have for , as the definition of majority-concentricity requires, it must be either or . In the first case, we have . In the second case, we have . Accuracy in the limit of the ABC rule for the noise model follows by Lemma 2.
5 Robustness of other ABC rules
Our results for other ABC rules (besides MC and AV) involve two classes of distance metrics. We define the first one here.
Definition 5 (natural distance metric).
A distance metric is called natural if for every three sets , , and with such that , it holds that .
The next observation follows easily by the definitions.
Claim 5.
Any natural distance metric is majority-concentric.
Proof.
Let be a natural distance, a $k$-sized set of alternatives, and with and . We will show that for . For , this is clearly true since and .
For , let and be any bijection on sets of alternatives. Let . By the definition of , contains alternative but not . Also and, due to naturality of , . We conclude that . Since is a bijection (the sets of are mapped to distinct sets in ), we get , as desired. ∎
The opposite is not true as the next example illustrates.
Example 2.
Let and consider the distance metric with for every pair of sets with , if and , and , otherwise. It can be easily seen that the distance is majority-concentric; it suffices to observe that, within distance from any set, each alternative appears in exactly one set. To see that is not natural, consider , and . We have but .
Lemma 7 below identifies the class of ABC rules that are monotone robust against all natural distance metrics. The condition uses an appropriately defined bijection on sets of alternatives.
Definition 6.
Given two different sets and with , a bijection is defined as , where is such that for every alternative or , is a distinct alternative in for , and is a distinct alternative in for .
It is easy to see that a bijection has the following properties.
Claim 6.
Let with and let be a bijection. For every , it holds , , and .
Lemma 7.
An ABC rule is monotone robust against a natural distance metric if and only if for every two different sets of alternatives with there exists a bijection on sets of alternatives and a set with and .
Proof.
Let and be two different sets with alternatives each. Let , , and be the classes of sets of alternatives with , , and , respectively. Using this notation, we have
(5) 
We will now transform the third sum in the RHS of (5) to one running over the sets of like the first sum.