1 Introduction
Social choice studies the design and evaluation of voting rules (or rank aggregation rules). There have been two main perspectives: reach a compromise among subjective preferences of agents, or make an objectively correct decision. The former has been extensively studied in classical social choice in the context of political elections, while the latter is relatively less developed, even though it dates back to the Condorcet Jury Theorem in the 18th century [9].
In many multiagent and social choice scenarios the main consideration is to achieve the second objective, and make an objectively correct decision. Meanwhile, we also want to respect agents' preferences and opinions, and require the voting rule to satisfy well-established normative properties in social choice. For example, when a group of friends vote to choose a restaurant for dinner, perhaps the most important goal is to find an objectively good restaurant, but it is also important to use a good voting rule in the social choice sense. Even for applications with less societal context, e.g., using voting rules to aggregate rankings in metasearch engines [12], recommender systems [15], crowdsourcing [23], and the semantic web [27], some social choice normative properties are still desired. For example, monotonicity may be desired, which requires that raising the position of an alternative in any vote does not hurt the alternative in the outcome of the voting rule. In addition, we require voting rules to be efficiently computable.
Such scenarios pose the following new challenge: How can we design new voting rules with good statistical properties as well as social choice normative properties?
To tackle this challenge, we develop a general framework that adopts statistical decision theory [3]. Our approach couples a statistical ranking model with an explicit decision space and loss function. Given these, we can adopt Bayesian estimators as social choice mechanisms, which make decisions to minimize the expected loss w.r.t. the posterior distribution on the parameters (called the Bayesian risk). This provides a principled methodology for the design and analysis of new voting rules.
To show the viability of the framework, we focus on selecting multiple alternatives (the alternatives that can be thought of as being "tied" for the first place) under a natural extension of the 0-1 loss function for two models: let $\mathcal{M}^\ell_\varphi$ denote the Mallows model with fixed dispersion [22], and let $\mathcal{M}^c_\varphi$ denote the Condorcet model proposed by Condorcet in the 18th century [9, 34]. In both models the dispersion parameter, denoted $\varphi$, is taken as a fixed parameter. The difference is that in $\mathcal{M}^\ell_\varphi$ the parameter space is composed of all linear orders over the alternatives, while in $\mathcal{M}^c_\varphi$ the parameter space is composed of all possibly cyclic rankings over the alternatives (irreflexive, antisymmetric, and total binary relations). $\mathcal{M}^c_\varphi$ is a natural model that captures real-world scenarios where the ground truth may contain cycles, or agents' preferences are cyclic but they have to report a linear order due to the protocol. More importantly, as we will show later, a Bayesian estimator on $\mathcal{M}^c_\varphi$ is superior from a computational viewpoint.
Through this approach, we obtain two voting rules as Bayesian estimators and then evaluate them with respect to various normative properties, including anonymity, neutrality, monotonicity, the majority criterion, the Condorcet criterion, and consistency. Both rules satisfy anonymity, neutrality, and monotonicity, but fail the majority criterion, the Condorcet criterion (the rule for $\mathcal{M}^c_\varphi$ fails them for all $\varphi$), and consistency. Admittedly, the two rules do not enjoy outstanding normative properties, but they are not bad either. We also investigate the computational complexity of the two rules. Strikingly, despite the similarity of the two models, the Bayesian estimator for $\mathcal{M}^c_\varphi$ can be computed in polynomial time, while computing the Bayesian estimator for $\mathcal{M}^\ell_\varphi$ is $\mathrm{P}^{\mathrm{NP}}_{||}$-hard, which means that it is at least NP-hard. Our results are summarized in Table 1.
We also compare the asymptotic outcomes of the two rules with the Kemeny rule for winners, which is a natural extension of the maximum likelihood estimator of $\mathcal{M}^\ell_\varphi$ proposed by Fishburn [14]. It turns out that when votes are generated under $\mathcal{M}^\ell_\varphi$, all three rules select the same winner asymptotically almost surely (a.a.s.) as $n \to \infty$. When the votes are generated according to $\mathcal{M}^c_\varphi$, the rule for $\mathcal{M}^c_\varphi$ still selects the same winner as Kemeny a.a.s.; however, for some parameters, the winner selected by the rule for $\mathcal{M}^\ell_\varphi$ differs from it with non-negligible probability. These findings are confirmed by experiments on synthetic datasets.


Table 1: Comparison of Kemeny, $f^\ell_\varphi$, and $f^c_\varphi$.

| Rule | Anonymity, neutrality, monotonicity | Majority, Condorcet | Consistency | Complexity | Min. Bayesian risk |
| Kemeny | Y | Y | N | $\mathrm{P}^{\mathrm{NP}}_{||}$-hard | N |
| $f^\ell_\varphi$ | Y | N | N | $\mathrm{P}^{\mathrm{NP}}_{||}$-hard | Y (under $\mathcal{F}^\ell_\varphi$) |
| $f^c_\varphi$ | Y | N | N | P (Theorem 5) | Y (under $\mathcal{F}^c_\varphi$) |
Related work. Along the second perspective in social choice (to make an objectively correct decision), in addition to Condorcet's statistical approach to social choice [9, 34], most previous work in economics, political science, and statistics focused on extending the Condorcet Jury Theorem to heterogeneous, correlated, or strategic agents for two alternatives; see [25, 1] among many others. Recent work in computer science views agents' votes as i.i.d. samples from a statistical model and computes the MLE to estimate the parameters that maximize the likelihood [10, 11, 33, 32, 2, 29, 7]. A limitation of these approaches is that they estimate the parameters of the model, but may not directly inform the right decision to make in the multiagent context. The main approach has been to return the modal rank order implied by the estimated parameters, or the alternative with the highest predicted marginal probability of being ranked in the top position.
There have also been some proposals to go beyond MLE in social choice. In fact, Young [34] proposed to select a winning alternative that is "most likely to be the best (i.e., top-ranked in the true ranking)" and provided formulas to compute it for three alternatives. This idea has been formalized and extended by Procaccia et al. [29] to choose a given number of alternatives with the highest marginal probability under the Mallows model. More recently, and independently of our work, Elkind and Shah [13] investigated a similar question for choosing multiple winners under the Condorcet model. We will see that these are special cases of our proposed framework in Example 3. Pivato [26] conducted a study similar to that of Conitzer and Sandholm [10], examining voting rules that can be interpreted as expected-utility maximizers.
We are not aware of previous work that frames the problem of social choice from the viewpoint of statistical decision theory, which is our main conceptual contribution. Technically, the approach taken in this paper advocates a general paradigm of "design by statistics, evaluation by social choice and computer science". We are not aware of previous work following this paradigm to design and evaluate new rules. Moreover, the normative properties established for the two voting rules investigated in this paper are novel, even though the rules themselves are not new. Our result on the computational complexity of the first rule strengthens the NP-hardness result of Procaccia et al. [29], and the complexity of the second rule (Theorem 5) was independently discovered by Elkind and Shah [13].
The statistical decisiontheoretic framework is quite general, allowing considerations such as estimators that minimize the maximum expected loss, or the maximum expected regret [3]. In a different context, focused on uncertainty about the availability of alternatives, Lu and Boutilier [20] adopt a decisiontheoretic view of the design of an optimal voting rule. Caragiannis et al. [8] studied the robustness of social choice mechanisms w.r.t. model uncertainty, and characterized a unique social choice mechanism that is consistent w.r.t. a large class of ranking models.
A number of recent papers in computational social choice take utilitarian and decision-theoretic approaches to social choice [28, 6, 4, 5]. Most of them evaluate the joint decision w.r.t. agents' subjective preferences, for example the sum of agents' subjective utilities (i.e., the social welfare). We do not view this as fitting the classical approach to statistical decision theory as formulated by Wald [30]. In our framework, the joint decision is evaluated objectively w.r.t. the ground truth in the statistical model. Several papers in machine learning developed algorithms to compute MLEs or Bayesian estimators for popular ranking models [18, 19, 21], but without considering the normative properties of the estimators.

2 Preliminaries
In social choice, we have a set $\mathcal{C}$ of $m$ alternatives and a set of $n$ agents. Let $\mathcal{L}(\mathcal{C})$ denote the set of all linear orders over $\mathcal{C}$. For any alternative $c$, let $\mathcal{L}_c(\mathcal{C})$ denote the set of linear orders over $\mathcal{C}$ where $c$ is ranked at the top. Agent $j$ uses a linear order $V_j \in \mathcal{L}(\mathcal{C})$ to represent her preferences, called her vote. The collection of the agents' votes is called a profile, denoted by $P = (V_1, \ldots, V_n)$. An (irresolute) voting rule $r$ selects a set of winners $r(P) \subseteq \mathcal{C}$ that are "tied" for the first place for every profile $P$ of votes.
For any pair of linear orders $V, W$, let $d_{\mathrm{KT}}(V, W)$ denote the Kendall-tau distance between $V$ and $W$, that is, the number of pairwise comparisons on which $V$ and $W$ differ. The Kemeny rule (a.k.a. the Kemeny-Young method) [17, 35] selects all linear orders with minimum total Kendall-tau distance from the preference profile $P$, that is, $\mathrm{Kemeny}(P) = \arg\min_{W \in \mathcal{L}(\mathcal{C})} \sum_{j=1}^{n} d_{\mathrm{KT}}(V_j, W)$. The most well-known variant of Kemeny that selects winning alternatives, denoted by $\mathrm{Kemeny}_{\mathcal{C}}$, is due to Fishburn [14], who defined it as the voting rule that selects all alternatives ranked at the top of some winning linear order under the Kemeny rule. That is, $\mathrm{Kemeny}_{\mathcal{C}}(P) = \{\mathrm{top}(W) : W \in \mathrm{Kemeny}(P)\}$, where $\mathrm{top}(W)$ is the top-ranked alternative in $W$.
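As a small illustration of these definitions, the Kendall-tau distance and Fishburn's winner variant of Kemeny can be computed by brute force for tiny instances (alternatives are $0, \ldots, m-1$, a vote is a tuple listed from top to bottom; function names are ours, and the $m!$ enumeration is only a sketch, not a practical algorithm):

```python
from itertools import combinations, permutations

def kendall_tau(v, w):
    """Number of pairwise comparisons on which linear orders v and w differ."""
    pos_v = {c: i for i, c in enumerate(v)}
    pos_w = {c: i for i, c in enumerate(w)}
    return sum(1 for a, b in combinations(v, 2)
               if (pos_v[a] < pos_v[b]) != (pos_w[a] < pos_w[b]))

def kemeny_orders(profile):
    """All linear orders minimizing the total Kendall-tau distance to the profile."""
    m = len(profile[0])
    best, winners = None, []
    for w in permutations(range(m)):
        score = sum(kendall_tau(v, w) for v in profile)
        if best is None or score < best:
            best, winners = score, [w]
        elif score == best:
            winners.append(w)
    return winners

def kemeny_winners(profile):
    """Fishburn's variant: alternatives ranked first in some Kemeny order."""
    return {w[0] for w in kemeny_orders(profile)}
```

For example, on the profile with two votes $0 \succ 1 \succ 2$ and one vote $1 \succ 2 \succ 0$, the unique Kemeny order is $0 \succ 1 \succ 2$, so the winner set is $\{0\}$.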
Voting rules are often evaluated by the following normative properties. An irresolute rule $r$ satisfies:
anonymity, if $r$ is insensitive to permutations over agents;
neutrality, if $r$ is insensitive to permutations over alternatives;
monotonicity, if for any profile $P$, any $c \in r(P)$, and any $P'$ that is obtained from $P$ by only raising the positions of $c$ in one or multiple votes, $c \in r(P')$;
the Condorcet criterion, if for any profile where a Condorcet winner exists, it must be the unique winner. A Condorcet winner is an alternative that beats every other alternative in pairwise elections;
the majority criterion, if for any profile where an alternative $c$ is ranked at the top of more than half of the votes, $c$ must be the unique winner. If $r$ satisfies the Condorcet criterion then it also satisfies the majority criterion;
consistency, if for any pair of profiles $P_1, P_2$ with $r(P_1) \cap r(P_2) \neq \emptyset$, $r(P_1 + P_2) = r(P_1) \cap r(P_2)$.
For any profile $P$, its weighted majority graph (WMG), denoted by $\mathrm{WMG}(P)$, is a weighted directed graph whose vertices are the alternatives in $\mathcal{C}$, with an edge between any pair of alternatives $a, b$ of weight $w_P(a, b) = \#\{j : a \succ_{V_j} b\} - \#\{j : b \succ_{V_j} a\}$.
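The WMG is easy to build from a profile; a minimal sketch (our function name, with alternatives and votes encoded as above):

```python
from itertools import combinations

def weighted_majority_graph(profile):
    """w[(a, b)] = #votes ranking a above b minus #votes ranking b above a."""
    m = len(profile[0])
    w = {(a, b): 0 for a in range(m) for b in range(m) if a != b}
    for v in profile:
        pos = {c: i for i, c in enumerate(v)}
        for a, b in combinations(range(m), 2):
            sign = 1 if pos[a] < pos[b] else -1  # smaller index = ranked higher
            w[(a, b)] += sign
            w[(b, a)] -= sign
    return w
```

Note that $w_P(a, b) = -w_P(b, a)$ by construction, and all weights have the same parity as the number of votes.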
A parametric model $\mathcal{M} = (\Theta, S, \Pr)$ is composed of a parameter space $\Theta$, a sample space $S$ composed of all datasets, and a set of probability distributions over $S$ indexed by elements of $\Theta$: for each $\theta \in \Theta$, the distribution indexed by $\theta$ is denoted by $\Pr(\cdot \mid \theta)$. (This notation should not be taken to mean a conditional distribution over $\Theta$ unless we are taking a Bayesian point of view.) Given a parametric model $\mathcal{M}$, a maximum likelihood estimator (MLE) is a function $\mathrm{MLE} : S \to \Theta$ such that for any data $P \in S$, $\mathrm{MLE}(P)$ is a parameter that maximizes the likelihood of the data. That is, $\mathrm{MLE}(P) \in \arg\max_{\theta \in \Theta} \Pr(P \mid \theta)$.
In this paper we focus on parametric ranking models. Given $\mathcal{C}$, a parametric ranking model $\mathcal{M}$ is composed of a parameter space $\Theta$ and a distribution $\pi(\cdot \mid \theta)$ over $\mathcal{L}(\mathcal{C})$ for each $\theta \in \Theta$, such that for any number of voters $n$, the sample space is $\mathcal{L}(\mathcal{C})^n$, where each vote is generated i.i.d. from $\pi(\cdot \mid \theta)$. Hence, for any profile $P$ and any $\theta \in \Theta$, we have $\Pr(P \mid \theta) = \prod_{V \in P} \pi(V \mid \theta)$. We omit the sample space because it is determined by $\mathcal{L}(\mathcal{C})$ and $n$.
Definition
In the Mallows model [22], a parameter is composed of a linear order $W \in \mathcal{L}(\mathcal{C})$ and a dispersion $\varphi$ with $0 < \varphi \le 1$. For any profile $P$ and any $(W, \varphi)$, $\Pr(P \mid W, \varphi) = \frac{1}{Z^n} \varphi^{\sum_{V \in P} d_{\mathrm{KT}}(V, W)}$, where $Z$ is the normalization factor with $Z = \sum_{V \in \mathcal{L}(\mathcal{C})} \varphi^{d_{\mathrm{KT}}(V, W)} = \prod_{i=1}^{m} (1 + \varphi + \cdots + \varphi^{i-1})$.
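The definition above can be evaluated directly by brute force for small $m$; a sketch (our function names; the sum for $Z$ runs over all $m!$ orders, which is only feasible for tiny instances, even though $Z$ also has the closed product form given above):

```python
from itertools import combinations, permutations

def kt_distance(v, w):
    """Pairs (a, b) with a above b in v but b above a in w."""
    pos_w = {c: i for i, c in enumerate(w)}
    return sum(1 for a, b in combinations(v, 2) if pos_w[a] > pos_w[b])

def mallows_prob(profile, ground_truth, phi):
    """Pr(profile | W, phi) = prod over votes of phi^d_KT(V, W) / Z."""
    m = len(ground_truth)
    Z = sum(phi ** kt_distance(v, ground_truth) for v in permutations(range(m)))
    p = 1.0
    for v in profile:
        p *= phi ** kt_distance(v, ground_truth) / Z
    return p
```

For example, for $m = 2$ and $\varphi = 0.5$, a single vote that agrees with the ground truth has probability $1 / (1 + 0.5) = 2/3$.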
Statistical decision theory [30, 3] studies scenarios in which the decision maker must make a decision based on data generated from a parametric model. The quality of the decision is evaluated by a loss function $L(\theta, d)$, which takes the true parameter $\theta$ and the decision $d$ as inputs.
In this paper, we focus on the Bayesian principle of statistical decision theory to design social choice mechanisms as choice functions that minimize the Bayesian risk under a prior distribution over $\Theta$. More precisely, the Bayesian risk $R(d, P)$ is the expected loss of decision $d$ when the parameter is generated according to the posterior distribution given data $P$. That is, $R(d, P) = \mathbb{E}_{\theta \mid P}\, L(\theta, d)$. Given a parametric model $\mathcal{M}$, a decision space $\mathcal{D}$, a loss function $L$, and a prior distribution over $\Theta$, a (deterministic) Bayesian estimator $f$ is a decision rule that makes a deterministic decision in $\mathcal{D}$ to minimize the Bayesian risk, that is, for any $P$, $f(P) \in \arg\min_{d \in \mathcal{D}} R(d, P)$. We focus on deterministic estimators in this work and leave randomized estimators for future research.
Example
When $\Theta$ is discrete, an MLE of a parametric model $\mathcal{M}$ is a Bayesian estimator of the statistical decision problem $(\mathcal{M}, \Theta, L_{0\text{-}1})$ under the uniform prior distribution, where $L_{0\text{-}1}$ is the 0-1 loss function such that $L_{0\text{-}1}(\theta, d) = 0$ if $\theta = d$, otherwise $L_{0\text{-}1}(\theta, d) = 1$.
In this sense, all previous MLE approaches in social choice can be viewed as Bayesian estimators of a statistical decision-theoretic framework for social choice with $\mathcal{D} = \Theta$, the 0-1 loss function, and the uniform prior.
3 Our Framework
Our framework is quite general and flexible because we can choose any parametric ranking model, any decision space, any loss function, and any prior, and then use the resulting Bayesian estimators as social choice mechanisms. Common choices for both $\Theta$ and $\mathcal{D}$ are $\mathcal{C}$, $\mathcal{L}(\mathcal{C})$, and $\mathcal{B}(\mathcal{C})$ (defined below).
Definition
A statistical decision-theoretic framework for social choice is a tuple $\mathcal{F} = (\mathcal{M}, \mathcal{D}, L)$, where $\mathcal{C}$ is the set of alternatives, $\mathcal{M}$ is a parametric ranking model over $\mathcal{C}$, $\mathcal{D}$ is the decision space, and $L : \Theta \times \mathcal{D} \to \mathbb{R}$ is a loss function.
Let $\mathcal{B}(\mathcal{C})$ denote the set of all irreflexive, antisymmetric, and total binary relations over $\mathcal{C}$. For any alternative $c$, let $\mathcal{B}_c(\mathcal{C})$ denote the relations in $\mathcal{B}(\mathcal{C})$ where $c \succ c'$ for all $c' \neq c$. It follows that $\mathcal{L}(\mathcal{C}) \subseteq \mathcal{B}(\mathcal{C})$, and moreover, the Kendall-tau distance extends naturally to $\mathcal{B}(\mathcal{C})$ by counting the number of pairwise disagreements between elements of $\mathcal{B}(\mathcal{C})$.
In the rest of the paper, we focus on the following two parametric ranking models, where the dispersion is a fixed parameter.
Definition (Mallows model with fixed dispersion, and the Condorcet model)
Let $\mathcal{M}^\ell_\varphi$ denote the Mallows model with fixed dispersion, where the parameter space is $\mathcal{L}(\mathcal{C})$ and, given any $W \in \mathcal{L}(\mathcal{C})$, the distribution over votes is that of the Mallows model, where the dispersion $\varphi$ is fixed.
In the Condorcet model $\mathcal{M}^c_\varphi$, the parameter space is $\mathcal{B}(\mathcal{C})$. For any $W \in \mathcal{B}(\mathcal{C})$ and any profile $P$, we have $\Pr(P \mid W) = \frac{1}{Z^n} \varphi^{\sum_{V \in P} d_{\mathrm{KT}}(V, W)}$, where $Z$ is the normalization factor with $Z = \sum_{V \in \mathcal{L}(\mathcal{C})} \varphi^{d_{\mathrm{KT}}(V, W)}$, and the dispersion parameter $\varphi$ is fixed. (In Condorcet's original model the sample space is $\mathcal{B}(\mathcal{C})^n$ [31]. We study a variant with sample space $\mathcal{L}(\mathcal{C})^n$.)
Both $\mathcal{M}^\ell_\varphi$ and $\mathcal{M}^c_\varphi$ degenerate to the Condorcet model for two alternatives [9]. The Kemeny rule that selects a linear order is an MLE of $\mathcal{M}^\ell_\varphi$ for any $\varphi$.
We now formally define two statistical decisiontheoretic frameworks associated with and , which are the focus of the rest of our paper.
Definition
For $\Theta = \mathcal{L}(\mathcal{C})$ or $\mathcal{B}(\mathcal{C})$, any $\theta \in \Theta$, and any alternative $c \in \mathcal{C}$, we define a loss function $L_{\mathrm{top}}$ such that $L_{\mathrm{top}}(\theta, c) = 0$ if $c \succ c'$ in $\theta$ for all $c' \neq c$; otherwise $L_{\mathrm{top}}(\theta, c) = 1$.
Let $\mathcal{F}^\ell_\varphi = (\mathcal{M}^\ell_\varphi, \mathcal{C}, L_{\mathrm{top}})$ and $\mathcal{F}^c_\varphi = (\mathcal{M}^c_\varphi, \mathcal{C}, L_{\mathrm{top}})$. Let $f^\ell_\varphi$ (respectively, $f^c_\varphi$) denote the voting rule that, for any profile $P$, selects all alternatives chosen by some Bayesian estimator of $\mathcal{F}^\ell_\varphi$ (respectively, $\mathcal{F}^c_\varphi$) under the uniform prior.
We note that $L_{\mathrm{top}}$ in the above definition takes a parameter and a decision in $\mathcal{C}$ as inputs, which makes it different from the 0-1 loss function of Example 2, which takes a pair of parameters as inputs. Hence, $f^\ell_\varphi$ and $f^c_\varphi$ are not the MLEs of their respective models, as was the case in Example 2. We focus on voting rules obtained from our framework with this loss function; certainly our framework is not limited to it.
Example
The Bayesian estimators $f^\ell_\varphi$ and $f^c_\varphi$ coincide with Young [34]'s idea of selecting the alternative that is "most likely to be the best (i.e., top-ranked in the true ranking)", under $\mathcal{M}^\ell_\varphi$ and $\mathcal{M}^c_\varphi$ respectively. This gives a theoretical justification of Young's idea and other follow-ups under our framework. Specifically, $f^\ell_\varphi$ is similar to the rule studied by Procaccia et al. [29], and $f^c_\varphi$ was independently studied by Elkind and Shah [13].
The following lemma provides a convenient way to compute the likelihood in $\mathcal{M}^\ell_\varphi$ and $\mathcal{M}^c_\varphi$ from the WMG.
Lemma
In $\mathcal{M}^\ell_\varphi$ (respectively, $\mathcal{M}^c_\varphi$), for any $W \in \mathcal{L}(\mathcal{C})$ (respectively, $W \in \mathcal{B}(\mathcal{C})$) and any profile $P$, $\Pr(P \mid W) \propto \varphi^{-\frac{1}{2} \sum_{a \succ_W b} w_P(a, b)}$, where the constant of proportionality does not depend on $W$.
Proof
For any $a \succ_W b$, the number of times $b \succ a$ in $P$ is $(n - w_P(a, b))/2$, which means that $\sum_{V \in P} d_{\mathrm{KT}}(V, W) = \sum_{a \succ_W b} \frac{n - w_P(a, b)}{2} = \frac{n}{2} \binom{m}{2} - \frac{1}{2} \sum_{a \succ_W b} w_P(a, b)$.
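The identity in this proof can be sanity-checked numerically: the total Kendall-tau distance from a profile to $W$, computed directly, equals the expression in terms of the WMG weights. A small sketch with our function names:

```python
from itertools import combinations, permutations

def total_kt(profile, w_order):
    """Total Kendall-tau distance from every vote in the profile to w_order."""
    pos = {c: i for i, c in enumerate(w_order)}
    return sum(1 for v in profile for a, b in combinations(v, 2)
               if pos[a] > pos[b])

def total_kt_from_wmg(profile, w_order):
    """(n/2) * C(m,2) - (1/2) * sum over pairs a >_W b of w_P(a, b)."""
    m, n = len(w_order), len(profile)
    s = 0
    for a, b in combinations(w_order, 2):  # a is ranked above b in W
        above = sum(1 for v in profile if v.index(a) < v.index(b))
        s += above - (n - above)  # w_P(a, b)
    return (n * m * (m - 1) // 2 - s) // 2
```

Both functions agree on every $W$, which is exactly the statement of the lemma.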
4 Normative Properties of Bayesian Estimators
In this section, we compare $f^\ell_\varphi$, $f^c_\varphi$, and the Kemeny rule (for winners) w.r.t. various normative properties. We will frequently use the following lemma, whose proof follows directly from Bayes' rule. We recall that $\mathcal{L}_c(\mathcal{C})$ is the set of all linear orders where $c$ is ranked at the top, and $\mathcal{B}_c(\mathcal{C})$ is the set of binary relations in $\mathcal{B}(\mathcal{C})$ where $c$ is ranked at the top.
Lemma
In $\mathcal{F}^\ell_\varphi$ under the uniform prior, for any profile $P$ and any $c \in \mathcal{C}$, $c \in f^\ell_\varphi(P)$ if and only if $c \in \arg\max_{c' \in \mathcal{C}} \sum_{W \in \mathcal{L}_{c'}(\mathcal{C})} \Pr(P \mid W)$.
In $\mathcal{F}^c_\varphi$ under the uniform prior, for any profile $P$ and any $c \in \mathcal{C}$, $c \in f^c_\varphi(P)$ if and only if $c \in \arg\max_{c' \in \mathcal{C}} \sum_{W \in \mathcal{B}_{c'}(\mathcal{C})} \Pr(P \mid W)$.
Theorem
For any $\varphi$, $f^\ell_\varphi$ satisfies anonymity, neutrality, and monotonicity. It does not satisfy the majority criterion or the Condorcet criterion for all $\varphi$ above a threshold (whether $f^\ell_\varphi$ satisfies them for the remaining values of $\varphi$ is an open question), and it does not satisfy consistency.
Proof
Anonymity and neutrality are obviously satisfied.
Monotonicity. Suppose $c \in f^\ell_\varphi(P)$. To prove that $f^\ell_\varphi$ satisfies monotonicity, it suffices to prove that for any profile $P'$ obtained from $P$ by raising the position of $c$ in one vote, $c \in f^\ell_\varphi(P')$. We first prove the following lemma.
Lemma
For any profile $P$, let $P'$ denote a profile obtained from $P$ by raising the position of $c$ by $k$ positions in one vote. For any $W \in \mathcal{L}_c(\mathcal{C})$, $\Pr(P' \mid W) = \varphi^{-k} \Pr(P \mid W)$; for any $c' \neq c$ and any $W \in \mathcal{L}_{c'}(\mathcal{C})$, $\Pr(P' \mid W) \le \varphi^{-k} \Pr(P \mid W)$. For any $W \in \mathcal{B}_c(\mathcal{C})$, $\Pr(P' \mid W) = \varphi^{-k} \Pr(P \mid W)$; for any $c' \neq c$ and any $W \in \mathcal{B}_{c'}(\mathcal{C})$, $\Pr(P' \mid W) \le \varphi^{-k} \Pr(P \mid W)$.
Proof
Raising $c$ by $k$ positions in a vote $V$ flips exactly $k$ pairwise comparisons, each from $d \succ c$ to $c \succ d$ for some alternative $d$. Each flip decreases $d_{\mathrm{KT}}(V, W)$ by $1$ if $c \succ_W d$ and increases it by $1$ otherwise. For $W \in \mathcal{L}_c(\mathcal{C})$, all $k$ flips decrease the distance, so $\Pr(P' \mid W) = \varphi^{-k} \Pr(P \mid W)$; for any other $W$, the distance decreases by at most $k$, so $\Pr(P' \mid W) \le \varphi^{-k} \Pr(P \mid W)$. The proof for $\mathcal{B}_c(\mathcal{C})$ and $\mathcal{B}_{c'}(\mathcal{C})$ is similar.
Majority and the Condorcet criterion. We construct a profile $P$ in which an alternative $c$ is ranked at the top of more than half of the votes, so that $c$ is the Condorcet winner, but $c \notin f^\ell_\varphi(P)$.
For any , let denote a profile composed of copies of and copies of . It is not hard to verify that the WMG of is as in Figure 1.
Lemma
Proof
Let and let denote the profile where is removed from all rankings.
(1)  
In (1), is the number of alternatives in ranked above in . There are such combinations, for each of which there are rankings among alternatives ranked above and rankings among alternatives ranked below . Notice that there are no edges between alternatives in in the WMG, which means that for any where exactly alternatives are ranked above , the probability is proportional to by Lemma 3. Similarly, .
Since , for any , we can choose and so that . By Lemma 4, is the Condorcet winner in but it does not minimize the Bayesian risk under , which means that it is not a winner under .
Consistency. We construct an example to show that does not satisfy consistency. In our construction and are even, and . Let and denote profiles whose WMGs are as shown in Figure 2, respectively.
We provide the following lemma to compare the Bayesian risk of and . The proof is similar to the proof of Lemma 4.
Lemma
Let ,
Proof
Let or .
Similarly .
For any , for all . It is not hard to verify that . However, it is not hard to verify that , which means that is not consistent. This completes the proof of the theorem.
Theorem
For any $\varphi$, $f^c_\varphi$ satisfies anonymity, neutrality, and monotonicity. It does not satisfy the majority criterion, the Condorcet criterion, or consistency.
Proof
Anonymity and neutrality are obvious. The proof of monotonicity is similar to the proof for $f^\ell_\varphi$ and uses the second part of Lemma 4.
By the two theorems above, $f^\ell_\varphi$ and $f^c_\varphi$ do not satisfy as many desirable normative properties as the Kemeny rule (for winners). On the other hand, they minimize the Bayesian risk under $\mathcal{F}^\ell_\varphi$ and $\mathcal{F}^c_\varphi$, respectively, whereas Kemeny minimizes neither. In addition, neither $f^\ell_\varphi$ nor $f^c_\varphi$ satisfies consistency, which means that they are not positional scoring rules.
5 Computational Complexity
We consider the following two types of decision problems.
Definition
In the better Bayesian decision problem for a statistical decision-theoretic framework under a prior distribution, we are given two alternatives $c, c'$ and a profile $P$. We are asked whether the Bayesian risk of $c$ is smaller than that of $c'$, i.e., whether $R(c, P) < R(c', P)$.
We are also interested in checking whether a given alternative is the optimal decision.
Definition
In the optimal Bayesian decision problem for a statistical decision-theoretic framework under a prior distribution, we are given an alternative $c$ and a profile $P$. We are asked whether $c$ minimizes the Bayesian risk $R(\cdot, P)$.
$\mathrm{P}^{\mathrm{NP}}_{||}$ is the class of decision problems that can be computed by a polynomial-time oracle machine with a polynomial number of parallel calls to an NP oracle. A decision problem $A$ is $\mathrm{P}^{\mathrm{NP}}_{||}$-hard if for any problem $B$ in $\mathrm{P}^{\mathrm{NP}}_{||}$, there exists a polynomial-time many-one reduction from $B$ to $A$. It is known that $\mathrm{P}^{\mathrm{NP}}_{||}$-hard problems are NP-hard.
Theorem
For any $\varphi$, better Bayesian decision and optimal Bayesian decision for $\mathcal{F}^\ell_\varphi$ under the uniform prior are $\mathrm{P}^{\mathrm{NP}}_{||}$-hard.
Proof
The hardness of both problems is proved by a unified polynomial-time many-one reduction from the Kemeny winner problem, which was proved to be $\mathrm{P}^{\mathrm{NP}}_{||}$-complete by Hemaspaandra et al. [16]. In a Kemeny winner instance, we are given a profile $P$ and an alternative $c$, and we are asked if $c$ is ranked at the top of at least one linear order $V$ that minimizes $\sum_{j} d_{\mathrm{KT}}(V, V_j)$.
For any alternative $c$, the Kemeny score of $c$ under $P$ is the smallest total Kendall-tau distance between the profile $P$ and any linear order where $c$ is ranked at the top. We prove that when $\varphi$ is sufficiently small, the Bayesian risk of $c$ is largely determined by the Kemeny score of $c$:
Lemma
For any profile $P$ and any pair of alternatives $c, c'$, if the Kemeny score of $c$ is strictly smaller than the Kemeny score of $c'$, then the Bayesian risk of $c$ is strictly smaller than that of $c'$ for all sufficiently small $\varphi$.
Proof
Let $k$ and $k'$ denote the Kemeny scores of $c$ and $c'$, respectively, with $k < k'$. We have $\sum_{W \in \mathcal{L}_c(\mathcal{C})} \Pr(P \mid W) \ge \varphi^{k} / Z^n$ while $\sum_{W \in \mathcal{L}_{c'}(\mathcal{C})} \Pr(P \mid W) \le m!\, \varphi^{k'} / Z^n$, so for $\varphi$ small enough that $m!\, \varphi^{k' - k} < 1$, the claim follows by Lemma 4.
We note that may be larger than . In our reduction, we will duplicate the input profile so that effectively we are computing the problems for a small . Let be any natural number such that . For any Kemeny winner instance for alternatives , we add two more alternatives and define a profile whose WMG is as shown in Figure 3 using McGarvey’s trick [24]. The WMG of contains the as a subgraph, where the weights are times of the weights of ; for all , the weight of is ; for all , the weight of is ; the weight of is and the weight of is .
Then, we let , which is copies of . It follows that for any , . By Lemma 5, if an alternative has the strictly lowest Kemeny score for profile , then it is the unique alternative that minimizes the Bayesian risk for and dispersion parameter , which means that minimizes the Bayesian risk for and dispersion parameter .
Let denote the set of linear orders over that minimizes the Kendall tau distance from and let denote this minimum distance. Choose an arbitrary . Let . It follows that . If there exists where is ranked in the top position, then we let . We have . If is not a Kemeny winner in , then for any where is not ranked in the top position, . Therefore, minimizes the Bayesian risk if and only if it is a Kemeny winner, and if it does not minimize the Bayesian risk, then another alternative does. Hence better Bayesian decision (checking if $c$ is better than $c'$) and optimal Bayesian decision (checking if $c$ is the optimal alternative) are $\mathrm{P}^{\mathrm{NP}}_{||}$-hard.
We note that the optimal Bayesian decision problem for the framework in the theorem above is equivalent to checking whether a given alternative is in $f^\ell_\varphi(P)$. We do not know whether these problems are $\mathrm{P}^{\mathrm{NP}}_{||}$-complete.
Theorem
For any rational number $\varphi$ (we require $\varphi$ to be rational to avoid representational issues), better Bayesian decision and optimal Bayesian decision for $\mathcal{F}^c_\varphi$ under the uniform prior are in P.
The theorem is a corollary of the following stronger theorem, which provides a closed-form formula for the Bayesian risk in $\mathcal{F}^c_\varphi$. (The formula resembles Young's calculation for three alternatives [34], where it was not clear whether the calculation was done for $\mathcal{M}^c_\varphi$. Recently it was clarified by Xia [31] that this is indeed the case.) We recall that for any profile $P$ and any pair of alternatives $a, b$, $w_P(a, b)$ is the weight on the edge $a \to b$ in the weighted majority graph of $P$.
Theorem
For $\mathcal{F}^c_\varphi$ under the uniform prior, for any profile $P$ and any $c \in \mathcal{C}$, $\Pr(\theta \in \mathcal{B}_c(\mathcal{C}) \mid P) = \prod_{c' \neq c} \frac{1}{1 + \varphi^{w_P(c, c')}}$, and hence the Bayesian risk is $R(c, P) = 1 - \prod_{c' \neq c} \frac{1}{1 + \varphi^{w_P(c, c')}}$.
Proof
Given a profile $P$, for any pair of alternatives $c \neq c'$, we let $k_{c,c'}$ denote the number of times $c$ is preferred to $c'$ in $P$, so that $w_P(c, c') = k_{c,c'} - k_{c',c}$. The theorem is equivalent to proving that $\Pr(\theta \in \mathcal{B}_c(\mathcal{C}) \mid P) = \prod_{c' \neq c} \frac{1}{1 + \varphi^{w_P(c, c')}}$. We first note that under $\mathcal{M}^c_\varphi$ with the uniform prior, the likelihood factorizes over pairs of alternatives, so the pairwise comparisons in $\theta$ are independent in the posterior. For each pair $\{c, c'\}$, the posterior probability that $c \succ c'$ in $\theta$ is
$\frac{\varphi^{k_{c',c}}}{\varphi^{k_{c,c'}} + \varphi^{k_{c',c}}} = \frac{1}{1 + \varphi^{w_P(c, c')}}$,
and multiplying over all $c' \neq c$ gives the formula.
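This closed form is what makes the rule for the Condorcet model computationally attractive: it only needs the WMG weights. A sketch in Python (function names are ours), together with a brute-force check that enumerates the entire parameter space $\mathcal{B}(\mathcal{C})$, which is only feasible for small $m$:

```python
from itertools import combinations, product

def f_c(profile, phi):
    """Winners maximizing prod over c' != c of 1 / (1 + phi^{w_P(c, c')})."""
    m, n = len(profile[0]), len(profile)
    def w(a, b):
        above = sum(1 for v in profile if v.index(a) < v.index(b))
        return above - (n - above)
    def top_prob(c):
        p = 1.0
        for d in range(m):
            if d != c:
                p *= 1.0 / (1.0 + phi ** w(c, d))
        return p
    probs = {c: top_prob(c) for c in range(m)}
    best = max(probs.values())
    return {c for c, p in probs.items() if p >= best - 1e-12}

def f_c_bruteforce(profile, phi):
    """Enumerate all 2^C(m,2) binary relations as possible ground truths."""
    m = len(profile[0])
    pairs = list(combinations(range(m), 2))
    post = dict.fromkeys(range(m), 0.0)
    for bits in product([0, 1], repeat=len(pairs)):
        theta = {}
        for (a, b), bit in zip(pairs, bits):
            theta[(a, b)] = bool(bit)   # True: a beats b in the ground truth
            theta[(b, a)] = not bit
        # likelihood ∝ phi^{#pairwise disagreements between profile and theta}
        dis = sum(1 for v in profile for a, b in combinations(v, 2)
                  if not theta[(a, b)])
        like = phi ** dis
        for c in range(m):
            if all(theta[(c, d)] for d in range(m) if d != c):
                post[c] += like
    best = max(post.values())
    return {c for c, p in post.items() if p >= best - 1e-9 * best}
```

The two functions agree, which mirrors the factorization argument in the proof: the posterior mass of $\mathcal{B}_c(\mathcal{C})$ splits into independent per-pair factors.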
The comparisons of Kemeny, $f^\ell_\varphi$, and $f^c_\varphi$ are summarized in Table 1. According to the criteria we consider, none of the three outperforms the others across the board. Kemeny does well on normative properties, but does not minimize the Bayesian risk under either $\mathcal{M}^\ell_\varphi$ or $\mathcal{M}^c_\varphi$, and is hard to compute. $f^\ell_\varphi$ minimizes the Bayesian risk under $\mathcal{F}^\ell_\varphi$, but is hard to compute. We would like to highlight $f^c_\varphi$, which minimizes the Bayesian risk under $\mathcal{F}^c_\varphi$ and, more importantly, can be computed in polynomial time despite the similarity between $\mathcal{M}^\ell_\varphi$ and $\mathcal{M}^c_\varphi$. This makes $f^c_\varphi$ a practical voting rule that is also justified by Condorcet's model.
6 Asymptotic Comparisons
In this section, we ask the following question: as the number of voters $n \to \infty$, what is the probability that Kemeny, $f^\ell_\varphi$, and $f^c_\varphi$ choose different winners?
We show that when the data is generated from $\mathcal{M}^\ell_\varphi$, all three methods agree asymptotically almost surely (a.a.s.), that is, they are equal with probability $1$ as $n \to \infty$.
Theorem
Let $P_n$ denote a profile of $n$ votes generated i.i.d. from $\mathcal{M}^\ell_\varphi$ given a ground truth $W$. Then, a.a.s. as $n \to \infty$, $\mathrm{Kemeny}_{\mathcal{C}}(P_n) = f^\ell_\varphi(P_n) = f^c_\varphi(P_n) = \{\mathrm{top}(W)\}$.
Proof sketch: It is not hard to see that, asymptotically almost surely, for any pair of alternatives $a \succ_W b$, the number of times $a \succ b$ in $P_n$ is strictly more than $n/2$. As a corollary of a stronger theorem in [7], as $n \to \infty$, $\mathrm{top}(W)$ is a.a.s. the Condorcet winner, which means that all three rules select it.
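The concentration behind this sketch is easy to see in simulation: votes drawn i.i.d. from the Mallows model (here via repeated-insertion sampling, with ground truth $0 \succ 1 \succ \cdots \succ m-1$) produce the top alternative of the ground truth as the Condorcet winner once $n$ is moderately large. All names below are ours:

```python
import random
from itertools import combinations

def sample_mallows(m, phi, rng):
    """Repeated-insertion sampling: insert alternative i at position j
    (0 = top) with probability proportional to phi**(i - j)."""
    order = []
    for i in range(m):
        weights = [phi ** (i - j) for j in range(i + 1)]
        r = rng.random() * sum(weights)
        j = 0
        while j < i and r > weights[j]:
            r -= weights[j]
            j += 1
        order.insert(j, i)
    return tuple(order)

def condorcet_winner(profile):
    """The alternative beating every other in pairwise majority, or None."""
    m, n = len(profile[0]), len(profile)
    for c in range(m):
        if all(2 * sum(1 for v in profile if v.index(c) < v.index(d)) > n
               for d in range(m) if d != c):
            return c
    return None
```

With a small dispersion (votes concentrated around the ground truth), even $n = 100$ votes make alternative $0$ the Condorcet winner essentially always.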
We now prove a lemma that will be useful for and .
Lemma
For any , any alternatives that are different from , .
Proof
We have . For any linear order where , we let denote the linear order obtained from by switching the positions of and . It follows that , which means that .
To prove the theorem for , it suffices to prove that for any and any , asymptotically almost surely, we have . For any , we let denote the linear order obtained from by exchanging the positions of and , which means that .
Lemma
.
Proof
Given , let denote the set of alternatives between and in . We have