Objective Social Choice: Using Auxiliary Information to Improve Voting Outcomes

01/27/2020 ∙ by Silviu Pitis, et al. ∙ University of Toronto

How should one combine noisy information from diverse sources to make an inference about an objective ground truth? This frequently recurring, normative question lies at the core of statistics, machine learning, policy-making, and everyday life. It has been called "combining forecasts", "meta-analysis", "ensembling", and the "MLE approach to voting", among other names. Past studies typically assume that noisy votes are identically and independently distributed (i.i.d.), but this assumption is often unrealistic. Instead, we assume that votes are independent but not necessarily identically distributed and that our ensembling algorithm has access to certain auxiliary information related to the underlying model governing the noise in each vote. In our present work, we: (1) define our problem and argue that it reflects common and socially relevant real world scenarios, (2) propose a multi-arm bandit noise model and count-based auxiliary information set, (3) derive maximum likelihood aggregation rules for ranked and cardinal votes under our noise model, (4) propose, alternatively, to learn an aggregation rule using an order-invariant neural network, and (5) empirically compare our rules to common voting rules and naive experience-weighted modifications. We find that our rules successfully use auxiliary information to outperform the naive baselines.


1. Introduction

Many collective decision making processes aggregate noisy good faith opinions in order to make an inference about some underlying ground truth. In cooperative policy making, for example, each party advocates for the policy they believe is objectively best. Similarly, in academic peer review, a meta-reviewer combines good faith reviewer opinions about a submitted paper. Other examples are easy to come by. We refer to this setting as objective social choice, to contrast it with the typical subjective social choice setting Procaccia and Rosenschein (2006), where the optimal choice is defined in terms of the voter utilities rather than a ground truth. Whereas subjective social choice can be viewed as collective compromise, objective social choice can be viewed as collective estimation.

Unlike the subjective setting, where it is natural to consider each source or voter equally—an axiom known as “anonymity” May (1952)—objective analysis suggests otherwise: diverse and more informed opinions should be valued more. Many sensible, real-world settings involve asymmetric (non-anonymous) voting, making this a relevant line of analysis. Academic review is one. Another is corporate governance, where different stakeholder classes have varying voting powers, depending on the issue. In such cases, varying voter weights are natural, and one can evaluate the quality of social choices via other avenues (e.g., direct evaluation Kang et al. (2018) or ex post analysis Gompers et al. (2003)). In other settings, such as national elections, non-anonymous weighting raises ethical concerns of fairness, and the objective approach may be inappropriate.

Although objective social choice has been the subject of numerous studies in social choice Condorcet (1785); Young (1988); Conitzer and Sandholm (2005); Caragiannis et al. (2016), forecasting Bates and Granger (1969); Dickinson (1973); Clemen (1989), statistics Fleiss (1993); Genest et al. (1986) and machine learning Dietterich (2000); Rokach (2010) (Section 2), to our knowledge, no prior work has dealt with the case of non-i.i.d. ordinal feedback (i.e., ranked preferences). Yet this is the case in many practical applications. During peer review, for instance, two of three reviewers might share primary areas of expertise, but, being human, they cannot share comparable cardinal estimates. Or consider a robot that must aggregate feedback from human principals. Once again, the different principals will draw upon diverse backgrounds to form their opinions, which can only be shared as ordinal preferences. In each case, how should the non-i.i.d. feedback be aggregated?

Our work is intended as a first step toward answering this question. To narrow the scope of our inquiry, we make several modeling assumptions (Section 3), which we hope can be relaxed in future work. In particular, we assume that (1) the underlying ground truth and noise generating process is modeled as an $m$-armed bandit problem, where the different arms represent different alternatives, (2) different voters see different pulls of the arms, and (3) the social decision rule sees how many pulls each voter saw (but not their outcomes). We solve for the maximum likelihood social choice in a series of cases (Section 4). As our derived rules rest on strong assumptions about the noise generation process, we also propose to learn a more flexible aggregation rule using an order-invariant neural network (Section 4.2). We empirically compare our derived and learned rules to classical voting rules (Section 5). Our results confirm the intuition that objective estimation can be improved by up-weighting opinions from diverse and more informed sources.

2. Related Work

Social Choice

The fundamental question of social choice asks: how should we combine the preferences of many into a social preference? Arrow et al. (2010); Sen (2018). While the usual approach evaluates the social preference in terms of the preferences of individuals Arrow (2012); Harsanyi (1955); Procaccia and Rosenschein (2006) (“subjective” social choice), a line of papers frame individual preferences as noisy reflections of an underlying ground truth and evaluate the social preference by comparing to the ground truth (“objective” social choice). Perhaps the first is due to Condorcet (1785), who studied the problem where voters rank two alternatives correctly with some probability $p$. This simple noise model is a special case of the $m$-alternative Mallows model Mallows (1957), according to which each voter ranks each pair of alternatives correctly with probability $p$ (and votes are redrawn if a cycle forms). Young (1988) generalized Condorcet’s analysis to general Mallows noise ($m$ alternatives), and showed that the Kemeny voting rule returns the maximum likelihood (MLE) estimate of the truth for this noise model. Conitzer and Sandholm (2005) further extended this “maximum likelihood” analysis to other voting rules and i.i.d. noise models; their main results include a proof that any so-called “scoring rule” (e.g., plurality, Borda count, veto) is the MLE estimator for some i.i.d. noise model, as well as proofs that certain other voting rules (e.g., Copeland) are not MLE estimators for any i.i.d. noise model. Caragiannis et al. Caragiannis et al. (2016); Caragiannis and Micha (2017) consider the sample complexity necessary to ensure high likelihood reconstructions of the ground truth under Mallows noise. Also related is the independent conversations model in social networks, where independent pairs of voters receive information about some ground truth. Conitzer (2013) introduces the model for two alternatives and constructs the maximum likelihood estimator, which he shows to be #P-hard. Procaccia et al. (2015) extend and analyze this model for multiple alternatives.

Our work is unique in two respects. First, we do not assume i.i.d. noise. Rather, we use an $m$-armed bandit noise model, which provides a basic but plausible noise generating process that can account for diversity in the subjective experiences of voters. Second, our approach is cardinal: rather than apply noise directly to ranked preferences, we apply noise to a cardinal ground truth. Just as Procaccia and Rosenschein (2006) introduced cardinal analysis into subjective social choice, we do so in the objective case.

Forecasting, Statistics and Machine Learning

Numerous papers in forecasting and statistics have examined the combination of estimates. Bates and Granger (1969) provided an early derivation of optimal weights for linearly combining two cardinal estimates. Their analysis was extended to the $n$-estimate case by Dickinson Dickinson (1973, 1975) and improved by Granger and Ramanathan (1984), among many others Clemen (1989); Granger (1989); Wallis (2011). While the literature on combining forecasts typically deals with point estimates (or time series thereof), significant work has also been done on combining probability distributions McConway (1981); Genest and McConway (1990); Genest et al. (1986); Jacobs (1995). In empirical statistics, the combination of experimental results is known as meta-analysis Fleiss (1993). Almost all work on combining cardinal estimates considers linear combinations; this can be justified by an appeal to Harsanyi’s theorem Harsanyi (1955); Weymark (1991), which states (roughly) that any cardinal—in the VNM expected utility sense Von Neumann and Morgenstern (1953)—combination of cardinal estimates that satisfies Pareto indifference (i.e., the combination is a function of the estimates and nothing else) can be expressed as a linear combination of the estimates.

Combining estimators through ensembles is a common technique used to improve inference performance in machine learning Dietterich (2000); Rokach (2010). Much like the literature on combining forecasts, Perrone and Cooper (1992) and Tresp and Taniguchi (1995) propose weighting schemes that ensemble estimators based on their variances.

Our proposal differs from the above works in that (1) we combine ordinal votes rather than cardinal predictions, and (2) we define an underlying noise model and use count-based information rather than empirical variances. Some recent works in reinforcement learning use ensembles in an ordinal setting Chen et al. (2017); Christiano et al. (2017), but these works use naive ensembling techniques (majority vote and arithmetic mean).

The dueling bandit problem setting is similar to ours, in that ordinal comparisons are used to make an inference about an underlying, (potentially) cardinal bandit Yue et al. (2012). As it uses repeated online comparisons rather than count (or other similarity) information, the dueling bandits formulation is more suitable to interactive and online applications, such as ad placement and recommender systems, than to one-shot votes. Our work could potentially be applied to initialize an online bandit when historical information is available.

3. Model

We first present a generic framework for objective social choice and then describe the modeling assumptions we make for our work.

3.1. Formal Setup

We assume the existence of a ground truth, cardinal objective function $u : A \to \mathbb{R}$, where $A$ is a finite set of alternatives, and define $m = |A|$. We represent $u$ by the vector $\mathbf{u} = (u_1, \ldots, u_m)$, where $u_j$ is the “true quality” of alternative $j$, and denote the optimal alternative by $a^* = \arg\max_j u_j$. $n$ voters partially observe this ground truth and provide our social choice rule $f$ with their noisy votes. Each such set of noisy votes is an element of the voting or observation space $V$, which can be seen as (part of) the input domain of $f$. In general, there are many ways in which voters could make their observations and provide their feedback. Regardless of the precise details, it seems plain that a rule with access to the votes $v \in V$, but to no other information, should satisfy anonymity, i.e., weigh each vote equally. It is also plain that for an anonymous voting rule, whether or not votes are i.i.d. is irrelevant. Therefore, our setting is only interesting when, in addition to votes, our voting rule has access to some auxiliary information or context $c \in C$, so that the rule is a function $f(v, c)$. As is the case for $V$, there are many options one could consider for $C$, and we make specific assumptions below.

The codomain of $f$ may either be (1) $U$, the set of valid ground truth functions (with $U \cong \mathbb{R}^m$), so that $f$ outputs a cardinal prediction $\hat{u}$, (2) $A$, so that $f$ outputs a single best alternative $\hat{a}$, or (3) the set of ordinal rankings over the alternatives. Note that if the codomain is $U$, one can consider this entire process as a sort of autoencoder: there is a noise model $G : U \to V \times C$ that produces the votes and auxiliary information, and the job of our rule $f$ is (roughly speaking) to reconstruct the input to $G$. Thus, optimal rules are closely tied to noise models; cf. Conitzer and Sandholm (2005). Figure 1 summarizes the objective social choice framework.

Figure 1. A generic framework for objective social choice. The ground truth $u$ passes through noise model $G$ to generate the votes $v$ and contexts $c$ for $n$ voters. The rule $f$ is applied to generate social choice $\hat{a}$.

In addition to specifying the noise model $G$, voting format $V$, auxiliary information $C$, and codomain of $f$, we must also specify an objective: what makes a given rule or rule selection algorithm “good”? As usual, the answer will depend on the context. In our present work we seek the rule that corresponds to a maximum likelihood estimate (MLE) of the ground truth $u$; that is, given votes $v$ and context $c$, the output of $f$ is consistent with the $u$ that is most likely to have generated $v$ conditioned on $c$. It should be noted that where $f$ returns a best alternative or ranking over alternatives, this problem formulation is different from finding the most likely best alternative or most likely ranking over alternatives, as done under ordinal (Mallows) noise Young (1988); Conitzer and Sandholm (2005)—in our setting, the maximum likelihood alternative and ranking depend on a distributional estimate of $u$. The MLE rule may not be the empirically best rule, and so to compare voting rules in Section 5, we will use the notion of regret, defined as $u_{a^*} - u_{\hat{a}}$, where $\hat{a}$ is the alternative most preferred by $f$.

3.2. Specific Modeling Assumptions

To narrow the scope of our present inquiry, we make the following assumptions about the noise model $G$ and the auxiliary information $C$:

Assumption 1.

The voters observe the $m$-dimensional ground truth $\mathbf{u}$ through an $m$-arm stochastic bandit Lattimore and Szepesvári (2018). Each arm $j$ reveals information about the corresponding dimension of $\mathbf{u}$, and voters observe samples from arm $j$ according to $x_j \sim \mathcal{N}(u_j, \sigma_j^2)$, where $\sigma_j^2$ is the variance of arm $j$. To simplify analysis, we assume that the $\sigma_j^2$ are either known or equal.

Assumption 2.

There are $n$ voters, where the $i$-th voter sees the $j$-th arm pulled $c_{ij}$ times. Let $c_j = \sum_i c_{ij}$. Each voter sees different (independently sampled) pulls—thus, vote noise is independent, but not identically distributed.

Assumption 3.

The auxiliary information consists of the observation counts for each voter. For each voter $i$, this is the number of pulls for each arm: $(c_{i1}, \ldots, c_{im})$.

Assumption 4.

Voter $i$ estimates $u_j$ as $\hat{u}_{ij} = \bar{x}_{ij}$, the mean of their $c_{ij}$ observations of arm $j$, which determines their vote (specific details below).

The above assumptions leave open the voting format $V$, and also the output (codomain) of our voting rule $f$. We explore different combinations of these in Section 4 below.

Although there are many alternatives to Assumptions 1-4, these basic assumptions strike us as a simple yet flexible model. Many noise processes can be framed as bandits. Take peer review, for instance: one could designate an arm for each paper under review (and accept the top $k$). Similarly, count information, which serves as a proxy for voter experience, provides a generic way of characterizing the “non-i.i.d.ness” of votes (an extension to our work might examine the case where some voters observe the same pulls, leading to dependent votes). Other interesting choices may include voter similarities, as specified by some kernel function (this would model the votes as a sample from a Gaussian process), or empirical covariance measurements (obtained by observing several votes). The assumption of Gaussian noise is relaxed in Subsection 4.2 and our experiments.
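To make the noise model concrete, the following minimal sketch (Python with NumPy; the function and variable names are our own illustrative choices, not code from our released implementation) generates one problem instance under Assumptions 1-4:

```python
import numpy as np

rng = np.random.default_rng(0)

def generate_instance(n_voters=10, m_alts=5, max_count=50, sigma=1.0):
    """Sample one objective social choice instance under Assumptions 1-4."""
    u = rng.normal(size=m_alts)                                       # true means u_j
    counts = rng.integers(1, max_count + 1, size=(n_voters, m_alts))  # c_ij
    # Voter i's estimate of arm j is the mean of c_ij Gaussian pulls,
    # i.e., a Gaussian centered at u_j with variance sigma^2 / c_ij.
    est = u + rng.normal(size=(n_voters, m_alts)) * sigma / np.sqrt(counts)
    # Ordinal votes: each voter ranks the alternatives by estimated mean,
    # so rankings[i, 0] is voter i's top choice.
    rankings = np.argsort(-est, axis=1)
    return u, counts, est, rankings
```

The sketches accompanying the derived rules below reuse these array conventions.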

4. Aggregation Rules

4.1. Derived Rules

In this section we analyze MLE social choice under the specific modeling assumptions made above. We do this in a series of five cases of roughly increasing complexity where, in each case, we derive one or more scoring rules Conitzer and Sandholm (2005). Scoring rules, such as the Borda, plurality and veto rules Brandt et al. (2016), compute for each alternative $j$ a single aggregate score (or predicted utility, $\hat{u}_j$) by taking a simple sum across individual voter weights (i.e., $\hat{u}_j = \sum_i w_{ij}$, where $w_{ij}$ is the weight of voter $i$’s vote for arm $j$). The alternatives are ranked according to these scores and the top scoring alternative is selected. For example, the commonly used plurality rule assigns weight $w_{ij} = 1$ to voter $i$’s top choice $j$, and $w_{ik} = 0$ for $k \neq j$, which results in selecting the alternative that is ranked first most often. In the two alternative cases below (cases 2 and 3), where the derived weights do not have an alternative subscript, voter $i$’s top choice gets weight $w_i$ and their second choice gets weight $-w_i$ (or equivalently $0$, since only relative weight matters).

The first two cases below, which use cardinal votes, simply recast known results into our setting. The latter three use ordinal votes and are novel contributions.

Case 1 (Many alternatives, votes are cardinal means).

There are $m$ arms, and voter $i$ provides their cardinal votes $\hat{u}_{ij} = \bar{x}_{ij}$ for each arm $j$, where $\bar{x}_{ij}$ is the mean of $i$’s observations for arm $j$.

Solution.

Had our aggregation rule seen the pulls itself, its MLE estimate of the true mean $u_j$ would be the mean observed reward, which can be computed directly from the available information:

$\hat{u}_j = \frac{1}{c_j} \sum_i c_{ij}\, \bar{x}_{ij},$

so that $w_{ij} = c_{ij}\, \bar{x}_{ij} / c_j$, i.e., each voter’s estimate is weighted in proportion to $c_{ij}$. ∎

Note that $\mathrm{Var}(\bar{x}_{ij}) = \sigma_j^2 / c_{ij}$, so that each estimate is weighted inversely proportional to its variance. The use of inverse variance to weight independent cardinal estimates is well known Bates and Granger (1969); Dickinson (1973); Fleiss (1993); Perrone and Cooper (1992); Tresp and Taniguchi (1995).
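Under the array conventions of the generator sketch above, the Case 1 rule is a few lines (a sketch, not our released implementation):

```python
import numpy as np

def case1_scores(est, counts):
    """Case 1 MLE: pool cardinal votes with weights proportional to c_ij.

    est:    (n, m) array of voter estimates (the means of observed pulls)
    counts: (n, m) array of observation counts (c_ij)
    """
    return (counts * est).sum(axis=0) / counts.sum(axis=0)  # û_j; argmax wins
```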

Case 2 (2 alternatives, votes are cardinal differences).

There are $m = 2$ arms, and each voter $i$ provides their estimate $\hat{d}_i = \bar{x}_{i1} - \bar{x}_{i2}$ of the cardinal difference $d = u_1 - u_2$ between arms.

Solution.

As $\bar{x}_{i1}$ and $\bar{x}_{i2}$ are independent, we have that $\mathrm{Var}(\hat{d}_i) = \sigma_1^2/c_{i1} + \sigma_2^2/c_{i2}$. To combine the votes we take the weighted mean with weights proportional to the inverse variances Bates and Granger (1969), so that $w_i \propto \left(\sigma_1^2/c_{i1} + \sigma_2^2/c_{i2}\right)^{-1}$. ∎

Unlike Case 1, where the $\sigma_j^2$ were irrelevant to the weights, the weights in Case 2 depend on the ratio $\sigma_1^2 / \sigma_2^2$. We assumed above that this ratio is known; if not, one might infer the ratio from data. An interesting corollary is that a voter who wishes to maximize the weight of her vote should pull each of the arms equally. If all voters adopt this strategy, we do not need estimates of the variances of the arms and can just weigh each vote in proportion to voter experience.
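A sketch of the Case 2 rule (hypothetical names; variances default to the equal-variance case):

```python
import numpy as np

def case2_difference(d_hat, counts, var1=1.0, var2=1.0):
    """Case 2: combine difference estimates with inverse variance weights,
    w_i ∝ 1 / (σ1²/c_i1 + σ2²/c_i2).

    d_hat:  (n,) voter estimates of d = u_1 - u_2
    counts: (n, 2) pull counts for the two arms
    """
    w = 1.0 / (var1 / counts[:, 0] + var2 / counts[:, 1])
    return (w * d_hat).sum() / w.sum()  # MLE of d; arm 1 wins if positive
```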

Case 3 (2 alternatives, votes are ordinal ranks).

There are $m = 2$ arms, and each voter $i$ provides an ordinal ranking indicating which arm they value higher (i.e., whether $\bar{x}_{i1} > \bar{x}_{i2}$).

Solution.

As above, we have $\hat{d}_i \sim \mathcal{N}(d, s_i^2)$, where $s_i^2 = \sigma_1^2/c_{i1} + \sigma_2^2/c_{i2}$. Denoting the standard normal CDF by $\Phi$, and defining the binary variable $z_i = \mathbb{1}[\hat{d}_i > 0]$, we have $z_i \sim \mathrm{Bern}(\Phi(d/s_i))$ (the Bernoulli distribution parameterized by $\Phi(d/s_i)$, evaluated at $z_i$). Our votes consist of a set of samples $\{\hat{z}_i\}_{i=1}^n$. Since adding a constant to the underlying means has no effect on the likelihood, a direct inference about $\mathbf{u}$ is impossible and we instead seek to estimate the difference $d = u_1 - u_2$. Defining

$L(d) = \sum_i \hat{z}_i \log \Phi(d/s_i) + (1 - \hat{z}_i) \log \Phi(-d/s_i),$

we want to choose $\hat{d}$ to maximize the log-probability of the data (since the $d$ which maximizes the log likelihood also maximizes the likelihood). We could try to optimize directly with respect to $d$ by setting $\partial L / \partial d = 0$, but this appears analytically intractable:

$\frac{\partial L}{\partial d} = \sum_i \frac{\phi(d/s_i)}{s_i}\left[\frac{\hat{z}_i}{\Phi(d/s_i)} - \frac{1 - \hat{z}_i}{\Phi(-d/s_i)}\right] = 0.$

However, since $L$ is concave (proof in Appendix), its gradient evaluated at $d = 0$ points in the direction of the MLE solution, and we can use this fact to find the $\hat{a}$ corresponding to the MLE estimate of $\mathrm{sign}(d)$ by evaluating (see Appendix for details):

$\left.\frac{\partial L}{\partial d}\right|_{d=0} = 2\phi(0) \sum_i \frac{2\hat{z}_i - 1}{s_i},$

so that $w_i \propto 1/s_i = \left(\sigma_1^2/c_{i1} + \sigma_2^2/c_{i2}\right)^{-1/2}$. ∎
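The resulting rule is a weighted majority vote, sketched below (illustrative names, equal variances by default):

```python
import numpy as np

def case3_winner(z_hat, counts, var1=1.0, var2=1.0):
    """Case 3: weighted majority with w_i ∝ 1/s_i.

    z_hat:  (n,) binary votes, 1 if voter i ranked arm 1 above arm 2
    counts: (n, 2) pull counts for the two arms
    """
    s = np.sqrt(var1 / counts[:, 0] + var2 / counts[:, 1])
    score = ((2 * z_hat - 1) / s).sum()  # sign of dL/dd at d = 0
    return 0 if score > 0 else 1         # index of the MLE-preferred arm
```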

Case 4 (Many alternatives, votes are ordinal ranks).

There are $m$ arms, and voter $i$ provides an ordinal ranking indicating whether they prefer $j$ to $k$ (i.e., whether $\bar{x}_{ij} > \bar{x}_{ik}$) for all pairs $(j, k)$.

Approximate solution.

Though we were unable to solve this case exactly, we take advantage of a naive independence assumption (à la Naive Bayes Lewis (1998)) to arrive at a plausible, approximate aggregation rule. We will confirm in Section 5 that it empirically outperforms the baselines. As above, we define the binary variable $z_{ijk} = \mathbb{1}[\bar{x}_{ij} > \bar{x}_{ik}]$ (indicating that $j$ is preferred to $k$), so that votes are a set of samples $\{\hat{z}_{ijk}\}$, and assume:

Assumption (Naive independence).

For all voters $i$ and distinct pairs $(j, k)$ and $(j', k')$, the variables $z_{ijk}$ and $z_{ij'k'}$ are independent.

This assumption is never true for $m \geq 3$. To see this, consider the alternatives $\{1, 2, 3\}$, and note that $z_{i12} = 1$ and $z_{i23} = 1$ imply $z_{i13} = 1$ (by transitivity of the underlying cardinal values), which violates independence. Nevertheless, by using this assumption, we can apply our Case 3 strategy by rewriting the log-probability of the data as a sum over the log-probabilities of the pairwise votes:

$\log P \approx \sum_{j < k} \log P_{jk},$

where $P_{jk}$ is the probability of observing the voters’ pairwise comparisons between $j$ and $k$ given $\mathbf{u}$ (ignoring other alternatives). Noting that $d_{jk} = u_j - u_k$ and $s_{ijk}^2 = \sigma_j^2/c_{ij} + \sigma_k^2/c_{ik}$, we can apply our Case 3 solution to find the partial derivatives of $\log P$, evaluated at $\mathbf{u} = 0$, with respect to each $u_j$. Summing across alternative pairs yields:

$w_{ij} = \sum_{k \neq j} \frac{2\hat{z}_{ijk} - 1}{s_{ijk}}. \qquad (1)$

The above solution might be improved by examining the following failure mode, which arises on account of the Naive Independence assumption. If the best alternative is observed significantly less often than the second best alternative, the second best alternative will tend to receive more positive weight, even if all voters report the correct pairwise ordering: the best alternative’s comparisons all carry large $s_{ijk}$ (and hence small weight), while the second best alternative still collects large positive weight from its comparisons with the remaining, frequently observed alternatives. For example, in the case of one voter who reports the correct ordering over three alternatives, if the top alternative has been observed only a handful of times while the other two have been observed many times, the above approximate solution will choose the second best alternative. This is obviously a bad outcome. To avoid it, we propose that each alternative’s weight be normalized by the total absolute weight it would otherwise receive, yielding normalized weights:

$w_{ij}^{\text{norm}} = \frac{w_{ij}}{\sum_{i'} \sum_{k \neq j} 1/s_{i'jk}},$

where $s_{ijk}$ is defined as above. Our experiments test both the unnormalized ($w$) and normalized ($w^{\text{norm}}$) versions of the rule. ∎
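A sketch of both variants (our own illustrative implementation, assuming equal observation variance):

```python
import numpy as np

def case4_scores(rank_pos, counts, sigma2=1.0, normalize=False):
    """Case 4 (equation 1): W_j = Σ_i Σ_{k≠j} (2ẑ_ijk - 1) / s_ijk,
    optionally normalized by the total absolute weight per alternative.

    rank_pos: (n, m) position of each alternative in each voter's ranking
              (0 = top), so z_ijk = 1 iff rank_pos[i, j] < rank_pos[i, k]
    counts:   (n, m) observation counts c_ij
    """
    n, m = counts.shape
    scores, totals = np.zeros(m), np.zeros(m)
    for j in range(m):
        for k in range(m):
            if k == j:
                continue
            s_jk = np.sqrt(sigma2 / counts[:, j] + sigma2 / counts[:, k])
            z = (rank_pos[:, j] < rank_pos[:, k]).astype(float)
            scores[j] += ((2 * z - 1) / s_jk).sum()
            totals[j] += (1.0 / s_jk).sum()  # max absolute weight for j
    return scores / totals if normalize else scores
```

Note `rank_pos` is the inverse permutation of the `rankings` array from the generator sketch (`rank_pos = np.argsort(rankings, axis=1)`).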

Case 5 (Many alternatives, votes are top choice only).

There are $m$ arms, and each voter $i$ provides their top choice $j_i$, indicating that they most prefer $j_i$ (i.e., $\bar{x}_{ij_i} \geq \bar{x}_{ik}$ for all $k$).

Approximate solutions.

Let $\phi_{ij}$ and $\Phi_{ij}$ denote the Gaussian PDF and CDF for $\mathcal{N}(u_j, \sigma_j^2/c_{ij})$, the distribution of $\bar{x}_{ij}$. We have that the probability of voter $i$ selecting alternative $j$ is equal to the probability that the largest order statistic of $\{\bar{x}_{ik} : k \neq j\}$ is less than $\bar{x}_{ij}$:

$P_i(j \mid \mathbf{u}) = \int \phi_{ij}(x) \prod_{k \neq j} \Phi_{ik}(x)\, dx = \mathbb{E}_{x \sim \phi_{ij}}\Big[\prod_{k \neq j} \Phi_{ik}(x)\Big].$

The log-likelihood of the data is therefore:

$L(\mathbf{u}) = \sum_i \log P_i(j_i \mid \mathbf{u}),$

where the expectation for each voter is taken with respect to that voter’s top choice (we are abusing notation slightly, as $j_i$ differs across voters). While there appears to be no way to maximize this analytically Hill ([n. d.]), we can compute the gradient with respect to $\mathbf{u}$. Writing $S_i(x) = \prod_{k \neq j_i} \Phi_{ik}(x)$ for the score of a sample $x$:

$\frac{\partial L}{\partial u_{j_i}} \ni \underbrace{\frac{1}{P_i(j_i \mid \mathbf{u})}}_{A}\, \mathbb{E}_{x \sim \phi_{ij_i}}\Big[\underbrace{\frac{x - u_{j_i}}{\sigma_{j_i}^2/c_{ij_i}}\, S_i(x)}_{B}\Big], \qquad \frac{\partial L}{\partial u_k} \ni \underbrace{\frac{1}{P_i(j_i \mid \mathbf{u})}}_{A}\, \mathbb{E}_{x \sim \phi_{ij_i}}\Big[\underbrace{-\phi_{ik}(x)\, \frac{S_i(x)}{\Phi_{ik}(x)}}_{C}\Big] \quad (k \neq j_i),$

where we use the log derivative trick of the REINFORCE Williams (1992) gradient estimator together with the product rule, and $A$, $B$, and $C$ are defined accordingly.

We thus have a method for Monte Carlo estimation of the gradient of the log likelihood. Each term has an intuitive justification. $A$ represents a weight for each voter: a voter whose vote is in line with the current guess of the underlying arm means has less weight on the gradient. $B$ is part of the typical REINFORCE Williams (1992) objective and corresponds to increasing the probability in regions where the score (the product of the CDFs of the remaining arms) is high. Finally, $C$ is the correction term that appears due to the dependence of the score function on the arm means; $C$ incentivizes decreasing the means of the arms that are not voted for. Initializing $\mathbf{u} = 0$, we can either compute the gradient once and take the maximal component to be the winner (as in Case 3, but without the optimality guarantee) or use the gradient ascent algorithm to find an optimum. We will do the former, and call it the Case 5 “Monte Carlo approximation.”

In terms of implementation, $A$, $B$, and $C$ are straightforward and can be computed with a library that provides Gaussian density functions. In particular, we note that each component of $C$ consists of the product of CDFs and a single PDF.
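One possible implementation of the Monte Carlo estimate at $\mathbf{u} = 0$ is sketched below (assuming SciPy’s Gaussian routines and equal, known pull variance; per-arm variances substitute directly). This is an illustration of the estimator described above, not the exact code used in our experiments:

```python
import numpy as np
from scipy.stats import norm

def case5_mc_scores(top, counts, sigma2=1.0, n_samples=100, seed=0):
    """Monte Carlo estimate of the log-likelihood gradient at u = 0 (Case 5).

    top:    (n,) index of each voter's top choice
    counts: (n, m) observation counts c_ij
    """
    rng = np.random.default_rng(seed)
    n, m = counts.shape
    s = np.sqrt(sigma2 / counts)  # std of each voter's per-arm estimate
    grad = np.zeros(m)
    for i in range(n):
        j = top[i]
        others = [k for k in range(m) if k != j]
        x = rng.normal(0.0, s[i, j], size=n_samples)  # samples of voter i's estimate of arm j
        cdfs = norm.cdf(x[:, None], loc=0.0, scale=s[i, others])
        score = cdfs.prod(axis=1)                     # S_i(x) = Π_{k≠j} Φ_ik(x)
        p_i = score.mean()                            # ≈ P_i(j | u = 0)
        # Chosen arm (term B): log-derivative trick, ∂ log φ_ij(x) / ∂u_j = x / s².
        grad[j] += (x / s[i, j] ** 2 * score).mean() / p_i
        # Other arms (term C): ∂Φ_ik(x)/∂u_k = -φ_ik(x); divide out the k-th CDF.
        pdfs = norm.pdf(x[:, None], loc=0.0, scale=s[i, others])
        grad[others] += -(pdfs * score[:, None]
                          / np.clip(cdfs, 1e-12, None)).mean(axis=0) / p_i
    return grad  # argmax gives the "Monte Carlo approximation" winner
```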

As computing a good approximation using the Monte Carlo strategy can be expensive and requires known pull variances $\sigma_j^2$, we propose two analytical approximations that only require the ratios of the variances to be known. First, noting that the events $\{\bar{x}_{ij_i} > \bar{x}_{ik}\}$ and $\{\bar{x}_{ij_i} > \bar{x}_{il}\}$ are positively correlated for all $k, l \neq j_i$, we have $P_i(j_i \mid \mathbf{u}) \geq \prod_{k \neq j_i} P(\bar{x}_{ij_i} > \bar{x}_{ik})$, which gives the lower bound:

$L(\mathbf{u}) \geq \sum_i \sum_{k \neq j_i} \log \Phi\!\left(d_{j_i k} / s_{i j_i k}\right).$

We can now apply the same argument as in Case 4 and approximately optimize this lower bound by following its gradient at $\mathbf{u} = 0$. This leads to the same weights as our unnormalized Case 4 rule (i.e., weigh votes according to equation 1) for the observed comparisons (i.e., all pairs involving each voter’s top choice). We call this the Case 5 “lower bound approximation”.

A second approach makes the following simple observation: at $\mathbf{u} = 0$, a gradient step in the direction that increases the estimated mean of each voter’s top choice also increases the likelihood of that voter’s vote. To evaluate the gradient at $\mathbf{u} = 0$, one can run through a computation similar to Case 3, or simply take the limit of the Case 3 weight as one of the counts goes to infinity. Assuming each arm has equal observation variance, this yields $w_{ij_i} \propto \sqrt{c_{ij_i}}$ for each voter’s top choice (with $w_{ik} = 0$ for $k \neq j_i$). We call this the Case 5 “zero approximation”.

Both analytical approximations are a bit crude. The lower bound approximation ignores significant dependencies, and the zero approximation doesn’t factor in the counts of non-selected alternatives. In both cases we use the gradient at $\mathbf{u} = 0$, but unlike in Case 3, where this is justified by concavity, there is no similarly strong justification here. Nevertheless, we will see in our experiments that both approximations improve over the plurality baselines. ∎
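Both analytical approximations reduce to simple scoring rules, sketched here under the equal-variance assumption (illustrative code):

```python
import numpy as np

def case5_lower_bound_scores(top, counts, sigma2=1.0):
    """Lower bound approximation: equation 1 restricted to observed pairs
    (each voter's top choice versus every other alternative)."""
    n, m = counts.shape
    scores = np.zeros(m)
    for i in range(n):
        j = top[i]
        for k in range(m):
            if k == j:
                continue
            w = 1.0 / np.sqrt(sigma2 / counts[i, j] + sigma2 / counts[i, k])
            scores[j] += w  # ẑ_ijk = 1 for the top choice ...
            scores[k] -= w  # ... and 0 from alternative k's perspective
    return scores

def case5_zero_approx_scores(top, counts):
    """Zero approximation: w ∝ sqrt(c_ij) for top choices only."""
    scores = np.zeros(counts.shape[1])
    for i, j in enumerate(top):
        scores[j] += np.sqrt(counts[i, j])
    return scores
```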

4.2. Learning an Aggregation Rule

Can we come up with a rule for the many alternative, ordinal rank case (Case 4) that does not rely on the Naive Independence assumption? Although we were unable to do so analytically, we propose to learn an aggregation rule from data. This rule will serve as a useful baseline for our derived rules, and the approach is flexible, in that it can be trained on data generated by any noise model (e.g., an $m$-armed bandit with uniform observation noise). As an additional benefit, the learned rule will output a distribution over outcomes (our derived rules output point estimates).

We require our learned rule to apply in the case of an arbitrary number of voters and alternatives. Ideally, our rule should be a function that is order invariant with respect to voters, and order equivariant with respect to alternatives (permuting the alternatives permutes the results in the same way). Both properties were studied by Zaheer et al.’s work on Deep Sets Zaheer et al. (2017), which investigated the expressiveness of the order invariant sum decomposition and proposed simple neural network layers to model equivariant functions. An alternative approach to accommodating variable numbers of voters and alternatives would be to use recurrent architectures such as LSTMs Hochreiter and Schmidhuber (1997) with respect to each dimension, but this would be sensitive to their orderings.

We adopt the Deep Set architecture $f(\{X_i\}_{i=1}^n) = D\big(\sum_i E(X_i)\big)$, where the input $X_i$ of the $i$-th voter is an $m \times k$ matrix, $m$ is the number of alternatives, and $k$ is the number of features representing each alternative’s count and vote information. We use equivariant functions (in terms of the alternatives) for both the encoder $E$ and decoder $D$, and take the sum across the voters. The decoder terminates in a softmax. This architecture satisfies all desiderata outlined above and outputs a proper distribution over outcomes. We train the network to minimize a negative log likelihood (cross entropy) loss where the targets are the ground truth best outcomes. Training was done via gradient descent for up to 5000 mini-batches of size 128, generated as described in our high variance experiment (Subsection 5.1) with a different random number of voters (sampled uniformly between 5 and 350) and a different number of alternatives (sampled uniformly between 5 and 15) for each mini-batch. We tested 20 random hyperparameter configurations from a search space of 144, and kept the model with the lowest loss. See the Appendix for further details, including specific hyperparameters and a full description of our final architecture.

Figure 2. Architecture for our learned aggregation rule, based on Deep Sets Zaheer et al. (2017). For each voter $i$, vote features (votes and count information) are embedded via a permutation equivariant encoder $E$. Embedded votes are aggregated across voters using a permutation invariant sum and passed through a permutation equivariant decoder $D$ to produce alternative scores.
Num voters 3 10 30 100 300
Case 1 Oracle 1.1642 0.8625 0.5356 0.2390 0.0936
Borda 1.2116 0.9689 0.6629 0.3308 0.1385
Borda+ 1.1863 0.9493 0.6555 0.3270 0.1369
Case 4 1.1760 0.9194 0.6069 0.2890 0.1177
Case 4 (normalized) 1.1879 0.9231 0.6058 0.2886 0.1173
Learned 1.1687 0.9086 0.5935 0.2788 0.1125
Plurality 1.3509 1.2116 1.0089 0.6905 0.3807
Plurality+ 1.3232 1.1904 0.9888 0.6721 0.3680
Case 5 (lower bound) 1.2903 1.1547 0.9434 0.6224 0.3302
Case 5 (zero approx) 1.2847 1.1458 0.9297 0.6074 0.3193
Case 5 (Monte Carlo) 1.2848 1.1413 0.9278 0.6066 0.3178
(a) High Variance
Num voters 3 10 30 100 300
Case 1 Oracle 0.1075 0.0312 0.0102 0.0030 0.0011
Borda 0.1754 0.0631 0.0217 0.0068 0.0023
Borda+ 0.1688 0.0590 0.0205 0.0062 0.0021
Case 4 0.1711 0.0603 0.0208 0.0064 0.0022
Case 4 (normalized) 0.1479 0.0487 0.0163 0.0050 0.0017
Learned 0.1767 0.0605 0.0211 0.0065 0.0022
Plurality 0.3147 0.0898 0.0290 0.0086 0.0029
Plurality+ 0.2586 0.0839 0.0285 0.0086 0.0029
Case 5 (lower bound) 0.2112 0.0740 0.0253 0.0078 0.0026
Case 5 (zero approx) 0.2071 0.0726 0.0253 0.0077 0.0026
Case 5 (Monte Carlo) 0.2089 0.0739 0.0256 0.0078 0.0026
(b) Low Variance
Table 1. Average regret, $u_{a^*} - u_{\hat{a}}$, in ideal conditions. Lower is better. Best non-Oracle rules of each type in bold.
Num voters 3 10 30 100 300
Case 1 Oracle 1.1726 0.8811 0.5579 0.2539 0.1002
Learned (noisy) 1.1728 0.9131 0.6012 0.2861 0.1157
Borda 1.2116 0.9689 0.6629 0.3308 0.1385
Borda+ 1.2100 0.9643 0.6583 0.3281 0.1369
Case 4 1.1780 0.9194 0.6090 0.2918 0.1184
Case 4 (normalized) 1.1902 0.9234 0.6090 0.2912 0.1188
Learned 1.1793 0.9262 0.6161 0.2964 0.1210
Plurality 1.3509 1.2116 1.0089 0.6905 0.3807
Plurality+ 1.3248 1.1889 0.9902 0.6740 0.3681
Case 5 (lower bound) 1.2931 1.1557 0.9466 0.6260 0.3312
Case 5 (zero approx) 1.2874 1.1485 0.9369 0.6160 0.3241
Case 5 (Monte Carlo) 1.2959 1.1548 0.9409 0.6167 0.3243
(a) 50% count noise
Num voters 3 10 30 100 300
Case 1 Oracle 1.2194 0.9883 0.6901 0.3492 0.1470
Learned (noisy) 1.1940 0.9433 0.6321 0.3080 0.1267
Borda 1.2116 0.9689 0.6629 0.3308 0.1385
Borda+ 1.2100 0.9654 0.6589 0.3298 0.1374
Case 4 1.1926 0.9423 0.6319 0.3079 0.1270
Case 4 (normalized) 1.1996 0.9433 0.6323 0.3080 0.1269
Learned 1.1972 0.9481 0.6390 0.3131 0.1291
Plurality 1.3509 1.2116 1.0089 0.6905 0.3807
Plurality+ 1.3331 1.1973 0.9971 0.6834 0.3751
Case 5 (lower bound) 1.3294 1.2165 1.0402 0.7405 0.4255
Case 5 (zero approx) 1.3113 1.1747 0.9701 0.6524 0.3504
Case 5 (Monte Carlo) 1.3067 1.1758 0.9780 0.6592 0.3548
(b) 33% count replacement
Table 2. Average regret, $u_{a^*} - u_{\hat{a}}$, in noisy conditions. Lower is better. Best non-Oracle rules of each type in bold.

5. Experiments

In this section we compare our derived and learned rules to common voting rules in settings of varying uncertainty. We find that our rules consistently outperform anonymous rules, even when there is significant count noise. Code to replicate the experiments is available online at https://github.com/spitis/objective_social_choice.

5.1. Ideal, High Variance Conditions

For this experiment, we generate 100,000 instances of the multi-armed bandit problem with 10 alternatives for different numbers of voters (3, 10, 30, 100, and 300). The ground truth mean of each arm is sampled from a Gaussian prior, and the voter counts are sampled uniformly between 1 and 50. Individual observations are sampled from a high variance Gaussian centered at the corresponding arm’s true mean. The voters then report an ordinal ranking based on their estimated means for each alternative.

We compare the performance of our Case 4 and Case 5 rules as well as our learned voting rule to several baselines: basic plurality vote and Borda count Brandt et al. (2016), naively-modified plurality vote and Borda count (“Plurality+” and “Borda+”), and a Case 1 oracle. The plurality baseline sets $w_{ij} = 1$ for voter $i$’s top choice $j$, and $w_{ik} = 0$ for $k \neq j$. The Borda baseline sets $w_{ij} = m - r$, where $r$ is the rank voter $i$ assigns to alternative $j$. The Plurality+ and Borda+ baselines take the best performing modification of the basic plurality and Borda baselines, where the modification uses the count information in an unjustified but plausible way. The tested modifications include weighing each voter’s scores by: the arithmetic mean of that voter’s counts, the harmonic mean of the counts, or, in each case, the square root or logarithm thereof. The Case 1 oracle sees each voter’s cardinal estimates and acts as an upper bound on performance. For our Case 5 Monte Carlo approximation we averaged 100 samples per voter, which we found performed almost as well as 1000 samples and made simulations cheaper. In all cases, ties are broken by random selection.
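For concreteness, minimal sketches of the anonymous baselines and one naive count-weighted modification follow (the function names and the specific choice of arithmetic-mean weighting are illustrative; our experiments select the best of several such modifications):

```python
import numpy as np

def plurality_scores(rankings):
    """rankings: (n, m), where rankings[i, 0] is voter i's top choice."""
    m = rankings.shape[1]
    return np.bincount(rankings[:, 0], minlength=m).astype(float)

def borda_scores(rankings, voter_weights=None):
    """Borda: the alternative at rank r (0 = top) gets weight m - 1 - r,
    optionally scaled per voter ("Borda+"-style naive modification)."""
    n, m = rankings.shape
    if voter_weights is None:
        voter_weights = np.ones(n)
    scores = np.zeros(m)
    for i in range(n):
        scores[rankings[i]] += voter_weights[i] * np.arange(m - 1, -1, -1)
    return scores

# Example naive modification: weigh each voter by their mean count.
# borda_scores(rankings, voter_weights=counts.mean(axis=1))
```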

Performance, as measured by regret, is shown in Table 1(a). Relative performance in terms of accuracy (not shown) is approximately the same. The different voting rules are grouped according to access to votes and auxiliary information. Our rules consistently beat anonymous baselines. Among pairwise rules, we observe that our learned aggregation rule has the best overall non-oracle performance, but note that the two Case 4 rules are quite close in performance to the learned rule and are significantly cheaper to compute. The Case 5 results show that the zero approximation is consistently better than the lower bound, and very close to the Monte Carlo approximation (which should give near optimal performance). Finally, we note that all pairwise rules outperform all plurality rules (including Case 5). This is not surprising, as plurality rules use less information than pairwise rules.

5.2. Ideal, Low Variance Conditions

We now consider the same experiment as above under lower observation variance: instead of sampling observations with high variance, we sample them with a substantially lower variance. The purpose here is two-fold. First, our learned aggregation rule, which was the best performing rule in high variance conditions, was trained in those exact conditions, and we hypothesize its performance will deteriorate out of domain. Second, we note that the failure mode of the unnormalized Case 4 rule is exacerbated by low variance, and hypothesize that the normalized rule will perform relatively better.

The results, shown in Table 1(b), confirm our hypotheses. As compared to the high variance case, both the learned aggregation rule and the unnormalized version of our Case 4 rule do significantly worse relative to the Borda baseline. The normalized Case 4 rule does significantly better than the other rules in low variance conditions. Interestingly, the Case 5 zero approximation does slightly better in low variance conditions than the Case 5 Monte Carlo approximation; this suggests that accurate Monte Carlo approximation requires more samples under low observation variance and that the zero approximation is near optimal.

5.3. Noisy Count Conditions

We now relax our assumption of perfect count information by introducing significant noise into the counts that are observed by our rules. This impacts all rules except the anonymous baselines (plurality and Borda). We experiment with two types of count noise: percentage noise applied to all counts, and resampled counts. In the percentage noise case, we adjust all reported counts by a percentage of up to 50% (sampled independently and uniformly), rounding to the nearest integer. In the resampled counts case, we replace one third of the reported counts with resampled values (i.e., an integer sampled uniformly between 1 and 50). Otherwise, we follow the same procedure as before. To get an idea of how well we could do if the noise were to be expected, we retrain our learned aggregation rule on data generated according to the percentage noise case (but not the resampled counts case). The experiments for this subsection use high observation variance, as in Subsection 5.1. The results are shown in Tables 2(a) and 2(b).

In the case of count noise, it is unsurprising that the neural network trained under those conditions does best. What is perhaps surprising is how robust the derived rules are to count noise. Both the Case 4 and Case 5 rules beat their respective baselines by a respectable margin, even with inaccurate counts. The same trend continues in the case of count replacement, where our Case 4 rules outperform the Oracle (which is no longer a true Oracle). We note, however, that performance declines more sharply in the count replacement case, which is to be expected since the per-count noise is biased. It is interesting to note that the Case 5 zero approximation is more robust to noise than the Monte Carlo approximation. Overall, the results indicate that even inaccurate count information can have significant value.

6. Conclusion

In this paper, we proposed a generic framework for objective social choice, which seeks to estimate a cardinal ground truth given noisy votes. We considered a bandit-based noise model and proposed several voting rules that utilize auxiliary count information to improve inference relative to anonymous rules. Our empirical results confirm the efficacy of our rules relative to anonymous baselines and demonstrate robustness under noise in the auxiliary information.

The scope of the present work assumes that voters have independent information and is limited to a particular noise model and mode of auxiliary information (experience counts). It would be interesting to extend our objective social choice analysis to cases of dependent information, more general noise models (e.g., noise generated by a contextual bandit Lattimore and Szepesvári (2018)), and other forms of auxiliary information (e.g., a similarity kernel between voters). Another extension might study group composition Hong and Page (2004): if we have some control over voter experience, how should we influence the group of voters to improve voting outcomes? We leave these angles to future work.

Acknowledgments. We thank Nisarg Shah for his guidance throughout this project. We also thank Jimmy Ba, Harris Chan, Mufan Li and the anonymous referees for their helpful comments.

References

  • Arrow (2012) Kenneth J Arrow. 2012. Social choice and individual values. Vol. 12. Yale university press.
  • Arrow et al. (2010) Kenneth J Arrow, Amartya Sen, and Kotaro Suzumura. 2010. Handbook of social choice and welfare. Vol. 2. Elsevier.
  • Bates and Granger (1969) John M Bates and Clive WJ Granger. 1969. The combination of forecasts. Journal of the Operational Research Society 20, 4 (1969).
  • Brandt et al. (2016) Felix Brandt, Vincent Conitzer, Ulle Endriss, Jérôme Lang, and Ariel D Procaccia. 2016. Handbook of computational social choice. Cambridge University Press.
  • Caragiannis and Micha (2017) Ioannis Caragiannis and Evi Micha. 2017. Learning a Ground Truth Ranking Using Noisy Approval Votes.. In IJCAI. 149–155.
  • Caragiannis et al. (2016) Ioannis Caragiannis, Ariel D Procaccia, and Nisarg Shah. 2016. When do noisy votes reveal the truth? ACM Transactions on Economics and Computation (TEAC) 4, 3 (2016), 15.
  • Chen et al. (2017) Richard Y Chen, Szymon Sidor, Pieter Abbeel, and John Schulman. 2017. UCB exploration via Q-ensembles. arXiv preprint arXiv:1706.01502 (2017).
  • Christiano et al. (2017) Paul F Christiano, Jan Leike, Tom Brown, Miljan Martic, Shane Legg, and Dario Amodei. 2017. Deep reinforcement learning from human preferences. In Advances in Neural Information Processing Systems. 4299–4307.
  • Clemen (1989) Robert T Clemen. 1989. Combining forecasts: A review and annotated bibliography. International journal of forecasting 5, 4 (1989).
  • Condorcet (1785) Marie J Condorcet. 1785. Essai sur l’application de l’analyse à la probabilité des décisions rendues à la pluralité des voix. de l’Imprimerie Royale.
  • Conitzer (2013) Vincent Conitzer. 2013. The maximum likelihood approach to voting on social networks. In 2013 51st Annual Allerton Conference on Communication, Control, and Computing (Allerton). IEEE, 1482–1487.
  • Conitzer and Sandholm (2005) Vincent Conitzer and Tuomas Sandholm. 2005. Common Voting Rules As Maximum Likelihood Estimators. (2005), 8. http://dl.acm.org/citation.cfm?id=3020336.3020354
  • Dickinson (1973) JP Dickinson. 1973. Some statistical results in the combination of forecasts. Journal of the Operational Research Society 24, 2 (1973).
  • Dickinson (1975) JP Dickinson. 1975. Some comments on the combination of forecasts. Journal of the Operational Research Society 26, 1 (1975).
  • Dietterich (2000) Thomas G Dietterich. 2000. Ensemble methods in machine learning. In International workshop on multiple classifier systems. Springer.
  • Fleiss (1993) JL Fleiss. 1993. Review papers: The statistical basis of meta-analysis. Statistical methods in medical research 2, 2 (1993).
  • Genest and McConway (1990) Christian Genest and Kevin J McConway. 1990. Allocating the weights in the linear opinion pool. Journal of Forecasting 9, 1 (1990).
  • Genest et al. (1986) Christian Genest, James V Zidek, et al. 1986. Combining probability distributions: A critique and an annotated bibliography. Statist. Sci. 1, 1 (1986).
  • Gompers et al. (2003) Paul Gompers, Joy Ishii, and Andrew Metrick. 2003. Corporate governance and equity prices. The quarterly journal of economics 118, 1 (2003), 107–156.
  • Granger (1989) Clive WJ Granger. 1989. Invited review combining forecasts—twenty years later. Journal of Forecasting 8, 3 (1989).
  • Granger and Ramanathan (1984) Clive WJ Granger and Ramu Ramanathan. 1984. Improved methods of combining forecasts. Journal of forecasting 3, 2 (1984).
  • Harsanyi (1955) John C Harsanyi. 1955. Cardinal welfare, individualistic ethics, and interpersonal comparisons of utility. Journal of political economy 63, 4 (1955).
  • Hill ([n. d.]) Joshua E Hill. [n. d.]. The minimum of n independent normal distributions. ([n. d.]).
  • Hochreiter and Schmidhuber (1997) Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long short-term memory. Neural computation 9, 8 (1997), 1735–1780.
  • Hong and Page (2004) Lu Hong and Scott E Page. 2004. Groups of diverse problem solvers can outperform groups of high-ability problem solvers. Proceedings of the National Academy of Sciences 101, 46 (2004), 16385–16389.
  • Jacobs (1995) Robert A Jacobs. 1995. Methods for combining experts’ probability assessments. Neural computation 7, 5 (1995).
  • Kang et al. (2018) Dongyeop Kang, Waleed Ammar, Bhavana Dalvi, Madeleine van Zuylen, Sebastian Kohlmeier, Eduard Hovy, and Roy Schwartz. 2018. A Dataset of Peer Reviews (PeerRead): Collection, Insights and NLP Applications. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers). Association for Computational Linguistics, New Orleans, Louisiana, 1647–1661. https://doi.org/10.18653/v1/N18-1149
  • Kingma and Ba (2014) Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014).
  • Lattimore and Szepesvári (2018) Tor Lattimore and Csaba Szepesvári. 2018. Bandit algorithms. preprint (2018).
  • Lewis (1998) David D Lewis. 1998. Naive (Bayes) at forty: The independence assumption in information retrieval. In European conference on machine learning. Springer.
  • Mallows (1957) Colin L Mallows. 1957. Non-null ranking models. I. Biometrika 44, 1/2 (1957).
  • May (1952) Kenneth O May. 1952. A set of independent necessary and sufficient conditions for simple majority decision. Econometrica: Journal of the Econometric Society (1952), 680–684.
  • McConway (1981) Kevin J McConway. 1981. Marginalization and linear opinion pools. J. Amer. Statist. Assoc. 76, 374 (1981), 410–414.
  • Perrone and Cooper (1992) Michael P Perrone and Leon N Cooper. 1992. When networks disagree: Ensemble methods for hybrid neural networks. (1992).
  • Procaccia and Rosenschein (2006) Ariel D Procaccia and Jeffrey S Rosenschein. 2006. The distortion of cardinal preferences in voting. In International Workshop on Cooperative Information Agents. Springer.
  • Procaccia et al. (2015) Ariel D Procaccia, Nisarg Shah, and Eric Sodomka. 2015. Ranked voting on social networks. In Twenty-Fourth International Joint Conference on Artificial Intelligence.
  • Rokach (2010) Lior Rokach. 2010. Ensemble-based classifiers. Artificial Intelligence Review 33, 1-2 (2010).
  • Sen (2018) Amartya Sen. 2018. Collective choice and social welfare. Harvard University Press.
  • Tresp and Taniguchi (1995) Volker Tresp and Michiaki Taniguchi. 1995. Combining estimators using non-constant weighting functions. In Advances in neural information processing systems. 419–426.
  • Von Neumann and Morgenstern (1953) John Von Neumann and Oskar Morgenstern. 1953. Theory of games and economic behavior. (1953).
  • Wallis (2011) Kenneth F Wallis. 2011. Combining forecasts–forty years later. Applied Financial Economics 21, 1-2 (2011).
  • Weymark (1991) John A Weymark. 1991. A reconsideration of the Harsanyi–Sen debate on utilitarianism. Interpersonal comparisons of well-being 255 (1991).
  • Williams (1992) Ronald J Williams. 1992. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine learning 8, 3-4 (1992), 229–256.
  • Young (1988) H Peyton Young. 1988. Condorcet’s theory of voting. American Political science review 82, 4 (1988).
  • Yue et al. (2012) Yisong Yue, Josef Broder, Robert Kleinberg, and Thorsten Joachims. 2012. The k-armed dueling bandits problem. J. Comput. System Sci. 78, 5 (2012), 1538–1556.
  • Zaheer et al. (2017) Manzil Zaheer, Satwik Kottur, Siamak Ravanbakhsh, Barnabas Poczos, Ruslan R Salakhutdinov, and Alexander J Smola. 2017. Deep sets. In Advances in neural information processing systems. 3391–3401.

Appendices

Appendix A Notation Glossary

$A$: finite set of alternatives
$a^*$: optimal alternative
$\hat{a}$: alternative chosen by rule $f$
$\mathrm{Bern}(p)$: Bernoulli distribution with parameter $p$
$c_{ij}$: number of observations of arm $j$ by voter $i$
$c_j$: total observations from arm $j$, $c_j = \sum_i c_{ij}$
$C$: auxiliary information (context) space
$d$: $u_1 - u_2$ in the 2 alternative case
$\mathrm{erf}$: error function
$f$: social choice rule
$G$: noise process
$n$: number of voters
$m$: number of alternatives
$\mathcal{N}(\mu, \sigma^2)$: normal distribution with mean $\mu$ and variance $\sigma^2$
$P$: probability of the observed data (votes)
$P_{jk}$: probability of data considering only alternatives $j$ and $k$
$\Phi$: normal cumulative distribution function
$\phi$: normal probability density function
$s_i$: standard deviation of $\hat{d}_i$ in the 2 alternative case, $s_i^2 = \sigma_1^2/c_{i1} + \sigma_2^2/c_{i2}$ (analogously $s_{ijk}$ for the pair $(j, k)$)
$x_j$: an observation from arm $j$, $x_j \sim \mathcal{N}(u_j, \sigma_j^2)$
$\sigma_j^2$: the variance of arm $j$
$U$: space of valid ground truth objective functions
$u$: ground truth objective function
$w_i$: weight for voter $i$’s choice in the 2 alternative case
$w_{ij}$: weight given on account of voter $i$ to alternative $j$
$V$: observation space (votes of all voters)
$\hat{u}_i$: cardinal estimate of $u$ by voter $i$, with $\hat{u}_{ij} = \bar{x}_{ij}$
$\bar{x}_{ij}$: mean of voter $i$’s observations of alternative $j$
$z_i$: the binary variable $\mathbb{1}[\hat{d}_i > 0]$
$\hat{z}_i$: a sample of $z_i$
$z_{ijk}$: the binary variable $\mathbb{1}[\bar{x}_{ij} > \bar{x}_{ik}]$

Appendix B Derivations

b.1. Case 3 Details

Case 3 (2 alternatives, votes are ordinal ranks).

There are arms, and each voter provides an ordinal ranking indicating that they value higher than (i.e., ).

As above, we have $\hat{d}_i \sim \mathcal{N}(d, s_i^2)$, where $s_i^2 = \sigma_1^2/c_{i1} + \sigma_2^2/c_{i2}$. Denoting the standard normal CDF by $\Phi$, and defining the binary variable $z_i = \mathbb{1}[\hat{d}_i > 0]$, we have $z_i \sim \mathrm{Bern}(\Phi(d/s_i))$ (the Bernoulli distribution parameterized by $\Phi(d/s_i)$, evaluated at $z_i$). Our votes consist of a set of samples $\{\hat{z}_i\}_{i=1}^n$. Since adding a constant to the underlying means has no effect on the likelihood, a direct inference about $\mathbf{u}$ is impossible and we instead seek to estimate the difference $d = u_1 - u_2$. We want to choose $\hat{d}$ to maximize the log-probability of the data:

$L(d) = \sum_i \hat{z}_i \log \Phi(d/s_i) + (1 - \hat{z}_i) \log \Phi(-d/s_i).$

Noting that $1 - \Phi(t) = \Phi(-t)$, we could try to optimize directly with respect to $d$ by setting $\partial L/\partial d = 0$, but this appears intractable:

$\frac{\partial L}{\partial d} = \sum_i \frac{\phi(d/s_i)}{s_i}\left[\frac{\hat{z}_i}{\Phi(d/s_i)} - \frac{1 - \hat{z}_i}{\Phi(-d/s_i)}\right] = 0.$

However, since $L$ is concave (proof below), its derivative evaluated at $d = 0$ points in the direction of the MLE solution and we can use this fact to find the $\hat{a}$ corresponding to the MLE estimate of $\mathrm{sign}(d)$ by evaluating $\partial L/\partial d$ at $d = 0$. The intuition behind this trick is best understood visually—see Figure 3 (left). To evaluate the sign of $\partial L/\partial d|_{d=0}$, we note that $\Phi(0) = 1/2$ and the result follows:

$\left.\frac{\partial L}{\partial d}\right|_{d=0} = 2\phi(0)\sum_i \frac{2\hat{z}_i - 1}{s_i} \;\propto\; \sum_i \frac{2\hat{z}_i - 1}{s_i}.$

All that remains is for us to show that $L$ is concave. This ensures that it has a global maximum (possibly at $\pm\infty$, if all votes agree) and that the gradient evaluated at 0 reveals its direction (see Figure 3 (left)). There are a few ways to prove concavity. We do so by showing that the second derivative of $L$ is negative everywhere. To do so, we use the following definitions:

$t_i = d/s_i, \qquad g(t) = \frac{\phi(t)}{\Phi(t)}, \qquad h(t) = \frac{\phi(t)}{1 - \Phi(t)}.$

We have:

$\frac{\partial L}{\partial d} = \sum_i \frac{1}{s_i}\left[\hat{z}_i\, g(t_i) - (1 - \hat{z}_i)\, h(t_i)\right],$

so that, using the quotient rule:

$g'(t) = \frac{-\phi(t)\big(t\,\Phi(t) + \phi(t)\big)}{\Phi(t)^2}, \qquad h'(t) = \frac{\phi(t)\big(\phi(t) - t\,(1 - \Phi(t))\big)}{(1 - \Phi(t))^2}.$

Now, $g$ appears in $\partial L/\partial d$ with weight of either 0 or positive 1, and $h$ appears in $\partial L/\partial d$ with weight of either 0 or negative 1. Thus, to show that $\partial^2 L/\partial d^2$ (the second derivative of $L$) is negative everywhere, we can show that $g'$ is negative for all values of $d$ and $s_i$ and that $h'$ is positive for all values of $d$ and $s_i$. The denominator in each is always positive and can be ignored. The numerator of $g'$ contains an always negative factor of $-\phi(t)$, which can be cancelled if we reverse the sign; likewise, the always positive factor $\phi(t)$ in the numerator of $h'$ can be dropped. Finally, the positive chain-rule factors $1/s_i$ maintain the sign, which allows us to consider the resulting expressions as functions of the single variable $t$. The proof thus reduces to showing that both of the following two functions are positive for all values of $t$:

$f_1(t) = t\,\Phi(t) + \phi(t), \qquad f_2(t) = \phi(t) - t\,(1 - \Phi(t)).$

This can be done visually (by plotting), or analytically, by showing that the derivative of the first (second) function is strictly positive (negative) and that the functions have limit zero as $t \to -\infty$ and $t \to +\infty$, respectively.

Figure 3. As $L$ is concave in $d$ (blue curve), its partial derivative evaluated at $d = 0$ (red line) points in the direction of the MLE solution (yellow star).

Appendix C Details Of Learned Aggregation

We adopt the Deep Set architecture:

$f(\{X_i\}_{i=1}^n) = D\Big(\sum_{i=1}^n E(X_i)\Big),$

where the input $X_i$ of the $i$-th voter is an $m \times k$ matrix, $m$ is the number of alternatives, and $k$ is the number of features representing each alternative’s count and vote information. To encode count and vote information we use a single real-valued feature for each, so that $k = 2$, and the $j$-th alternative for the $i$-th voter has a count feature and a vote feature. For counts, we normalize count values to be in $[0, 1]$ by dividing by the maximum count value used in our experiments, so that the count feature for voter $i$’s alternative $j$ is proportional to $c_{ij}$. For votes, we linearly interpolate feature values across rank positions, so that voter $i$’s top ranked alternative receives the maximum vote feature, and the bottom ranked alternative receives the minimum.

We use the same parametric form of equivariant function for both the encoder $E$ and decoder $D$, which is the same form proposed and used by Zaheer et al. (2017). Letting $\mathrm{conv}_\theta$ denote a convolutional layer parameterized by $\theta$, each equivariant layer is computed as $y = \mathrm{conv}_{\theta_1}(x) + \mathrm{conv}_{\theta_2}(\mathrm{pool}(x))$, where $\mathrm{pool}$ is an order invariant function computed feature-wise across the input. The aggregation operation is taken across the voters, and the decoder terminates in a softmax.
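A sketch of one such layer and the overall network in PyTorch follows. It is illustrative only: we use linear layers and mean-pooling to stand in for the convolutional parameterization and pooling function described above, and the layer sizes are placeholders:

```python
import torch
import torch.nn as nn

class EquivariantLayer(nn.Module):
    """Permutation equivariant layer over the alternatives (dim 1),
    in the style of Zaheer et al. (2017): y = Lambda(x) + Gamma(pool(x))."""
    def __init__(self, d_in, d_out):
        super().__init__()
        self.lam = nn.Linear(d_in, d_out)
        self.gam = nn.Linear(d_in, d_out)

    def forward(self, x):
        # x: (batch, m_alternatives, features); pooling over alternatives
        # keeps the layer equivariant to their ordering.
        return self.lam(x) + self.gam(x.mean(dim=1, keepdim=True))

class LearnedRule(nn.Module):
    """Equivariant encoder per voter, invariant sum across voters,
    equivariant decoder producing one logit per alternative."""
    def __init__(self, d_in=2, hidden=64):
        super().__init__()
        self.enc = nn.Sequential(EquivariantLayer(d_in, hidden), nn.ReLU(),
                                 EquivariantLayer(hidden, hidden), nn.ReLU())
        self.dec = nn.Sequential(EquivariantLayer(hidden, hidden), nn.ReLU(),
                                 EquivariantLayer(hidden, 1))

    def forward(self, x):
        # x: (batch, n_voters, m_alternatives, 2) count and vote features
        b, n, m, k = x.shape
        h = self.enc(x.reshape(b * n, m, k)).reshape(b, n, m, -1)
        h = h.sum(dim=1)                # permutation invariant across voters
        return self.dec(h).squeeze(-1)  # (batch, m) logits; softmax in the loss
```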

We train the network to minimize a negative log likelihood (cross entropy) loss where the targets are the ground truth outcomes. Training was done via gradient descent, using the Adam optimizer Kingma and Ba (2014), for up to 5000 mini-batches of size 128, generated as described in our high variance experiment (Subsection 5.1) with a different random number of voters (sampled uniformly between 5 and 350) and different number of alternatives (sampled uniformly between 5 and 15) for each mini-batch.

To settle on a particular network configuration, we tested 20 random hyperparameter configurations from a search space of 144, and kept the model with the lowest loss. The search space consisted of the product of:

  • learning_rate

  • num_encoder_layers

  • num_decoder_layers

The configuration with the lowest final loss was used in our experiments. This same configuration was used to train the “noisy” network for Subsection 5.3. We note that the next best configuration (with 3 encoder layers and 1 decoder layer) achieved very similar performance (1.018 versus 0.998 loss).