1 Introduction
Social choice theory [4] studies voting rules (also known as social choice or social welfare functions) that compute a winning alternative or a ranking of the available alternatives from voter preferences. Typically, the preference of each voter is assumed to be a ranking over all available alternatives. We deviate from this assumption and, instead, focus our attention on settings in which each voter (or, better, agent for our purposes) ranks only a small subset of the alternatives. Such incomplete rankings seem to be nonstandard in the literature; [7, 8, 15] are some notable exceptions.
The setting we have in mind is motivated by crowdsourcing [10]
and rating applications. For example, assume that a requester would like to rank a huge set of alternatives using expert opinions from a crowd of workers. Asking each worker for her opinion on the whole set of alternatives (i.e., for a full ranking) would likely result in poor information. Most probably, the worker will not be aware of most of the alternatives. Even if she tries to obtain additional information, coming up with consistent comparisons between alternatives that she knows well and alternatives that she has no idea about would be practically impossible, given their huge number. Instead, this task becomes much easier if workers focus on small sets of alternatives. The requester could give each worker a different small set of alternatives to rank. Then, processing smaller inputs would be easier for the requester as well.
This approach has been recently exploited in the context of ordinal peer grading in MOOCs; see [5, 6, 14, 16] for approaches of this flavour. In such settings, the task of grading an exam with many participating students is outsourced to the students themselves. Each student is given a small number of exam papers to rank and the final grading (a ranking of all students) is obtained by aggregating the inputs provided by the students.
In a rating application we envision, users of a hotel booking system are asked to rank hotels they have recently stayed at in a specific city, and the rating application aims to compute a full ranking of the hotels (or, possibly, different rankings for different relevant criteria). Clearly, each user can provide meaningful feedback for just a few hotels. Again, in this scenario, the system might ask each user to focus only on a subset of the hotels she knows.
Besides the different sets of alternatives each individual is asked to rank in the above scenarios, another implicit feature is that there is an underlying true ranking of all alternatives (e.g., the ranking of exam papers in terms of their quality or the ranking of hotels in terms of their facilities) that we would like to compute when aggregating the individual preferences. Can we do so using simple voting-like rules? We follow an optimization approach that can be described with the following question: assuming that we have partial knowledge of the underlying true ranking and access to sampled profiles, which rule yields an outcome that is as consistent as possible with (our partial knowledge of) the underlying true ranking when applied to the sampled profiles?
We study the above question for positional scoring rules (or, simply, scoring rules), which have played a central role in social choice theory. Two factors that have led to this decision are their simplicity and effectiveness; simplicity follows from their definition and effectiveness is justified by our experimental results. In particular, we consider settings in which each agent is asked to rank the same number $d$ of alternatives. A positional scoring rule in our setting is defined by a scoring vector $(s_1, s_2, \ldots, s_d)$. It takes as input the incomplete individual rankings of the agents and computes scores for alternatives as follows. An alternative gets $s_i$ points each time it is ranked $i$th by an agent, and its score is its total number of points. The final ranking is obtained by ordering all alternatives in terms of their scores, in nonincreasing order. Now, given a profile of individual incomplete rankings and desired relations for pairs of alternatives (to be thought of as parts of the underlying true ranking), with corresponding weights indicating the importance of each relation, we would like to compute the positional scoring rule whose outcome, when applied on the profile, maximizes the total weight of the desired pairwise relations it satisfies. This is related to learning-theoretic studies where a scoring rule that is as consistent as possible with given examples is sought; e.g., see the papers by Boutilier et al. [2] and Procaccia et al. [13]. The main difference of the current paper (besides our assumption on profiles with incomplete rankings) is its optimization flavour. We refer to this seemingly fundamental optimization problem as OptPSR. Our technical contribution is as follows. We present an exact algorithm that solves OptPSR in time that depends exponentially only on the parameter $d$ (Section 3). Hence, our algorithm runs in polynomial time when $d$ is constant. For instances with high values of $d$, we show that a simple approval voting rule (that uses a scoring vector with $1$s followed by $0$s) yields a $\frac{1}{d}$-approximate solution. We show that this bound is tight by constructing an instance in which no approval voting rule achieves a better approximation ratio. We prove that OptPSR is hard to approximate and present an explicit constant inapproximability bound. This result follows by an approximation-preserving reduction from the problem MAX3LIN2 of maximizing the number of satisfied equations in an overdetermined system of linear equations modulo $2$ and exploits a famous inapproximability result due to Håstad [9].
These results can be found in Section 4. In Section 5, we describe experiments on real-world and synthetic profiles. Our experimental results show that scoring rules perform remarkably well and recover almost all of the desired constraints in many interesting scenarios; this justifies our choice to study scoring rules (and the optimization problem OptPSR) in the first place. We begin with preliminary definitions in Section 2 and conclude with open problems in Section 6.
2 Problem statement
We consider settings with a set $N$ of agents and a set $A$ of alternatives. Agent $i$ expresses her preference over a subset $A_i \subseteq A$ of alternatives; her preference is a ranking of the alternatives in $A_i$. A preference profile (or, simply, a profile) consists of the preferences of all agents. In this work, we assume that all agents have the same number of alternatives in their preference, i.e., $|A_i| = d$ for each agent $i$.
A social welfare function takes as input a profile $P$ and outputs a ranking of all alternatives in $A$. A positional scoring rule (or, simply, a scoring rule) is a social welfare function that uses a scoring vector $\mathbf{s} = (s_1, s_2, \ldots, s_d)$ with $s_i \geq s_{i+1}$ for $i = 1, \ldots, d-1$ and $s_d \geq 0$; the alternative at position $i$ in each vote is assigned $s_i$ points and the ranking of the alternatives is produced by ordering them in monotone nonincreasing order in terms of their total points (or score). Formally, for an alternative $a$, let $n_i(a, P)$ denote the number of agents that rank $a$ at position $i$ in profile $P$. Then, given a scoring rule with vector $\mathbf{s}$, the score of alternative $a$ is defined as $\mathrm{sc}(a, P) = \sum_{i=1}^{d} s_i \cdot n_i(a, P)$.
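As an illustration, the score computation just described can be sketched in a few lines of Python (a minimal sketch; the function and variable names are ours, not from the paper):

```python
from collections import defaultdict

def positional_scores(profile, s):
    """Score alternatives under the scoring vector s = (s_1, ..., s_d).

    profile: list of incomplete rankings; each ranking is a list of
    exactly d alternatives, ordered from most to least preferred.
    An alternative gets s[i] points whenever it is ranked at position i.
    """
    scores = defaultdict(float)
    for ranking in profile:
        for position, alternative in enumerate(ranking):
            scores[alternative] += s[position]
    return dict(scores)

def aggregate_ranking(profile, s):
    """Order alternatives by nonincreasing score (ties broken by name)."""
    scores = positional_scores(profile, s)
    return sorted(scores, key=lambda a: (-scores[a], a))
```

For instance, with $d = 3$ and the vector $(2, 1, 0)$, three rankings over three alternatives yield an aggregate ranking by total points.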
We also assume that we have access to a set $C$ of constraints that represents our (possibly partial) knowledge of an objective set of pairwise relations between the alternatives. Each constraint in $C$ is given by an ordered pair of alternatives $(a, b)$, has a corresponding nonnegative weight (of importance) $w_{(a,b)}$, and requires that alternative $a$ is ranked higher than alternative $b$ in the outcome of the scoring rule. For a pair of alternatives $(a, b)$, let $\Delta_i(a, b) = n_i(a, P) - n_i(b, P)$. Now, observe that, in order for alternative $a$ to be ranked above $b$ with certainty in the final ranking, it must be $\mathrm{sc}(a, P) > \mathrm{sc}(b, P)$ and, equivalently, $\sum_{i=1}^{d} s_i \cdot \Delta_i(a, b) > 0$. Using $\Delta(a, b) = (\Delta_1(a, b), \ldots, \Delta_d(a, b))$, the above expression can be compactly written as the dot product $\mathbf{s} \cdot \Delta(a, b) > 0$.
For our purposes, instead of thinking of a profile as the set of rankings provided by the agents, it is convenient to describe it using the quantities $\Delta(a, b)$ for every constraint $(a, b)$ in $C$; we use the notation $\Delta$ to denote the set of these quantities and will simply refer to it as the profile.
Now, problem OptPSR (standing for “optimizing positional scoring rules”) is defined as follows. We are given a profile $\Delta$ and a set $C$ of constraints. The goal of OptPSR is to find the scoring rule $\mathbf{s}$ that produces a ranking of all alternatives so that the total weight (or gain)
$G(\mathbf{s}) = \sum_{(a,b) \in C} w_{(a,b)} \cdot \mathbb{1}\{\mathbf{s} \cdot \Delta(a, b) > 0\}$
of satisfied constraints is maximized. The quantity $\mathbb{1}\{X\}$ takes value $1$ if $X$ is true and $0$ otherwise.
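Under this representation, evaluating the gain of a given scoring vector is straightforward; the following sketch (our own notation, not from the paper) treats the profile as a list of pairs of a difference vector and a weight, one pair per constraint:

```python
def gain(s, constraints):
    """Total weight of constraints satisfied by scoring vector s.

    constraints: list of (delta, w) pairs, where delta[i] is the
    difference between the two constrained alternatives in the number
    of appearances at position i, and w is the constraint's weight.
    A constraint is satisfied when the dot product s . delta is
    strictly positive.
    """
    total = 0.0
    for delta, w in constraints:
        if sum(si * di for si, di in zip(s, delta)) > 0:
            total += w
    return total
```

Note that the strict inequality matters: a scoring vector that makes the two scores equal does not satisfy the constraint.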
Let us now give an equivalent view of OptPSR. A scoring rule can be thought of as a point in $\mathbb{R}^d$ and, in particular, in the region $R$ of $\mathbb{R}^d$ formed by the inequalities $s_i \geq s_{i+1}$ for $i = 1, \ldots, d-1$ and $s_d \geq 0$ that define all valid scoring vectors. We can define subregions of $R$ by considering any subset $C'$ of constraints and the inequality $\mathbf{s} \cdot \Delta(a, b) > 0$ for every constraint in $C'$ associated with the pair of alternatives $(a, b)$, together with the inequality $\mathbf{s} \cdot \Delta(a, b) \leq 0$ for every constraint in $C \setminus C'$. In this way, the collection of all subsets of constraints in $C$ partitions $R$ into disjoint subregions (of course, some of them may be infeasible). Hence, in order to maximize the gain, it suffices to find any point in the nonempty subregion of $R$ that satisfies the subset of constraints with maximum total weight.
To do so, we can enumerate all $2^{|C|}$ subsets of constraints of $C$, check feasibility of the corresponding regions using linear programming, and report any point in the feasible subregion that yields the highest gain. This algorithm takes time polynomial in $2^{|C|}$ and the input size, assuming that it receives $\Delta$ and $C$ as input. In the next section, we will present an algorithm that uses a more clever enumeration of the feasible subregions in order to get the one that yields the maximum gain.
3 An improved OptPSR algorithm
We will present another (exact) OptPSR algorithm whose running time depends exponentially only on the parameter $d$ and, hence, is polynomial when $d$ is a constant.
The algorithm computes a pool of nonempty subregions of $R$, each of which satisfies a different subset of constraints. Initially, the pool consists of region $R$ only and is updated as new constraints of $C$ are considered. When a new constraint is considered, each region in the current pool can be split into two subregions consisting of the points that satisfy the constraint and those that do not satisfy it, respectively; the whole region is retained in the pool if all its points satisfy or (exclusively) do not satisfy the constraint.
In particular, the algorithm considers the constraints of $C$ one by one. At each step of the algorithm, a pool of regions is kept; at the beginning of each step, all regions in the pool are active. For each region $R'$ in the pool, the algorithm keeps the gain $g(R')$ that is obtained by the constraints which have been considered so far and are satisfied by the scoring vectors of region $R'$. The algorithm begins its execution having only region $R$ in the pool. When a new constraint $(a, b)$ with weight $w_{(a,b)}$ is considered, the algorithm attempts to update each active region $R'$ of the pool as follows. It defines the candidate regions $R'_+$ and $R'_-$ such that

$R'_+$ is defined by the inequalities that form $R'$ together with the inequality $\mathbf{s} \cdot \Delta(a, b) > 0$ (that defines the set of points that satisfy the constraint), and

$R'_-$ is defined by the inequalities that form $R'$ together with the inequality $\mathbf{s} \cdot \Delta(a, b) \leq 0$ (that defines the set of points that do not satisfy the constraint).
If both $R'_+$ and $R'_-$ are nonempty (i.e., the corresponding sets of inequalities are feasible), the algorithm includes both $R'_+$ and $R'_-$ in the pool as inactive, sets their gains $g(R'_+) = g(R') + w_{(a,b)}$ and $g(R'_-) = g(R')$, and removes region $R'$ from the pool. If only $R'_+$ is feasible (and $R'_-$ is infeasible), $g(R')$ is increased by $w_{(a,b)}$. If only $R'_-$ is feasible, the algorithm does nothing. In the last two cases, no new region is added to the pool. Clearly, it cannot be the case that both $R'_+$ and $R'_-$ are infeasible. Note that feasibility can be checked efficiently by solving linear programs with $d$ variables and at most $|C| + d$ constraints. At the end of a step (i.e., when there is no other active region in the pool to be considered), the inactive regions become active and the algorithm proceeds with the next step.
When all constraints of $C$ have been considered, the algorithm computes the active region $R^*$ with maximum gain and returns any scoring vector in $R^*$. An example of an execution of the algorithm is depicted in Figure 1.
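To make the pool-of-regions idea concrete, the following sketch specializes it to $d = 2$ (our own simplification, not the general algorithm): fixing $s_1 = 1$ and writing $s_2 = t \in [0, 1]$, every region is an interval of $t$ values, so the linear-programming feasibility test reduces to an interval intersection.

```python
def optimal_rule_d2(constraints):
    """Exact OptPSR search for d = 2 via the pool-of-regions algorithm,
    with scoring vectors normalized as s = (1, t) for t in [0, 1].

    constraints: list of ((d1, d2), w); a constraint is satisfied by
    the vectors (1, t) with d1 + t * d2 > 0.  Each new constraint may
    split a surviving interval of t values in two, exactly as regions
    are split in the general algorithm.
    """
    pool = [(0.0, 1.0, 0.0)]          # (lo, hi, gain) intervals of t
    for (d1, d2), w in constraints:
        new_pool = []
        for lo, hi, g in pool:
            if d2 == 0:               # constraint does not depend on t
                sat, unsat = ((lo, hi), None) if d1 > 0 else (None, (lo, hi))
            elif d2 > 0:              # satisfied to the right of t0
                t0 = -d1 / d2
                sat, unsat = (max(lo, t0), hi), (lo, min(hi, t0))
            else:                     # satisfied to the left of t0
                t0 = -d1 / d2
                sat, unsat = (lo, min(hi, t0)), (max(lo, t0), hi)
            for part, part_gain in ((sat, g + w), (unsat, g)):
                if part is not None and part[0] < part[1]:
                    new_pool.append((part[0], part[1], part_gain))
        pool = new_pool
    lo, hi, best = max(pool, key=lambda region: region[2])
    return (1.0, (lo + hi) / 2), best # interval midpoint avoids boundaries
```

In higher dimensions the intervals become polyhedra and the feasibility test becomes a genuine linear program, but the bookkeeping of gains per region is identical.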
Theorem 1.
Given an instance of OptPSR consisting of a set $C$ of constraints and a profile $\Delta$, the algorithm above correctly returns an optimal solution in time $|C|^{O(d)}$.
Proof.
The correctness of the algorithm should be apparent. It considers the whole space of points in $R$ which correspond to scoring vectors and divides it into all (sub)regions defined for every inclusion-maximal subset of constraints that are satisfied simultaneously. Among all these regions, it finds the one with points that correspond to scoring vectors that satisfy constraints of $C$ with maximum total weight.
The expansion of $R$ into the regions in the pool, when the last constraint of $C$ is considered, can be thought of as a non-complete binary tree $T$ with nodes corresponding to regions (see Figure 1 for an example). $T$ is rooted at a node corresponding to $R$ and is such that each node at level $t$, corresponding to a region $R'$, has two children at level $t+1$ if the region was split and replaced by two subregions at step $t+1$, and has one child otherwise (indicating that the region was retained in the pool during step $t+1$). The total time required to find all regions is proportional to the size of $T$. Since all non-leaf nodes have at least one child, the size of $T$ is at most its height times the number of leaves. The number of leaves is essentially the number of different nonempty regions, which is upper-bounded by the number of different sign patterns that the quantities $\mathbf{s} \cdot \Delta(a, b)$ define over the constraints of $C$. Since these quantities are linear functions over the $d$ coordinates of vector $\mathbf{s}$, a result due to Alon [1] (see also Warren [17]) yields that the total number of different sign patterns is $O(|C|^d)$. For each of the nodes of $T$, feasibility can be checked by solving two linear programs with $d$ variables and at most $|C| + d$ constraints in polynomial time. The theorem follows. ∎
By Theorem 1, we obtain the following corollary. For comparison, the naive algorithm presented at the end of the previous section is polynomial only in the very special case where the number of constraints $|C|$ is at most logarithmic in the input size.
Corollary 2.
The algorithm solves instances of OptPSR with constant $d$ in polynomial time.
4 Approximating OptPSR
As the running time of the exact algorithm of the previous section depends exponentially on $d$, our aim here is to design much faster (i.e., polynomial-time) algorithms that compute approximate OptPSR solutions. As we will see, an extremely simple scoring rule achieves a $\frac{1}{d}$-approximation, i.e., the total weight of the constraints it satisfies is at least $\frac{1}{d}$ times the total weight satisfied by an optimal OptPSR solution. For $k \in \{1, \ldots, d\}$, the $k$-approval rule is a positional scoring rule that uses the scoring vector that has $1$ in the first $k$ positions and $0$ in the remaining ones.
Theorem 3.
For every instance of OptPSR with parameter $d$, there exists some $k$ so that $k$-approval is a $\frac{1}{d}$-approximate solution. This bound is tight.
Proof.
For the upper bound, consider a profile $\Delta$ and a set of constraints $C$. We partition the constraints of $C$ into $d$ disjoint sets $A_1, \ldots, A_d$ so that the $k$th set is defined as
$A_k = \{(a, b) \in C : \mathbf{e}_k \cdot \Delta(a, b) > 0 \text{ and } \mathbf{e}_j \cdot \Delta(a, b) \leq 0 \text{ for all } j < k\}$
for $k = 1, \ldots, d$, where $\mathbf{e}_k$ denotes the scoring vector of $k$-approval. Observe that the $k$-approval rule satisfies all constraints in the set $A_k$; indeed, $A_k$ is precisely the set of constraints that are satisfied by $k$-approval but not by $j$-approval for any $j < k$. Hence, the sets are disjoint and there exists $k$ such that $W(A_k) \geq \frac{1}{d} \sum_{j=1}^{d} W(A_j)$, where $W(A_j)$ denotes the total weight of the constraints in $A_j$. As the maximum possible gain cannot exceed $\sum_{j=1}^{d} W(A_j)$ (any nonincreasing scoring vector is a nonnegative combination of approval vectors, so every constraint satisfied by some scoring rule is satisfied by some approval rule and, hence, belongs to some set $A_j$), we have that $k$-approval is $\frac{1}{d}$-approximate, as desired.
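The key fact behind the upper bound, namely that every nonincreasing scoring vector is a nonnegative combination of approval vectors, can be checked mechanically (a small sketch in our own notation):

```python
def k_approval(k, d):
    """The k-approval scoring vector: 1 in the first k positions, 0 after."""
    return [1.0] * k + [0.0] * (d - k)

def approval_decomposition(s):
    """Coefficients c_1, ..., c_d >= 0 with s = sum_k c_k * e_k, where
    e_k is the k-approval vector; here c_k = s_k - s_{k+1} (and c_d = s_d).
    This is the step-vector decomposition behind the upper-bound argument:
    if s . delta > 0, then e_k . delta > 0 for at least one k."""
    d = len(s)
    return [s[k] - (s[k + 1] if k + 1 < d else 0.0) for k in range(d)]
```

Since the coefficients are nonnegative for any nonincreasing vector, a strictly positive dot product with $\Delta(a, b)$ forces at least one approval vector to have a strictly positive dot product as well.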
For the lower bound, we will present an OptPSR instance in which every $k$-approval rule, for $k \in \{1, \ldots, d\}$, is (at most) roughly $\frac{1}{d}$-approximate. The instance has $d$ pairs of alternatives as constraints. We will build a profile so that the $k$-approval rule satisfies only the $k$th constraint, while there exists a scoring rule that simultaneously satisfies all constraints. Consider quantities with appropriately chosen positive integer values. The profile is defined as follows:

Alternative appears times in position , and alternative appears times in position . This means that , and for .

For , alternative appears times in position , and alternative appears once in position and times in position . This means that , , and for .

Alternative appears times in position , and alternative appears once in position . This means that , and for .

The rest of the positions in the votes are filled with additional alternatives that do not appear in the constraints.
Observe that, for , we have that , and for . Hence, the approval rule satisfies only constraint for a total weight of .
Now we will show that there exists a scoring rule that satisfies all constraints. Let $\epsilon > 0$ be an arbitrarily small constant and consider a suitable scoring vector defined in terms of $\epsilon$. First, observe that this is a valid scoring rule, since it is clear that its entries are nonincreasing and, furthermore, it can be easily seen that they are nonnegative as well. Moreover, this scoring rule satisfies all constraints, since the score difference of every constrained pair is strictly positive. Hence, any approval rule is at best (roughly) $\frac{1}{d}$-approximate. ∎
On the negative side, we show that our problem is not only hard, but also hard to approximate within some constant.
Theorem 4.
For every constant $\delta > 0$, OptPSR is hard to approximate within a factor of $\rho + \delta$, for some explicit constant $\rho < 1$.
Proof.
We use a reduction from MAX3LIN2, the problem of maximizing the number of satisfied equations in an overdetermined system of linear equations modulo $2$. An instance of MAX3LIN2 consists of binary variables $x_1, \ldots, x_n$ and $m$ equations of the forms $x_i \oplus x_j \oplus x_k = 0$ and $x_i \oplus x_j \oplus x_k = 1$, where $\oplus$ denotes addition modulo $2$, and its objective is to find an assignment to the variables so that the number of satisfied equations is maximized. Below, we use the terms $0$-equation and $1$-equation to refer to equations with right-hand side $0$ and $1$, respectively.
Given an instance of MAX3LIN2, our reduction constructs in polynomial time an instance of OptPSR that has a scoring rule satisfying constraints of a certain total weight if and only if the MAX3LIN2 instance has an assignment satisfying a corresponding number of equations. A famous result by Håstad [9] states that it is hard to distinguish in time polynomial in $n$ and $m$ whether a given instance of MAX3LIN2 has an assignment that satisfies at least $(1 - \delta)m$ equations or every assignment satisfies at most $(\frac{1}{2} + \delta)m$ equations, for any constant $\delta > 0$. As a consequence of our reduction, we obtain that it is hard to distinguish between instances of OptPSR that have a scoring rule satisfying constraints of correspondingly high total weight and instances in which the total weight of the constraints satisfied by any scoring rule is correspondingly low. A constant inapproximability bound (for every constant $\delta > 0$) then follows by standard arguments.
Without loss of generality, we can assume that the scoring vectors we seek have a normalized first score, with the remaining scores defined in terms of auxiliary variables so that any valid assignment to these variables yields a valid scoring vector. Hence, a constraint requiring that the score of one alternative is higher than the score of another can be expressed as a linear inequality over these variables. The normalization allows for inequalities that have nonzero constant terms. We define the linear inequalities corresponding to constraints first; later, we also construct the profile and specify the constraints as pairs of alternatives and corresponding weights that are consistent with these linear inequalities. For each variable, consider the number of equations in which it participates; we also use a suitably small positive constant in the weights below. The instance of OptPSR can be expressed with the following inequalities that represent constraints:

For every variable , we have the four inequalities , , and of weight each.

For every equation, there are four inequalities of unit weight each:

if the equation is of the form , the inequalities are , , and , and

if the equation is of the form , the inequalities are , , and .

These inequalities are implemented as follows. Let . For every variable , with , we have four constraints , where , of weight each. In the profile, alternatives and appear in specific positions as follows:

Alternative appears once in position , and alternative appears once in position . Then, the constraint corresponds to the inequality or, equivalently, .

Alternative appears once in position and times in position , and alternative appears times in position . The constraint corresponds to the inequality or .

Alternative appears times in position , and alternative appears once in position and times in position . The constraint corresponds to the inequality or .

Alternative appears times in position and times in position , and alternative appears times in position . The constraint corresponds to the inequality or .
Observe that three of the four inequalities corresponding to variable can simultaneously be satisfied when , and only two of them are satisfied for any other value of .
For every equation , we have four constraints , where , of unit weight each. In the profile, these alternatives appear in specific positions depending on whether equation is a  or a equation. In the case where it is a equation of the form , we have:

Alternative appears once in positions , and , and alternative appears once in positions , and . Then, the constraint corresponds to the inequality or, equivalently, .

Alternative appears times in position and times in positions , and , and alternative appears times in positions , and . The constraint corresponds to the inequality or .

Alternative appears times in positions , and , and alternative appears times in position and times in positions , and . The constraint corresponds to the inequality or .

Alternative appears times in position and times in positions , and , and alternative appears times in positions , and . The constraint corresponds to the inequality or .
Observe that three of the inequalities corresponding to a equation can simultaneously be satisfied when ; otherwise, exactly two inequalities are satisfied. In the case where equation is a equation of the form , we have:

Alternative appears times in positions , and , and alternative appears once in position and times , and . Then, the constraint corresponds to the inequality or, equivalently, .

Alternative appears times in position and times in positions , and , and alternative appears times in positions , and . The constraint corresponds to the inequality or .

Alternative appears times in positions , and , and alternative appears times in position and times in positions , and . The constraint corresponds to the inequality or .

Alternative appears times in position and times in positions , and , and alternative appears times in positions , and . The constraint corresponds to the inequality or .
Again, for every equation, we have three inequalities that can simultaneously be satisfied when ; otherwise, exactly two inequalities are satisfied.
In order for this profile to be valid, we use sufficiently many agents and additional alternatives as placeholders, so that the alternatives mentioned above have the appropriate number of appearances in the rankings.
We now prove that there exists a variable assignment for the MAX3LIN2 instance that satisfies a given number of its equations if and only if there exists a scoring rule that satisfies constraints of the corresponding total weight. As we have discussed above, this is enough to complete the proof.
Consider an assignment that satisfies of the equations. Then, the scoring rule defined by setting for satisfies:

three out of the four inequalities corresponding to any variable , since when and when ;

three out of the four inequalities corresponding to any satisfied equation since when and when ;

two out of the four inequalities corresponding to any unsatisfied equation since in that case;

three out of the four inequalities corresponding to any equation since when and when ;

two out of the four inequalities corresponding to any unsatisfied equation since then.
Hence, the total weight of the satisfied constraints is as claimed; here we use the fact that all equations have three variables and, thus, the sum of the per-variable participations accounts for the total number of appearances of all variables.
Conversely, assume that we are given a scoring rule that satisfies constraints of the corresponding total weight; we will show that there exists an assignment to the variables of the MAX3LIN2 instance that satisfies the corresponding number of equations. First, we show that we can transform the scoring rule into a (possibly) different one in which every variable takes one of two designated values, without decreasing the total weight of the satisfied constraints.
For a variable we have that the satisfied inequalities are the following: exactly two out of the four variable inequalities and at most three out of the four inequalities for each of the equations in which the variable appears. This gives a weight of at most . By setting , exactly three out of the four variable inequalities and at least two out of the four equation inequalities in which appears are satisfied, for a total weight of at least . Clearly, there is no loss in weight after this change in the value of .
Now, we slightly modify the variable values as follows: for all variables we set and for all variables we set . The set of inequalities containing that were satisfied before the modification are still satisfied after the update as well. This is trivial for the variable inequalities. For an equation inequality of the form (respectively, ) that was satisfied before the modification, at most (respectively, at least ) of the three variables have values in before the modification. Clearly, the inequality is satisfied after the modification as well.
So, we can assume that we have total weight of from satisfied constraints with the variables taking values in . Hence, comes as weight from satisfied variable inequalities (with three satisfied inequalities per variable). Then, the remaining weight comes from satisfied equation inequalities. The definition of the reduction implies that there exist equations in the MAX3LIN2 instance so that three among the four corresponding inequalities are satisfied. Then, it is easy to inspect that, if three among the four equation inequalities are satisfied when variables take values in , then the assignment satisfies their corresponding equation as well. This yields an assignment with (at least) satisfied equations and the proof is complete. ∎
5 Experiments
We have conducted experiments for two different scenarios; we refer to them as ppl and col. In these two scenarios, we used countries and cities, respectively, as alternatives. In both cases, the alternatives were used to define different sets consisting of six alternatives each. The alternatives were distributed to the different sets almost uniformly, so that every country appears in almost the same number of sets, and similarly for every city.
We used both real-world and synthetic data. Real-world data were collected as input from participants in a technology exhibition at our home institution. Each of them was given two distinct sets of six countries and six cities. They were asked to rank the countries in terms of their population and the cities in terms of their cost of living. Synthetic data were obtained by simulating agents who use the Plackett-Luce and Bradley-Terry noise models in order to compute random rankings.
The Bradley-Terry model [3] (BT, in short) is used by an agent in order to decide relations between all pairs of alternatives in her set as follows. Consider a pair of alternatives $a$ and $b$ with corresponding utilities (populations or cost of living indices) $u_a$ and $u_b$. The agent decides to rank $a$ above $b$ with probability $\frac{u_a}{u_a + u_b}$ and $b$ above $a$ with probability $\frac{u_b}{u_a + u_b}$. If the relative ranks of all pairs of alternatives (that have been computed separately) do not define a ranking, the whole process is repeated.
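A BT sampler can be sketched as follows (our own implementation of the process just described; the transitivity test relies on the fact that a pairwise tournament defines a ranking exactly when all win counts are distinct):

```python
import random

def bradley_terry_ranking(utilities, rng=random):
    """Sample a ranking of the alternatives under the Bradley-Terry model.

    utilities: dict mapping each alternative to a positive utility.
    Every pair (a, b) is ordered a-above-b with probability
    u_a / (u_a + u_b); if the pairwise outcomes are not transitive,
    the whole draw is repeated from scratch.
    """
    items = list(utilities)
    while True:
        wins = {a: 0 for a in items}
        for i, a in enumerate(items):
            for b in items[i + 1:]:
                p = utilities[a] / (utilities[a] + utilities[b])
                wins[a if rng.random() < p else b] += 1
        if len(set(wins.values())) == len(items):  # transitive tournament
            return sorted(items, key=lambda a: -wins[a])
```

The retry loop terminates quickly in practice, since intransitive triples become unlikely when utilities are well separated.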
In the Plackett-Luce model [11, 12] (PL, in short), an agent decides the ranking of the alternatives in her set sequentially. Let $S$ be the set of alternatives the agent has to rank. Starting from the first position, the next undetermined position in the ranking is filled by alternative $a \in S$ with probability $\frac{u_a}{\sum_{b \in S} u_b}$. After a random selection, the chosen alternative is removed from $S$ and the process continues for the next undetermined position and the remaining alternatives until all positions are filled.
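A PL sampler is even simpler, since `random.choices` already draws one alternative with probability proportional to its weight (again, our own sketch):

```python
import random

def plackett_luce_ranking(utilities, rng=random):
    """Sample a ranking under the Plackett-Luce model.

    utilities: dict mapping each alternative to a positive utility.
    Positions are filled top-down; the next position goes to alternative
    a from the remaining set S with probability u_a / sum of utilities
    over S.
    """
    remaining = dict(utilities)
    ranking = []
    while remaining:
        alternatives = list(remaining)
        weights = [remaining[a] for a in alternatives]
        chosen = rng.choices(alternatives, weights=weights, k=1)[0]
        ranking.append(chosen)
        del remaining[chosen]
    return ranking
```

Unlike the BT sampler, no retries are needed: the sequential process always produces a ranking.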
The set of constraints was defined using population data for the countries from en.wikipedia.org and cost of living index data for the cities from numbeo.com, as retrieved in April 2016. In particular, in the ppl scenario, we have a constraint for each pair of countries $a$ and $b$ so that $a$ is more populous than $b$. We consider two different weightings for constraints, using weight that is either $1$ or equal to the population difference between countries $a$ and $b$. Unit weights are used when we care only about maximizing the number of correctly recovered population comparisons between countries. However, there might be pairs that are really important to recover correctly, while some others are not that important. For example, it is important to conclude that China is ranked above Switzerland (their population difference is more than a billion) but an error in the comparison between Cuba and Belgium (both with populations around $11$ million) would not be that severe. Analogously, in the col scenario, we have a constraint for every pair of cities $a$ and $b$ so that $a$ has higher cost of living index than $b$. The weight of the corresponding constraint is either $1$ or equal to the cost of living index difference between the two cities.
Since all the profiles we experimented with have $d = 6$, one would expect that the exact algorithm presented in Section 3 would be the obvious choice in order to come up with the optimal scoring rule. Unfortunately, for the number of constraints in the ppl and col instances we considered, this algorithm turned out to be really slow, even after implementing several heuristics that yield minor improvements to performance. This rather disappointing outcome, together with the fact that $d$ is small, forced us to consider scoring vectors with discretized scores (e.g., scores restricted to multiples of a small fixed increment) in order to come up with approximations of the optimal scoring rule. This approach yielded near-optimal scoring vectors for the ppl and col profiles with unweighted and weighted constraints, respectively. We compare the optimal OptPSR solution (obtained in this way) to several well-known scoring rules such as the Borda count (defined by the scoring vector $(d, d-1, \ldots, 1)$), the harmonic rule (also known as Dowdall; defined by the scoring vector $(1, \frac{1}{2}, \ldots, \frac{1}{d})$), and approval rules. Tables 1 and 2 show the performance of these scoring rules in all the OptPSR instances we experimented with.
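The discretized search can be sketched as a brute-force enumeration of nonincreasing score vectors on a grid (our own reconstruction of the approach; constraints are given by their $\Delta$ vectors and weights, as in Section 2):

```python
from itertools import product

def best_discretized_rule(constraints, d, step=0.25):
    """Brute-force OptPSR search over discretized scoring vectors.

    Enumerates vectors with s_1 = 1 and the remaining scores taken from
    multiples of `step` in [0, 1], keeping only nonincreasing vectors,
    and returns the vector maximizing the total weight of satisfied
    constraints.  constraints: list of (delta, w) pairs.
    """
    levels = [i * step for i in range(int(round(1 / step)) + 1)]
    best_vector, best_gain = None, -1.0
    for tail in product(levels, repeat=d - 1):
        s = (1.0,) + tail
        if any(s[i] < s[i + 1] for i in range(d - 1)):
            continue                 # skip vectors that are not nonincreasing
        g = sum(w for delta, w in constraints
                if sum(si * di for si, di in zip(s, delta)) > 0)
        if g > best_gain:
            best_vector, best_gain = s, g
    return best_vector, best_gain
```

With $d = 6$ and a step of $0.05$, this enumerates roughly $21^5 \approx 4$ million candidate vectors before filtering, which is manageable in practice.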
In Table 1, which contains data for instances with unweighted constraints, we observe that Borda and Harmonic outperform all approval rules in all cases besides the col scenario with PL agents, where $4$-approval is slightly better than Harmonic. Also, there are cases (e.g., the ppl profile with BT agents) where Borda performs better than Harmonic and vice versa (e.g., see the results for real-world data). Over all scenarios, the values of Borda, Harmonic, and the best approval rule have an average difference of $2.3$, $2.4$, and $3.1$, respectively, from the optimal values (with maximum difference values of $3.5$, $5.1$, and $4.3$ that are all observed for the col profile with PL agents). Interestingly, approval rules have amazingly better performance than what their worst-case analysis from Theorem 3 indicates.
Clearly, Table 2 shows significantly better results for (almost) all scoring rules on OptPSR instances with weighted constraints. This is to be expected, since solutions improve significantly when heavy pairwise relations are correctly recovered. Here, Borda, Harmonic, and the best approval rule are all closer to the optimal performance: the average difference is now $1.0$, $1.1$, and $1.2$, respectively; again, the largest differences are observed for the col profile with PL agents.


Table 1: Performance of scoring rules on the OptPSR instances with unweighted constraints.
real data  synthetic (BT)  synthetic (PL)  
rule  ppl  col  ppl  col  ppl  col 
opt.  81.83  83.97  94.54  93.74  93.19  88.20 
borda  79.87  81.43  93.16  91.03  91.59  84.67 
harm.  80.94  82.54  92.84  91.34  90.50  83.13 
1app.  77.75  78.09  83.69  87.89  83.90  75.72 
2app.  78.19  79.36  89.71  90.65  88.72  81.29 
3app.  79.43  80.48  91.69  90.93  90.30  83.73 
4app.  77.57  79.68  90.00  89.44  89.70  83.88 
5app.  73.14  72.86  81.08  84.45  82.83  80.66 
6app.  32.53  51.43  32.54  51.43  32.54  51.43 



Table 2: Performance of scoring rules on OptPSR instances with weighted constraints.

rule     real data        synthetic (BT)   synthetic (PL)
         ppl     col      ppl     col      ppl     col
opt.     95.98   92.93    99.66   98.69    99.48   96.02
borda    94.56   91.57    99.49   97.66    99.23   93.96
harm.    95.42   92.01    99.42   97.80    99.02   92.58
1-app.   94.85   89.84    98.59   96.48    97.86   86.34
2-app.   95.24   90.40    99.29   97.67    98.86   91.50
3-app.   93.68   90.83    99.05   97.70    99.04   93.37
4-app.   92.63   90.06    97.46   96.91    98.38   93.51
5-app.   84.61   82.00    87.94   93.81    91.69   90.99
6-app.   39.66   57.04    39.88   56.96    39.88   56.96
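Assuming the reported performance is the percentage of weighted pairwise constraints satisfied by the output ranking (with unweighted constraints being the special case of unit weights), the evaluation can be sketched as follows; the function and argument names are hypothetical, and treating ties as unsatisfied is our own simplification.

```python
def satisfied_fraction(total_scores, constraints):
    """Weighted fraction of pairwise constraints satisfied by a scoring.

    total_scores maps each alternative to its total score; constraints is a
    list of triples (a, b, w) meaning "a should be ranked above b, with
    weight w". A constraint counts as satisfied when a's score is strictly
    larger than b's (ties count as unsatisfied here, an assumption).
    """
    satisfied = sum(w for (a, b, w) in constraints
                    if total_scores[a] > total_scores[b])
    total = sum(w for (_, _, w) in constraints)
    return satisfied / total
```

Setting w = 1 for every constraint recovers the unweighted objective of Table 1.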

6 Open problems
Our work reveals several open problems. First, we would like to determine the approximability of OptPSR. Is there a polynomial-time algorithm with a constant approximation ratio? Second, we would like to design an exact algorithm that is practical; our ambitious goal is to solve OptPSR instances like the ones we used in our experiments. Third, we would like to theoretically analyze scoring rules on random profiles produced by Plackett-Luce or Bradley-Terry agents. Agents following other noise models that are closer to real-world behaviour also deserve investigation.
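As a starting point for the third direction, random rankings from the Plackett-Luce model can be generated by its standard sequential formulation: repeatedly draw the next alternative with probability proportional to its weight among the remaining ones. A minimal sketch (the function name is ours):

```python
import random


def sample_plackett_luce(weights, rng=random):
    """Sample a full ranking from the Plackett-Luce model.

    weights is a list of (alternative, positive weight) pairs; each step
    picks the next-ranked alternative with probability proportional to its
    weight among the alternatives not yet ranked.
    """
    remaining = list(weights)
    ranking = []
    while remaining:
        total = sum(w for _, w in remaining)
        r = rng.random() * total
        for idx, (_, w) in enumerate(remaining):
            r -= w
            if r <= 0:
                break
        ranking.append(remaining.pop(idx)[0])
    return ranking
```

Restricting each draw to an agent's small subset of alternatives yields the partial rankings used in our synthetic profiles; the Bradley-Terry model [3] can be simulated analogously from pairwise comparison probabilities.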
References
 [1] N. Alon. Tools from higher algebra. In R. L. Graham, M. Grötschel, and L. Lovász, editors, Handbook of Combinatorics, volume 2, pages 1749–1783. MIT Press, 1996.
 [2] C. Boutilier, I. Caragiannis, S. Haber, T. Lu, A. D. Procaccia, and O. Sheffet. Optimal social choice functions: A utilitarian view. Artificial Intelligence, 227:190–213, 2015.
 [3] R. A. Bradley and M. E. Terry. Rank analysis of incomplete block designs: I. the method of paired comparisons. Biometrika, 39(3/4):324–345, 1952.
 [4] F. Brandt, V. Conitzer, U. Endriss, J. Lang, H. Moulin, and A. D. Procaccia. Handbook of Computational Social Choice. Cambridge University Press, 2016.
 [5] I. Caragiannis, G. A. Krimpas, and A. A. Voudouris. Aggregating partial rankings with applications to peer grading in massive online open courses. In Proceedings of the 14th International Conference on Autonomous Agents & Multiagent Systems (AAMAS), pages 675–683, 2015.
 [6] I. Caragiannis, G. A. Krimpas, and A. A. Voudouris. How effective can simple ordinal peer grading be? In Proceedings of the 17th ACM Conference on Economics and Computation (EC), pages 323–340, 2016.
 [7] M. M. de Weerdt, E. H. Gerding, and S. Stein. Minimising the rank aggregation error. In Proceedings of the 15th International Conference on Autonomous Agents & Multiagent Systems (AAMAS), pages 1375–1376, 2016.
 [8] C. Dwork, R. Kumar, M. Naor, and D. Sivakumar. Rank aggregation methods for the web. In Proceedings of the 10th International World Wide Web Conference (WWW), pages 613–622, 2001.
 [9] J. Håstad. Some optimal inapproximability results. Journal of the ACM, 48(4):798–859, 2001.
 [10] E. Law and L. von Ahn. Human computation. Morgan & Claypool Publishers, 2011.
 [11] R. D. Luce. Individual choice behavior: A theoretical analysis. Wiley, 1959.
 [12] R. L. Plackett. The analysis of permutations. Journal of the Royal Statistical Society. Series C (Applied Statistics), 24(2):193–202, 1975.
 [13] A. D. Procaccia, A. Zohar, Y. Peleg, and J. S. Rosenschein. The learnability of voting rules. Artificial Intelligence, 173(12-13):1133–1149, 2009.
 [14] K. Raman and T. Joachims. Methods for ordinal peer grading. In Proceedings of the 20th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD), pages 1037–1046, 2014.
 [15] D. Sculley. Rank aggregation for similar items. In Proceedings of the 7th SIAM International Conference on Data Mining (SDM), pages 587–592, 2007.
 [16] N. B. Shah, J. K. Bradley, A. Parekh, M. Wainwright, and K. Ramchandran. A case for ordinal peer evaluation in MOOCs. In Neural Information Processing Systems (NIPS): Workshop on Data Driven Education, 2013.
 [17] H. Warren. Lower bounds for approximation by nonlinear manifolds. Transactions of the American Mathematical Society, 133:167–178, 1968.