Algorithmic Techniques for Necessary and Possible Winners

05/14/2020 ∙ by Vishal Chakraborty, et al. ∙ École Normale Supérieure ∙ Technion ∙ University of California, Santa Cruz ∙ New York University

We investigate the practical aspects of computing the necessary and possible winners in elections over incomplete voter preferences. In the case of the necessary winners, we show how to implement and accelerate the polynomial-time algorithm of Xia and Conitzer. In the case of the possible winners, where the problem is NP-hard, we give a natural reduction to Integer Linear Programming (ILP) for all positional scoring rules and implement it in a leading commercial optimization solver. Further, we devise optimization techniques to minimize the number of ILP executions and, oftentimes, avoid them altogether. We conduct a thorough experimental study that includes the construction of a rich benchmark of election data based on real and synthetic data. Our findings suggest that, the worst-case intractability of the possible winners notwithstanding, the algorithmic techniques presented here scale well and can be used to compute the possible winners in realistic scenarios.


1 Introduction

The theory of social choice focuses on the question of how preferences of individuals can be aggregated in such a way that the society arrives at a collective decision. It has been of interest throughout the history of humankind, from the analysis of election manipulation by Pliny the Younger in Ancient Rome, to the 18th-century studies of voting rules by Jean-Charles de Borda and the Marquis de Condorcet, and the more recent ground-breaking work on dictatorial vote aggregation by Kenneth Arrow in the 1950s. Over the past two decades, computational social choice has been developing as an interdisciplinary area between social choice theory, economics, and computer science, where the central topics of study are the computational and algorithmic perspectives of voting challenges such as vote aggregation [DBLP:reference/choice/0001CELP16].

A voting rule determines how the collection of voter preferences over a set of candidates is mapped to the set of winning candidates (the winners). Among the most extensively studied is the class of positional scoring rules, where every candidate receives a score from every voter that is determined only by the position of the candidate in the voter’s ranking. A candidate wins if she achieves the highest total score, that is, the sum of the scores she receives from the individual voters.

A particularly challenging computational aspect arises in situations in which voter preferences are only partial (i.e., they can be modeled as partial orders). This might happen since, for example, voters may be undecided about some candidates or, simply, only partial knowledge of the voter’s preference is available (e.g., knowledge is inferred indirectly from opinions on issues). The problem already manifests itself at the semantic level: what is the meaning of vote aggregation in the presence of incompleteness, if voting rules require complete knowledge? For this reason, Konczak and Lang [konczak2005voting] introduced the notions of necessary winners and possible winners as the candidates who win in every completion, and, respectively, at least one completion of the given partial preferences.

This work led to a classification of the computational complexity of the necessary and possible winners for a large variety of voting rules [DBLP:journals/ipl/BaumeisterR12, DBLP:journals/jcss/BetzlerD10, DBLP:journals/jair/XiaC11]. Specifically, under (efficiently computable) positional scoring rules, the necessary winners can be computed in polynomial time via the algorithm of Xia and Conitzer [DBLP:journals/jair/XiaC11]. The possible winners can be computed in polynomial time under the plurality and veto rules, but their computation is NP-hard for every other pure rule, as established in a sequence of studies [DBLP:journals/ipl/BaumeisterR12, DBLP:journals/jcss/BetzlerD10, konczak2005voting, DBLP:journals/jair/XiaC11]. Here, pure means that the scoring vector for m+1 candidates is obtained from the scoring vector for m candidates by inserting a new score into the vector.

In this paper, we explore the practical aspects of computing the necessary and possible winners. Specifically, we investigate the empirical feasibility of this challenge, develop algorithmic techniques to accelerate and scale the execution, and conduct a thorough experimental evaluation of our techniques. For the necessary winners, we show how to accelerate the Xia and Conitzer algorithm through mechanisms of early pruning and early termination. For the possible winners, we focus on positional scoring rules for which the problem is NP-hard. We first give a natural polynomial-time reduction of the possible winners to Integer Linear Programming (ILP) for all positional scoring rules. Note that ILP has been used in earlier research on the complexity of voting problems as a theoretical technique for proving upper bounds (fixed-parameter tractability) in the parameterized complexity of the possible winners [DBLP:conf/ijcai/BetzlerHN09, DBLP:conf/pods/KimelfeldKT19, DBLP:conf/ecai/Yang14] or in election manipulation problems involving complete preferences [DBLP:journals/eor/PolyakovskiyBN16]. Here, we investigate the use of ILP solvers to compute the possible winners in practice. Our experiments on a leading commercial ILP solver (Gurobi v8.1.1) show that the reduction produces ILP programs that are often too large to load and too slow to solve. For this reason, we develop several techniques to minimize or often eliminate ILP computations and, when the use of ILP is unavoidable, to considerably reduce the size of the ILP programs.

We conduct an extensive experimental study that includes the construction of a rich benchmark of election data based on both real and synthetic data. Our experimental findings suggest that, the worst-case intractability of the possible winners notwithstanding, the algorithmic techniques presented here scale well and can be used to compute the possible winners in realistic scenarios. An important contribution of our work that is of independent interest is a novel generative model for partially ordered sets, called the Repeated Selection Model, or RSM for short. We believe that RSM may find uses in other experimental studies in computational social choice.

2 Preliminaries and Earlier Work

Voting profiles

Let C = {c_1, …, c_m} be a set of candidates and let V = {v_1, …, v_n} be a set of voters. A (complete) voting profile is a tuple T = (T_1, …, T_n) of total orders of C, where each T_i represents the ranking (preference) of voter v_i on the candidates in C. Formally, each T_i is a binary relation ≻ on C that is irreflexive (i.e., c ⊁ c, for all c ∈ C), antisymmetric (i.e., a ≻ b implies b ⊁ a, for all a, b ∈ C), transitive (i.e., a ≻ b and b ≻ c imply a ≻ c, for all a, b, c ∈ C), and total (i.e., a ≻ b or b ≻ a holds for all distinct a, b ∈ C). Similarly, a partial voting profile is a tuple P = (P_1, …, P_n) of partial orders on C, where each P_i represents the partial preferences of voter v_i on the candidates in C; formally, each P_i is a binary relation on C that is irreflexive, antisymmetric, and transitive (but not necessarily total). A completion of a partial voting profile P is a complete voting profile T = (T_1, …, T_n) such that each T_i is a completion of the partial order P_i, that is to say, T_i is a total order that extends P_i. Note that, in general, a partial voting profile may have exponentially many completions.

Voting rules

We focus on positional scoring rules, a widely studied class of voting rules. A positional scoring rule on a set of m candidates is specified by a scoring vector s = (s_1, …, s_m) of non-negative integers, called the score values, such that s_1 ≥ s_2 ≥ ⋯ ≥ s_m. Suppose that T = (T_1, …, T_n) is a total voting profile. The score of a candidate c on T_i is the value s(T_i, c) = s_j, where j is the position of candidate c in T_i. The score of c under the positional scoring rule r on the total profile T is the sum s(T, c) = s(T_1, c) + ⋯ + s(T_n, c). A candidate c is a winner if c’s score is greater than or equal to the scores of all other candidates; similarly, c is a unique winner if c’s score is greater than the scores of all other candidates. The set of all winners is denoted by W(r, T).

We consider positional scoring rules that are defined for every number of candidates. Thus, a positional scoring rule r is an infinite sequence s^1, s^2, s^3, … of scoring vectors such that each s^m is a scoring vector of length m. Alternatively, a positional scoring rule is a function r that takes as argument a pair (m, j) of positive integers with j ≤ m and returns as value a non-negative integer r(m, j) such that r(m, 1) ≥ r(m, 2) ≥ ⋯ ≥ r(m, m). We assume that the function r is computable in time polynomial in m, hence the winners can be computed in polynomial time. Such a rule is pure if the scoring vector of length m+1 is obtained from the scoring vector of length m by inserting a score in some position of the vector, provided that the decreasing order of score values is maintained. We also assume that the scores in every scoring vector are co-prime (i.e., their greatest common divisor is 1), since multiplying all scores by the same value does not change the winners.

As examples, the plurality rule is given by scoring vectors of the form (1, 0, …, 0), while the veto rule is given by scoring vectors of the form (1, …, 1, 0). The plurality rule is a special case of the k-approval rule, in which the scoring vectors start with k ones and then are followed by zeros. In particular, the 2-approval rule has scoring vectors of the form (1, 1, 0, …, 0). The Borda rule, also known as the Borda count, is given by scoring vectors of the form (m−1, m−2, …, 1, 0).
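To make the definitions concrete, the following sketch (variable and function names are ours, not from the paper) computes total scores and the winner set for a complete profile:

```python
def total_scores(profile, vector):
    """Total score of each candidate under a positional scoring rule.

    profile: list of rankings; each ranking lists candidates best-first.
    vector: scoring vector with s_1 >= s_2 >= ... >= s_m.
    """
    total = {}
    for ranking in profile:
        for position, candidate in enumerate(ranking):
            total[candidate] = total.get(candidate, 0) + vector[position]
    return total

def winners(profile, vector):
    """Candidates whose total score is maximal (ties allowed)."""
    total = total_scores(profile, vector)
    best = max(total.values())
    return {c for c, s in total.items() if s == best}

# Three voters over candidates a, b, c.
profile = [["a", "b", "c"], ["b", "a", "c"], ["a", "c", "b"]]
m = 3
plurality = [1, 0, 0]
borda = [m - 1 - j for j in range(m)]          # (2, 1, 0)

print(winners(profile, plurality))             # {'a'}: ranked first twice
print(winners(profile, borda))                 # {'a'}: Borda scores 5, 3, 1
```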

Necessary and possible winners

Let r be a voting rule and P a partial voting profile.

  • The set NW(r, P) of the necessary winners with respect to r and P is the intersection of the sets W(r, T), where T varies over all completions of P. Thus, a candidate c is a necessary winner with respect to r and P, if c is a winner in T for every completion T of P.

  • The set PW(r, P) of the possible winners with respect to r and P is the union of the sets W(r, T), where T varies over all completions of P. Thus, a candidate c is a possible winner with respect to r and P, if c is a winner in T for at least one completion T of P.

The notions of necessary unique winners and possible unique winners are defined in an analogous manner. The preceding notions were introduced by Konczak and Lang [konczak2005voting]. Through a sequence of subsequent investigations by Xia and Conitzer [DBLP:journals/jair/XiaC11], Betzler and Dorn [DBLP:journals/jcss/BetzlerD10], and Baumeister and Rothe [DBLP:journals/ipl/BaumeisterR12], the following classification of the complexity of the necessary and the possible winners for all pure positional scoring rules was established.
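On small inputs, the two definitions can be checked directly by enumerating all completions; the brute-force sketch below (our own naming, exponential-time and only for illustration) does exactly that:

```python
from itertools import permutations, product

def completions(candidates, partial):
    """All total orders extending a partial order given as (a, b) pairs,
    each pair meaning that a is preferred to b."""
    for perm in permutations(candidates):
        pos = {c: i for i, c in enumerate(perm)}
        if all(pos[a] < pos[b] for a, b in partial):
            yield perm

def winners(profile, vector):
    total = {}
    for ranking in profile:
        for j, c in enumerate(ranking):
            total[c] = total.get(c, 0) + vector[j]
    best = max(total.values())
    return {c for c, s in total.items() if s == best}

def nw_pw(candidates, partial_profile, vector):
    """Necessary winners (win in every completion) and possible winners
    (win in at least one completion) by exhaustive enumeration."""
    nw, pw = set(candidates), set()
    per_voter = [list(completions(candidates, p)) for p in partial_profile]
    for profile in product(*per_voter):
        w = winners(profile, vector)
        nw &= w
        pw |= w
    return nw, pw

# Two voters over {a, b, c}: voter 1 fixes a > b, voter 2 fixes b > c.
cands = ["a", "b", "c"]
partial_profile = [{("a", "b")}, {("b", "c")}]
nw, pw = nw_pw(cands, partial_profile, [1, 0, 0])   # plurality
print(nw, pw)   # no necessary winner; every candidate is a possible winner
```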

Theorem 1.

[Classification Theorem] The following statements hold.

  • If r is a pure positional scoring rule, there is a polynomial-time algorithm that, given a partial voting profile P, returns the set NW(r, P) of necessary winners.

  • If r is the plurality rule or the veto rule, there is a polynomial-time algorithm that, given a partial voting profile P, returns the set PW(r, P) of possible winners. For all other pure positional scoring rules, the following problem is NP-complete: given a partial voting profile P and a candidate c, is c a possible winner w.r.t. r and P?

Furthermore, the same classification holds for necessary unique winners and possible unique winners.

In the preceding theorem, the input partial voting profiles consist of arbitrary partial orders. There has been a growing body of work concerning the complexity of the possible winners when the partial voting profiles are restricted to special types of partial orders. The main motivation for pursuing this line of investigation is to determine whether or not the complexity of the possible winners drops from NP-complete to polynomial time w.r.t. some scoring rules (other than plurality and veto), if the input voting profiles consist of restricted partial orders that also arise naturally in real-life settings. We now describe two types of restricted partial orders and state relevant results.

Definition 1.

Let P be a partial order on a set C.

  • We say that P is a partitioned preference if C can be partitioned into disjoint subsets A_1, …, A_q so that the following hold:

    (a) for all i < j, if a ∈ A_i and b ∈ A_j, then a ≻ b in P;

    (b) for each i ≤ q, the elements in A_i are incomparable under P, that is to say, a ⊁ b and b ⊁ a hold, for all a, b ∈ A_i.

  • We say that P is a partial chain if it consists of a linear order on a non-empty subset of C.

Partitioned preferences relax the notion of a total order by requiring that there is a total order between sets of incomparable elements. As pointed out by Kenig [DBLP:conf/atal/Kenig19], partitioned preferences “were shown to be common in many real-life datasets, and have been used for learning statistical models on full and partial rankings.” Furthermore, partitioned preferences contain doubly-truncated ballots as a special case, where there is a complete ranking of the top elements, a complete ranking of the bottom elements, and all remaining elements between the top and the bottom elements are pairwise incomparable. This models, for example, the setting in which a voter has complete rankings of some top candidates and of some bottom candidates, but is indifferent about the remaining candidates in the "middle". Partial chains arise in settings where there is a large number of candidates, but a voter has knowledge of only a subset of them. For example, a voter may have a complete ranking of the movies that the voter has seen, but, of course, does not know how to compare these movies with movies that the voter has not seen. Partial chains also model the setting of an election in which one or more candidates enter the race late, and so a voter has a complete ranking of the original candidates but does not know yet how to rank the new candidates who entered the race late.

Let r be a pure positional scoring rule. We write PW-PP to denote the restriction of the possible winners problem w.r.t. r to partial voting profiles consisting of partitioned preferences. More formally, PW-PP is the following decision problem: given a partial voting profile P consisting of partitioned preferences and a candidate c, is c a possible winner w.r.t. r and P? Similarly, we write PW-PC to denote the restriction of the possible winners problem w.r.t. r to partial voting profiles consisting of partial chains.

Kenig [DBLP:conf/atal/Kenig19] established a nearly complete classification of the complexity of the PW-PP problem for pure positional scoring rules. In particular, if r is the k-approval rule, then PW-PP is solvable in polynomial time. In fact, the tractability holds for all 2-valued rules, that is, positional scoring rules in which the scoring vectors contain just two distinct values. If r is the Borda rule, however, then PW-PP is NP-complete. In fact, results in [betzler2011unweighted, davies2011complexity] imply that the possible winners problem w.r.t. the Borda rule is NP-complete, even when restricted to input partial voting profiles consisting of doubly truncated ballots. As regards partial chains, it was shown recently in [chakraborty2020complexity] that the classification in Theorem 1 does not change for the PW-PC problem. In other words, for every positional scoring rule other than plurality and veto, PW-PC is NP-complete. In particular, PW-PC is NP-complete if r is the 2-approval rule or the Borda rule.

Our experimental evaluation will focus on the plurality rule, the k-approval rule, and the Borda rule. For this reason, we summarize the aforementioned complexity results concerning these rules in the following table (also listing the veto rule for completeness).

Scoring Rule        PW           PW-PP        PW-PC        NW (all kinds)
Plurality & Veto    P            P            P            P
k-approval (k ≥ 2)  NP-complete  P            NP-complete  P
Borda               NP-complete  NP-complete  NP-complete  P

Table 1: Complexity of the possible winners (PW) and necessary winners (NW) problems, and their restrictions to partitioned preferences (PW-PP) and partial chains (PW-PC), w.r.t. the plurality, veto, k-approval, and Borda rules.

3 Necessary Winners

Xia and Conitzer [DBLP:journals/jair/XiaC11] presented a polynomial-time algorithm for determining whether a particular candidate c is a necessary winner (NW) in an election that uses a positional scoring rule r, that is, whether c ∈ NW(r, P). We recall it in Algorithm 1. We will then present several performance optimizations that allow us to efficiently compute the set of necessary winners.

For a partial order P_i and a candidate c, we let Up_{P_i}(c) denote the set of candidates that are preferred to c in P_i, and Down_{P_i}(c) denote the set of candidates to which c is preferred in P_i. (Note that both Up_{P_i}(c) and Down_{P_i}(c) include c.) Further, for a pair of candidates a and b with a ≻ b, we write Between(a, b) for the set of candidates ranked between a and b, including a and b.

1:  for each partial order P_i ∈ P do
2:     for each candidate c′ ∈ C do
3:        compute Up_{P_i}(c′) and Down_{P_i}(c′)
4:     end for
5:  end for
6:  for each w ∈ C ∖ {c} do
7:     Initialize B ← 0
8:     for each partial order P_i ∈ P do
9:        if w ∉ Down_{P_i}(c) then
10:           p_c ← m − |Down_{P_i}(c)| + 1 is the lowest possible position for c
11:           p_w ← |Up_{P_i}(w)| is the highest possible position for w
12:           B ← B + s_{p_w}
13:           B ← B − s_{p_c}
14:        else if w ∈ Down_{P_i}(c) then
15:           slide c and w between positions |Up_{P_i}(c)| and m − |Down_{P_i}(w)| + 1, keeping c above w; find positions p_c and p_w that minimize s_{p_c} − s_{p_w}
16:           B ← B + s_{p_w}
17:           B ← B − s_{p_c}
18:        end if
19:     end for
20:     if B > 0 then
21:        c is not a necessary winner, return false
22:     end if
23:  end for
24:  c is a necessary winner, return true
Algorithm 1 Deciding whether c ∈ NW(r, P)

Note that Algorithm 1 returns true if c is a necessary winner, not only if it is a necessary unique winner. To return true only if c is the necessary unique winner, the strict inequality in line 20 should be replaced by a non-strict one: an opponent that merely ties with c in some completion already disqualifies c as the unique winner.

We now present several performance optimizations that allow us to efficiently compute the set NW(r, P) of necessary winners. Our optimizations are of two kinds. The first kind is based on reusing computation across candidates, and on heuristically re-ordering computation. The second kind uses the structure of a given partial voting profile to optimize the computation of the sets Up_{P_i}(c) and Down_{P_i}(c).

Reusing and reordering computation.

A straightforward way to use Algorithm 1 to compute NW(r, P) is to execute it m times, once for each candidate.

To eliminate redundant computation, we first compute and record the sets Up_{P_i}(c) and Down_{P_i}(c) of each candidate c and each partial order P_i once. We will explain how to compute Up_{P_i}(c) and Down_{P_i}(c) efficiently later in this section. Additionally, we compute and record the best possible score of each candidate c, denoted s_max(c).

Next, we execute competitions between pairs of candidates c and w, deliberately selecting only the promising candidates as c, and prioritizing strong opponents w. Specifically, only the candidates with the highest s_max(c) can become necessary winners. Further, we sort potential opponents w in decreasing order of s_max(w).
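For instance, the best possible score s_max(c) can be read directly off the Up sets: the highest rank c can attain in vote P_i is |Up_{P_i}(c)|, since everything else in the Up set must be placed above c. A sketch, with names of our choosing:

```python
def best_possible_scores(cands, profile_up_sets, vector):
    """profile_up_sets: one dict per vote, mapping each candidate to its
    Up set (candidates that must be ranked above it, plus itself).
    The best (highest) rank of c in vote i is |Up_i(c)|, so c's best
    possible total score sums the score value at that rank over votes."""
    return {
        c: sum(vector[len(up[c]) - 1] for up in profile_up_sets)
        for c in cands
    }

# Vote 1 fixes a > b; vote 2 fixes b > c (Up sets include the candidate).
ups = [
    {"a": {"a"}, "b": {"a", "b"}, "c": {"c"}},
    {"a": {"a"}, "b": {"b"}, "c": {"b", "c"}},
]
smax = best_possible_scores(["a", "b", "c"], ups, [2, 1, 0])   # Borda, m = 3
print(smax)   # {'a': 4, 'b': 3, 'c': 3}
```

Candidates would then be processed in decreasing order of these values.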

Computing Up_{P_i}(c) and Down_{P_i}(c)

This part of the computation takes polynomial time, but the details of this computation are left unspecified by Xia and Conitzer. In our implementation, we use a BFS-based algorithm to compute these sets for all candidates of a given partial profile P, maintaining intermediate results in a priority queue.
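A minimal sketch of such a traversal (plain BFS over the preference relation viewed as a DAG; naming is ours):

```python
from collections import deque

def up_down_sets(candidates, prefers):
    """Compute Up(c) and Down(c) for every candidate.

    prefers: set of (a, b) pairs meaning a is preferred to b; the relation
    need not be transitively closed, since BFS follows chains.
    Up(c) and Down(c) both include c itself.
    """
    above = {c: set() for c in candidates}   # direct predecessors
    below = {c: set() for c in candidates}   # direct successors
    for a, b in prefers:
        below[a].add(b)
        above[b].add(a)

    def reachable(start, edges):
        seen, queue = {start}, deque([start])
        while queue:
            x = queue.popleft()
            for y in edges[x]:
                if y not in seen:
                    seen.add(y)
                    queue.append(y)
        return seen

    up = {c: reachable(c, above) for c in candidates}
    down = {c: reachable(c, below) for c in candidates}
    return up, down

cands = ["a", "b", "c", "d"]
up, down = up_down_sets(cands, {("a", "b"), ("b", "c")})
print(up["c"], down["a"])   # both {'a', 'b', 'c'}; d is incomparable to all
```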

We also observe that the structure of P_i can be used to make this computation more efficient in some common cases. In particular, Up_{P_i}(c) and Down_{P_i}(c) can be computed in linear time for linear forests (i.e., posets in which every candidate has at most one parent and at most one child) and for partitioned preferences, where candidates are partitioned into sets A_1, …, A_q, and where P_i provides a complete order over the sets but does not compare candidates within a set. A common example of partitioned preferences are top-k preferences, where the first k sets are of size 1, and the final set is of size m − k. Alternatively, the computation of Up_{P_i}(c) and Down_{P_i}(c) can be avoided altogether in these cases, since the positions of c and w that maximize w's advantage can be determined directly.

In summary, while the optimizations described in this section do not reduce the asymptotic running time of the already polynomial computation in the general case, they are useful in practice, as we will demonstrate experimentally in Section 6.3. As we explain in the next section, we use these and similar techniques to optimize the performance of the possible winners computation, making this computation practically feasible.

4 Possible Winners

4.1 Computing PW for Plurality and Veto

By Theorem 1, for plurality and veto, there are polynomial-time algorithms for telling if a given candidate is a possible winner. In fact, Betzler and Dorn [DBLP:journals/jcss/BetzlerD10] gave such an algorithm for plurality by efficiently transforming the detection of possible winners to a network flow problem with just two layers and with integral capacities along the edges of the network. We have implemented and optimized this algorithm by, among other things, eliminating obvious winners (candidates ranked first in over half of the partial orders in P) and obvious losers (candidates that can be ranked first in too few of the partial orders to win), thus reducing the size of the network. A variant of this algorithm can be used to detect possible winners for veto.

4.2 Reducing PW to ILP

Again by Theorem 1, for all positional scoring rules other than plurality and veto, detecting possible winners is an NP-complete problem. Here, we give a polynomial-time reduction of the possible winners problem to Integer Linear Programming (ILP) and, in fact, to 0-1 ILP. Let r be a positional scoring rule and let s = (s_1, …, s_m) be its scoring vector for m candidates. Consider an input to the possible winners problem consisting of a set C = {c_1, …, c_m} of candidates, a partial voting profile P = (P_1, …, P_n), and a distinguished candidate c from C; the question is whether or not c ∈ PW(r, P).

  • For each i with 1 ≤ i ≤ n and each pair j, k with 1 ≤ j, k ≤ m, introduce a binary variable x^i_{j,k}. Intuitively, we want to have x^i_{j,k} = 1 if candidate c_j has rank k in a completion T_i of P_i; otherwise, x^i_{j,k} = 0. Thus, the rank of c_j in T_i is equal to Σ_{k=1}^m k · x^i_{j,k}.

  • There are two constraints to ensure the validity of a completion of P_i, namely, each candidate is assigned exactly one rank in T_i, and no two candidates are assigned the same rank in T_i.

    (1) Σ_{k=1}^m x^i_{j,k} = 1, for all i ≤ n and j ≤ m
    (2) Σ_{j=1}^m x^i_{j,k} = 1, for all i ≤ n and k ≤ m
  • If a candidate c_j is ranked higher than a candidate c_l in the partial order P_i, then c_j has to also be ranked higher in a completion T_i of P_i. This is ensured by introducing the following constraint for each such pair of candidates and each partial order.

    (3) Σ_{k=1}^m k · x^i_{j,k} + 1 ≤ Σ_{k=1}^m k · x^i_{l,k}, whenever c_j ≻ c_l in P_i
  • Finally, to ensure that the distinguished candidate c is a possible winner, we add, for each candidate c_l ≠ c, the following constraint:

    (4) Σ_{i=1}^n Σ_{k=1}^m s_k · x^i_{l,k} ≤ Σ_{i=1}^n Σ_{k=1}^m s_k · x^i_{c,k}

Let I(r, P, c) be the preceding ILP instance. Note that I(r, P, c) has n·m² binary variables and O(n·m²) constraints.

Note also that for the case of possible unique winners, one requires the score of c to be strictly greater than the score of every other candidate. Thus, the only change needed is to replace the inequality in (4) by a strict one.

We want to show that a 0-1 solution to I(r, P, c) exists if and only if candidate c is a possible winner. We first prove a few facts about the reduction and then prove the desideratum. In the following, for a set A, we let L(A) denote the set of all total orders on A. We also let rank(T, c) be the ranking function that returns the rank of candidate c in a total order T.

Lemma 1.

Let C = {c_1, …, c_m} and let P = (P_1, …, P_n) be a set of partial votes. For each partial vote P_i, let T_i be a total order that extends P_i. Consider the following values:

(5) x^i_{j,k} = 1 if rank(T_i, c_j) = k, and x^i_{j,k} = 0 otherwise

The values x^i_{j,k} have the following properties:

  1. Σ_{k=1}^m x^i_{j,k} = 1, for all i ≤ n and j ≤ m;

  2. Σ_{j=1}^m x^i_{j,k} = 1, for all i ≤ n and k ≤ m.

Proof.

These properties are a consequence of the way the ranking function rank(T_i, ·) operates. For property (1), in a completion T_i of the partial vote P_i, each candidate is assigned a unique rank. Therefore, for a fixed i and a fixed candidate c_j, there is exactly one k such that rank(T_i, c_j) = k. By definition of the values x^i_{j,k}, that one equals 1 and the others are 0. Thus, Σ_{k=1}^m x^i_{j,k} = 1.
For property (2), observe that T_i assigns to each rank a unique candidate. In the extension T_i of a given vote P_i, for a fixed k, there exists exactly one j such that rank(T_i, c_j) = k. Therefore, for all i ≤ n and k ≤ m, we have that Σ_{j=1}^m x^i_{j,k} = 1. ∎

Lemma 2.

Suppose x^i_{j,k} are non-negative integers such that:

  1. x^i_{j,k} ∈ {0, 1}, for all i ≤ n and j, k ≤ m;

  2. Σ_{k=1}^m x^i_{j,k} = 1, for all i ≤ n and j ≤ m;

  3. Σ_{j=1}^m x^i_{j,k} = 1, for all i ≤ n and k ≤ m.

For each i, let σ_i : C → {1, …, m} be such that σ_i(c_j) = k if and only if x^i_{j,k} = 1. Then each σ_i is well defined and induces a total order T_i on C.

Proof.

Fix an i ≤ n. Conditions (1) and (2) ensure that each candidate c_j is mapped to exactly one rank k. Conditions (1) and (3) ensure that for each rank k, some candidate is mapped to it. Thus, σ_i is a one-to-one correspondence between C and {1, …, m}. ∎

Observe that when the x^i_{j,k} are defined as in Equation (5), rank(T_i, c_j) = Σ_{k=1}^m k · x^i_{j,k}.

Furthermore, for a scoring vector s = (s_1, …, s_m), the score of c_j in T_i is s(T_i, c_j) = Σ_{k=1}^m s_k · x^i_{j,k}.

Lemma 3.

Let P_i be a partial order on C and let T_i be a total order on C. Suppose that we have values x^i_{j,k} defined from T_i as in Equation (5). Then the inequalities

Σ_{k=1}^m k · x^i_{j,k} + 1 ≤ Σ_{k=1}^m k · x^i_{l,k}, for all c_j, c_l in C with c_j ≻ c_l in P_i,

hold if and only if T_i extends P_i.

Proof.

Fix an i ≤ n. Suppose the inequalities hold, and consider a pair with c_j ≻ c_l in P_i. The corresponding inequality implies that Σ_k k · x^i_{j,k} < Σ_k k · x^i_{l,k}, i.e., rank(T_i, c_j) < rank(T_i, c_l), or in other words, c_j is ranked higher than c_l. This means that c_j ≻ c_l in T_i. But this is also the case in P_i. Therefore, T_i extends P_i.
Conversely, let T_i extend P_i. By definition, for all c_j ≻ c_l in P_i, we have c_j ≻ c_l in T_i. Therefore, rank(T_i, c_j) < rank(T_i, c_l), i.e., Σ_k k · x^i_{j,k} < Σ_k k · x^i_{l,k}. This is nothing but the stated inequality. ∎

Consider a profile T = (T_1, …, T_n) and the scoring vector s = (s_1, …, s_m). The total score of a candidate c_j, by definition, is s(T, c_j) = Σ_{i=1}^n s(T_i, c_j), which is Σ_{i=1}^n Σ_{k=1}^m s_k · x^i_{j,k}.

The above, along with the definition of a possible winner, makes the following fact quite obvious.

Lemma 4.

Let T be a profile as above and let c be a fixed candidate. Then the following statements are equivalent.

  1. c is a winner in T using the scoring rule r.

  2. For every candidate c_l ≠ c, we have that Σ_{i=1}^n Σ_{k=1}^m s_k · x^i_{l,k} ≤ Σ_{i=1}^n Σ_{k=1}^m s_k · x^i_{c,k}.

Now we will prove the main theorem.

Theorem 2.

The following statements are equivalent.

  1. Candidate c is a possible winner w.r.t. the rule r and the partial profile P.

  2. The system I(r, P, c) has a 0-1 solution.

Proof.

Assume that P has a completion T = (T_1, …, T_n) in which c is a winner. Set

(6) x^i_{j,k} = 1 if rank(T_i, c_j) = k, and x^i_{j,k} = 0 otherwise.

We claim that this assignment satisfies all the constraints of the system I(r, P, c). Indeed, from Lemma 1, we know that it satisfies the constraints (1) and (2). Since each T_i, by definition, extends P_i, Lemma 3 implies that the constraints (3) are satisfied. Since c is a winner in T, Lemma 4 implies that the constraints (4) are satisfied.
Conversely, assume that the system I(r, P, c) has an integer solution x^i_{j,k}. Then each x^i_{j,k} is either 0 or 1 by the first group of constraints. Furthermore, by Lemma 2, the constraints (1) and (2) ensure that each vote induces a total order T_i on C. The total order T_i extends P_i because of the constraints (3). Finally, since the constraints (4) are satisfied, Lemma 4 implies that c is a possible winner. ∎

We illustrate the preceding result in the following concrete cases.

Borda rule

The scoring vector for Borda is (m−1, m−2, …, 1, 0). Thus, the constraints in (4) become

(9) Σ_{i=1}^n Σ_{k=1}^m (m−k) · x^i_{l,k} ≤ Σ_{i=1}^n Σ_{k=1}^m (m−k) · x^i_{c,k}, for each candidate c_l ≠ c
k-approval

The scoring vector for k-approval assigns a score of 1 to each of the first k ranked candidates, and 0 to the remaining ones. Thus, the constraints in (4) become

(10) Σ_{i=1}^n Σ_{t=1}^k x^i_{l,t} ≤ Σ_{i=1}^n Σ_{t=1}^k x^i_{c,t}, for each candidate c_l ≠ c
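To illustrate the reduction, the sketch below builds, for a toy instance, exactly the feasible 0-1 assignments of the program: constraints (1) and (2) say that each vote's variables form a permutation matrix, so we enumerate permutations per vote and then check constraints (3) and (4). This replaces the call to a commercial solver such as Gurobi with brute force, so it is viable only for tiny m and n; all names are ours.

```python
from itertools import permutations, product

def possible_winner_bruteforce(cands, partial_profile, vector, c):
    """Feasibility of the 0-1 system I(r, P, c), checked by brute force.

    A 0-1 assignment satisfying constraints (1) and (2) is, per vote, a
    permutation matrix: ranks[j] = k iff x^i_{j,k} = 1.  We enumerate
    these, keep the ones satisfying (3), and test (4) on total scores.
    """
    m, n = len(cands), len(partial_profile)
    idx = {cand: j for j, cand in enumerate(cands)}

    per_vote = []
    for i in range(n):
        feasible = []
        for ranks in permutations(range(m)):   # ranks[j] = rank of candidate j
            # constraint (3): respect the partial order of vote i
            if all(ranks[idx[a]] < ranks[idx[b]] for a, b in partial_profile[i]):
                feasible.append(ranks)
        per_vote.append(feasible)

    for assignment in product(*per_vote):
        score = [sum(vector[assignment[i][j]] for i in range(n))
                 for j in range(m)]
        # constraint (4): c's total score is at least every candidate's
        if all(score[idx[c]] >= s for s in score):
            return True
    return False

cands = ["a", "b", "c"]
profile = [{("a", "b")}, {("b", "c")}]       # voter 1: a > b; voter 2: b > c
print(possible_winner_bruteforce(cands, profile, [1, 1, 0], "c"))   # True
```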

4.3 Checking a Possible Winner

Determining whether or not c ∈ PW(r, P) using our methodology involves the following two main steps.

  1. Construct the ILP model. Constraints (1) and (2) depend only on m and n, whereas constraints (3) and (4) depend additionally on the candidate c and on the partial profile P.

  2. Solve the ILP model.

Fix the values of m and n. One creates a partial model for the corresponding ILP with only constraints (1) and (2). This is called pre-processing. To save time, pre-processed models can be reused when the candidate c, the partial profile P, or both change. To solve a specific problem, one loads the appropriate pre-processed model, and updates it by adding constraints (3) and (4) before solving it.

4.4 Three-phase Computation of the Set of Possible Winners

A straightforward way to compute the set of possible winners is to execute the computation described in Section 4.3 above m times, once for each candidate. We now describe a more efficient method that uses pruning and early termination techniques, and heuristics to quickly identify clear possible winners. This method involves three phases:

  1. Use the necessary winners computation to identify a subset W₁ of possible winners, and to prune clear non-winners. Pass the remaining candidates to the next phase.

  2. For each remaining candidate c, use a heuristic to construct a completion in which c is a winner. Add all candidates for which such a completion is found to a set W₂, and pass the remaining candidates to the next phase.

  3. Invoke the subroutine described in Section 4.3 to check a possible winner for each remaining candidate c using an ILP solver. Add all identified possible winners to a set W₃.

The final set of possible winners is W₁ ∪ W₂ ∪ W₃.

Phase 1: Using the Necessary Winner algorithm.

Let us denote by S the sum of the scores of all candidates in some total voting profile; note that S is the same for every completion of P. We execute the necessary winners computation, since necessary winners are also possible winners. Recall that, as part of this computation, we compute and record, for every candidate c, the best possible score s_max(c). We can immediately identify the candidates whose s_max(c) is highest as possible winners, and add them to the set of possible winners identified in this phase; additional candidates can sometimes be recognized as possible winners directly from their best possible scores, and are added likewise.

On the other hand, if s_max(c) < S/m, then c is not a possible winner (by the pigeonhole principle), and it can be pruned. Further, consider the step in the necessary winners computation where we execute competitions between pairs of candidates c and w. As we compute the score advantages, we may observe that some opponent w beats c in every completion. This allows us to prune c as a non-winner.

Phase 2: Constructing a completion.

Next, given a remaining candidate c, we consider P and heuristically attempt to create a total voting profile T that completes P and in which c is a winner. If such a T is found, then c is added to the set of possible winners. To construct T, we complete each partial vote independently, as follows:

(1) For a given P_i, place c at the worst possible rank at which it still achieves its best possible score. The reason for this is to minimize the scores of the candidates that must be ranked below c (those in Down_{P_i}(c)).

(2) Place the remaining candidates from C into T_i. If multiple placements are possible, choose one that increases the score of the currently highest-scoring candidates the least.

(3) Keep a list of candidates other than c that are the possible winners so far. In subsequent completions, place these candidates as low as possible, minimizing their scores.
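The following simplified sketch conveys the flavor of this phase. It completes each vote by a topological sort that ranks the candidate under test as early as allowed (rather than the more careful placement steps above), and then checks whether that candidate wins; like the real heuristic, it is one-sided, so a False answer must still be passed to the ILP phase. All names are ours.

```python
def heuristic_possible_winner(cands, partial_profile, vector, c):
    """One-sided check: build one completion favoring c and test it.

    partial_profile: per vote, a set of (a, b) pairs meaning a > b.
    Returns True only if c wins the constructed completion; a False
    result is inconclusive (the candidate goes on to the ILP phase).
    """
    def complete(partial):
        remaining = set(cands)
        order = []
        while remaining:
            # candidates with no unplaced predecessor may come next
            ready = [x for x in remaining
                     if not any(a in remaining for (a, b) in partial if b == x)]
            ready.sort(key=lambda x: (x != c, x))   # pick c as soon as allowed
            pick = ready[0]
            order.append(pick)
            remaining.remove(pick)
        return order

    profile = [complete(p) for p in partial_profile]
    score = {x: 0 for x in cands}
    for ranking in profile:
        for j, x in enumerate(ranking):
            score[x] += vector[j]
    return all(score[c] >= s for s in score.values())

cands = ["a", "b", "c"]
profile = [{("a", "b")}, {("b", "c")}]        # voter 1: a > b; voter 2: b > c
print(heuristic_possible_winner(cands, profile, [2, 1, 0], "a"))   # True
print(heuristic_possible_winner(cands, profile, [2, 1, 0], "c"))   # False
```

Note that c is in fact a possible winner under Borda in this example; the greedy completion simply fails to show it, so c would be handed to the ILP phase.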

In summary, we described a reduction of the problem of checking whether a candidate c is a possible winner to an ILP, and proposed a three-phase computation that limits the number of times the ILP solver is invoked. We will show experimentally in Section 6.4 that the proposed techniques can be used to compute the set of possible winners in realistic scenarios.

5 The Repeated Selection Model for Poset Generation

In this section we introduce a novel generative model for partially ordered sets, called the Repeated Selection Model, or RSM for short. It includes earlier generative models of partial orders as special cases via a suitable choice of parameters. We regard RSM as being a model of independent interest, and we also use it here as part of our experimental evaluation, described in Section 6. To start, we introduce in Section 5.1 the Repeated Insertion Model (RIM), which is used for generating total orders. We then describe our novel RSM model in Section 5.2.

5.1 Preliminaries: The Repeated Insertion Model (RIM)

In this section we represent total orders using rankings, that is, ordered lists of items indexed by position. We will use σ, τ, and so on to denote rankings. We will use σ(i) to refer to the item at position i in σ, and we will use σ⁻¹(a) to denote the position of item a in σ. When describing iterative algorithms, for convenience of presentation we will denote by σ_t the value of σ at step t.

The Repeated Insertion Model (RIM), due to Doignon et al. [Doignon2004], is a generative model that defines a probability distribution over rankings. This distribution, denoted by RIM(σ, Π), is parameterized by a reference ranking σ of length m and a function Π, where Π(i, j) is the probability of inserting item σ(i) at position j ≤ i. Here, Π is a matrix where each row corresponds to a valid probability distribution (i.e., the values in a row sum up to one). Algorithm 2 presents the RIM sampling procedure. It starts with an empty ranking τ, inserts items in the order of σ, and puts item σ(i) at position j of the currently incomplete τ with probability Π(i, j). The algorithm terminates after m iterations, and outputs τ, a total order over the items, drawn from RIM(σ, Π).

1:  Initialize an empty ranking τ.
2:  for i = 1, …, m do
3:     Select a random position j ∈ {1, …, i} with probability Π(i, j)
4:     Insert σ(i) into τ at position j
5:  end for
6:  return τ
Algorithm 2 RIM(σ, Π)
Example 1.

RIM(⟨a, b, c⟩, Π) generates τ = ⟨b, c, a⟩ as follows.

  • Initialize an empty ranking τ = ⟨⟩.

  • At step 1, τ = ⟨a⟩ by inserting a into τ at position 1 with probability Π(1, 1) = 1.

  • At step 2, τ = ⟨b, a⟩ by inserting b into τ at position 1 with probability Π(2, 1). Note that b is put before a since b is inserted at position 1.

  • At step 3, τ = ⟨b, c, a⟩ by inserting c into τ at position 2 with probability Π(3, 2).

The overall probability of sampling τ is Π(1, 1) · Π(2, 1) · Π(3, 2). Note that this particular sequence of steps is the only way to sample τ from RIM(⟨a, b, c⟩, Π).
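Algorithm 2 is a few lines of code; the sketch below uses 0-based positions and our own naming. With the deterministic insertion matrix shown, it reproduces the derivation of Example 1.

```python
import random

def rim_sample(sigma, pi, rng=random):
    """Sample a ranking from RIM(sigma, pi).

    sigma: reference ranking (list of m items).
    pi: pi[i][j] is the probability of inserting item sigma[i] at
        0-based position j, with sum(pi[i][:i + 1]) == 1.
    """
    tau = []
    for i, item in enumerate(sigma):
        positions = list(range(i + 1))
        j = rng.choices(positions, weights=[pi[i][k] for k in positions])[0]
        tau.insert(j, item)
    return tau

# Deterministic probabilities forcing the insertions of Example 1:
pi = [
    [1.0, 0.0, 0.0],   # step 1: insert a at position 0
    [1.0, 0.0, 0.0],   # step 2: insert b at position 0 (before a)
    [0.0, 1.0, 0.0],   # step 3: insert c at position 1
]
print(rim_sample(["a", "b", "c"], pi))   # ['b', 'c', 'a']
```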

The Mallows model [Mallows1957], MAL(σ, φ) with φ ∈ (0, 1], is a special case of RIM. As a popular preference model, it defines a distribution over rankings that is analogous to the Gaussian distribution: the ranking σ is at the center, and rankings closer to σ have higher probabilities. Specifically, the probability of a ranking τ is given by:

(11) Pr(τ) = φ^{d(σ, τ)} / Z

Here, d(σ, τ) is the Kendall-tau distance between σ and τ, that is, the number of preference pairs that appear in the opposite relative order in σ and τ. The expression in the denominator of Equation 11 is the normalization constant, which we will find convenient to denote Z = 1 · (1 + φ) · (1 + φ + φ²) ⋯ (1 + φ + ⋯ + φ^{m−1}). When φ is close to 0, the probability mass is concentrated around the reference ranking σ; when φ = 1, all rankings have the same probability, that is, MAL(σ, 1) is the uniform distribution over rankings.

As was shown in [Doignon2004], MAL(σ, φ) is precisely RIM(σ, Π) when Π(i, j) = φ^{i−j} / (1 + φ + ⋯ + φ^{i−1}). That is, the Mallows model is a special case of RIM, and so RIM can be used as an efficient sampler for Mallows.
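The stated equivalence is easy to check numerically: build Π from φ, multiply the insertion probabilities along the (unique) derivation of a ranking τ, and compare with Equation 11. A sketch under our naming, using 0-based indices:

```python
def mallows_pi(m, phi):
    """Insertion matrix with pi[i][j] = phi**(i - j) / (1 + phi + ... + phi**i),
    0-based, which makes RIM(sigma, pi) coincide with MAL(sigma, phi)."""
    pi = []
    for i in range(m):
        z = sum(phi ** k for k in range(i + 1))
        pi.append([phi ** (i - j) / z for j in range(i + 1)])
    return pi

def kendall_tau(sigma, tau):
    """Number of pairs ordered one way in sigma and the other way in tau."""
    pos = {x: r for r, x in enumerate(tau)}
    n = len(sigma)
    return sum(1 for i in range(n) for j in range(i + 1, n)
               if pos[sigma[i]] > pos[sigma[j]])

def mallows_prob(sigma, tau, phi):
    """Equation 11: phi^d(sigma, tau) over the normalization constant."""
    z = 1.0
    for i in range(1, len(sigma) + 1):
        z *= sum(phi ** k for k in range(i))
    return phi ** kendall_tau(sigma, tau) / z

phi = 0.5
pi = mallows_pi(3, phi)
# tau = [b, c, a] arises from insertions at positions 0, 0, 1 (0-based):
trace = pi[0][0] * pi[1][0] * pi[2][1]
print(abs(trace - mallows_prob(["a", "b", "c"], ["b", "c", "a"], phi)) < 1e-12)
# prints True
```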

5.2 The Repeated Selection Model (RSM)

The Repeated Selection Model (RSM) is a generative model that defines a probability distribution over posets. Intuitively, in this model we iteratively select a random item and randomly choose whether it precedes each of the remaining items. More formally, an instance of this distribution, denoted RSM(σ, Π, p), is parameterized by a reference ranking σ of length m, a selection probability function Π, where Π(i, j) is the probability of selecting the j-th of the remaining items at step i, and a preference probability function p, where p(i) determines the probability that the item selected at step i precedes (is preferred to) each of the remaining items. We view Π as a matrix where each row corresponds to a valid probability distribution (i.e., the values in a row sum up to one) and the rightmost i − 1 entries in row i are zero.

Algorithm 3 presents the RSM sampling procedure. Intuitively, in contrast to RIM (Algorithm 2), which considers candidates one by one in the order of σ and inserts them into the output τ, RSM iteratively selects, and removes, candidates one by one from among the remaining candidates in σ at each step. Which candidate is selected at step i is decided randomly, based on the probability distribution in the i-th row of the selection probability matrix Π (line 4 of Algorithm 3). Furthermore, to generate posets rather than total orders, RSM uses the preference probability function p to decide whether to add a particular preference pair to the output poset R; this decision is made independently for all considered pairs (lines 6-8 of Algorithm 3). The probability that a candidate selected at step i is preferred to each of the remaining candidates is p(i).

1:  Initialize an empty poset R.
2:  for i = 1, …, m do
3:     Select a random position j ∈ {1, …, m − i + 1} with probability Π(i, j)
4:     Select candidate a = σ_i(j)
5:     Remove a from σ_i to obtain σ_{i+1}, which now contains m − i candidates
6:     for each candidate b in σ_{i+1} do
7:        Add the pair a ≻ b to R with probability p(i) (or leave it out with probability 1 − p(i))
8:     end for
9:  end for
10:  return the transitive closure of R
Algorithm 3 RSM(σ, Π, p)
Example 2.

RSM(⟨a, b, c⟩, Π, p) can generate the poset R = {b ≻ a, c ≻ a} as follows:

  • Initialize an empty poset R, with σ_1 = ⟨a, b, c⟩.

  • At step 1, select b with probability Π(1, 2) and remove it from σ_1, setting σ_2 = ⟨a, c⟩. Then, add the pair b ≻ a to R with probability p(1), and do not add the pair b ≻ c to R with probability 1 − p(1).

  • At step 2, select c with probability Π(2, 2) and remove it from σ_2, setting σ_3 = ⟨a⟩. Finally, add the pair c ≻ a to R with probability p(2).

  • Take the transitive closure of R and return R = {b ≻ a, c ≻ a}.

The probability of sampling R in this way is Π(1, 2) · p(1) · (1 − p(1)) · Π(2, 2) · p(2).

Note that the same R can be generated by using a different sequence of steps, thus yielding a different probability. In our example there is one other way to derive R: at step 1, select c instead, add c ≻ a to R with probability p(1) and leave out c ≻ b with probability 1 − p(1); then, at step 2, select b and add b ≻ a to R with probability p(2). This happens with the probability Π(1, 3) · p(1) · (1 − p(1)) · Π(2, 2) · p(2).
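A minimal implementation of Algorithm 3 (our naming; it returns the sampled preference pairs transitively closed) can look as follows. The deterministic parameters below make it always select the leftmost remaining item; keeping every pair then turns the reference ranking into a full chain, while dropping every pair yields the empty poset.

```python
import random

def rsm_sample(sigma, pi, p, rng=random):
    """Sample a poset from RSM(sigma, pi, p).

    sigma: reference ranking (list of m items).
    pi: pi[i][j] = probability of selecting the j-th remaining item at
        step i (0-based; row i uses its first m - i entries).
    p: p[i] = probability that the item selected at step i is preferred
       to each item still remaining.
    Returns the set of preference pairs, transitively closed.
    """
    remaining = list(sigma)
    edges = set()
    for i in range(len(sigma)):
        positions = list(range(len(remaining)))
        j = rng.choices(positions, weights=[pi[i][k] for k in positions])[0]
        a = remaining.pop(j)
        for b in remaining:
            if rng.random() < p[i]:
                edges.add((a, b))
    # transitive closure of the sampled pairs
    changed = True
    while changed:
        changed = False
        for (x, y) in list(edges):
            for (u, v) in list(edges):
                if y == u and (x, v) not in edges:
                    edges.add((x, v))
                    changed = True
    return edges

pi = [[1.0, 0.0, 0.0], [1.0, 0.0], [1.0]]   # always pick the leftmost item
print(rsm_sample(["a", "b", "c"], pi, [1.0, 1.0, 1.0]))
# every pair kept: {('a', 'b'), ('a', 'c'), ('b', 'c')}
print(rsm_sample(["a", "b", "c"], pi, [0.0, 0.0, 0.0]))   # empty poset: set()
```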