Mitigating Manipulation in Peer Review via Randomized Reviewer Assignments

June 29, 2020 · Steven Jecmen, et al. · Carnegie Mellon University and Duke University

We consider three important challenges in conference peer review: (i) reviewers maliciously attempting to get assigned to certain papers to provide positive reviews, possibly as part of quid-pro-quo arrangements with the authors; (ii) "torpedo reviewing," where reviewers deliberately attempt to get assigned to certain papers that they dislike in order to reject them; (iii) reviewer de-anonymization on release of the similarities and the reviewer-assignment code. On the conceptual front, we identify connections between these three problems and present a framework that brings all these challenges under a common umbrella. We then present a (randomized) algorithm for reviewer assignment that can optimally solve the reviewer-assignment problem under any given constraints on the probability of assignment for any reviewer-paper pair. We further consider the problem of restricting the joint probability that certain suspect pairs of reviewers are assigned to certain papers, and show that this problem is NP-hard for arbitrary constraints on these joint probabilities but efficiently solvable for a practical special case. Finally, we experimentally evaluate our algorithms on datasets from past conferences, where we observe that they can limit the chance that any malicious reviewer gets assigned to their desired paper to 50% while producing assignments with over 90% of the optimal similarity. Our algorithms can further achieve this similarity while also preventing reviewers with close associations from being assigned to the same paper.

1 Introduction

Peer review, the evaluation of work by others working in the same field as the producer of the work or with similar competencies, is a critical component of scientific research. It is regarded favorably by a significant majority of researchers and is seen as being essential to both improving the quality of published research and validating the legitimacy of research publications [36, 38, 51]. Due to the wide adoption of peer review in the publication process in academia, the peer-review process can be very high-stakes for authors, and the integrity of the process can significantly influence the careers of the authors (especially due to the prominence of a “rich get richer” effect in academia [35]).

However, there are several challenges that arise in peer review relating to the integrity of the review process. In this work, we address three such challenges for peer review in academic conferences where a number of papers need to be assigned to reviewers at the same time.

(1) Untruthful favorable reviews.

In order to achieve a good reviewer assignment, peer review systems must solicit some information about their reviewers’ knowledge and interests. This inherently presents opportunities for manipulation, since reviewers can lie about their interests and expertise. For example, reviewers often are expected to bid on the papers they are interested in reviewing before an assignment algorithm is run to determine the paper assignment. This system can be manipulated [30]:

“A bidding system is gameable. If you have 3 buddies and you inform each other of your submissions, you can each bid for your friend’s papers and express a disinterest in others. There are reasonable odds that at least two of your friends (out of 3 reviewers) will get your papers, and with 2 adamantly positive reviews, your paper has good odds of acceptance.”

The problem of manipulation is not limited to the bidding system, as practically anything used to determine paper assignments (e.g., self-reported area of expertise, list of papers the reviewer has published) can potentially be manipulated; in more extreme cases, authors have been known to entirely falsify reviewer identities to get a desired reviewer assignment [12, 15]. In some cases, unethical authors may enter into deals with potential reviewers for their paper, where the reviewer agrees to attempt to get assigned to the author’s paper and give it a favorable review in exchange for some outside reward (e.g., as part of a quid-pro-quo arrangement for the reviewer’s own paper in another publication venue). To preserve the integrity of the reviewing process and maintain community trust, the paper assignment algorithm should guarantee the mitigation of these kinds of arrangements.

(2) Torpedo reviewing.

In “torpedo reviewing,” unethical reviewers attempt to get assigned to papers they dislike with the intent of giving them an overly negative review and blocking the paper from publication. This can have wide-reaching consequences [30]: “If a research direction is controversial in the sense that just 2-or-3 out of hundreds of reviewers object to it, those 2 or 3 people can bid for the paper, give it terrible reviews, and prevent publication. Repeated indefinitely, this gives the power to kill off new lines of research to the 2 or 3 most close-minded members of a community, potentially substantially retarding progress for the community as a whole.” One special case of torpedo reviewing has been called “rational cheating,” referring to reviewers negatively reviewing papers that compete with their own authored work [3, 40]. The high-stakes atmosphere of academic publishing can exacerbate this problem [1]: “The cutthroat attitude that pervades the system results in ludicrous rejections for personal reasons—if the reviewer feels that the paper threatens his or her own research or contradicts his or her beliefs, for example.” A paper assignment algorithm should guarantee to authors that their papers are unlikely to have been torpedo-reviewed.

(3) Reviewer de-anonymization in releasing assignment data.

For transparency and research purposes, conferences may wish to release the paper-reviewer similarities and the paper assignment algorithm used after the conference. However, if the assignment algorithm is deterministic, this would allow for authors to fully determine who reviewed their paper, breaking the anonymity of the reviewing process. Even when reviewer and paper names are removed, identities can still be discovered (as in the case of the Netflix Prize dataset [37]). Consequently, a rigorous guarantee of anonymity is needed in order to release the data.


Although these challenges may seem disparate, we address all of them under a common umbrella framework. Our contributions are as follows:

  • Conceptual: We formulate problems concerning the three aforementioned issues in peer review, and propose a framework to address them through the use of randomized paper assignments (Section 3).

  • Theoretical: We design computationally efficient, randomized assignment algorithms that optimally assign reviewers to papers subject to given restrictions on the probability of assigning any particular reviewer-paper pair (Section 4). We further consider the more complex case of preventing suspicious pairs of reviewers from being assigned to the same paper (Section 5). We show that finding the optimal assignment subject to arbitrary constraints on the probabilities of reviewer-reviewer-paper assignments is NP-hard. In the practical special case where the program chairs want to prevent pairs of reviewers within the same subset of some partition of the reviewer set (for example, reviewers at the same academic institution or with the same geographical area of residence) from being assigned to the same paper, we present an algorithm that finds the optimal randomized assignment with this guarantee.

  • Empirical: We test our algorithms on datasets from past conferences and show their practical effectiveness (Section 6). As a representative example, on data reconstructed from ICLR 2018, our algorithms can limit the chance of any reviewer-paper assignment to 50% while achieving over 90% of the optimal total similarity. Our algorithms can continue to achieve this similarity while also preventing reviewers with close associations from being assigned to the same paper.

All of the code for our algorithms and our empirical results is freely available online at https://github.com/theryanl/mitigating_manipulation_via_randomized_reviewer_assignment/.

2 Related Literature

Many paper assignment algorithms for conference peer review have been proposed in past work. The widely-used Toronto Paper Matching System (TPMS) [8] computes a similarity score for each reviewer-paper pair based on analysis of the reviewers’ past work and bids, and then aims to maximize the total similarity of the resulting assignment. The framework of “compute similarities and maximize total similarity” (and similar variants) encompasses many paper assignment algorithms, where similarities can be computed in various ways from automated and manual analysis and reviewer bids [7, 33, 19, 47, 14, 32, 27]. We treat the bidding process and computation of similarities as given, and focus primarily on adjusting the optimization problem to address the three aforementioned challenges. Some work has considered other optimization objectives such as fairness [16, 43]. We also consider a similar fairness objective in a variant of our algorithm. On a related front, there are also a number of recent works [18, 39, 50, 13, 41, 45, 31, 42, 48, 25, 44, 46] which deal with various other aspects of peer review.

Much prior work has studied the issue of preventing or mitigating strategic behavior in peer review. This work usually focuses on the incentives reviewers have to give poor reviews to other papers in the hopes of increasing their own paper’s chances of acceptance [52, 2, 29, 24, 21]. Unlike the issues we deal with in this paper, these works consider only reviewers’ incentives to get their own paper accepted and not other possible incentives. We instead consider arbitrary incentives for a reviewer to give an untruthful review, such as a personal dislike for a research area or misincentives brought about by author-reviewer collusion. Instead of aiming to remove reviewer incentives to write untruthful reviews, our work focuses on mitigating the effectiveness of manipulating the reviewer assignment process.

A concurrent work [10] considers a different set of problems in releasing data in peer review while preserving reviewer anonymity. The data to be released in that setting are some function of the review scores and the reviewer assignment, whereas we look to release the similarities and the assignment code. Moreover, the approach and techniques in [10] are markedly different—they consider post-processing the data for release using techniques such as differential privacy, whereas we consider randomizing the assignment for plausible deniability.

Randomized assignments have been used to address the problem of fair division of indivisible goods such as jobs or courses [22, 5], as well as in the context of Stackelberg security games [28]. The paper [50] uses randomization to address the issue of miscalibration in ratings, such as those given to papers in peer review. To the best of our knowledge, the use of randomized reviewer-paper assignments to address the issues of malicious reviewers or reviewer de-anonymization in peer review has not been studied previously. Work on randomized assignments often references the well-known Birkhoff-von Neumann theorem [4, 49] or a generalization in order to demonstrate how to implement a randomized assignment as a lottery over deterministic assignments. The paper [6] proposes a broad generalization of the Birkhoff-von Neumann theorem that we use in our work.

3 Background and Problem Statements

We first define the standard paper assignment problem, followed by our problem setting. In the standard paper assignment setting, we are given a set of $n$ reviewers and a set of $d$ papers, along with a desired reviewer load $\ell_r$ (that is, the maximum number of papers any reviewer should be assigned) and a desired paper load $\ell_p$ (that is, the exact number of reviewers any paper should be assigned to). (For ease of exposition, we assume that all reviewer and paper loads are equal; in practice, program chairs may want to set different loads for different reviewers or papers, and all of our algorithms and guarantees still hold for this case, as does our code.) An assignment of papers to reviewers is a bipartite matching between the sets that obeys the load constraints on all reviewers and papers. In addition, we are given a similarity matrix $S$, where entry $S_{rp}$ denotes how good of a match reviewer $r$ is for paper $p$. These similarities can be derived from the reviewers’ bids on papers, prior publications, conflicts of interest, etc.

The standard problem of finding a maximum sum-similarity assignment [8, 7, 19, 14, 27] is then written as an integer linear program. The decision variables $x_{rp} \in \{0, 1\}$ specify the assignment, where $x_{rp} = 1$ if and only if reviewer $r$ is assigned to paper $p$. The objective is to maximize $\sum_{r, p} S_{rp} x_{rp}$ subject to the load constraints $\sum_{p} x_{rp} \le \ell_r$ for all reviewers $r$ and $\sum_{r} x_{rp} = \ell_p$ for all papers $p$. Since the constraint matrix of the linear program (LP) relaxation of this problem is totally unimodular, the solution to the LP relaxation will be integral and so this problem can be solved as an LP. This method of assigning papers has been used by various conferences such as NeurIPS, ICML, ICCV, and SIGKDD (among others) [8, 14], as well as by popular conference management systems EasyChair (easychair.org) and HotCRP (hotcrp.com).
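To make this formulation concrete, the following minimal sketch sets up and solves the LP relaxation with SciPy. It is an illustration under our own conventions (a dense NumPy similarity matrix S of shape n × d, integer loads ell_r and ell_p, and the function name max_similarity_assignment are assumptions of this sketch, not part of TPMS or any conference system):

```python
import numpy as np
from scipy.optimize import linprog

def max_similarity_assignment(S, ell_r, ell_p):
    """LP relaxation of the standard maximum sum-similarity assignment.
    S: (n, d) similarity matrix; ell_r: max papers per reviewer;
    ell_p: reviewers required per paper. Total unimodularity of the
    constraint matrix makes the LP optimum integral, so the returned
    matrix is (up to numerical tolerance) a 0/1 assignment."""
    n, d = S.shape
    c = -S.ravel()                         # one variable per (reviewer, paper) pair; linprog minimizes
    A_ub = np.zeros((n, n * d))            # reviewer loads: sum_p x[r, p] <= ell_r
    for r in range(n):
        A_ub[r, r * d:(r + 1) * d] = 1.0
    b_ub = np.full(n, float(ell_r))
    A_eq = np.zeros((d, n * d))            # paper loads: sum_r x[r, p] == ell_p
    for p in range(d):
        A_eq[p, p::d] = 1.0
    b_eq = np.full(d, float(ell_p))
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=(0, 1), method="highs")
    return res.x.reshape(n, d)
```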

Now, suppose there exists a reviewer who wishes to get assigned to a specific paper for some malicious reason and manipulates their similarities in order to do so. When the assignment algorithm is deterministic, as in previous work [8, 7, 19, 14, 27, 47], a malicious reviewer who knows the algorithm may be able to effectively manipulate it in order to get assigned to the desired paper. To address this issue, we aim to provide a guarantee that regardless of the reviewer bids and similarities, this reviewer-paper pair has only a limited probability of being assigned.

Consider now the challenge of preserving anonymity in releasing conference data. If a conference releases its similarity matrix and its deterministic assignment algorithm, then anyone could reconstruct the full paper assignment. Interestingly, this problem can be solved in the same way as the malicious reviewer problems described above. If the assignment algorithm provides a guarantee that each reviewer-paper pair has only a limited probability of being assigned, then no reviewer’s identity can be discovered with certainty.

With this motivation, we now consider the assignment as stochastic and aim to find a randomized assignment: a probability distribution over deterministic assignments. This naturally leads to the following problem formulation.

Definition 1 (Pairwise-Constrained Problem).

The input to the problem is a similarity matrix $S$ and a matrix $Q \in [0, 1]^{n \times d}$. The goal is to find a randomized assignment of papers to reviewers (i.e., a distribution over deterministic assignments $x$) that maximizes $\mathbb{E}\big[\sum_{r, p} S_{rp} x_{rp}\big]$ subject to the constraints $\Pr[x_{rp} = 1] \le Q_{rp}$ for all pairs $(r, p)$.

Since a randomized assignment is a distribution over deterministic assignments, all assignments in the support of the randomized assignment must still obey the load constraints $\sum_{p} x_{rp} \le \ell_r$ and $\sum_{r} x_{rp} = \ell_p$. The optimization objective is the expected sum-similarity across all reviewer-paper pairs, the natural analogue of the deterministic sum-similarity objective. In practice, the matrix $Q$ is provided by the program chairs of the conference; all entries can be set to a constant value if the chairs have no special prior information about any particular reviewer-paper pair.
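As a toy illustration (our own example, not from the paper): suppose a single paper requires $\ell_p = 1$ reviewer, two reviewers have equal similarity to it, and $Q_{rp} = 0.5$ for both pairs. The randomized assignment that picks each reviewer with probability 0.5 satisfies the constraints and attains the same expected sum-similarity as the deterministic optimum, whereas any deterministic assignment would assign some reviewer with probability 1 and violate its constraint.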

To prevent dishonest reviews of papers, program chairs may want to do more than just control the probability of individual paper-reviewer pairs. For example, suppose that we have three reviewers assigned per paper (a very common arrangement in computer science conferences). We might not be particularly concerned about preventing any single reviewer from being assigned to this paper, since even if that reviewer dishonestly reviews the paper, there are likely two other honest reviewers who can overrule the dishonest one. However, it would be much worse if we have two reviewers dishonestly reviewing the same paper, since they could likely overrule the sole honest reviewer.

A second issue is that there may be dependencies within certain pairs of reviewers that cannot be accurately represented by constraints on individual reviewer-paper pairs. For example, we may have two reviewers $r_1$ and $r_2$ who are close collaborators, neither of whom we are individually very concerned about assigning to some paper $p$. However, we may believe that in the case where reviewer $r_1$ has entered into a quid-pro-quo deal to dishonestly review paper $p$, reviewer $r_2$ is likely to also be involved in the same deal. Therefore, one may want to strictly limit the probability that both reviewers $r_1$ and $r_2$ are assigned to paper $p$, regardless of the limits on the probability that either reviewer individually is assigned to paper $p$.

With this motivation, we define the following generalization of the Pairwise-Constrained Problem.

Definition 2 (Triplet-Constrained Problem).

The input to the problem is a similarity matrix $S$, a matrix $Q \in [0, 1]^{n \times d}$, and a 3-dimensional tensor $T \in [0, 1]^{n \times n \times d}$. The goal is to find a randomized assignment of papers to reviewers that maximizes $\mathbb{E}\big[\sum_{r, p} S_{rp} x_{rp}\big]$ subject to the constraints $\Pr[x_{rp} = 1] \le Q_{rp}$ for all pairs $(r, p)$ and $\Pr[x_{r_1 p} = 1 \text{ and } x_{r_2 p} = 1] \le T_{r_1 r_2 p}$ for all triples $(r_1, r_2, p)$.

The randomized assignments that solve these problems can be used to address all three challenges we identified earlier:

  • Untruthful favorable reviews: By guaranteeing a limit on the probability that any malicious reviewer or any malicious pair of reviewers can be assigned to the paper they want, we mitigate the effectiveness of any unethical deals between reviewers and authors by capping the probability that such a deal can be upheld. The entries of $Q$ can be set by the program chairs based on their assessment of the risk of allowing the corresponding reviewer-paper pair; for example, an entry can be set low if the reviewer and author have collaborated in the past. The entries of $T$ can be set similarly based on known associations between reviewers.

  • Torpedo reviewing: By limiting the probability that any reviewer or pair of reviewers can be assigned to a paper they wish to torpedo, we make it much more difficult for a small group of reviewers to shut down a new research direction or to take out competing papers.

  • Reviewer de-anonymization in releasing assignment data: To allow for the release of similarities and the assignment algorithm after a conference, all of the entries in $Q$ can simply be set to some reasonable constant value. Even if reviewer and paper names are fully identified through analysis of the similarities, only the distribution over assignments can be recovered and not the specific assignment that was actually used. This guarantees that for each paper, no reviewer’s identity can be identified with high confidence, since every reviewer has only a limited chance to be assigned to that paper.

In Sections 4 and 5, we consider the Pairwise-Constrained Problem and Triplet-Constrained Problem respectively. We also consider several related problems in the appendices.

  • We extend our results to an objective based on fairness, which we call the stochastic fairness objective, in Appendix A. Following the max-min fairness concept, we aim to maximize the minimum expected similarity assigned to any paper under the randomized assignment: $\min_{p} \mathbb{E}\big[\sum_{r} S_{rp} x_{rp}\big]$. We present a version of the Pairwise-Constrained Problem using this objective and an algorithm to solve it, as well as experimental results.

  • We address an alternate version of the Pairwise-Constrained Problem in Appendix B which uses the probabilities with which any reviewer may intend to untruthfully review any paper, along with other problems using these probabilities.

4 Randomized Assignment with Reviewer-Paper Constraints

In this section we present our main algorithm to solve the Pairwise-Constrained Problem (Definition 1), thereby addressing the challenges identified earlier. Before delving into the details of the algorithm, the following theorem states our main result.

Theorem 1.

There exists an algorithm which returns an optimal solution to the Pairwise-Constrained Problem in poly($n$, $d$) time.

We describe the algorithm, thereby proving this result, in the next two subsections. The algorithm has two parts. In the first part, we find an optimal “fractional assignment matrix,” which gives the marginal probabilities of individual reviewer-paper assignments. The second part of the algorithm then samples an assignment, respecting the marginal probabilities specified by this fractional assignment.

4.1 Finding the Fractional Assignment

Define a fractional assignment matrix as a matrix $F \in [0, 1]^{n \times d}$ that obeys the load constraints $\sum_{p} F_{rp} \le \ell_r$ for all reviewers $r$ and $\sum_{r} F_{rp} = \ell_p$ for all papers $p$. Note that any deterministic assignment can be represented by a fractional assignment matrix with all entries in {0, 1}. Any randomized assignment is associated with a fractional assignment matrix where $F_{rp}$ is the marginal probability that reviewer $r$ is assigned to paper $p$. Furthermore, randomized assignments associated with the same fractional assignment matrix have the same expected sum-similarity. The paper [6] proves an extension of the Birkhoff-von Neumann theorem which shows that all fractional assignment matrices are implementable, i.e., they are associated with at least one randomized assignment. On the other hand, any probability matrix not obeying the load constraints cannot be implemented by a lottery over deterministic assignments, since all deterministic assignments do obey the constraints. Therefore, finding the optimal randomized assignment is equivalent to solving the following LP, which we call LP1:

maximize     $\sum_{r, p} S_{rp} F_{rp}$                                                  (1)
subject to   $\sum_{p} F_{rp} \le \ell_r$   for all reviewers $r$                          (2)
             $\sum_{r} F_{rp} = \ell_p$   for all papers $p$                               (3)
             $F_{rp} \le Q_{rp}$   for all pairs $(r, p)$                                  (4)
             $0 \le F_{rp} \le 1$   for all pairs $(r, p)$                                 (5)

LP1 has $nd$ variables and $O(nd)$ constraints. Using techniques from [23], it can be solved in time polynomial in $n$ and $d$.
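The following sketch extends the earlier max_similarity_assignment illustration to LP1 by turning the caps $Q_{rp}$ into per-variable upper bounds. The function and variable names are our own assumptions for illustration; this is a sketch of the formulation above, not the authors' released code:

```python
import numpy as np
from scipy.optimize import linprog

def pairwise_constrained_lp(S, Q, ell_r, ell_p):
    """LP1: maximize the expected sum-similarity subject to the load
    constraints and the marginal caps F[r, p] <= Q[r, p].
    Returns the optimal fractional assignment matrix F."""
    n, d = S.shape
    c = -S.ravel()
    A_ub = np.zeros((n, n * d))            # reviewer loads (Constraint (2))
    for r in range(n):
        A_ub[r, r * d:(r + 1) * d] = 1.0
    b_ub = np.full(n, float(ell_r))
    A_eq = np.zeros((d, n * d))            # paper loads (Constraint (3))
    for p in range(d):
        A_eq[p, p::d] = 1.0
    b_eq = np.full(d, float(ell_p))
    # Per-variable bounds enforce 0 <= F[r, p] <= min(Q[r, p], 1) (Constraints (4)-(5)).
    bounds = [(0.0, min(float(q), 1.0)) for q in Q.ravel()]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=bounds, method="highs")
    return res.x.reshape(n, d)
```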

4.2 Implementing the Probabilities

LP1 only finds the optimal marginal assignment probabilities (where $F$ now refers to a solution to LP1). It remains to show whether and how these marginal probabilities can be implemented as a randomization over deterministic paper assignments. The paper [6] provides a method for sampling a deterministic assignment from a fractional assignment matrix, which completes our algorithm once applied to the optimal solution of LP1. Here we propose a simpler version of the sampling algorithm. Pseudocode for the algorithm is presented as Algorithm 1; we describe the algorithm in detail below. In Appendix C, we present a supplementary algorithm to compute the full distribution over deterministic assignments, which [6] does not. Knowing the full distribution may be useful in order to compute other properties of the randomized assignment not calculable from $F$ directly.

Input: Fractional assignment matrix $F$, reviewer set $\mathcal{R}$, paper set $\mathcal{P}$
Output: Deterministic assignment matrix $X$
Algorithm:

1:Construct vertex set $V = \{s, t\} \cup \mathcal{R} \cup \mathcal{P}$
2:Construct directed edge set $E = \{(s, r) : r \in \mathcal{R}\} \cup \{(r, p) : r \in \mathcal{R}, p \in \mathcal{P}\} \cup \{(p, t) : p \in \mathcal{P}\}$
3:Construct capacity function $c$ as $c(s, r) = \ell_r$, $c(r, p) = 1$, $c(p, t) = \ell_p$
4:Construct initial flow function $f$ as $f(r, p) = F_{rp}$, with the remaining flows set by flow conservation
5:while $\exists e \in E$ such that $f(e)$ is non-integral do
6:     Find a cycle of edges $C$ (ignoring direction) such that $f(e)$ is non-integral for all $e \in C$
7:     Fix a traversal direction for $C$; let $C_F$ be the edges of $C$ traversed forwards and $C_B$ those traversed backwards
8:     $f^+ \leftarrow f$; $f^- \leftarrow f$
9:     $\delta^+ \leftarrow \min\big(\min_{e \in C_F} (c(e) - f(e)),\ \min_{e \in C_B} f(e)\big)$
10:     for $e \in C_F$ do
11:         $f^+(e) \leftarrow f(e) + \delta^+$
12:     end for
13:     for $e \in C_B$ do
14:         $f^+(e) \leftarrow f(e) - \delta^+$
15:     end for
16:     $\delta^- \leftarrow \min\big(\min_{e \in C_F} f(e),\ \min_{e \in C_B} (c(e) - f(e))\big)$
17:     for $e \in C_F$ do
18:         $f^-(e) \leftarrow f(e) - \delta^-$
19:     end for
20:     for $e \in C_B$ do
21:         $f^-(e) \leftarrow f(e) + \delta^-$
22:     end for
23:     $q \leftarrow \delta^- / (\delta^+ + \delta^-)$
24:     With probability $q$, $f \leftarrow f^+$; else $f \leftarrow f^-$
25:end while
26:Set $X_{rp} \leftarrow f(r, p)$ for all $r \in \mathcal{R}$, $p \in \mathcal{P}$, and return $X$
Algorithm 1 Sampling algorithm for the Pairwise-Constrained Problem.

We begin by constructing a directed graph for our problem, along with a capacity function (Lines 1-3). First, construct one vertex for each reviewer, one vertex for each paper, and source and destination vertices $s$ and $t$. Add an edge from the source vertex to each reviewer’s vertex with capacity $\ell_r$. Add an edge from each paper’s vertex to the destination vertex with capacity $\ell_p$. Finally, add an edge from each reviewer to each paper with capacity 1. We also construct a flow function $f$, which obeys the flow conservation constraints and the capacity constraints (Line 4). A (possibly fractional) assignment can be represented as a flow on this graph, where the flow from reviewer $r$ to paper $p$ corresponds to the probability that reviewer $r$ is assigned to paper $p$ and the other flows are set uniquely by flow conservation. Due to the load constraints on assignments, the flows on the edges from the papers to the destination must be equal to those edges’ capacities and the flows on the edges from the source to the reviewers must be less than or equal to the capacities.

The algorithm then proceeds in an iterative manner, modifying the flow function on each iteration. On each iteration, we first check if there exists a “fractional edge,” an edge with non-integral flow. If no such edge exists, our current assignment is integral and so we can stop iterating. If there does exist a fractional edge, we then find an arbitrary cycle of fractional edges, ignoring direction (Line 6); this can be done by starting at any fractional edge and walking along fractional edges until a previously-visited vertex is returned to. On finding a cycle, we randomly modify the flow on each of the edges in the cycle in order to guarantee that at least one of the flows becomes integral. In what follows, we first prove that such a cycle of fractional edges can always be found. We then show how to modify the flows in order to guarantee the implementation of the marginal assignment probabilities.

We now show that a directionless cycle of fractional edges must exist whenever one fractional edge exists. Initially, by the properties of $F$, the total flow on each edge going into vertex $t$ is integral (equal to $\ell_p$); further, the algorithm only ever changes the flow on edges with non-integral flow. Therefore, the total flow going into $t$ is always integral. By flow conservation, the total flow leaving $s$ is also always integral. So, if there is a fractional edge adjacent to $s$, there must also be another fractional edge adjacent to $s$. As already stated, there are no fractional edges adjacent to $t$. Finally, for each reviewer or paper vertex $v$, by flow conservation, there can never be only one fractional edge adjacent to $v$. Therefore, every vertex that is adjacent to a fractional edge must also be adjacent to another fractional edge. This proves that a directionless cycle of fractional edges must exist if one fractional edge exists.

We now show how to modify the flow on the edges in this cycle. We can keep pushing flow in some direction on this cycle (pushing negative flow if the edge is directed backwards) until some edge is at capacity or has zero flow. Call this amount of additional flow $\delta^+$, and the resulting flow $f^+$. We can do the same thing in the other direction on the cycle, calling the additional flow $\delta^-$ and the resulting flow $f^-$. Both $f^+$ and $f^-$ must have at least one more integral edge than $f$, since some edge is at capacity or at zero flow. Further, both $f^+$ and $f^-$ obey the flow conservation and capacity constraints. Defining $q = \delta^- / (\delta^+ + \delta^-)$, we set $f \leftarrow f^+$ with probability $q$ and $f \leftarrow f^-$ with probability $1 - q$ (Lines 23-24).

Once all edges are integral (after the final iteration), we construct the sampled deterministic assignment $X$ from the flow on the reviewer-paper edges (Line 26). Since $f$ obeys the capacity constraints on all edges, $X$ obeys the load constraints and so is in fact an assignment. Since on each iteration the flow update has zero mean (i.e., the expected new flow on each edge equals its current flow), the expected final flow on each edge is always equal to the current flow on that edge. Since the expectation of a Bernoulli random variable is exactly the probability that it equals one, each final reviewer-paper assignment $X_{rp}$ has been chosen with the desired marginal probability $F_{rp}$.

Each iteration of this algorithm takes $O(n + d)$ time to find a cycle among the $O(n + d)$ vertices (if a list of fractional edges adjacent to each vertex is maintained), and it can take $O(nd)$ iterations to terminate since at least one edge becomes integral in every iteration. Therefore, the sampling algorithm runs in poly($n$, $d$) time overall.
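For concreteness, the sketch below implements the sampling procedure described above in Python: it maintains the flows on the source-reviewer and reviewer-paper edges, repeatedly finds an undirected cycle of fractional edges, and pushes a randomly-signed amount of flow around it so that every edge's expected flow is unchanged. The paper-to-sink edges are omitted since they always carry integral flow, and all names are our own; this is an illustrative sketch of the technique, not the authors' reference implementation:

```python
import random
import numpy as np

def sample_assignment(F, ell_r, eps=1e-9):
    """Sample a 0/1 assignment whose marginal probabilities match the
    fractional assignment matrix F (n reviewers x d papers), assuming F
    obeys the load constraints with integer reviewer load ell_r."""
    n, d = F.shape
    flow, cap = {}, {}
    for i in range(n):
        flow['s', ('r', i)] = float(F[i].sum());  cap['s', ('r', i)] = float(ell_r)
        for j in range(d):
            flow[('r', i), ('p', j)] = float(F[i, j]);  cap[('r', i), ('p', j)] = 1.0

    def is_frac(e):
        return abs(flow[e] - round(flow[e])) > eps

    def frac_edges_at(v):
        return [e for e in flow if v in e and is_frac(e)]

    while True:
        frac = [e for e in flow if is_frac(e)]
        if not frac:
            break
        # Walk along fractional edges (ignoring direction) until a vertex repeats.
        verts, edges = [frac[0][0], frac[0][1]], [frac[0]]
        while verts[-1] not in verts[:-1]:
            v = verts[-1]
            e = next(x for x in frac_edges_at(v) if x != edges[-1])
            verts.append(e[0] if e[1] == v else e[1])
            edges.append(e)
        k = verts.index(verts[-1])
        cyc_edges, cyc_verts = edges[k:], verts[k:]
        # +1 if the edge is traversed in its (tail -> head) direction, else -1.
        sign = [1 if (cyc_verts[i], cyc_verts[i + 1]) == e else -1
                for i, e in enumerate(cyc_edges)]
        # Largest pushes in each direction before some edge hits 0 or its capacity.
        push_fwd = min(cap[e] - flow[e] if s > 0 else flow[e]
                       for e, s in zip(cyc_edges, sign))
        push_bwd = min(flow[e] if s > 0 else cap[e] - flow[e]
                       for e, s in zip(cyc_edges, sign))
        # Zero-mean random choice keeps every edge's expected flow unchanged.
        delta = push_fwd if random.random() < push_bwd / (push_fwd + push_bwd) else -push_bwd
        for e, s in zip(cyc_edges, sign):
            flow[e] += s * delta

    X = np.zeros((n, d), dtype=int)
    for i in range(n):
        for j in range(d):
            X[i, j] = int(round(flow[('r', i), ('p', j)]))
    return X
```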

The time complexity of our full algorithm, including both LP1 and the sampling algorithm, is dominated by the complexity of solving the LP. Since standard paper assignment algorithms such as TPMS can be implemented by solving an LP of the same size, our algorithm is comparable in complexity. If a conference currently solves an LP to find its assignment, whatever LP solver it already uses could be used in our algorithm as well.

5 Randomized Assignment with Constraints on Pairs of Reviewers

We now turn to the problem of controlling the probabilities that certain pairs of reviewers are assigned to the same paper, defined in Section 3 as the Triplet-Constrained Problem (Definition 2). In the following subsections, we first show that the problem of finding an optimal randomized assignment given arbitrary constraints on the maximum probabilities of each reviewer-reviewer-paper grouping is NP-hard. We then show that, for the practical special case of restrictions on reviewers from the same subset of a partition of the reviewer set (such as the same primary academic institution or geographical area of residence), an optimal randomized assignment can be found efficiently.

5.1 NP-Hardness of Arbitrary Constraints

As described in Section 3, solving the Triplet-Constrained Problem would allow the program chairs of a conference maximum flexibility in how they control the probabilities of the assignments of pairs of reviewers. Unfortunately, as the following theorem shows, this problem cannot be efficiently solved.

Theorem 2.

The Triplet-Constrained Problem is NP-hard, by reduction from 3-Dimensional Matching.

3-Dimensional Matching is an NP-complete decision problem that takes as input three disjoint sets, each of size $m$, as well as a collection of triples containing one element from each set; the goal is to find a choice of $m$ triples out of the collection such that no element of any set is repeated [26]. Our reduction maps two of the sets to reviewers and the third to papers, and constructs the constraint tensor $T$ to allow only the assignments whose corresponding triples are allowable in the 3-Dimensional Matching instance. The full proof is stated in Appendix D.

Theorem 2 implies a more fundamental result about the feasible region of implementable reviewer-reviewer-paper probability tensors, that is, the tensors where entry $(r_1, r_2, p)$ represents the marginal probability that both reviewers $r_1$ and $r_2$ are assigned to paper $p$ under some randomized assignment. We can represent any deterministic assignment by a 3-dimensional tensor with entry $(r_1, r_2, p)$ equal to 1 if and only if both reviewers $r_1$ and $r_2$ are assigned to paper $p$. Just as in the earlier case of fractional assignment matrices, the set of implementable probability tensors is a polytope with deterministic assignment tensors at the vertices (since any implementable probability tensor is a convex combination of deterministic assignment tensors). For fractional reviewer-paper assignment matrices, this polytope was defined by a small number ($O(nd)$) of linear inequalities, despite the fact that it has a large number of vertices (factorial in $n$ and $d$). However, this is no longer the case for reviewer-reviewer-paper probabilities.

Corollary 1.

The polytope of implementable reviewer-reviewer-paper probabilities is not expressible in a polynomial (in $n$ and $d$) number of linear inequality constraints (assuming P ≠ NP).

The proof of this result is also stated in Appendix D.

5.2 Constraints on Disjoint Reviewer Sets

Since the most general problem of arbitrary constraints on reviewer-reviewer-paper triples is NP-hard, we must restrict ourselves to tractable special cases of interest. One such special case arises when the program chairs of a conference can partition the reviewers in such a way that they wish to prevent any two reviewers within the same subset from being assigned to the same paper. For example, reviewers can be partitioned by their primary academic institution. Since reviewers at the same institution are likely closely associated, program chairs may believe that placing them together as co-reviewers is more risky than would be implied by our concern about either reviewer individually. In this case, there may not even be any concern about the reviewers’ motivations; the concern may simply be that the reviewers’ opinions would not be sufficiently independent. Other partitions of interest could be the reviewers’ geographical area of residence or research sub-field, as each of these defines a “community” of reviewers that may be more closely associated. This special case corresponds to instances of the Triplet-Constrained Problem where $T_{r_1 r_2 p} = 0$ for all papers $p$ if reviewers $r_1$ and $r_2$ are in the same subset, and $T_{r_1 r_2 p} = 1$ otherwise.

We formally define this problem as follows:

Definition 3 (Partition-Constrained Problem).

The input to the problem is a similarity matrix $S$, a matrix $Q$, and a partition of the reviewer set into subsets $\mathcal{R}_1, \dots, \mathcal{R}_m$. The goal is to find a randomized assignment of papers to reviewers that maximizes $\mathbb{E}\big[\sum_{r, p} S_{rp} x_{rp}\big]$ subject to the constraints that $\Pr[x_{rp} = 1] \le Q_{rp}$ for all pairs $(r, p)$, and that no two reviewers from the same subset are ever assigned to the same paper.

For this special case of the Triplet-Constrained Problem, we show that the problem is efficiently solvable, as stated in the following theorem.

Theorem 3.

There exists an algorithm which returns an optimal solution to the Partition-Constrained Problem in poly(n, d) time.

We present the algorithm that realizes this result in the following subsections, thus proving the theorem. The algorithm has two parts: it first finds a fractional assignment matrix $F$ meeting certain requirements, and then samples an assignment while respecting the marginal assignment probabilities given by $F$ and additionally never assigning two reviewers from the same subset to the same paper. For ease of exposition, we first present the sampling algorithm, and then present an LP which finds the optimal fractional assignment matrix meeting the necessary requirements.

5.2.1 Partition-Constrained Sampling Algorithm

The sampling algorithm we present in this section takes as input a fractional assignment matrix $F$ and samples an assignment while respecting the marginal assignment probabilities given by $F$. The sampling algorithm is based on the following lemma:

Lemma 1.

Consider any fractional assignment matrix $F$ and any partition of the reviewer set into subsets $\mathcal{R}_1, \dots, \mathcal{R}_m$.

  (i) There exists a sampling algorithm that implements the marginal assignment probabilities given by $F$ and runs in poly($n$, $d$) time such that, for all papers $p$ and subsets $\mathcal{R}_k$ where $\sum_{r \in \mathcal{R}_k} F_{rp} \le 1$, the algorithm never samples an assignment assigning two reviewers from subset $\mathcal{R}_k$ to paper $p$.

  (ii) For any sampling algorithm that implements the marginal assignment probabilities given by $F$, for all papers $p$ and subsets $\mathcal{R}_k$ where $\sum_{r \in \mathcal{R}_k} F_{rp} > 1$, the expected number of pairs of reviewers from subset $\mathcal{R}_k$ assigned to paper $p$ is strictly positive.

The sampling algorithm which realizes Lemma 1 has an additional helpful property, which holds simultaneously for all papers and subsets. We state the property in the following corollary and make use of it later:

Corollary 2.

For any fractional assignment matrix $F$, the sampling algorithm that realizes Lemma 1 minimizes the expected number of pairs of reviewers from subset $\mathcal{R}_k$ assigned to paper $p$, simultaneously for all papers $p$ and subsets $\mathcal{R}_k$, among all sampling algorithms implementing the marginal assignment probabilities given by $F$.

We present the sampling algorithm that realizes these results here, and prove the guarantees stated in Lemma 1 and Corollary 2 in Appendix E. This algorithm is a modification of the sampling algorithm from Theorem 1 presented earlier as Algorithm 1.

We first provide some high-level intuition about the modifications to Algorithm 1. For any fractional assignment matrix $F$, for any subset $\mathcal{R}_k$ and paper $p$, the expected number of reviewers from subset $\mathcal{R}_k$ assigned to paper $p$ is $\sum_{r \in \mathcal{R}_k} F_{rp}$. This is equal to the initial load from subset $\mathcal{R}_k$ on paper $p$ in Algorithm 1 (that is, the sum of the flow on all edges from reviewers in subset $\mathcal{R}_k$ to paper $p$). Note that at Algorithm 1’s conclusion, when all edges are integral, the load from subset $\mathcal{R}_k$ on paper $p$ is equal to the number of reviewers from subset $\mathcal{R}_k$ assigned to paper $p$. Therefore, if the fractional assignment is such that the initial expected number of reviewers from subset $\mathcal{R}_k$ assigned to paper $p$ is no greater than 1 (as stated in part (i) of Lemma 1), we want to keep the load from subset $\mathcal{R}_k$ on paper $p$ close to its initial value so that the final number of reviewers from subset $\mathcal{R}_k$ assigned to paper $p$ is also no greater than 1. With this reasoning, we modify Algorithm 1 so that in each iteration, it ensures that the total load on each paper from each subset is unchanged if originally integral and is never moved past the closest integer in either direction if originally fractional.

1:Construct the set of undirected edges
2:Construct the undirected flow function as
3:Find arbitrary edge such that
4:
5:,
6:while  has not previously been visited do
7:     Visit
8:     if  and  then
9:         Set such that
10:         if  such that and  then
11:              Find such a
12:         else
13:              For some such that , find such that and
14:               (corresponding to )
15:               (corresponding to )
16:         end if
17:     else
18:         Find such that and
19:     end if
20:     
21:     
22:     
23:end while
24:Set as the first edge in leaving
25:Set as the last edge in (entering )
26:Remove edges preceding from , and remove the corresponding elements from and
27:if  and such that and  then
28:     Remove the elements corresponding to and from and
29:end if
30:if  then
31:     Swap and
32:end if
33:Replace each edge in from with the corresponding edge from
Algorithm 2 Loop-finding subroutine (replacing Line 6 in Algorithm 1).

The algorithm realizing Lemma 1 and Corollary 2 is obtained by changing three lines in Algorithm 1, as follows:

  • Line 6 is replaced with the subroutine in Algorithm 2.

  • Line 9 is changed so that the push amount $\delta^+$ is additionally capped by the amount of flow that can be pushed before the total load of any tracked subset-paper pair reaches the nearest integer (as explained below).

  • Line 16 is changed analogously for the push amount $\delta^-$ in the other direction.

The primary modification we make to Algorithm 1 is replacing Line 6 with the subroutine in Algorithm 2. In each iteration, when we look for an undirected cycle of fractional edges in the graph, we now choose the cycle carefully rather than arbitrarily. We find a cycle by starting from an arbitrary fractional edge in the graph and walking along adjacent fractional edges (ignoring direction) until we repeat a previously-visited vertex. As we do this, whenever we take a fractional edge from a reviewer in some subset $\mathcal{R}_k$ into a paper $p$, there are two cases.

  • Case 1: If there exists a different fractional edge from paper $p$ back to some reviewer in subset $\mathcal{R}_k$ (Line 8 in Algorithm 2), we take this edge next. Note that if the total load from subset $\mathcal{R}_k$ on paper $p$ is integral, such an edge must exist.

  • Case 2: Otherwise (Line 12 in Algorithm 2), we must take a fractional edge from paper $p$ to some other subset $\mathcal{R}_{k'}$. In this case, the total load from subset $\mathcal{R}_k$ on paper $p$ must not be integral. We choose the subset $\mathcal{R}_{k'}$ so that the total load from subset $\mathcal{R}_{k'}$ on paper $p$ is also not integral. Such a subset must exist since the total load on paper $p$ is always integral. We keep track of both the total load from $\mathcal{R}_k$ and from $\mathcal{R}_{k'}$ on $p$, for every occurrence of this case along the cycle (Lines 14 and 15 in Algorithm 2).

In Case 1, no matter how much flow is pushed on the cycle, the total load from subset $\mathcal{R}_k$ on paper $p$ will be preserved exactly. However, due to Case 2, we must modify the choice of how much flow to push on the cycle to ensure that the loads are preserved as desired. Specifically, we only push flow in a given direction on the cycle until the total load for either subset $\mathcal{R}_k$ or $\mathcal{R}_{k'}$ on paper $p$ is integral, for any such pair found in Case 2. The total loads from each subset on each paper found in Case 2 are saved in one of two sets depending on the direction of the corresponding edges in the cycle, and each subset-paper pair with an edge corresponding to an element of either set has only that one edge in the cycle. If the total (fractional) load from subset $\mathcal{R}_k$ on paper $p$ is $x$, then only $\lceil x \rceil - x$ additional flow can be added to any edge from subset $\mathcal{R}_k$ to paper $p$ before the load becomes integral; similarly, only $x - \lfloor x \rfloor$ flow can be removed from any such edge before the load becomes integral. This leads to the stated changes to Lines 9 and 16 in Algorithm 1.

Therefore, on each iteration, we push flow until either the flow on some edge is integral (as in the original algorithm), or until the total load on some paper from some subset is integral. This implies that the algorithm still terminates in a finite number of iterations. In addition, by the end of the algorithm, the total load on each paper from each subset is preserved exactly if originally integral and rounded in either direction if originally fractional, as desired.

The time complexity of this modified algorithm is identical to that of the original algorithm from Theorem 1, since finding a cycle takes the same amount of time (if a fractional adjacency list for each subset is used) and only a bounded number of extra iterations are performed (when a subset’s total load becomes integral rather than an edge’s flow). Therefore, the modified algorithm still runs in poly($n$, $d$) time overall.

5.2.2 Finding the Optimal Partition-Constrained Fractional Assignment

Lemma 1 provides necessary and sufficient conditions on the fractional assignment matrices for which it is possible to prevent all pairs of same-subset reviewers from being assigned to the same paper. Therefore, to find an optimal fractional assignment with this property, we just need to add the corresponding subset-load constraints to LP1. We call this new LP LP2:

maximize     $\sum_{r, p} S_{rp} F_{rp}$                                                                     (6)
subject to   Constraints (2)–(5) from LP1 and
             $\sum_{r \in \mathcal{R}_k} F_{rp} \le 1$   for all papers $p$ and subsets $\mathcal{R}_k$       (7)

The solution to LP2, when paired with the sampling algorithm from Section 5.2.1, never assigns two reviewers from the same subset to the same paper. Furthermore, since any fractional assignment not obeying Constraint (7) will have a strictly positive probability of assigning two reviewers from the same subset to the same paper, LP2 finds the optimal fractional assignment with this guarantee. This completes the algorithm for the Partition-Constrained Problem.
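A sketch of LP2, building on the hypothetical pairwise_constrained_lp illustration from Section 4.1, simply adds one inequality per (subset, paper) pair. The subset_cap argument corresponds to the constant on the right-hand side of Constraint (7); setting it to 1 prevents same-subset pairs entirely, and raising it realizes the loosening discussed next. As before, the names are our own and this is not the authors' code:

```python
import numpy as np
from scipy.optimize import linprog

def partition_constrained_lp(S, Q, ell_r, ell_p, subsets, subset_cap=1.0):
    """LP2: LP1 plus, for every subset and paper, a bound on the expected
    number of reviewers assigned to that paper from that subset.
    `subsets` is a list of lists of reviewer indices partitioning range(n)."""
    n, d = S.shape
    c = -S.ravel()
    rows_ub, b_ub = [], []
    for r in range(n):                              # reviewer loads
        row = np.zeros(n * d); row[r * d:(r + 1) * d] = 1.0
        rows_ub.append(row); b_ub.append(float(ell_r))
    for subset in subsets:                          # Constraint (7): subset load per paper
        for p in range(d):
            row = np.zeros(n * d)
            for r in subset:
                row[r * d + p] = 1.0
            rows_ub.append(row); b_ub.append(float(subset_cap))
    A_eq = np.zeros((d, n * d))                     # paper loads
    for p in range(d):
        A_eq[p, p::d] = 1.0
    b_eq = np.full(d, float(ell_p))
    res = linprog(c, A_ub=np.array(rows_ub), b_ub=np.array(b_ub),
                  A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0.0, min(float(q), 1.0)) for q in Q.ravel()],
                  method="highs")
    return res.x.reshape(n, d)
```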

Additionally, Corollary 2 shows that the sampling algorithm from Section 5.2.1 is optimal in the expected number of same-subset reviewer pairs, for any fractional assignment. If the guarantee of entirely preventing same-subset reviewer pairs is not strictly required, Constraint (7) in LP2 can be loosened (constraining the subset loads to a higher value) without removing it entirely. For the resulting fractional assignment $F$, the sampling algorithm from Section 5.2.1 still minimizes the expected number of pairs of reviewers from any subset on any paper, as compared to any other sampling algorithm implementing $F$. Since the subset loads are still constrained, the expected number of same-subset reviewer pairs will be lower than in the solution to the Pairwise-Constrained Problem (at the cost of some expected sum-similarity). We examine this tradeoff experimentally in Section 6.

6 Experiments

We test our algorithms on several real-world datasets. The first is a similarity matrix recreated from ICLR 2018 data in [52]. We also run experiments on similarity matrices created from reviewer bid data for three AI conferences from PrefLib dataset MD-00002 [34], which we refer to as PrefLib1, PrefLib2, and PrefLib3. For all three PrefLib datasets, we transformed "yes," "maybe," and "no response" bids into correspondingly decreasing similarity values, as is often done in practice [42]. As done in [52], we set standard reviewer and paper loads for all datasets, since these are common loads for computer science conferences (adjusting the loads on the PrefLib2 dataset for feasibility). All results are averaged over repeated trials, with error bars representing the standard error of the mean, although the bars are sometimes not visible since the variance is very low.

We first study our algorithm for the Pairwise-Constrained Problem, as described in Section 4. In this setting, program chairs must make a tradeoff between the quality of the output assignments and guarding against malicious reviewers or reviewer de-anonymization by setting the values of the maximum-probability matrix . We investigate this tradeoff on real datasets.

In Figure 1(a), we set all entries of the maximum-probability matrix $Q$ equal to the same constant value $q$ (varied on the x-axis), and observe how the sum-similarity value of the assignment computed via our algorithm from Section 4 changes as $q$ increases. We report the sum-similarity as a percentage of the unconstrained optimal solution's objective. This unconstrained optimal solution maximizes sum-similarity through a deterministic assignment as is popularly done today [7, 33, 19, 47, 14, 32, 27], and does not address the aforementioned challenges. We see that our algorithm trades off the maximum probability of an assignment gracefully against the sum-similarity on all datasets. For instance, with $q = 0.5$, our algorithm achieves over 90% of the optimal objective value on the ICLR dataset. In practice, this would allow the program chairs of a conference to limit the chance that any malicious reviewer is assigned to their desired paper to 50% without suffering a significant loss of assignment quality. When $q$ is too small, a feasible assignment may not exist in some datasets (e.g., for PrefLib2).
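A sweep of this kind could be scripted as below, reusing the hypothetical max_similarity_assignment and pairwise_constrained_lp sketches from Sections 3 and 4.1 (S, ell_r, and ell_p are assumed to be already loaded; this is an illustration of the experimental setup, not the authors' scripts):

```python
import numpy as np

# Sweep a constant cap q applied to every reviewer-paper pair and report the
# expected sum-similarity as a percentage of the unconstrained optimum.
x_opt = max_similarity_assignment(S, ell_r, ell_p)
opt_value = (S * x_opt).sum()
for q in np.linspace(0.1, 1.0, 10):
    # For very small q the LP can be infeasible, as noted above.
    F = pairwise_constrained_lp(S, np.full(S.shape, q), ell_r, ell_p)
    print(f"q = {q:.1f}: {100.0 * (S * F).sum() / opt_value:.1f}% of optimal similarity")
```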

We next test our algorithm for the Partition-Constrained Problem discussed in Section 5.2. In this algorithm, program chairs can navigate an additional tradeoff between the number of same-subset reviewers assigned to the same paper and the assignment quality; we investigate this tradeoff here. On ICLR, we fix $Q$ and randomly assign reviewers to small subsets, using this as our partition of the reviewer set (since the dataset does not include any reviewer information). Each subset represents a group of reviewers with close associations, such as reviewers from the same institution. Our algorithm achieves the full optimal objective for the Pairwise-Constrained Problem with the same $Q$ while preventing any pairs of reviewers from the same subset from being assigned to the same paper.

Since our algorithm achieves the full possible objective in this setting, we now run experiments with a considerably more restrictive partition constraint. In Figure 1(b), we show an extreme case where we randomly assign reviewers to three subsets of equal size (with any remaining reviewers assigned to a dummy fourth subset), again fixing $Q$. We then gradually loosen the constraint on the expected number of same-subset reviewers assigned to the same paper by increasing the constant on the right-hand side of Constraint (7), shown on the x-axis. We plot the sum-similarity objective of the resulting assignment, expressed as a percentage of the optimal non-partition-constrained solution's objective (i.e., the solution to the Pairwise-Constrained Problem with the same $Q$). Even in this extremely constrained case with only a few subsets, we still achieve a large fraction of the non-partition-constrained objective while entirely preventing same-subset reviewer pairs on ICLR.

We run all experiments on a single machine, using Gurobi [20] to solve the LPs. Both our algorithm for the Pairwise-Constrained Problem and our algorithm for the Partition-Constrained Problem complete on ICLR in a practical amount of time; as expected, the running time is dominated by the time taken to solve the LP.

Finally, we run additional experiments via synthetic simulations, where we find results qualitatively similar to those presented here. These results are presented in Appendix F. We also run experiments for a fairness objective, which we present in Appendix A.

(a) Pairwise-Constrained Problem
(b) Partition-Constrained Problem with three random subsets
Figure 1: Experimental results on four conference datasets.

7 Discussion

We have presented here a framework and a set of algorithms for addressing three challenges of practical importance to the peer review process: untruthful favorable reviews, torpedo reviewing, and reviewer de-anonymization on the release of assignment data. By design, our algorithms are quite flexible to the needs of the program chairs, depending on which challenges they are most concerned with addressing. Our empirical evaluations demonstrate some of the tradeoffs that can be made between total similarity and maximum probability of each paper-reviewer pair or between total similarity and number of reviewers from the same subset on the same paper. The exact parameters of the algorithm can be set based on how the program chairs weigh the relative importance of each of these factors.

This work leads to a number of open problems of interest. First, since the general Triplet-Constrained Problem is NP-hard, we considered one special structure—the Partition-Constrained Problem—of practical relevance. A direction for future research is to find additional special cases under which optimizing over constraints on the probabilities of reviewer-pair-to-paper assignments is feasible. For example, there may be a known network of reviewers where program chairs wish to prevent connected reviewers from being assigned to the same paper. A second problem of interest is to develop methods to detect bad reviewer-paper pairs before papers are assigned (e.g., based on the bids). Finally, this work does not address the problem of reviewers colluding with each other to give dishonest favorable reviews after being assigned to each others’ papers; we leave this issue for future work.

Acknowledgments

The research of Steven Jecmen and Nihar Shah was supported in part by NSF CAREER 1942124. The research of Steven Jecmen and Fei Fang was supported in part by NSF Award IIS-1850477. The research of Hanrui Zhang and Vincent Conitzer was supported in part by NSF Award IIS-1814056.

References

  • [1] J. Akst (2010) I hate your paper. Many say the peer review system is broken. Here’s how some journals are trying to fix it. The Scientist 24 (8), pp. 36. Cited by: §1.
  • [2] H. Aziz, O. Lev, N. Mattei, J. S. Rosenschein, and T. Walsh (2019) Strategyproof peer selection using randomization, partitioning, and apportionment. Artificial Intelligence 275, pp. 295–309. Cited by: §2.
  • [3] E. F. Barroga (2014) Safeguarding the integrity of science communication by restraining ’rational cheating’ in peer review. Journal of Korean Medical Science 29 (11), pp. 1450–1452. Cited by: §1.
  • [4] G. Birkhoff (1946) Three observations on linear algebra. Univ. Nac. Tacuman, Rev. Ser. A 5, pp. 147–151. Cited by: §2.
  • [5] A. Bogomolnaia and H. Moulin (2001) A new solution to the random assignment problem. Journal of Economic theory 100 (2), pp. 295–328. Cited by: §2.
  • [6] E. Budish, Y. Che, F. Kojima, and P. Milgrom (2009) Implementing random assignments: a generalization of the Birkhoff-von Neumann theorem. Cited by: Appendix C, §2, §4.1, §4.2, Lemma 2.
  • [7] L. Charlin, R. S. Zemel, and C. Boutilier (2012) A framework for optimizing paper matching. arXiv preprint arXiv:1202.3706. Cited by: §2, §3, §3, §6.
  • [8] L. Charlin and R. Zemel (2013) The Toronto Paper Matching System: an automated paper-reviewer assignment system. Cited by: §2, §3, §3.
  • [9] T. H. Cormen, C. E. Leiserson, R. L. Rivest, and C. Stein (2009) Introduction to algorithms. MIT press. Note: Chapter 26.3 Cited by: Appendix C.
  • [10] W. Ding, N. B. Shah, and W. Wang (2020) On the privacy-utility tradeoff in peer-review data analysis. arXiv. Cited by: §2.
  • [11] E. A. Dinic (1970) Algorithm for solution of a problem of maximum flow in networks with power estimation. In Soviet Math. Doklady, Vol. 11, pp. 1277–1280. Cited by: Appendix C.
  • [12] C. Ferguson, A. Marcus, and I. Oransky (2014) Publishing: the peer-review scam. Nature News 515 (7528), pp. 480. Cited by: §1.
  • [13] T. Fiez, N. Shah, and L. Ratliff (2020) A SUPER* algorithm to optimize paper bidding in peer review. In Conference on Uncertainty in Artificial Intelligence (UAI), Cited by: Appendix F, §2.
  • [14] P. A. Flach, S. Spiegler, B. Golénia, S. Price, J. Guiver, R. Herbrich, T. Graepel, and M. J. Zaki (2010) Novel tools to streamline the conference review process: experiences from SIGKDD’09. ACM SIGKDD Explorations Newsletter 11 (2), pp. 63–67. Cited by: §2, §3, §3, §6.
  • [15] J. Gao and T. Zhou (2017) Retractions: stamp out fake peer review. Nature 546 (7656), pp. 33–33. Cited by: §1.
  • [16] N. Garg, T. Kavitha, A. Kumar, K. Mehlhorn, and J. Mestre (2010-09-01) Assigning papers to referees. Algorithmica 58 (1), pp. 119–136. External Links: ISSN 1432-0541, Document, Link Cited by: Appendix A, §2.
  • [17] N. Garg, T. Kavitha, A. Kumar, K. Mehlhorn, and J. Mestre (2010) Assigning papers to referees. Algorithmica 58 (1), pp. 119–136. Cited by: Appendix A.
  • [18] H. Ge, M. Welling, and Z. Ghahramani (2013 (accessed June 29, 2020)) A Bayesian model for calibrating conference review scores. Note: http://mlg.eng.cam.ac.uk/hong/unpublished/nips-review-model.pdf Cited by: §2.
  • [19] J. Goldsmith and R. H. Sloan (2007) The AI conference paper assignment problem. In Proc. AAAI Workshop on Preference Handling for Artificial Intelligence, Vancouver, pp. 53–57. Cited by: §2, §3, §3, §6.
  • [20] L. Gurobi Optimization (2020) Gurobi optimizer reference manual. External Links: Link Cited by: Appendix F, §6.
  • [21] R. Holzman and H. Moulin (2013) Impartial nominations for a prize. Econometrica 81 (1), pp. 173–196. Cited by: §2.
  • [22] A. Hylland and R. Zeckhauser (1979) The efficient allocation of individuals to positions. Journal of Political economy 87 (2), pp. 293–314. Cited by: §2.
  • [23] S. Jiang, Z. Song, O. Weinstein, and H. Zhang (2020) Faster dynamic matrix inverse for faster LPs. arXiv preprint arXiv:2004.07470. Cited by: §4.1.
  • [24] A. Kahng, Y. Kotturi, C. Kulkarni, D. Kurokawa, and A. D. Procaccia (2018) Ranking wily people who rank each other. In Thirty-Second AAAI Conference on Artificial Intelligence, Cited by: §2.
  • [25] D. Kang, W. Ammar, B. Dalvi, M. van Zuylen, S. Kohlmeier, E. H. Hovy, and R. Schwartz (2018) A dataset of peer reviews (PeerRead): collection, insights and NLP applications. arXiv preprint arXiv:1804.09635. Cited by: §2.
  • [26] R. M. Karp (1972) Reducibility among combinatorial problems. In Complexity of computer computations, pp. 85–103. Cited by: Appendix D, §5.1.
  • [27] A. Kobren, B. Saha, and A. McCallum (2019) Paper matching with local fairness constraints. In ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, Cited by: §2, §3, §3, §6.
  • [28] D. Korzhyk, V. Conitzer, and R. Parr (2010) Complexity of computing optimal Stackelberg strategies in security resource allocation games. In Twenty-Fourth AAAI Conference on Artificial Intelligence, Cited by: §2.
  • [29] D. Kurokawa, O. Lev, J. Morgenstern, and A. D. Procaccia (2015) Impartial peer review. In Twenty-Fourth International Joint Conference on Artificial Intelligence, Cited by: §2.
  • [30] J. Langford (2008 (accessed June 16, 2020)) Bidding problems. Note: https://hunch.net/?p=407 Cited by: §1, §1.
  • [31] N. Lawrence and C. Cortes (2014 (accessed June 29, 2020)) The NIPS experiment. Note: http://inverseprobability.com/2014/12/16/the-nips-experiment Cited by: §2.
  • [32] J. W. Lian, N. Mattei, R. Noble, and T. Walsh (2018) The conference paper assignment problem: using order weighted averages to assign indivisible goods. In Thirty-Second AAAI Conference on Artificial Intelligence, Cited by: §2, §6.
  • [33] C. Long, R. C. Wong, Y. Peng, and L. Ye (2013) On good and fair paper-reviewer assignment. In 2013 IEEE 13th International Conference on Data Mining, pp. 1145–1150. Cited by: §2, §6.
  • [34] N. Mattei and T. Walsh (2013) PrefLib: a library of preference data http://preflib.org. In Proceedings of the 3rd International Conference on Algorithmic Decision Theory (ADT 2013), Lecture Notes in Artificial Intelligence. Cited by: §6.
  • [35] R. K. Merton (1968) The Matthew effect in science. Science 159, pp. 56–63. Cited by: §1.
  • [36] A. Mulligan, L. Hall, and E. Raphael (2013) Peer review in a changing world: an international study measuring the attitudes of researchers. Journal of the Association for Information Science and Technology 64 (1), pp. 132–161. Cited by: §1.
  • [37] A. Narayanan and V. Shmatikov (2006) How to break anonymity of the Netflix Prize dataset. arXiv preprint arXiv:cs/0610105. Cited by: §1.
  • [38] D. Nicholas, A. Watkinson, H. R. Jamali, E. Herman, C. Tenopir, R. Volentine, S. Allard, and K. Levine (2015) Peer review: still king in the digital age. Learned Publishing 28 (1), pp. 15–21. Cited by: §1.
  • [39] R. Noothigattu, N. B. Shah, and A. D. Procaccia (2018) Loss functions, axioms, and peer review. arXiv preprint arXiv:1808.09057. Cited by: §2.
  • [40] M. Paolucci and F. Grimaldo (2014) Mechanism change in a simulation of peer review: from junk support to elitism. Scientometrics 99 (3), pp. 663–688. Cited by: §1.
  • [41] M. Roos, J. Rothe, and B. Scheuermann (2011) How to calibrate the scores of biased reviewers by quadratic programming. In AAAI Conference on Artificial Intelligence, Cited by: §2.
  • [42] N. B. Shah, B. Tabibian, K. Muandet, I. Guyon, and U. Von Luxburg (2018) Design and analysis of the NIPS 2016 review process. The Journal of Machine Learning Research 19 (1), pp. 1913–1946. Cited by: §2, §6.
  • [43] I. Stelmakh, N. B. Shah, and A. Singh (2018) PeerReview4All: fair and accurate reviewer assignment in peer review. arXiv preprint arXiv:1806.06237. Cited by: Appendix A, §2.
  • [44] I. Stelmakh, N. Shah, A. Singh, and H. Daumé III (2020) Prior and prejudice: the bias against resubmissions in conference peer review. arXiv. Cited by: §2.
  • [45] I. Stelmakh, N. Shah, and A. Singh (2019) On testing for biases in peer review. In NeurIPS, Cited by: §2.
  • [46] I. Stelmakh, N. Shah, and A. Singh (2020) Catch me if I can: detecting strategic behaviour in peer assessment. arXiv. Cited by: §2.
  • [47] W. Tang, J. Tang, and C. Tan (2010) Expertise matching via constraint-based optimization. In 2010 IEEE/WIC/ACM International Conference on Web Intelligence and Intelligent Agent Technology, Vol. 1, pp. 34–41. Cited by: §2, §3, §6.
  • [48] A. Tomkins, M. Zhang, and W. D. Heavlin (2017) Reviewer bias in single- versus double-blind peer review. Proceedings of the National Academy of Sciences 114 (48), pp. 12708–12713. Cited by: §2.
  • [49] J. Von Neumann (1953) A certain zero-sum two-person game equivalent to the optimal assignment problem. Contributions to the Theory of Games 2 (0), pp. 5–12. Cited by: §2.
  • [50] J. Wang and N. B. Shah (2019) Your 2 is my 1, your 3 is my 9: handling arbitrary miscalibrations in ratings. In AAMAS, Cited by: §2, §2.
  • [51] M. Ware (2008) Peer review: benefits, perceptions and alternatives. Publishing Research Consortium 4, pp. 1–20. Cited by: §1.
  • [52] Y. Xu, H. Zhao, X. Shi, J. Zhang, and N. B. Shah (2018) On strategyproof conference peer review. arXiv preprint arXiv:1806.06266. Cited by: §2, §6.

Appendix A Stochastic Fairness Objective

An alternate objective to the sum-similarity objective has been studied in past work [43, 16], aiming to improve the fairness of the assignment with respect to the papers. Rather than maximizing the sum-similarity across all papers, this objective maximizes the minimum total similarity assigned to any paper:

maximize $\min_{p} \sum_{r} M_{r,p} S_{r,p}$

subject to $M$ being a valid (deterministic) assignment,

where $S_{r,p}$ denotes the similarity between reviewer $r$ and paper $p$ and $M_{r,p} \in \{0, 1\}$ indicates whether reviewer $r$ is assigned to paper $p$. Due to the minimum in the objective, this problem is NP-hard [17]; the paper [43] presents an algorithm to find an approximate solution.

Figure 2: Experimental results for the Fair Pairwise-Constrained Problem.

In our setting of randomized assignments, we consider an analogous fairness objective, which we call the stochastic fairness objective: $\min_{p} \sum_{r} P_{r,p} S_{r,p}$, where $P_{r,p}$ denotes the marginal probability that reviewer $r$ is assigned to paper $p$. The problem involving this objective is defined as follows.

Definition 4 (Fair Pairwise-Constrained Problem).

The input to the problem is a similarity matrix $S$ and a matrix $Q$ of maximum assignment probabilities. The goal is to find a randomized assignment of papers to reviewers that maximizes the stochastic fairness objective $\min_{p} \sum_{r} P_{r,p} S_{r,p}$ subject to the constraints that $P_{r,p} \leq Q_{r,p}$ for all reviewers $r$ and papers $p$.

This problem definition is identical to that of the Pairwise-Constrained Problem (Definition 1), with the exception that the objective to maximize is now the stochastic fairness objective rather than the sum-similarity. Note that this objective is not equal to the "expected fairness" (i.e., $\mathbb{E}\big[\min_{p} \sum_{r} M_{r,p} S_{r,p}\big]$, where $M$ is the sampled deterministic assignment), but by Jensen's inequality it is an upper bound on the expected fairness.
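Concretely, the bound follows by exchanging the expectation and the minimum: for every fixed paper $p'$, $\min_{p} \sum_{r} M_{r,p} S_{r,p} \leq \sum_{r} M_{r,p'} S_{r,p'}$ pointwise, so taking expectations and then the minimum over $p'$ gives

$$\mathbb{E}\Big[\min_{p} \sum_{r} M_{r,p} S_{r,p}\Big] \;\leq\; \min_{p}\, \mathbb{E}\Big[\sum_{r} M_{r,p} S_{r,p}\Big] \;=\; \min_{p} \sum_{r} P_{r,p} S_{r,p}.$$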

Fortunately, this problem is solvable efficiently, as the following theorem states.

Theorem 4.

There exists an algorithm which returns an optimal solution to the Fair Pairwise-Constrained Problem in poly(n, d) time.

We now present our algorithm for solving the Fair Pairwise-Constrained Problem, thereby proving the theorem. It proceeds in a manner similar to the algorithm for the Pairwise-Constrained Problem presented in Section 4.

The algorithm first finds an optimal fractional assignment matrix, since the stochastic fairness objective depends only on the marginal probabilities in the fractional assignment matrix. The optimal fractional assignment is found by the following LP, which we refer to as the fairness LP; here $\ell_{pap}$ denotes the number of reviewers required per paper, $\ell_{rev}$ the maximum number of papers per reviewer, and $y$ is an auxiliary variable:

maximize $y$ (8)

subject to $\sum_{r} P_{r,p} S_{r,p} \geq y$ for all papers $p$ (9)

$\sum_{r} P_{r,p} = \ell_{pap}$ for all papers $p$ (10)

$\sum_{p} P_{r,p} \leq \ell_{rev}$ for all reviewers $r$ (11)

$P_{r,p} \leq Q_{r,p}$ for all pairs $(r, p)$ (12)

$P_{r,p} \geq 0$ for all pairs $(r, p)$ (13)

For any fixed $P$, the largest feasible value of $y$ is $\min_{p} \sum_{r} P_{r,p} S_{r,p}$, the stochastic fairness of $P$. For a fixed $y$, the feasible region in $P$ is exactly the space of fractional assignment matrices (respecting the maximum-probability constraints) with stochastic fairness no less than $y$. Therefore, the fairness LP finds an optimal fractional assignment matrix for the stochastic fairness objective.
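For concreteness, here is a minimal sketch of this LP in Python using cvxpy. This is our own illustration rather than the paper's implementation, and the function and parameter names (solve_fairness_lp, l_rev, l_pap) are ours; l_pap is the per-paper reviewer load and l_rev the per-reviewer paper load described above.

```python
import cvxpy as cp

def solve_fairness_lp(S, Q, l_rev, l_pap):
    """Sketch of the fairness LP: maximize the minimum per-paper expected
    similarity subject to load constraints and the maximum-probability caps Q."""
    n_rev, n_pap = S.shape
    P = cp.Variable((n_rev, n_pap))   # marginal assignment probabilities P_{r,p}
    y = cp.Variable()                 # lower bound on every paper's expected similarity
    constraints = [
        cp.sum(cp.multiply(S, P), axis=0) >= y,   # (9) fairness constraints
        cp.sum(P, axis=0) == l_pap,               # (10) each paper gets l_pap reviewers in expectation
        cp.sum(P, axis=1) <= l_rev,               # (11) each reviewer gets at most l_rev papers
        P <= Q,                                   # (12) maximum-probability caps
        P >= 0,                                   # (13) nonnegativity
    ]
    cp.Problem(cp.Maximize(y), constraints).solve()
    return P.value, y.value
```

Any LP solver would do here; cvxpy is used only to keep the correspondence with constraints (9)-(13) visible.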

Once an optimal fractional assignment matrix has been found, it only remains to sample a deterministic assignment from it. This is done with the sampling algorithm described in Section 4.2, just as in the Pairwise-Constrained Problem.

We now present some empirical results for this algorithm on the four conference datasets described in Section 6. We set all entries of $Q$ equal to the same constant value $q$ (varied on the x-axis) and observe how the stochastic fairness objective of the assignment changes as $q$ increases. Since the expectation is inside a minimum in the objective, the objective cannot be estimated without bias by averaging together the stochastic fairness of sampled deterministic assignments. Due to this difficulty, we plot the exact objective of our randomized assignment (i.e., the optimal objective value of the fairness LP) rather than averaging over multiple samples, and report the objective as a percentage of the unconstrained optimal solution's objective (that is, the algorithm's solution when $Q_{r,p} = 1$ for all pairs). As Figure 2 shows, our algorithm finds a randomized assignment achieving a large fraction of the optimal fairness objective on the ICLR dataset even for moderate values of $q$.
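A hypothetical sweep in the spirit of this experiment, reusing solve_fairness_lp from the sketch above (the grid of $q$ values and the datasets are left to the reader; nothing below encodes the paper's actual experimental grid):

```python
import numpy as np

def fairness_vs_q(S, l_rev, l_pap, q_values):
    """Fairness LP value at each constant cap q, as a fraction of the
    unconstrained optimum (all entries of Q equal to 1). For very small q the
    LP may be infeasible (paper loads cannot be met), in which case the solver
    returns None for y."""
    n_rev, n_pap = S.shape
    _, best = solve_fairness_lp(S, np.ones((n_rev, n_pap)), l_rev, l_pap)
    results = {}
    for q in q_values:
        _, y = solve_fairness_lp(S, np.full((n_rev, n_pap), q), l_rev, l_pap)
        results[q] = None if y is None else y / best
    return results
```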

Appendix B Bad-Assignment Probability Problem Variants

An input to both the Pairwise-Constrained Problem (Definition 1) and the Partition-Constrained Problem (Definition 3) is the matrix $Q$, where $Q_{r,p}$ denotes the maximum probability with which reviewer $r$ should be assigned to paper $p$. In practice, program chairs can set the values in this matrix based on their own beliefs about each reviewer-paper pair. However, it may be difficult for program chairs to translate their beliefs about the risk of assigning any reviewer-paper pair into appropriate values for $Q$. In this appendix, we define alternate versions of these problems that allow the program chairs to codify their beliefs in a different way.

Define the assignment of reviewer $r$ to paper $p$ as "bad" if reviewer $r$ intends to untruthfully review paper $p$ (either because they intend to give a dishonest favorable review or because they intend to torpedo-review). Further define a matrix $B$ of bad-assignment probabilities, where $B_{r,p}$ represents the probability that the assignment of reviewer $r$ to paper $p$ would be a bad assignment; we assume that the events of each reviewer-paper assignment being bad are all independent of each other. The "true value" of $B$ may not be known, but it can be set based on the program chairs' beliefs about the reviewers and authors or potentially estimated based on some data from prior conferences. The problem variants we present in the following subsections make use of these bad-assignment probabilities.
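As a purely hypothetical illustration of how such a matrix might be populated (the default and flagged values below are placeholders of our own, not recommendations from the paper):

```python
import numpy as np

def build_bad_assignment_probabilities(n_rev, n_pap, flagged_pairs,
                                       base_prob=0.01, flagged_prob=0.5):
    """Fill B with a small default probability and a larger value for
    reviewer-paper pairs that the program chairs have flagged as suspect."""
    B = np.full((n_rev, n_pap), base_prob)
    for r, p in flagged_pairs:
        B[r, p] = flagged_prob
    return B
```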

We first consider the problem of limiting the probabilities of bad reviewer-paper assignments. We then consider the problem of limiting the probabilities that bad pairs of reviewers are assigned to the same paper.

B.1 Handling Bad Reviewer-Paper Assignments

We define an alternate version of the Pairwise-Constrained Problem using the bad-assignment probabilities:

Definition 5 (Bad-Assignment Probability Pairwise-Constrained Problem).

The input to the problem is a similarity matrix $S$, a matrix $B$ of bad-assignment probabilities, and a value $c \in [0, 1]$. The goal is to find a randomized assignment of papers to reviewers that maximizes the expected sum-similarity $\sum_{r,p} P_{r,p} S_{r,p}$ subject to the constraints that $P_{r,p} B_{r,p} \leq c$ for all reviewers $r$ and papers $p$.

$P_{r,p} B_{r,p}$ is exactly the probability that both (i) reviewer $r$ is assigned to paper $p$ and (ii) this assignment is bad, so the constraints in the problem limit this probability to at most $c$ for all $r$ and $p$. This version of the Pairwise-Constrained Problem may be useful in practice if program chairs find it easier to set the values of $B$ than those of $Q$.

We now show how to solve the Bad-Assignment Probability Pairwise-Constrained Problem by translating it to the original Pairwise-Constrained Problem. Suppose that we have access to the matrix $P$ of marginal assignment probabilities that occur under some randomized assignment. The randomized assignment obeys our constraints if and only if $P_{r,p} \leq c / B_{r,p}$ for every pair with $B_{r,p} > 0$ (pairs with $B_{r,p} = 0$ are unconstrained). This observation leads to the following method of solving the Bad-Assignment Probability Pairwise-Constrained Problem:

  • Transform the given instance of the Bad-Assignment Probability Pairwise-Constrained Problem into an instance of the Pairwise-Constrained Problem by constructing a matrix $Q$ of maximum probabilities where $Q_{r,p} = \min\{c / B_{r,p},\, 1\}$ if $B_{r,p} > 0$ and $Q_{r,p} = 1$ otherwise (see the sketch after this list).

  • Solve the Pairwise-Constrained Problem using the algorithm from Theorem 1, described in Section 4.
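The transformation step is a one-liner in practice; the following numpy sketch (ours, with an illustrative function name) makes the handling of the $B_{r,p} = 0$ entries explicit:

```python
import numpy as np

def bad_probs_to_max_probs(B, c):
    """Maximum-probability matrix Q such that P <= Q (elementwise) holds
    exactly when P * B <= c. Pairs with B == 0 can never yield a bad
    assignment, so they are left unconstrained (Q = 1)."""
    Q = np.ones_like(B, dtype=float)
    positive = B > 0
    Q[positive] = np.minimum(c / B[positive], 1.0)
    return Q
```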

B.2 Handling Bad Pairs of Reviewers

Here, we first present an alternative version of the Partition-Constrained Problem and show how to solve it. We then present a different approach to handling the issue of bad reviewer pairs.

B.2.1 Constraints on Disjoint Reviewer Sets

In the same way as done above for the Pairwise-Constrained Problem, we define an alternate version of the Partition-Constrained Problem:

Definition 6 (Bad-Assignment Probability Partition-Constrained Problem).

The input to the problem is a similarity matrix $S$, a matrix $B$ of bad-assignment probabilities, a value $c \in [0, 1]$, and a partition of the reviewer set into subsets $\mathcal{R}_1, \dots, \mathcal{R}_k$. The goal is to find a randomized assignment of papers to reviewers that maximizes the expected sum-similarity $\sum_{r,p} P_{r,p} S_{r,p}$ subject to the constraints that the probability of any individual bad assignment occurring is at most $c$ and that, for each paper $p$ and each pair of distinct reviewers in the same subset, the probability that both reviewers are assigned to $p$ and both assignments are bad is at most $c$.

Just as for the Bad-Assignment Probability Pairwise-Constrained Problem, we solve this problem by first transforming the given instance into an equivalent instance of the Partition-Constrained Problem, done by constructing an appropriate matrix of maximum probabilities from $B$ and $c$. We then solve this instance using the algorithm in Section 5.2.

B.2.2 Constraints on the Expected Number of Bad Reviewers

The Bad-Assignment Probability Partition-Constrained Problem requires a partition of the reviewer set and prevents pairs of reviewers from being assigned to the same paper if they are in the same subset of this partition. Alternatively, one may want to prevent pairs of reviewers from being assigned to the same paper based on whether $B$ indicates that they are both likely to be bad assignments on this paper, rather than based on some partition of the reviewer set. We therefore present an alternative approach to handling the issue of bad reviewer pairs, which does not require a partition of the reviewer set. Rather than explicitly constraining the probabilities of certain same-subset reviewer-reviewer-paper triples as in the Bad-Assignment Probability Partition-Constrained Problem, we limit the expected number of bad reviewers on each paper.

The following problem states this goal:

Definition 7 (Bad-Assignment Probability Expectation-Constrained Problem).

The input to the problem is a similarity matrix $S$, a matrix $B$ of bad-assignment probabilities, a value $c \in [0, 1]$, and a value $\lambda \geq 0$. The goal is to find a randomized assignment of papers to reviewers that maximizes the expected sum-similarity $\sum_{r,p} P_{r,p} S_{r,p}$ subject to the constraints that $P_{r,p} B_{r,p} \leq c$ for all reviewers $r$ and papers $p$, and that $\sum_{r} P_{r,p} B_{r,p} \leq \lambda$ for all papers $p$.

We now present the algorithm that optimally solves this problem. The following LP finds a fractional assignment with expected number of bad reviewers on each paper no greater than $\lambda$:

maximize $\sum_{r,p} P_{r,p} S_{r,p}$ (14)

subject to $\sum_{r} P_{r,p} = \ell_{pap}$ for all papers $p$ (15)

$\sum_{p} P_{r,p} \leq \ell_{rev}$ for all reviewers $r$ (16)

$0 \leq P_{r,p} \leq 1$ for all pairs $(r, p)$ (17)

$P_{r,p} B_{r,p} \leq c$ for all pairs $(r, p)$ (18)

$\sum_{r} P_{r,p} B_{r,p} \leq \lambda$ for all papers $p$ (19)

Constraints (15-17) define the space of fractional assignment matrices, Constraint (18) ensures that the probability of each bad assignment occurring is at most $c$, and Constraint (19) ensures that the expected number of bad reviewer-paper assignments for each paper is at most $\lambda$. Therefore, this LP finds the optimal fractional assignment for the Bad-Assignment Probability Expectation-Constrained Problem. This fractional assignment can then be sampled from using the sampling algorithm in Section 4.2.
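A minimal cvxpy sketch of this LP (our own illustration, with the same caveats as the fairness-LP sketch in Appendix A; lam stands for $\lambda$):

```python
import cvxpy as cp

def solve_expectation_constrained_lp(S, B, c, lam, l_rev, l_pap):
    """Maximize expected sum-similarity subject to per-pair and per-paper
    bad-assignment constraints (18) and (19)."""
    n_rev, n_pap = S.shape
    P = cp.Variable((n_rev, n_pap))
    constraints = [
        cp.sum(P, axis=0) == l_pap,                 # (15) paper loads
        cp.sum(P, axis=1) <= l_rev,                 # (16) reviewer loads
        P >= 0, P <= 1,                             # (17) valid marginal probabilities
        cp.multiply(P, B) <= c,                     # (18) each bad assignment occurs w.p. at most c
        cp.sum(cp.multiply(P, B), axis=0) <= lam,   # (19) expected bad reviewers per paper at most lambda
    ]
    cp.Problem(cp.Maximize(cp.sum(cp.multiply(S, P))), constraints).solve()
    return P.value
```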

The above approach to controlling bad reviewer pairs is not directly comparable to the approach taken earlier when solving the Bad-Assignment Probability Partition-Constrained Problem. The Bad-Assignment Probability Expectation-Constrained Problem indirectly restricts pairs of reviewers from being assigned to the same paper based on whether $B$ indicates that they are both likely to be bad assignments on that paper, instead of based on a partition of the reviewer set. This could be advantageous if the sets of likely-bad reviewers for each paper (as given by the probabilities in $B$) are not expressed well by any partition of the reviewer set. However, handling suspicious reviewer pairs by constraining the expected number of bad reviewers per paper is weaker than directly constraining the probabilities of certain reviewer-reviewer-paper triples (as in the Bad-Assignment Probability Partition-Constrained Problem). First, it provides a guarantee only in expectation, and does not guarantee anything about the probabilities of the events we wish to avoid (that is, bad reviewer pairs being assigned to a paper). In addition, we are assuming here that the event of the assignment of reviewer $r$ to paper $p$ being bad is independent of the corresponding events for all other reviewer-paper pairs; as a result, this method cannot address the issue of associations between reviewers, such as their presence at the same academic institution.

Appendix C Decomposition Algorithm for the Pairwise-Constrained Problem

In Section 4, we provided the sampling algorithm that realizes Theorem 1, thus solving the Pairwise-Constrained Problem (Definition 1). We here provide a decomposition algorithm to compute a full distribution over deterministic assignments for a given fractional assignment matrix (which the prior work [6] does not). For simplicity, we assume here that all reviewer loads are met with equality (that is, $\sum_{p} P_{r,p} = \ell_{rev}$ for all reviewers $r$); the extension to the case when reviewer loads are met with inequality is simple.

We first define certain concepts necessary for the algorithm. We then present a subroutine of the algorithm and prove its correctness. We then present the overall algorithm and prove its correctness. Finally, we analyze the time complexity of the algorithm.

Preliminaries:

We define here three concepts used in the algorithm and its proof.

  • A capacitated matching instance $I = (\mathcal{P}, \mathcal{R}, \kappa)$ consists of a set of papers $\mathcal{P}$, a set of reviewers $\mathcal{R}$, and a capacity function $\kappa$ mapping each paper and each reviewer to a non-negative integer. A solution to $I$ is a matrix $F \in [0, 1]^{|\mathcal{R}| \times |\mathcal{P}|}$, where for any paper $p$,

    $\sum_{r \in \mathcal{R}} F_{r,p} = \kappa(p)$,

    and for any reviewer $r$,

    $\sum_{p \in \mathcal{P}} F_{r,p} = \kappa(r)$.

    The solution is integral if $F_{r,p} \in \{0, 1\}$ for all $r$ and $p$.

  • For any set of reviewer-paper pairs $E \subseteq \mathcal{R} \times \mathcal{P}$ and any capacity function $\kappa$, a maximum matching on the set $E$ subject to capacities $\kappa$ is a set $W \subseteq E$ such that each reviewer $r$ appears in at most $\kappa(r)$ pairs of $W$ and each paper $p$ appears in at most $\kappa(p)$ pairs of $W$, and $|W|$ is maximized. (A max-flow sketch of this computation appears after this list.)

  • For any such $E$ and $\kappa$, a perfect matching on the set $E$ subject to capacities $\kappa$ is a maximum matching on $E$ subject to $\kappa$ that additionally satisfies: each paper $p$ appears in exactly $\kappa(p)$ pairs and each reviewer $r$ appears in exactly $\kappa(r)$ pairs.
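The capacitated maximum matching above can be computed with any max-flow routine; the following networkx sketch (ours; the function name and input format are illustrative, not from the paper) gives each reviewer node capacity $\kappa(r)$ from the source and each paper node capacity $\kappa(p)$ to the sink:

```python
import networkx as nx

def max_capacitated_matching(E, kappa_rev, kappa_pap):
    """Maximum matching on the pair set E subject to integer capacities
    kappa_rev (per reviewer) and kappa_pap (per paper), via max-flow."""
    G = nx.DiGraph()
    G.add_node("s")
    G.add_node("t")
    for r, p in E:
        G.add_edge("s", ("rev", r), capacity=kappa_rev[r])
        G.add_edge(("rev", r), ("pap", p), capacity=1)
        G.add_edge(("pap", p), "t", capacity=kappa_pap[p])
    _, flow = nx.maximum_flow(G, "s", "t")
    # with integer capacities the maximum flow is integral, so each pair edge
    # carries flow 0 or 1
    return {(r, p) for r, p in E if flow[("rev", r)][("pap", p)] > 0.5}
```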

Decomposition subroutine:

The following procedure, a subroutine of the overall algorithm, takes an instance $I$ and a solution $F$ to that instance as input, and outputs an integral solution $G$ to $I$ with an associated weight $\alpha \in (0, 1]$ and a fractional solution $F'$ to $I$ with strictly fewer fractional entries than $F$. Moreover, $F$, $G$, $F'$, and $\alpha$ satisfy $F = \alpha G + (1 - \alpha) F'$.

  1. Let $A$ be the set of pairs $(r, p)$ with $F_{r,p} = 1$, and let $E$ be the set of pairs $(r, p)$ with $0 < F_{r,p} < 1$. With this, define capacity function $\kappa'$ as, for any paper $p$,

    $\kappa'(p) = \kappa(p) - |\{r : (r, p) \in A\}|$,

    and for any reviewer $r$,

    $\kappa'(r) = \kappa(r) - |\{p : (r, p) \in A\}|$.

  2. Find a maximum matching $W$ on $E$ subject to capacity constraints $\kappa'$.

  3. Set $G$ as

    $G_{r,p} = 1$ if $(r, p) \in A \cup W$, and $G_{r,p} = 0$ otherwise.

    Set $\alpha$ as

    $\alpha = \min\big\{\min_{(r,p) \in W} F_{r,p},\ \min_{(r,p) \in E \setminus W} (1 - F_{r,p})\big\}$.

    Set $F' = \frac{1}{1 - \alpha}(F - \alpha G)$.

We prove the correctness of this subroutine in Lemma 3. Before we do, we restate a result from prior work [6] that we use in the proof, using our own notation.

Lemma 2 ([6, Thm. 1]).

For any capacitated matching instance $I$ and any solution $F$ to $I$, there exists some $k$, integral solutions $G_1, \dots, G_k$ to $I$, and weights $(\beta_1, \dots, \beta_k)$ lying on the $(k-1)$-dimensional simplex, such that $F = \sum_{j=1}^{k} \beta_j G_j$.

Now, the following lemma proves the correctness of the subroutine.

Lemma 3.

The decomposition subroutine finds $G$, $F'$, and $\alpha$ such that (i) $G$ is an integral solution to $I$, (ii) $F'$ is a fractional solution to $I$, (iii) $F'$ has strictly fewer fractional entries than $F$, and (iv) $F = \alpha G + (1 - \alpha) F'$.

Proof.

We first consider (i). The key step is to show that the maximum matching found in step 2 is a perfect matching with respect to $\kappa'$, or equivalently, to show that there is a perfect matching on $E$ with respect to $\kappa'$. Consider the capacitated matching instance $I' = (\mathcal{P}, \mathcal{R}, \kappa')$, and the solution $H$ where

$H_{r,p} = F_{r,p}$ if $(r, p) \in E$ and $H_{r,p} = 0$ otherwise.

$H$ is a solution to $I'$ by the construction of $\kappa'$. By Lemma 2, $H$ is a convex combination of integral solutions to $I'$. For some $k$, let $G_1, \dots, G_k$ and $\beta_1, \dots, \beta_k$ be such a decomposition of $H$, where each $G_j$ is an integral solution to $I'$ and $\beta_j$ is its associated weight. For each $j$, let $W_j$ be the set of pairs $(r, p)$ where $(G_j)_{r,p} = 1$. Since each $G_j$ is a solution to