Into the Unknown: Assigning Reviewers to Papers with Uncertain Affinities
Peer review cannot work unless qualified and interested reviewers are assigned to each paper. Nearly all automated reviewer assignment approaches estimate a real-valued affinity score for each paper-reviewer pair that acts as a proxy for the predicted quality of a future review; conferences then assign reviewers to maximize the sum of these values. This procedure does not account for noise in affinity score computation: reviewers can only bid on a small number of papers, and textual similarity models are inherently probabilistic estimators. In this work, we assume paper-reviewer affinity scores are estimated using a probabilistic model. Using these probabilistic estimates, we bound the scores with high probability and maximize the worst-case sum of scores for a reviewer allocation. Although we do not recommend any particular method for estimating probabilistic affinity scores, we demonstrate how to robustly maximize the sum of scores across multiple different models. Our general approach can be used to integrate a large variety of probabilistic paper-reviewer affinity models into reviewer assignment, opening the door to a much more robust peer review process.
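The idea of bounding uncertain scores and optimizing the worst case can be illustrated with a minimal sketch. This is not the paper's algorithm: it assumes each affinity estimate comes with a mean and a spread, forms a high-probability lower bound of the form mean minus a multiple of the spread, and then solves a one-to-one assignment over those lower bounds with SciPy's Hungarian solver. The function name `robust_assignment` and the bound multiplier `z` are illustrative choices, not from the source.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def robust_assignment(mean, std, z=2.0):
    """Assign reviewers to papers to maximize the worst-case total affinity.

    mean, std: (papers x reviewers) arrays of probabilistic score estimates.
    z: width of the high-probability lower bound (an assumed sub-Gaussian-style
       bound; the actual bound would depend on the chosen probabilistic model).
    """
    lower = mean - z * std                       # high-probability lower bounds
    rows, cols = linear_sum_assignment(-lower)   # negate: solver minimizes cost
    return rows, cols, lower[rows, cols].sum()

# Toy example: reviewer 0 looks best for paper 0 on average, but that
# estimate is noisy, so the robust assignment swaps the pairing.
mean = np.array([[0.9, 0.5],
                 [0.6, 0.4]])
std = np.array([[0.4, 0.0],
                [0.0, 0.0]])
rows, cols, worst_case = robust_assignment(mean, std)
```

Here the mean-maximizing assignment would pair paper 0 with reviewer 0 (total mean 1.3), but its worst-case value is only 0.5; the robust assignment pairs paper 0 with reviewer 1 instead, guaranteeing 1.1. Real deployments solve this at scale with paper-load and reviewer-load constraints, which turns the problem into a more general matching LP rather than a square assignment.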