Testing for Reviewer Anchoring in Peer Review: A Randomized Controlled Trial

07/11/2023
by Ryan Liu, et al.

Peer review frequently follows a process in which reviewers first provide initial reviews, authors respond to these reviews, and then reviewers update their reviews based on the authors' responses. There is mixed evidence regarding whether this process is useful, including frequent anecdotal complaints that reviewers insufficiently update their scores. In this study, we aim to investigate whether reviewers anchor to their original scores when updating their reviews, which serves as a potential explanation for the lack of updates in reviewer scores. We design a novel randomized controlled trial to test whether reviewers exhibit anchoring. In the experimental condition, participants initially see a flawed version of a paper that is later corrected, while in the control condition, participants only see the correct version. We take various measures to ensure that, in the absence of anchoring, reviewers in the experimental group should revise their scores to be identically distributed to the scores from the control group. Furthermore, we construct the reviewed paper to maximize the difference between the flawed and corrected versions, and employ deception to hide the experiment's true purpose. Our randomized controlled trial included 108 researchers as participants. First, we find that our intervention was successful at creating a difference in perceived paper quality between the flawed and corrected versions: using a permutation test with the Mann-Whitney U statistic, we find that the experimental group's initial scores are lower than the control group's scores in both the Evaluation category (Vargha-Delaney A=0.64, p=0.0096) and Overall score (A=0.59, p=0.058). Next, we test for anchoring by comparing the experimental group's revised scores with the control group's scores. We find no significant evidence of anchoring in either the Overall (A=0.50, p=0.61) or Evaluation category (A=0.49, p=0.61).
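The abstract's headline statistics combine a permutation test on the Mann-Whitney U statistic with the Vargha-Delaney A effect size (A is a linear rescaling of U, so the two are interchangeable as test statistics). The sketch below, not the authors' code, shows one standard way to compute A and a two-sided label-permutation p-value for it; the function names and the choice of |A - 0.5| as the test statistic are this sketch's own conventions.

```python
import numpy as np

def vargha_delaney_a(x, y):
    """Vargha-Delaney A: estimate of P(X > Y) + 0.5 * P(X == Y).

    A = 0.5 means stochastic equality; A = U / (n_x * n_y) where U is the
    Mann-Whitney U statistic for group x.
    """
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    greater = (x[:, None] > y[None, :]).sum()   # pairs where x beats y
    ties = (x[:, None] == y[None, :]).sum()     # tied pairs count half
    return (greater + 0.5 * ties) / (len(x) * len(y))

def permutation_test(x, y, n_perm=10_000, seed=0):
    """Two-sided permutation p-value using |A - 0.5| as the statistic.

    Group labels are shuffled n_perm times; the p-value is the fraction of
    shuffles whose statistic is at least as extreme as the observed one
    (with the +1 correction so the p-value is never exactly zero).
    """
    rng = np.random.default_rng(seed)
    observed = abs(vargha_delaney_a(x, y) - 0.5)
    pooled = np.concatenate([x, y])
    n = len(x)
    extreme = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        a = vargha_delaney_a(pooled[:n], pooled[n:])
        if abs(a - 0.5) >= observed:
            extreme += 1
    return (extreme + 1) / (n_perm + 1)
```

For review scores on an ordinal scale (as here), a rank-based statistic like A avoids assuming interval-scaled or normally distributed scores, which is presumably why the authors chose it over a t-test.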


