What Can We Do to Improve Peer Review in NLP?

10/08/2020
by Anna Rogers, et al.

Peer review is our best tool for judging the quality of conference submissions, but it is becoming increasingly spurious. We argue that part of the problem is that reviewers and area chairs face a poorly defined task that forces apples-to-oranges comparisons. There are several potential ways forward, but the key difficulty lies in creating the incentives and mechanisms for their consistent implementation in the NLP community.


Related research

11/12/2022
NLPeer: A Unified Resource for the Computational Study of Peer Review
Peer review is a core component of scholarly publishing, yet it is time-...

05/02/2022
What Factors Should Paper-Reviewer Assignments Rely On? Community Perspectives on Issues and Ideals in Conference Peer-Review
Both scientific progress and individual researcher careers depend on the...

09/14/2021
'Just What do You Think You're Doing, Dave?' A Checklist for Responsible Data Use in NLP
A key part of the NLP ethics movement is responsible use of data, but ex...

07/17/2022
Review of Advanced Monitoring Mechanisms in Peer-to-Peer (P2P) Botnets
Internet security is deteriorating because of the existence of botn...

05/10/2020
Peer Review: Objectivity, Anonymity, Trust
This dissertation is focused on the role of objectivity in peer review. ...

04/22/2022
Revise and Resubmit: An Intertextual Model of Text-based Collaboration in Peer Review
Peer review is a key component of the publishing process in most fields ...

12/18/2018
Avoiding a Tragedy of the Commons in the Peer Review Process
Peer review is the foundation of scientific publication, and the task of...
