ReviewerGPT? An Exploratory Study on Using Large Language Models for Paper Reviewing

06/01/2023
by   Ryan Liu, et al.

Given the rapid ascent of large language models (LLMs), we study the question: (how) can large language models help in reviewing scientific papers or proposals? We first conduct pilot studies in which we find that (i) GPT-4 outperforms other LLMs (Bard, Vicuna, Koala, Alpaca, LLaMA, Dolly, OpenAssistant, StableLM), and (ii) prompting with a specific question (e.g., to identify errors) outperforms prompting to simply write a review. With these insights, we study the use of LLMs (specifically, GPT-4) for three tasks:

1. Identifying errors: We construct 13 short computer science papers, each with a deliberately inserted error, and ask the LLM to check the correctness of these papers. We observe that the LLM finds errors in 7 of them, spanning both mathematical and conceptual errors.

2. Verifying checklists: We task the LLM with verifying 16 closed-ended checklist questions in the respective sections of 15 NeurIPS 2022 papers. We find that across 119 (checklist question, paper) pairs, the LLM achieved an accuracy of 86.6%.

3. Choosing the "better" paper: We generate 10 pairs of abstracts, deliberately designing each pair so that one abstract is clearly superior to the other. The LLM, however, struggled to discern these relatively straightforward distinctions accurately, committing errors in its evaluations for 6 of the 10 pairs.

Based on these experiments, we think that LLMs have promising use as reviewing assistants for specific reviewing tasks, but not (yet) for complete evaluations of papers or proposals.
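The prompting insight from the pilot studies, that asking the model a specific question (e.g., to identify errors) works better than asking it to simply write a review, can be sketched as follows. This is a minimal illustration: the prompt wording and the `build_review_prompt` helper are hypothetical, not the authors' actual prompts.

```python
def build_review_prompt(paper_text: str, task: str = "find_errors") -> str:
    """Build a task-specific prompt rather than a generic 'write a review' prompt.

    The pilot-study finding is that narrow, closed-ended instructions
    (error hunting, checklist verification) elicit better LLM performance
    than a broad request for a full review.
    """
    if task == "find_errors":
        instruction = (
            "Carefully check the following paper for mathematical or "
            "conceptual errors. List each error you find."
        )
    elif task == "verify_checklist":
        instruction = (
            "Answer the following checklist question about the paper "
            "with 'yes' or 'no', citing the relevant section as evidence."
        )
    else:
        # Generic review prompt -- the baseline that performed worse in the pilots.
        instruction = "Write a peer review of the following paper."
    return f"{instruction}\n\n---\n{paper_text}"


# The resulting string would then be sent to the LLM of choice (GPT-4 in the study).
prompt = build_review_prompt("Theorem 1: For all n, ...", task="find_errors")
```

The design point is simply that the task framing, not the paper text, is what changes between conditions, so prompt variants can be compared on identical inputs.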


