Is Your Code Generated by ChatGPT Really Correct? Rigorous Evaluation of Large Language Models for Code Generation

05/02/2023
by Jiawei Liu, et al.

Program synthesis has long been studied, with recent approaches focusing on directly harnessing the power of Large Language Models (LLMs) to generate code from user intent written in natural language. Code evaluation datasets, containing curated synthesis problems with input/output test-cases, are used to measure the performance of various LLMs on code synthesis. However, the test-cases in these datasets can be limited in both quantity and quality for fully assessing the functional correctness of the generated code. This limitation of existing benchmarks raises the question: in the era of LLMs, is the generated code really correct? To answer this, we propose EvalPlus, a code synthesis benchmarking framework that rigorously evaluates the functional correctness of LLM-synthesized code. In short, EvalPlus takes in a base evaluation dataset and uses an automatic input generation step to produce and diversify large amounts of new test inputs, using both LLM-based and mutation-based input generators, to further validate the synthesized code. We extend the popular HUMANEVAL benchmark and build HUMANEVAL+ with 81x additional automatically generated tests. Our extensive evaluation across 14 popular LLMs demonstrates that HUMANEVAL+ catches significant amounts of previously undetected wrong code synthesized by LLMs, reducing pass@k by 15.1% on average! Moreover, we even found several incorrect ground-truth implementations in HUMANEVAL. Our work not only shows that prior popular code synthesis evaluation results do not accurately reflect the true performance of LLMs for code synthesis, but also opens up a new direction for improving programming benchmarks through automated test input generation.
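The abstract outlines EvalPlus's core recipe: start from the seed inputs in the base benchmark, diversify them with automatically generated inputs (LLM-seeded and mutation-based), and then check each LLM-synthesized solution against the ground-truth implementation on the enlarged test set. The Python sketch below illustrates that idea under simplifying assumptions; the names (mutate_input, differential_test, pass_at_k) are illustrative rather than EvalPlus's actual API, and the mutation rules are a toy stand-in for the paper's type-aware mutators.

```python
import copy
import random
from math import comb


def mutate_input(args):
    """Toy type-aware mutation: return a perturbed copy of one argument tuple."""
    args = copy.deepcopy(list(args))
    if not args:
        return tuple(args)
    i = random.randrange(len(args))
    v = args[i]
    if isinstance(v, bool):
        args[i] = not v
    elif isinstance(v, int):
        args[i] = v + random.choice([-1, 1])
    elif isinstance(v, float):
        args[i] = v * random.uniform(0.5, 1.5)
    elif isinstance(v, str):
        args[i] = v + random.choice("abc")
    elif isinstance(v, list) and v:
        v[random.randrange(len(v))] = random.choice(v)
    return tuple(args)


def differential_test(candidate, reference, seed_inputs, num_new=1000):
    """Check `candidate` against the ground-truth `reference` on the seed inputs
    plus newly mutated inputs; return (passed, first_failing_input)."""
    pool = [tuple(s) for s in seed_inputs]
    for _ in range(num_new):
        pool.append(mutate_input(random.choice(pool)))
    for args in pool:
        try:
            expected = reference(*args)
        except Exception:
            continue  # mutant likely violates the problem's input contract; skip it
        try:
            if candidate(*args) != expected:
                return False, args
        except Exception:
            return False, args
    return True, None


def pass_at_k(n, c, k):
    """Unbiased pass@k estimator: chance that at least one of k samples,
    drawn from n generations of which c are correct, passes all tests."""
    if n - c < k:
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)


if __name__ == "__main__":
    # Tiny demo on a toy problem ("return x + 1") with a candidate that is
    # only wrong for negative inputs, which the mutated tests can reach.
    reference = lambda x: x + 1
    candidate = lambda x: x + 1 if x >= 0 else x
    ok, witness = differential_test(candidate, reference, seed_inputs=[(0,), (5,)])
    print(ok, witness)
```

The pass_at_k helper is the standard unbiased estimator used in HUMANEVAL-style evaluation; re-computing it under the stricter HUMANEVAL+ verdicts is what yields the pass@k drops reported above.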

Related research

LEVER: Learning to Verify Language-to-Code Generation with Execution (02/16/2023)
The advent of large language models trained on code (code LLMs) has led ...

SelfEvolve: A Code Evolution Framework via Large Language Models (06/05/2023)
Large language models (LLMs) have already revolutionized code generation...

Interactive Code Generation via Test-Driven User-Intent Formalization (08/11/2022)
Pre-trained large language models (LLMs) such as OpenAI Codex have shown...

ALGO: Synthesizing Algorithmic Programs with Generated Oracle Verifiers (05/24/2023)
Large language models (LLMs) excel at implementing code from functionali...

Fully Autonomous Programming with Large Language Models (04/20/2023)
Current approaches to program synthesis with Large Language Models (LLMs...

Can ChatGPT Defend the Truth? Automatic Dialectical Evaluation Elicits LLMs' Deficiencies in Reasoning (05/22/2023)
We explore testing the reasoning ability of large language models (LLMs)...

Iteratively Composing Statically Verified Traits (02/26/2019)
Metaprogramming is often used to programmatically generate faster specia...
