LEVER: Learning to Verify Language-to-Code Generation with Execution

02/16/2023
by Ansong Ni, et al.

The advent of large language models trained on code (code LLMs) has led to significant progress in language-to-code generation. State-of-the-art approaches in this area combine LLM decoding with sample pruning and reranking using test cases or heuristics based on the execution results. However, it is challenging to obtain test cases for many real-world language-to-code applications, and heuristics cannot adequately capture the semantic features of the execution results, such as data type and value range, which often indicate the correctness of the program. In this work, we propose LEVER, a simple approach to improve language-to-code generation by learning to verify the generated programs with their execution results. Specifically, we train verifiers to determine whether a program sampled from the LLMs is correct or not based on the natural language input, the program itself, and its execution results. The sampled programs are reranked by combining the verification score with the LLM generation probability, and marginalizing over programs with the same execution results. On four datasets across the domains of table QA, math QA, and basic Python programming, LEVER consistently improves over the base code LLMs (4.6% to 10.9% with code-davinci-002) and achieves new state-of-the-art results on all of them.
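The reranking step described in the abstract can be made concrete with a short sketch. The Python code below is a minimal illustration, not the paper's implementation; the helpers sample_programs, execute, and verifier_prob are hypothetical stand-ins for the code LLM sampler, the program executor, and the trained verifier. Each candidate program is scored by the product of its LM generation probability and the verifier's probability of correctness, and scores are then summed (marginalized) over programs that execute to the same result.

    import math
    from collections import defaultdict

    def lever_rerank(nl_input, sample_programs, execute, verifier_prob):
        """Return the candidate program whose execution result has the
        highest score, marginalized over all sampled candidates."""
        result_scores = defaultdict(float)   # execution result -> summed score
        representative = {}                  # execution result -> one program
        for program, logprob_lm in sample_programs(nl_input):
            result = execute(program)        # assumed hashable, e.g. a string
            # Joint score: p_LM(program | input) * p_verifier(correct | input, program, result)
            score = math.exp(logprob_lm) * verifier_prob(nl_input, program, result)
            result_scores[result] += score   # marginalize over same-result programs
            representative.setdefault(result, program)
        best_result = max(result_scores, key=result_scores.get)
        return representative[best_result], best_result

Marginalizing over execution results rather than ranking programs individually means that several distinct programs which agree on the same output reinforce one another, which is what distinguishes this scheme from simply picking the single highest-scoring sample.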


Related research

04/11/2023: Teaching Large Language Models to Self-Debug
Large language models (LLMs) have achieved impressive performance on cod...

04/25/2022: Natural Language to Code Translation with Execution
Generative models of code, pretrained on large corpora of programs, have...

05/02/2023: Is Your Code Generated by ChatGPT Really Correct? Rigorous Evaluation of Large Language Models for Code Generation
Program synthesis has been long studied with recent approaches focused o...

06/15/2022: FixEval: Execution-based Evaluation of Program Fixes for Competitive Programming Problems
Source code repositories consist of large codebases, often containing er...

05/12/2020: Semantic Scaffolds for Pseudocode-to-Code Generation
We propose a method for program generation based on semantic scaffolds, ...

01/22/2023: CodeScore: Evaluating Code Generation by Learning Code Execution
A proper code evaluation metric (CEM) profoundly impacts the evolution o...

05/24/2023: ALGO: Synthesizing Algorithmic Programs with Generated Oracle Verifiers
Large language models (LLMs) excel at implementing code from functionali...
