I Speak, You Verify: Toward Trustworthy Neural Program Synthesis

09/29/2022
by Darren Key, et al.

We develop an approach for improving the trustworthiness and overall accuracy of program synthesizers based on large language models for source code. Given a natural language description of a programming problem, our method samples both candidate programs and candidate predicates specifying how the program should behave. We learn to analyze the agreement between programs and predicates to judge both which program is most likely to be correct and whether the language model is able to solve the programming problem in the first place. This latter capability allows favoring high precision over broad recall, fostering trust by proposing a program only when the system is certain that it is correct.
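
The abstract sketches a pipeline: sample candidate programs and candidate predicates, measure their agreement, rank programs by that agreement, and abstain when agreement is low. The paper learns this analysis; the minimal Python sketch below instead substitutes a plain agreement count and a hand-set threshold. Everything in it (the names select_or_abstain and tau, executable-assertion predicates, the toy candidates) is a hypothetical illustration, not the authors' implementation.

# A minimal sketch of the program/predicate agreement idea, assuming
# predicates are executable Python assertions over a candidate program.
# The paper *learns* this analysis; simple counting stands in for it here.

from typing import Callable, List, Optional

def passes(program: Callable, predicate: Callable[[Callable], bool]) -> bool:
    """Return True iff the program satisfies the predicate without crashing."""
    try:
        return bool(predicate(program))
    except Exception:
        return False

def select_or_abstain(
    programs: List[Callable],
    predicates: List[Callable[[Callable], bool]],
    tau: float = 0.8,  # hypothetical abstention threshold
) -> Optional[Callable]:
    """Rank candidate programs by predicate agreement; abstain below tau."""
    if not programs or not predicates:
        return None
    # Fraction of sampled predicates each candidate program satisfies.
    scores = [
        sum(passes(p, phi) for phi in predicates) / len(predicates)
        for p in programs
    ]
    best = max(range(len(programs)), key=scores.__getitem__)
    # Favor precision over recall: propose nothing unless agreement is high.
    return programs[best] if scores[best] >= tau else None

# Hypothetical usage: two sampled candidates for "absolute value",
# plus sampled input/output predicates.
candidates = [lambda x: abs(x), lambda x: x]
preds = [lambda f: f(-3) == 3, lambda f: f(4) == 4, lambda f: f(0) == 0]
answer = select_or_abstain(candidates, preds)
print("proposed" if answer else "abstained")

The abstention branch is what realizes the precision-over-recall trade-off described above: raising tau makes the system propose fewer programs, but with higher confidence in each one it does propose.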

