I Speak, You Verify: Toward Trustworthy Neural Program Synthesis

09/29/2022
by   Darren Key, et al.

We develop an approach for improving the trustworthiness and overall accuracy of program synthesizers based on large language models for source code. Given a natural language description of a programming problem, our method samples both candidate programs and candidate predicates specifying how the program should behave. We learn to analyze the agreement between programs and predicates to judge both which program is most likely to be correct and whether the language model is able to solve the programming problem in the first place. This latter capacity allows the system to favor high precision over broad recall, fostering trust by only proposing a program when the system is certain that it is correct.
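The abstract's core idea can be illustrated, in spirit rather than as the authors' implementation, with a small sketch: treat sampled predicates as executable checks, score each candidate program by how many predicates it satisfies, and abstain when even the best-scoring program agrees with too few of them. The candidate programs, candidate predicates, and fixed agreement threshold below are hypothetical placeholders; the paper learns this agreement analysis rather than applying a hand-set cutoff.

```python
# Sketch of program/predicate agreement for selection and abstention.
# The sampled programs and predicates here are toy stand-ins for LLM outputs.

from typing import Callable, List, Optional

Program = Callable[[int], int]
Predicate = Callable[[Program], bool]


def agreement_matrix(programs: List[Program],
                     predicates: List[Predicate]) -> List[List[bool]]:
    """For each (program, predicate) pair, record whether the predicate holds."""
    matrix = []
    for prog in programs:
        row = []
        for pred in predicates:
            try:
                row.append(bool(pred(prog)))
            except Exception:
                # A crashing program or predicate counts as disagreement.
                row.append(False)
        matrix.append(row)
    return matrix


def select_or_abstain(programs: List[Program],
                      predicates: List[Predicate],
                      min_agreement: float = 0.8) -> Optional[Program]:
    """Return the program satisfying the most predicates, or None to abstain."""
    matrix = agreement_matrix(programs, predicates)
    scores = [sum(row) / max(len(predicates), 1) for row in matrix]
    best = max(range(len(programs)), key=lambda i: scores[i])
    if scores[best] < min_agreement:
        return None  # abstain: favor precision over recall
    return programs[best]


if __name__ == "__main__":
    # Toy problem: square an integer.
    candidate_programs: List[Program] = [
        lambda x: x * x,  # correct
        lambda x: x + x,  # plausible but wrong
    ]
    candidate_predicates: List[Predicate] = [
        lambda f: f(2) == 4,
        lambda f: f(3) == 9,
        lambda f: f(0) == 0,
    ]
    chosen = select_or_abstain(candidate_programs, candidate_predicates)
    print("abstained" if chosen is None else f"chosen program: 5 -> {chosen(5)}")
```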
