Synchromesh: Reliable code generation from pre-trained language models

01/26/2022
by Gabriel Poesia et al.

Large pre-trained language models have been used to generate code, providing a flexible interface for synthesizing programs from natural language specifications. However, they often violate syntactic and semantic rules of their output language, limiting their practical usability. In this paper, we propose Synchromesh: a framework for substantially improving the reliability of pre-trained models for code generation. Synchromesh comprises two components. First, it retrieves few-shot examples from a training bank using Target Similarity Tuning (TST), a novel method for semantic example selection. TST learns to recognize utterances that describe similar target programs despite differences in surface natural language features. Then, Synchromesh feeds the examples to a pre-trained language model and samples programs using Constrained Semantic Decoding (CSD): a general framework for constraining the output to a set of valid programs in the target language. CSD leverages constraints on partial outputs to sample complete correct programs, and needs neither re-training nor fine-tuning of the language model. We evaluate our methods by synthesizing code from natural language descriptions using GPT-3 and Codex in three real-world languages: SQL queries, Vega-Lite visualizations, and SMCalFlow programs. These domains showcase rich constraints that CSD is able to enforce, including syntax, scope, typing rules, and contextual logic. We observe substantial complementary gains from CSD and TST in prediction accuracy and in effectively preventing run-time errors.
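
To make the two components concrete, the sketches below illustrate the kind of machinery the abstract describes. They are minimal, hedged reconstructions, not the paper's actual code: `embed` stands in for an embedding model fine-tuned with TST's objective (only the retrieval step is shown, not the tuning itself), and the `Model` and `CompletionEngine` interfaces are hypothetical stand-ins for a left-to-right language model API and an incremental validity checker.

First, TST-style example selection: given a new utterance, retrieve the few-shot (utterance, program) pairs whose utterances are closest under the tuned embedding.

```python
from typing import Callable, List, Tuple

import numpy as np


def cosine(u: np.ndarray, v: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))


def select_few_shot(query: str,
                    bank: List[Tuple[str, str]],
                    embed: Callable[[str], np.ndarray],
                    k: int = 5) -> List[Tuple[str, str]]:
    """Return the k (utterance, program) pairs from the training bank
    whose utterances are most similar to `query` under the TST-tuned
    embedding, to be used as few-shot prompt examples."""
    q = embed(query)
    ranked = sorted(bank, key=lambda ex: cosine(embed(ex[0]), q),
                    reverse=True)
    return ranked[:k]
```

Second, a CSD-style decoding loop: at every step the language model proposes next tokens, and a completion engine rejects any token that would take the partial output outside the set of valid programs. Greedy selection is used here for brevity where the paper samples; the key property is that the model's weights are never touched.

```python
from typing import Dict, Protocol


class Model(Protocol):
    def next_token_logprobs(self, text: str) -> Dict[str, float]:
        """Log-probability of each vocabulary token given `text`."""


class CompletionEngine(Protocol):
    def is_valid_prefix(self, partial: str) -> bool:
        """Can `partial` still be extended into a valid program?"""

    def is_complete(self, partial: str) -> bool:
        """Is `partial` already a complete, valid program?"""


def constrained_decode(model: Model, engine: CompletionEngine,
                       prompt: str, max_tokens: int = 256) -> str:
    """Decode a program, only ever extending valid program prefixes.

    Because the filter operates on partial outputs, constraints such as
    syntax, scope, and typing rules can be enforced token by token,
    without re-training or fine-tuning the language model."""
    program = ""
    for _ in range(max_tokens):
        if engine.is_complete(program):
            return program
        logprobs = model.next_token_logprobs(prompt + program)
        # Try tokens in order of model preference; keep the first one
        # the completion engine accepts as a valid continuation.
        for token in sorted(logprobs, key=logprobs.get, reverse=True):
            if engine.is_valid_prefix(program + token):
                program += token
                break
        else:
            break  # no token keeps the program valid; stop early
    return program
```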

Related research

- Benchmarking Language Models for Code Syntax Understanding (10/26/2022)
- Data Selection for Fine-tuning Large Language Models Using Transferred Shapley Values (06/16/2023)
- From Words to Code: Harnessing Data for Program Synthesis from Natural Language (05/02/2023)
- Code as Policies: Language Model Programs for Embodied Control (09/16/2022)
- xCodeEval: A Large Scale Multilingual Multitask Benchmark for Code Understanding, Generation, Translation and Retrieval (03/06/2023)
- ChipGPT: How far are we from natural language hardware design (05/23/2023)
- LeTI: Learning to Generate from Textual Interactions (05/17/2023)
