From Words to Code: Harnessing Data for Program Synthesis from Natural Language

05/02/2023
by Anirudh Khatry et al.

Creating programs to correctly manipulate data is a difficult task, as the underlying programming languages and APIs can be challenging to learn for many users who are not skilled programmers. Large language models (LLMs) demonstrate remarkable potential for generating code from natural language, but in the data manipulation domain, apart from the natural language (NL) description of the intended task, we also have the dataset on which the task is to be performed, or the "data context". Existing approaches have utilized data context in a limited way by simply adding relevant information from the input data into the prompts sent to the LLM. In this work, we utilize the available input data to execute the candidate programs generated by the LLMs and gather their outputs. We introduce semantic reranking, a technique to rerank the programs generated by LLMs based on three signals coming from the program outputs: (a) semantic filtering and well-formedness based score tuning: do programs even generate well-formed outputs, (b) semantic interleaving: how do the outputs from different candidates compare to each other, and (c) output-based score tuning: how do the outputs compare to outputs predicted for the same task. We provide theoretical justification for semantic interleaving. We also introduce temperature mixing, where we combine samples generated by LLMs using both high and low temperatures. We extensively evaluate our approach in three domains, namely databases (SQL), data science (Pandas) and business intelligence (Excel's Power Query M) on a variety of new and existing benchmarks. We observe substantial gains across domains, with improvements of up to 45%.
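As a rough illustration of the approach described above, here is a minimal Python sketch of temperature mixing and the first two semantic-reranking signals. The `sample_fn` and `execute` callables are hypothetical placeholders for an LLM sampling wrapper and a sandboxed program runner; the well-formedness check and the cluster ordering are simplified stand-ins for the paper's actual scoring, and signal (c), output-based score tuning, is omitted.

```python
from collections import defaultdict

def temperature_mixing(sample_fn, task, n_samples=20, t_low=0.0, t_high=0.8):
    """Combine candidates drawn at low and high sampling temperatures.

    `sample_fn(task, temperature, k)` is a hypothetical wrapper around an
    LLM completion API that returns k candidate programs as strings.
    """
    low = sample_fn(task, temperature=t_low, k=1)                 # near-greedy candidate
    high = sample_fn(task, temperature=t_high, k=n_samples - 1)   # diverse candidates
    return low + high

def semantic_rerank(candidates, execute, data_context):
    """Rerank candidate programs using signals from their executed outputs.

    `execute(program, data_context)` is a hypothetical sandboxed runner
    that returns the program's output, or raises on failure.
    """
    # (a) Semantic filtering: drop candidates that crash on the input
    # data or produce an output we cannot treat as well-formed.
    survivors = []
    for prog in candidates:
        try:
            out = execute(prog, data_context)
        except Exception:
            continue
        if out is not None:  # simplistic proxy for well-formedness
            survivors.append((prog, out))

    # (b) Semantic interleaving: group candidates whose outputs agree,
    # order groups by size (agreement across samples is evidence of
    # correctness), then emit one representative per group round-robin.
    groups = defaultdict(list)
    for prog, out in survivors:
        groups[repr(out)].append(prog)
    ordered = sorted(groups.values(), key=len, reverse=True)

    reranked, depth = [], 0
    while any(depth < len(g) for g in ordered):
        for g in ordered:
            if depth < len(g):
                reranked.append(g[depth])
        depth += 1
    return reranked
```

Interleaving representatives across output-equivalence clusters, rather than ranking purely by cluster size, hedges against a large cluster of agreeing-but-wrong programs crowding out every other candidate at the top of the list.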
