A Conversational Paradigm for Program Synthesis

by Erik Nijkamp et al.

Program synthesis strives to generate a computer program as a solution to a given problem specification. We propose a conversational program synthesis approach via large language models, which addresses two challenges faced by prior approaches: searching over a vast program space and specifying user intent. Our new approach casts the process of writing a specification and program as a multi-turn conversation between a user and a system. It treats program synthesis as a sequence prediction problem, in which the specification is expressed in natural language and the desired program is conditionally sampled. We train a family of large language models, called CodeGen, on natural language and programming language data. With weak supervision in the data and the scaling up of data size and model size, conversational capacities emerge from simple autoregressive language modeling. To study model behavior on conversational program synthesis, we develop a Multi-Turn Programming Benchmark (MTPB), in which solving each problem requires multi-step synthesis via multi-turn conversation between the user and the model. Our findings show the emergence of conversational capabilities and the effectiveness of the proposed conversational program synthesis paradigm. In addition, our model CodeGen (with up to 16B parameters, trained on TPU-v4) outperforms OpenAI's Codex on the HumanEval benchmark. We make the training library JaxFormer, including model checkpoints, available as an open-source contribution: https://github.com/salesforce/CodeGen.
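The multi-turn formulation above can be made concrete with a minimal sketch: each natural-language turn is appended to a running context, the model samples a completion conditioned on that context, and the completion is appended back before the next turn. The `generate` function below is a hypothetical stand-in for sampling from an autoregressive model such as CodeGen (it returns canned completions so the turn-handling logic is self-contained and runnable); the loop structure, not the model, is the point.

```python
# Sketch of conversational program synthesis as sequence prediction.
# `generate` is a hypothetical stand-in for autoregressive sampling from a
# model like CodeGen; here it echoes canned completions for illustration.

def generate(context: str) -> str:
    """Stand-in for model sampling: maps a context to a code completion."""
    canned = {
        "# Step 1: define a list of numbers": "nums = [1, 2, 3]",
        "# Step 2: sum the list": "total = sum(nums)",
    }
    # Dispatch on the most recent specification line in the context.
    last_spec = [ln for ln in context.splitlines() if ln.startswith("#")][-1]
    return canned[last_spec]

def conversational_synthesis(turns: list) -> str:
    """Interleave user specifications and model completions: each turn is
    conditioned on the full specification/program history so far."""
    context = ""
    for spec in turns:
        context += spec + "\n"              # user turn (natural language)
        context += generate(context) + "\n"  # model turn (sampled program)
    return context

program = conversational_synthesis([
    "# Step 1: define a list of numbers",
    "# Step 2: sum the list",
])
print(program)
```

In the real system the context would be tokenized and fed to the language model at every turn, so later turns can refer back to names introduced by earlier completions (as `total = sum(nums)` does here).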


