Code Generation Tools (Almost) for Free? A Study of Few-Shot, Pre-Trained Language Models on Code

06/02/2022
by Patrick Bareiß, et al.

Few-shot learning with large-scale, pre-trained language models is a powerful way to answer questions about code, e.g., how to complete a given code example, or even how to generate code snippets from scratch. The success of these models raises the question of whether they could serve as a basis for building a wide range of code generation tools. Traditionally, such tools are built manually and separately for each task. Instead, few-shot learning may make it possible to obtain different tools from a single pre-trained language model by simply providing a few examples or a natural language description of the expected tool behavior. This paper studies to what extent a state-of-the-art, pre-trained language model of code, Codex, may serve this purpose. We consider three code manipulation and code generation tasks targeted by a range of traditional tools: (i) code mutation; (ii) test oracle generation from natural language documentation; and (iii) test case generation. For each task, we compare few-shot learning to a manually built tool. Our results show that the model-based tools complement (code mutation), are on par with (test oracle generation), or even outperform (test case generation) their respective traditionally built tools, while requiring far less effort to develop. By comparing the effectiveness of different variants of the model-based tools, we provide insights into how to design an appropriate input ("prompt") for the model and into the influence of the model's size. For example, we find that providing a small natural language description of the code generation task is an easy way to improve predictions. Overall, we conclude that few-shot language models are surprisingly effective, yet there is still more work to be done, such as exploring more diverse ways of prompting and tackling even more involved tasks.
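To make the few-shot setup concrete, below is a minimal sketch of how such a prompt might be assembled and sent to a Codex-style model for one of the studied tasks (code mutation). The prompt template, the example mutant, and the use of the legacy openai Python client (pre-1.0) with the code-davinci-002 engine are illustrative assumptions, not the paper's exact setup.

    # Minimal sketch: few-shot prompting a Codex-style model for code mutation.
    # Assumptions (not from the paper): legacy openai Python client (<1.0),
    # the "code-davinci-002" engine, and this particular prompt template.
    import openai  # reads the API key from the OPENAI_API_KEY environment variable

    # A short natural language task description plus an input/output example;
    # the paper reports that adding such a description improves predictions.
    PROMPT_TEMPLATE = """\
    # Task: Given a Python function, produce a mutated version that changes
    # its behavior in a small, plausible way (e.g., flip an operator).

    # Original:
    def is_adult(age):
        return age >= 18
    # Mutant:
    def is_adult(age):
        return age > 18

    # Original:
    {code}
    # Mutant:
    """

    def generate_mutant(code: str) -> str:
        """Ask the model for a mutated variant of the given function."""
        response = openai.Completion.create(
            engine="code-davinci-002",
            prompt=PROMPT_TEMPLATE.format(code=code),
            max_tokens=128,
            temperature=0.7,       # some randomness yields more diverse mutants
            stop=["# Original:"],  # stop before the model invents a new example
        )
        return response["choices"][0]["text"].strip()

    if __name__ == "__main__":
        print(generate_mutant("def clamp(x, lo, hi):\n    return max(lo, min(x, hi))"))

The stop sequence is the key design choice here: without it, the model tends to keep generating further made-up example pairs instead of ending after the requested mutant.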

