GrACE: Generation using Associated Code Edits

05/23/2023
by Priyanshu Gupta, et al.

Developers expend a significant amount of time editing code for a variety of reasons, such as bug fixing or adding new features. Designing effective methods to predict code edits has been an active yet challenging area of research due to the diversity of code edits and the difficulty of capturing developer intent. In this work, we address these challenges by endowing pre-trained large language models (LLMs) of code with the knowledge of prior, relevant edits. The generative capability of the LLMs helps address the diversity in code changes, and conditioning code generation on prior edits helps capture the latent developer intent. We evaluate two well-known LLMs, Codex and CodeT5, in zero-shot and fine-tuning settings, respectively. In our experiments with two datasets, the knowledge of prior edits boosts the performance of the LLMs significantly and enables them to generate 29% more correctly edited code in top-1 suggestions relative to the current state-of-the-art symbolic and neural approaches.
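The abstract does not specify the prompt format the authors use, but the core idea of conditioning a code LLM on prior, associated edits can be sketched as follows. This is a minimal illustration under assumed conventions: the function name build_edit_prompt, the <before>/<after> delimiters, and the before/after serialization of edits are all hypothetical, not taken from the paper.

# Minimal sketch (hypothetical format, not the paper's actual prompt):
# serialize prior associated edits as before/after pairs, then ask a code
# LLM to complete the edit for the current code region in zero-shot mode.

def build_edit_prompt(prior_edits, current_before):
    """Build a prompt that conditions generation on prior edits.

    prior_edits    -- list of (before, after) snippets from related edits
    current_before -- the code region the developer is about to change
    """
    parts = []
    for before, after in prior_edits:
        parts.append(f"<before>\n{before}\n<after>\n{after}\n\n")
    # Leave the final <after> section open for the model to complete.
    parts.append(f"<before>\n{current_before}\n<after>\n")
    return "".join(parts)

if __name__ == "__main__":
    prior = [
        ("log.info(msg)", "logger.info(msg)"),    # earlier, related rename
        ("log.error(err)", "logger.error(err)"),  # another related rename
    ]
    print(build_edit_prompt(prior, "log.warn(msg)"))
    # Sending this prompt to a code LLM such as Codex and taking its top-1
    # completion corresponds to the zero-shot setting described above.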

