
Repository-Level Prompt Generation for Large Language Models of Code

by   Disha Shrivastava, et al.

With the success of large language models (LLMs) of code and their use as code assistants (e.g. Codex used in GitHub Copilot), techniques for introducing domain-specific knowledge into the prompt design process become important. In this work, we propose a framework called Repo-Level Prompt Generator that learns to generate example-specific prompts using prompt proposals. The prompt proposals take context from the entire repository, thereby incorporating both the structure of the repository and the context from other relevant files (e.g. imports, parent class files). Our technique does not require any access to the weights of the LLM, making it applicable in cases where we only have black-box access to the LLM. We conduct experiments on the task of single-line code-autocompletion using code repositories taken from Google Code archives. We demonstrate that an oracle constructed from our prompt proposals gives a remarkably high relative improvement of 36% over Codex, showing the quality of these proposals. Further, we show that when we train a model to predict a prompt proposal, we can achieve significant performance gains over Codex and other baselines. The code for our work can be found at: <>.
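The core idea of a prompt proposal, assembling context from elsewhere in the repository rather than only from the lines preceding the completion point, can be sketched as follows. This is an illustrative assumption, not the paper's actual implementation: the function name `gather_prompt_context`, the import-scanning heuristic, and the truncation lengths are all made up for the sketch.

```python
from pathlib import Path

def gather_prompt_context(repo_root, target_file, hole_line, max_chars=2000):
    """Hypothetical sketch of one repo-level 'prompt proposal':
    pull context from files imported by the target file and
    prepend it to the usual prior-lines context."""
    target = Path(repo_root) / target_file
    lines = target.read_text().splitlines()

    # Context source 1: lines in the target file before the hole.
    prior = "\n".join(lines[:hole_line])

    # Context source 2: the top of each sibling file named in the
    # target's imports (a stand-in for proposals such as "imports"
    # or "parent class file" mentioned in the abstract).
    imported = []
    for line in lines[:hole_line]:
        if line.startswith(("import ", "from ")):
            module = line.split()[1].split(".")[0]
            candidate = Path(repo_root) / f"{module}.py"
            if candidate.exists():
                imported.append(candidate.read_text()[:500])

    # Assemble the prompt: repository context first, then the prior
    # lines, truncated from the left to fit the LLM's context window.
    prompt = "\n".join(imported) + "\n" + prior
    return prompt[-max_chars:]
```

The resulting string would be sent to the black-box LLM as the completion prompt; the paper's framework goes further by learning which proposal to pick per example, whereas this sketch hard-codes a single one.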


