Evaluating Large Language Models Trained on Code

by Mark Chen, et al.

We introduce Codex, a GPT language model fine-tuned on publicly available code from GitHub, and study its Python code-writing capabilities. A distinct production version of Codex powers GitHub Copilot. On HumanEval, a new evaluation set we release to measure functional correctness for synthesizing programs from docstrings, our model solves 28.8% of the problems, while GPT-3 solves 0% and GPT-J solves 11.4%. Furthermore, we find that repeated sampling from the model is a surprisingly effective strategy for producing working solutions to difficult prompts. Using this method, we solve 70.2% of our problems with 100 samples per problem. Careful investigation of our model reveals its limitations, including difficulty with docstrings describing long chains of operations and with binding operations to variables. Finally, we discuss the potential broader impacts of deploying powerful code generation technologies, covering safety, security, and economics.
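The repeated-sampling result above is reported via the pass@k metric: the probability that at least one of k samples drawn from n generations passes the unit tests. The paper computes this with an unbiased estimator rather than by naively averaging over random subsets. Below is a minimal sketch of that estimator; the function name and signature are illustrative, not the paper's released code.

```python
def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator: probability that at least one of k
    samples, drawn without replacement from n generations of which
    c are correct, passes the unit tests.

    Equivalent to 1 - C(n-c, k) / C(n, k), computed as a running
    product for numerical stability with large n.
    """
    if n - c < k:
        # Fewer than k incorrect samples exist, so any k-subset
        # must contain at least one correct sample.
        return 1.0
    prob_all_fail = 1.0
    for i in range(n - c + 1, n + 1):
        prob_all_fail *= 1.0 - k / i
    return 1.0 - prob_all_fail
```

For example, with n = 100 samples per problem, c is the number of those samples that pass the problem's tests, and averaging `pass_at_k(100, c, k)` over all problems yields the headline pass@k figures.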




