CodeGen2: Lessons for Training LLMs on Programming and Natural Languages

05/03/2023
by Erik Nijkamp, et al.

Large language models (LLMs) have demonstrated remarkable abilities in representation learning for program synthesis and understanding tasks. The quality of the learned representations appears to follow neural scaling laws as a function of the number of model parameters and observations, so model performance is ultimately bounded by the amount of available data and compute, both of which are costly. In this study, we attempt to make the training of LLMs for program synthesis more efficient by unifying four key components: (1) model architectures, (2) learning methods, (3) infill sampling, and (4) data distributions. Specifically, for the model architecture, we attempt to unify encoder- and decoder-based models into a single prefix-LM. For learning methods, (i) causal language modeling, (ii) span corruption, and (iii) infilling are unified into a simple learning algorithm. For infill sampling, we examine the claim of a "free lunch" hypothesis, i.e., that infilling capability can be gained without degrading left-to-right generation. For data distributions, we explore the effect of a mixture of programming and natural languages on model performance. We conduct a comprehensive series of empirical experiments on 1B-parameter LLMs and distill the failures and successes of this exploration into four lessons. We provide a final recipe for training and release CodeGen2 models with 1B, 3.7B, 7B, and 16B parameters, along with the training framework, as open source: https://github.com/salesforce/CodeGen2.
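As a concrete illustration of how causal language modeling and infilling can share a single next-token objective, the sketch below constructs a mixed training example by rewriting a span-corrupted document as a plain left-to-right sequence. The sentinel token strings, the helper name, and the character-level span sampling are illustrative assumptions for this sketch, not the paper's exact implementation (which operates on tokenized sequences).

```python
import random

# Illustrative sentinel tokens; the actual special-token vocabulary may differ.
MASK = "<mask_1>"
SEP = "<sep>"
EOM = "<eom>"
EOT = "<|endoftext|>"


def build_training_example(doc: str, infill_prob: float = 0.5, rng=random) -> str:
    """Turn a raw document into either a plain causal-LM example or a
    span-corruption/infilling example rendered as one left-to-right sequence,
    so a single decoder can be trained on both with the same next-token loss."""
    if rng.random() > infill_prob or len(doc) < 3:
        # Plain causal language modeling: predict the document left to right.
        return doc + EOT

    # Sample a contiguous span to mask out ("fill in the middle").
    # Character-level for simplicity; a real pipeline would sample token spans.
    start = rng.randrange(0, len(doc) - 1)
    end = rng.randrange(start + 1, len(doc))
    prefix, middle, suffix = doc[:start], doc[start:end], doc[end:]

    # Rearranged sequence: the model sees prefix + sentinel + suffix,
    # then learns to emit the masked span after the separator.
    return prefix + MASK + suffix + EOT + SEP + MASK + middle + EOM


if __name__ == "__main__":
    print(build_training_example("def add(a, b):\n    return a + b\n"))
```

At inference time the same layout supports infill sampling: the model is prompted with the prefix, the sentinel, the suffix, and the separator, then decodes the missing span until it emits the end-of-mask token, so no separate encoder or bidirectional pass is required.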
