CodeBERT: A Pre-Trained Model for Programming and Natural Languages

Zhangyin Feng et al. · 02/19/2020

We present CodeBERT, a bimodal pre-trained model for programming language (PL) and natural language (NL). CodeBERT learns general-purpose representations that support downstream NL-PL applications such as natural language code search, code documentation generation, etc. We develop CodeBERT with a Transformer-based neural architecture, and train it with a hybrid objective function that incorporates the pre-training task of replaced token detection, which is to detect plausible alternatives sampled from generators. This enables us to utilize both bimodal data of NL-PL pairs and unimodal data, where the former provides input tokens for model training while the latter helps to learn better generators. We evaluate CodeBERT on two NL-PL applications by fine-tuning model parameters. Results show that CodeBERT achieves state-of-the-art performance on both natural language code search and code documentation generation tasks. Furthermore, to investigate what type of knowledge is learned in CodeBERT, we construct a dataset for NL-PL probing, and evaluate in a zero-shot setting where parameters of pre-trained models are fixed. Results show that CodeBERT performs better than previous pre-trained models on NL-PL probing.

1 Introduction

Large pre-trained models such as ELMo (Peters et al., 2018), GPT (Radford et al., 2018), BERT (Devlin et al., 2018), XLNet (Yang et al., 2019) and RoBERTa (Liu et al., 2019) have dramatically improved the state of the art on a variety of natural language processing (NLP) tasks. These pre-trained models learn effective contextual representations from massive unlabeled text optimized by self-supervised objectives, such as masked language modeling, which predicts the original masked word from an artificially masked input sequence. The success of pre-trained models in NLP also drives a surge of multi-modal pre-trained models, such as ViLBERT (Lu et al., 2019) for language-image and VideoBERT (Sun et al., 2019) for language-video, which are learned from bimodal data such as language-image pairs with bimodal self-supervised objectives.

In this work, we present CodeBERT, a bimodal pre-trained model for natural language (NL) and programming language (PL) such as Python, Java, JavaScript, etc. CodeBERT captures the semantic connection between natural language and programming language, and produces general-purpose representations that can broadly support NL-PL understanding tasks (e.g. natural language code search) and generation tasks (e.g. code documentation generation). It is developed with the multi-layer Transformer (Vaswani et al., 2017), which is adopted in a majority of large pre-trained models. In order to make use of both bimodal instances of NL-PL pairs and the large amount of available unimodal code, we train CodeBERT with a hybrid objective function, including standard masked language modeling (Devlin et al., 2018) and replaced token detection (Clark et al., 2020), where unimodal code helps to learn better generators for producing better alternative tokens for the latter objective.

We train CodeBERT from GitHub code repositories in six programming languages, where bimodal datapoints are codes paired with function-level natural language documentation (Husain et al., 2019). Training is conducted in a setting similar to that of multilingual BERT (Pires et al., 2019), in which one pre-trained model is learned over six programming languages with no explicit markers used to denote the input programming language. We evaluate CodeBERT on two downstream NL-PL tasks, natural language code search and code documentation generation. Results show that fine-tuning the parameters of CodeBERT achieves state-of-the-art performance on both tasks. To further investigate what type of knowledge is learned in CodeBERT, we construct a dataset for NL-PL probing, and test CodeBERT in a zero-shot scenario, i.e. without fine-tuning its parameters. We find that CodeBERT consistently outperforms RoBERTa, a purely natural language-based pre-trained model.

The contributions of this work are as follows:

  • We propose CodeBERT, which to the best of our knowledge is the first large NL-PL pre-trained model.

  • We present a hybrid learning objective that supports the use of both bimodal data of NL-PL pairs and easily accessible unimodal data, e.g. code without paired natural language documentation.

  • We demonstrate that CodeBERT achieves state-of-the-art performance on natural language code search and code documentation generation. We further create a dataset to investigate the probing ability of NL-PL pre-trained models.

2 Background

2.1 Pre-Trained Models in NLP

Large pre-trained models (Peters et al., 2018; Radford et al., 2018; Devlin et al., 2018; Yang et al., 2019; Liu et al., 2019; Raffel et al., 2019) have brought dramatic empirical improvements on almost every NLP task in the past few years. Successful approaches train deep neural networks on large-scale plain texts with self-supervised learning objectives. One of the most representative neural architectures is the Transformer (Vaswani et al., 2017), which is also the one used in this work. It contains multiple self-attention layers and can be learned end-to-end with gradient descent, as every component is differentiable. The term “self-supervised” means that the supervision used for pre-training is automatically collected from raw data without manual annotation. Dominant learning objectives are language modeling and its variations. For example, in GPT (Radford et al., 2018), the learning objective is language modeling, namely predicting the next word w_i given the preceding context words w_1, w_2, ..., w_{i-1}. As the ultimate goal of pre-training is not to train a good language model, it is desirable to consider both preceding and following contexts to learn better general-purpose contextual representations. This leads us to the masked language modeling objective used in BERT (Devlin et al., 2018), which learns to predict the masked words of a randomly masked word sequence given the surrounding contexts. Masked language modeling is also used as one of the two learning objectives for training CodeBERT.

2.2 Multi-Modal Pre-Trained Models

The remarkable success of pre-trained models in NLP has driven the development of multi-modal pre-trained models that learn implicit alignments between inputs of different modalities. These models are typically learned from bimodal data, such as pairs of language-image or pairs of language-video. For example, ViLBERT (Lu et al., 2019) learns from image caption data, where the model learns by reconstructing the categories of masked image regions or masked words given the observed inputs, and meanwhile predicting whether the caption describes the image content or not. Similarly, VideoBERT (Sun et al., 2019) learns from language-video data and is trained by video and text masked token prediction. Our work belongs to this line of research as we regard NL and PL as different modalities. Our method differs from previous works in that the fuel for model training includes not only bimodal data of NL-PL pairs, but also larger amounts of unimodal data such as code without paired documentation.

A concurrent work (Kanade et al., 2019) uses masked language modeling as the objective to train a BERT model on code only, by regarding code as a sequence of words. The main difference is that we train CodeBERT from NL-PL data instead of PL only, with the goal of producing generic representations that capture the implicit connection between NL and PL. This is more suitable for NL-PL tasks, which is verified empirically in the experiment section.

3 CodeBERT

We describe the details about CodeBERT in this section, including the model architecture, the input and output representations, the objectives and data used for training CodeBERT, and how to fine-tune CodeBERT when it is applied to downstream tasks.

3.1 Model Architecture

We follow BERT (Devlin et al., 2018) and RoBERTa (Liu et al., 2019), and use a multi-layer bidirectional Transformer (Vaswani et al., 2017) as the model architecture of CodeBERT. We will not review the ubiquitous Transformer architecture in detail. We develop CodeBERT with exactly the same model architecture as RoBERTa-base, which includes 12 layers. Each layer has 12 self-attention heads, and the size of each head is 64. The hidden dimension is 768 and the inner hidden size of the feed-forward layer is 3072. The total number of model parameters is 125M.
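For reference, the following is a minimal sketch of instantiating an encoder of the same size, assuming the HuggingFace transformers library; it is illustrative and not the training code used in this work.

```python
# Minimal sketch: a RoBERTa-base-sized encoder matching the reported
# CodeBERT architecture, assuming the HuggingFace `transformers` library.
from transformers import RobertaConfig, RobertaModel

config = RobertaConfig(
    vocab_size=50265,          # RoBERTa-base vocabulary size
    hidden_size=768,           # hidden dimension
    num_hidden_layers=12,      # 12 Transformer layers
    num_attention_heads=12,    # 12 heads of size 768 / 12 = 64 each
    intermediate_size=3072,    # feed-forward inner hidden size
)
model = RobertaModel(config)
print(sum(p.numel() for p in model.parameters()))  # roughly 125M parameters
```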

3.2 Input/Output Representations

In the pre-training phase, we set the input as the concatenation of two segments with a special separator token, namely [CLS], w1, w2, ..., wn, [SEP], c1, c2, ..., cm, [EOS]. One segment is natural language text, and the other is code from a certain programming language. [CLS] is a special token placed in front of the two segments, whose final hidden representation is considered as the aggregated sequence representation for classification or ranking. Following the standard way of processing text in Transformer, we regard a natural language text as a sequence of words and split it into WordPiece tokens (Wu et al., 2016). We regard a piece of code as a sequence of tokens.

The output of CodeBERT includes (1) the contextual vector representation of each token, for both natural language and code, and (2) the representation of [CLS], which works as the aggregated sequence representation.
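A minimal sketch of this input layout, assuming already-tokenized NL and code segments (the helper name and example tokens are illustrative, not from the actual pipeline):

```python
# Illustrative sketch of the pre-training input layout:
# [CLS], w1, ..., wn, [SEP], c1, ..., cm, [EOS].
# Tokenization (WordPiece for NL, code tokenization for PL) is abstracted away.
def build_input(nl_tokens, code_tokens):
    return ["[CLS]"] + list(nl_tokens) + ["[SEP]"] + list(code_tokens) + ["[EOS]"]

tokens = build_input(
    ["return", "the", "maximum", "value"],
    ["def", "f", "(", "xs", ")", ":", "return", "max", "(", "xs", ")"],
)
# The hidden state at position 0 ([CLS]) serves as the aggregated
# sequence representation for classification or ranking.
```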

3.3 Pre-Training Data

We train CodeBERT with both bimodal data, which refers to parallel data of natural language-code pairs, and unimodal data, which stands for codes without paired natural language texts and natural language without paired codes.

We use datapoints from GitHub repositories, where each bimodal datapoint is an individual function with paired documentation, and each unimodal datapoint is a function without paired documentation. Specifically, we use the recent large dataset provided by Husain et al. (2019), which includes 2.1M bimodal datapoints and 6.4M unimodal codes across six programming languages (Python, Java, JavaScript, PHP, Ruby, and Go). Data statistics are shown in Table 1. (Since we will evaluate on the natural language code search task, we only use the training data of Husain et al. (2019) to train CodeBERT, with no access to the dev or testing data.)

The data comes from publicly available, open-source, non-fork GitHub repositories and is filtered with a set of constraints and rules. For example, (1) each project should be used by at least one other project, (2) each documentation is truncated to its first paragraph, (3) documentations shorter than three tokens are removed, (4) functions shorter than three lines are removed, and (5) functions whose names contain the substring “test” are removed. An example of the data is given in Figure 1 (the source of the illustrating example is https://github.com/apache/spark/blob/618d6bff71073c8c93501ab7392c3cc579730f0b/python/pyspark/rdd.py#L125-L138).
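As a hedged sketch of filtering rules (2)-(5) above (rule (1) requires repository-level usage metadata and is omitted), the helper below shows one way such a filter could look; the field names are illustrative assumptions rather than the dataset's actual schema.

```python
# Sketch of filtering rules (2)-(5); returns the cleaned datapoint or None.
def clean_datapoint(func_name, code, doc):
    doc = doc.strip().split("\n\n")[0]       # (2) keep only the first paragraph
    if len(doc.split()) < 3:                 # (3) drop documentations shorter than 3 tokens
        return None
    if len(code.splitlines()) < 3:           # (4) drop functions shorter than 3 lines
        return None
    if "test" in func_name.lower():          # (5) drop functions whose name contains "test"
        return None
    return {"func_name": func_name, "code": code, "docstring": doc}
```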

Figure 1: An example of the NL-PL pair, where NL is the first paragraph (filled in red) from the documentation (dashed line in black) of a function.

Training Data Bimodal Data Unimodal Codes
Go 319,256 726,768
Java 500,754 1,569,889
JavaScript 143,252 1,857,835
PHP 662,907 977,821
Python 458,219 1,156,085
Ruby 52,905 164,048
All 2,137,293 6,452,446
Table 1: Statistics of the dataset used for training CodeBERT.
Figure 2: An illustration of the replaced token detection objective. Both the NL and code generators are language models, which generate plausible tokens for masked positions based on surrounding contexts. The NL-Code discriminator is the targeted pre-trained model, which is trained by detecting plausible alternative tokens sampled from the NL and PL generators. The NL-Code discriminator is used for producing general-purpose representations in the fine-tuning step; both generators are thrown out in the fine-tuning step.

3.4 Pre-Training CodeBERT

We describe the two objectives used for training CodeBERT here. The first objective is masked language modeling (MLM), which has proven effective in the literature (Devlin et al., 2018; Liu et al., 2019; Sun et al., 2019). We apply masked language modeling to bimodal data of NL-PL pairs. The second objective is replaced token detection (RTD), which further uses a large amount of unimodal data, such as code without paired natural language texts.

Objective #1: Masked Language Modeling (MLM)

Given a datapoint of an NL-PL pair x = {w, c} as input, where w is a sequence of NL words and c is a sequence of PL tokens, we first select a random set of positions to mask out for both NL and PL (i.e. m^w and m^c, respectively), and then replace the selected positions with a special [MASK] token. Following Devlin et al. (2018), 15% of the tokens from x are masked out.

$m_i^w \sim \mathrm{unif}\{1, |w|\} \quad \text{for } i = 1 \text{ to } |m^w|$   (1)
$m_i^c \sim \mathrm{unif}\{1, |c|\} \quad \text{for } i = 1 \text{ to } |m^c|$   (2)
$w^{\mathrm{masked}} = \mathrm{REPLACE}(w, m^w, \texttt{[MASK]})$   (3)
$c^{\mathrm{masked}} = \mathrm{REPLACE}(c, m^c, \texttt{[MASK]})$   (4)
$x = w + c$   (5)

The MLM objective is to predict the original tokens that are masked out, formulated as follows, where p^{D_1} is the discriminator which predicts a token from a large vocabulary.

$\mathcal{L}_{\mathrm{MLM}}(\theta) = \sum_{i \in m^w \cup m^c} -\log p^{D_1}\!\left(x_i \mid w^{\mathrm{masked}}, c^{\mathrm{masked}}\right)$   (6)
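A minimal sketch of the masking step in Eqs. (1)-(5), assuming already-tokenized NL and PL sequences; it is illustrative rather than the preprocessing code used here.

```python
# Minimal sketch of masking: sample random positions and replace them with
# [MASK]; roughly 15% of the tokens are masked out.
import random

def mask_tokens(tokens, mask_ratio=0.15, mask_token="[MASK]"):
    num_to_mask = max(1, int(mask_ratio * len(tokens)))
    positions = sorted(random.sample(range(len(tokens)), num_to_mask))
    pos_set = set(positions)
    masked = [mask_token if i in pos_set else tok for i, tok in enumerate(tokens)]
    return masked, positions

w_masked, m_w = mask_tokens(["return", "the", "maximum", "value"])
c_masked, m_c = mask_tokens(["def", "f", "(", "x", ")", ":", "return", "max", "(", "x", ")"])
```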

Objective #2: Replaced Token Detection (RTD)

In the MLM objective, only bimodal data (i.e. datapoints of NL-PL pairs) is used for training. Here we present the objective of replaced token detection. The RTD objective (Clark et al., 2020) was originally developed for efficiently learning a pre-trained model for natural language. We adapt it to our scenario, with the advantage of using both bimodal and unimodal data for training. Specifically, there are two data generators, an NL generator p^{G_w} and a PL generator p^{G_c}, both of which generate plausible alternatives for the set of randomly masked positions.

$\hat{w}_i \sim p^{G_w}(w_i \mid w^{\mathrm{masked}}) \quad \text{for } i \in m^w$   (7)
$\hat{c}_i \sim p^{G_c}(c_i \mid c^{\mathrm{masked}}) \quad \text{for } i \in m^c$   (8)
$w^{\mathrm{corrupt}} = \mathrm{REPLACE}(w, m^w, \hat{w})$   (9)
$c^{\mathrm{corrupt}} = \mathrm{REPLACE}(c, m^c, \hat{c})$   (10)
$x^{\mathrm{corrupt}} = w^{\mathrm{corrupt}} + c^{\mathrm{corrupt}}$   (11)

The discriminator is trained to determine whether a word is the original one or not, which is a binary classification problem. It is worth noting that the RTD objective is applied to every position in the input, and it differs from a GAN (generative adversarial network) in that if a generator happens to produce the correct token, the label of that token is “real” instead of “fake” (Clark et al., 2020). The loss function of RTD with regard to the discriminator parameterized by θ is given below, where δ(i) is an indicator function and p^{D_2}(x^{corrupt}, i) is the discriminator that predicts the probability of the i-th word being real.

$\mathcal{L}_{\mathrm{RTD}}(\theta) = \sum_{i=1}^{|w|+|c|} \Big( \delta(i) \log p^{D_2}(x^{\mathrm{corrupt}}, i) + \big(1 - \delta(i)\big)\big(1 - \log p^{D_2}(x^{\mathrm{corrupt}}, i)\big) \Big)$   (12)
$\delta(i) = \begin{cases} 1, & \text{if } x_i^{\mathrm{corrupt}} = x_i \\ 0, & \text{otherwise} \end{cases}$   (13)
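As a hedged illustration of Eqs. (7)-(13), the sketch below builds a corrupted input by sampling replacement tokens at the masked positions and derives the per-position binary labels used by the discriminator; the generator callable is an assumption standing in for the generators described next.

```python
# Hedged sketch of RTD data construction: replacement tokens are sampled at
# the masked positions, and every position receives a binary label that is
# "real" (1) whenever the token equals the original one -- even if the
# generator happened to reproduce it -- and "fake" (0) otherwise.
def make_rtd_example(original_tokens, masked_positions, sample_replacement):
    """`sample_replacement(tokens, i)` is an assumed generator callable."""
    corrupted = list(original_tokens)
    for i in masked_positions:
        corrupted[i] = sample_replacement(original_tokens, i)
    labels = [1 if corrupted[i] == original_tokens[i] else 0
              for i in range(len(original_tokens))]
    return corrupted, labels
```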

There are many different ways to implement the generators. In this work, we implement two efficient n-gram language models (Jurafsky, 2000) with bidirectional contexts, one for NL and one for PL, and learn them from the corresponding unimodal datapoints, respectively. The approach easily generalizes to bimodal generators or to more complicated generators such as Transformer-based neural architectures learned jointly; we leave these to future work. The PL training data is the unimodal code shown in Table 1, and the NL training data comes from the documentations in the bimodal data. One could easily extend these two training datasets to a larger amount. The final loss function is given below.

$\min_{\theta} \; \mathcal{L}_{\mathrm{MLM}}(\theta) + \mathcal{L}_{\mathrm{RTD}}(\theta)$   (14)

We train CodeBERT on one NVIDIA DGX-2 machine using FP16; it combines 16 interconnected NVIDIA Tesla V100 GPUs, each with 32GB of memory. We use the following set of hyper-parameters to train models: batch size in {256, 512} and learning rate in {1e-4, 5e-4}. We use Adam (Kingma and Ba, 2014) to update the parameters and set the number of warmup steps to 10K. We set the batch size to 2,048 and the maximum sequence length to 512. Training 1,000 batches of data takes 600 minutes with the MLM objective and 120 minutes with the RTD objective. The final model is trained for 25K batches.


model ruby javascript go python java php Ma-Avg
NBow 0.4285 0.4607 0.6409 0.5809 0.5140 0.4835 0.5181
CNN 0.2450 0.3523 0.6274 0.5708 0.5270 0.5294 0.4753
BiRNN 0.0835 0.1530 0.4524 0.3213 0.2865 0.2512 0.2580
selfAtt 0.3651 0.4506 0.6809 0.6922 0.5866 0.6011 0.5628
RoBERTa 0.6245 0.6060 0.8204 0.8087 0.6659 0.6576 0.6972
PT w/ Code Only (init=scratch) 0.5712 0.5557 0.7929 0.7855 0.6567 0.6172 0.6632
PT w/ Code Only (init=RoBERTa) 0.6612 0.6402 0.8191 0.8438 0.7213 0.6706 0.7260
CodeBERT (MLM, init=scratch) 0.5695 0.6029 0.8304 0.8261 0.7142 0.6556 0.6998
CodeBERT (MLM, init=RoBERTa) 0.6898 0.6997 0.8383 0.8647 0.7476 0.6893 0.7549
CodeBERT (RTD, init=RoBERTa) 0.6414 0.6512 0.8285 0.8263 0.7150 0.6774 0.7233
CodeBERT (MLM+RTD, init=RoBERTa) 0.6926 0.7059 0.8400 0.8685 0.7484 0.7062 0.7603
Table 2: Results on natural language code retrieval. Baselines include four joint embeddings of NL and PL (first group), and RoBERTa as well as RoBERTa continuously trained with masked language modeling on code only (second group). PT stands for pre-training. We train CodeBERT (third group) with different settings, including different initializations (from scratch (init=scratch) or initialized with the parameters of RoBERTa (init=RoBERTa)) and different learning objectives (MLM, RTD, or the combination of both).

3.5 Fine-Tuning CodeBERT

We use CodeBERT in different settings for downstream NL-PL tasks. For example, in natural language code search, we feed the input in the same way as in the pre-training phase and use the representation of [CLS] to measure the semantic relevance between code and natural language query, while in code-to-text generation, we use an encoder-decoder framework and initialize the encoder of a generative model with CodeBERT. Details are given in the experiment section.

4 Experiment

We present empirical results in this section to verify the effectiveness of CodeBERT. We first describe the use of CodeBERT in natural language code search (§4.1), where the model parameters of CodeBERT are fine-tuned. After that, we present the NL-PL probing task (§4.2) and evaluate CodeBERT in a zero-shot setting where its parameters are fixed. Finally, we evaluate CodeBERT on a generation problem, i.e. code documentation generation (§4.3), and further evaluate it on a programming language that is never seen in the training phase (§4.4).

4.1 Natural Language Code Search

Given a natural language query as input, the objective of code search is to find the most semantically related code from a collection of codes. We conduct experiments on the CodeSearchNet corpus (Husain et al., 2019; https://github.com/github/CodeSearchNet). Data statistics of the training/validation/testing splits for the six programming languages are given in Table 3. We follow the official evaluation metric and calculate the Mean Reciprocal Rank (MRR) for each test pair (c, w) over a fixed set of 999 distractor codes. We further calculate the macro-average MRR over all languages as an overall evaluation metric. Note that this metric differs from the avg metric in the original paper, where the answer is retrieved from candidates drawn from all six languages. We fine-tune a language-specific model for each programming language. (We also fine-tuned a multi-lingual model for the six programming languages, but found that it performs worse than fine-tuning a language-specific model for each language.) We train each model with a binary classification loss function, where a classification layer is connected to the representation of [CLS]. Both training and validation datasets are created such that positive and negative samples are balanced. Negative samples consist of a balanced number of instances with randomly replaced NL (i.e. (c, ŵ)) and randomly replaced PL (i.e. (ĉ, w)).
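A minimal sketch of this evaluation protocol, assuming a relevance score callable built on the fine-tuned [CLS] representation (this is not the official CodeSearchNet script):

```python
# Sketch of MRR over 999 distractors: for each test pair (c, w), rank the gold
# code against its fixed distractor set by the model's relevance score and
# average the reciprocal rank of the gold code over the test set.
def mean_reciprocal_rank(test_pairs, distractor_sets, score):
    total = 0.0
    for (gold_code, query), distractors in zip(test_pairs, distractor_sets):
        candidates = [gold_code] + list(distractors)   # 1 gold + 999 distractors
        ranked = sorted(candidates, key=lambda c: score(query, c), reverse=True)
        total += 1.0 / (ranked.index(gold_code) + 1)   # rank of the gold code
    return total / len(test_pairs)
```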


Code Search Training Dev Testing
Go 635,635 28,483 14,291
Java 908,886 30,655 26,909
JavaScript 247,773 16,505 6,483
PHP 1,047,406 52,029 28,391
Python 824,342 46,213 22,176
Ruby 97,580 4,417 2,279
Table 3: Data statistics about the CodeSearchNet Corpus for natural language code search.

ruby javascript go python java php all
number of datapoints for probing
PL (2 choices) 38 272 152 1,264 482 407 2,615
NL (4 choices) 20 65 159 216 323 73 856
PL probing
RoBERTa 73.68% 65.07% 71.05% 59.02% 62.03% 70.02% 62.83%
Pre-Train w/ Code Only 84.21% 81.62% 91.45% 75.16% 85.27% 83.05% 80.00%
CodeBERT (MLM) 81.58% 86.40% 92.11% 79.19% 91.08% 90.42% 84.67%
PL probing with preceding context only
RoBERTa 71.05% 51.84% 51.32% 55.06% 42.12% 52.58% 51.97%
Pre-Train w/ Code Only 63.16% 49.26% 59.51% 56.96% 59.13% 58.72% 57.05%
CodeBERT (MLM) 60.53% 52.21% 59.51% 61.16% 57.68% 61.92% 59.58%
NL probing
RoBERTa 45.00% 72.31% 47.17% 67.59% 50.77% 61.64% 56.78%
Pre-Train w/ Code Only 55.00% 61.54% 55.97% 65.74% 53.25% 61.64% 58.29%
CodeBERT (MLM) 60.00% 83.08% 64.15% 72.69% 61.61% 75.34% 67.64%
Table 4: Statistics of the data for NL-PL probing and the performance of different pre-trained models. Accuracies (%) are reported. Best results in each group are in bold.

In the fine-tuning step, we set the learning rate to 1e-5, the batch size to 64, the max sequence length to 200, and the max number of fine-tuning epochs to 8. We use Adam to update the parameters. We choose the model that performs best on the development set and use it to evaluate on the test set.

Model Comparisons

Table 2 shows the results of different approaches on the CodeSearchNet corpus. The first four rows are reported by Husain et al. (2019) and are joint embeddings of NL and PL (Gu et al., 2018; Mitra et al., 2018). NBoW represents neural bag-of-words. CNN, BiRNN, and SelfAtt stand for 1D convolutional neural network (Kim, 2014), bidirectional GRU-based recurrent neural network (Cho et al., 2014), and multi-head attention (Vaswani et al., 2017), respectively.

We report the remaining numbers in Table 2. We train all these pre-trained models by regarding codes as a sequence of tokens. To compare with Kanade et al. (2019), we also continuously train RoBERTa on codes from CodeSearchNet with masked language modeling. Results show that CodeBERT consistently performs better than RoBERTa and the model pre-trained with code only. CodeBERT (MLM) learned from scratch performs better than RoBERTa. Unsurprisingly, initializing CodeBERT with RoBERTa improves the performance.

We further give a learning curve of different pre-trained models in the fine-tuning process. From Figure 3, we can see that CodeBERT performs better at the early stage, which reflects that CodeBERT provides good initialization for learning downstream tasks.

4.2 NL-PL Probing

In the previous subsection, we showed the empirical effectiveness of CodeBERT in a setting where its parameters are fine-tuned on downstream tasks. In this subsection, we further investigate what type of knowledge is learned in CodeBERT without modifying its parameters.

Figure 3: Learning curve of different pre-trained models in the fine-tuning step. We show results on Python and Java.

Task Formulation and Data Construction

Following the probing experiments in NLP (Petroni et al., 2019; Talmor et al., 2019), we study NL-PL probing here. Since there is no existing work towards this goal, we formulate the problem of NL-PL probing and create the dataset ourselves. Given an NL-PL pair (c, w), the goal of NL-PL probing is to test the model's ability to correctly predict/recover the masked token of interest (either a code token or a word token) among distractors. There are two major types of distractors: one is the whole target vocabulary used for the masked language modeling objective (Petroni et al., 2019), and the other has fewer candidates which are filtered or curated based on experts' understanding of the ability to be tested (Talmor et al., 2019). We follow the second direction and formulate NL-PL probing as a multi-choice question answering task, where the question is cloze-style, a certain token is replaced by [MASK], and the distractor candidate answers are curated based on our expertise.

Specifically, we evaluate on the NL side and the PL side, respectively. To ease the effort of data collection, we collect data automatically from NL-PL pairs in both the validation and testing sets of CodeSearchNet, both of which are unseen in the pre-training phase. To evaluate on the NL side, we select NL-PL pairs whose NL documentation includes one of six keywords (max, maximize, min, minimize, less, greater), and group them into four candidates by merging the first two keywords and the middle two keywords. The task asks pre-trained models to select the correct keyword instead of the three other distractors. That is to say, the input in this setting includes the complete code and a masked NL documentation, and the goal is to select the correct answer from four candidates. For the PL side, we select codes containing the keywords max and min, and formulate the task as a two-choice answer selection problem. Here, the input includes the complete NL documentation and a masked PL code, and the goal is to select the correct answer from two candidates. Since code completion is an important scenario, we would also like to test the model's ability to predict the correct token based solely on preceding PL contexts. Therefore, we add an additional setting for the PL side, where the input includes the complete NL documentation and the preceding PL code. Data statistics are given in the top two rows of Table 4.
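A minimal sketch of this zero-shot protocol, assuming an mlm_probability callable that returns the frozen model's probability of a candidate token at the [MASK] position (an assumed helper, not the actual evaluation script):

```python
# Sketch of NL-PL probing as multi-choice cloze answering: the token of
# interest is replaced by [MASK] and the frozen pre-trained model selects the
# candidate with the highest masked-token probability. No parameters are updated.
def probe(masked_input, candidates, mlm_probability):
    return max(candidates, key=lambda cand: mlm_probability(masked_input, cand))

# e.g. PL probing with two choices, given the full NL documentation plus
# masked code as `masked_input`:
# probe(masked_input, ["max", "min"], mlm_probability)
```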

Model Comparisons

Results are given in Table 4. We report accuracy, namely the number of correctly predicted instances over the number of all instances, for each programming language. Since the datasets in different programming languages are extremely unbalanced, we report the accumulated metric computed in the same way. We use CodeBERT (MLM) here because its output layer naturally fits the probing task. Results show that CodeBERT performs better than the baselines on almost all languages for both NL and PL probing. The numbers with only preceding contexts are lower than those with bidirectional contexts, which suggests that code completion is challenging. We leave it as future work.

We further give a case study on PL-NL probing. Figure 4 illustrates an example of a Python code (the example comes from https://github.com/peri-source/peri/blob/61beed5deaaf978ab31ed716e8470d86ba639867/peri/comp/psfcalc.py#L994-L1002). We mask the NL token and the PL token separately, and report the predicted probabilities of RoBERTa and CodeBERT. RoBERTa fails in both cases, whereas CodeBERT makes the correct prediction in both the NL and PL settings.

Figure 4: Case study on Python. The NL token (in blue) and the PL token (in yellow) are masked separately. Predicted probabilities of RoBERTa and CodeBERT are given.

model ruby javascript go python java php overall
seq2seq 6.96 6.88 23.48 13.04 11.42 18.40 13.36
Transformer 7.87 8.14 25.61 13.44 12.57 18.25 14.31
RoBERTa 7.26 5.72 26.09 14.92 13.20 19.90 14.52
pre-train w/ code only 7.39 8.30 26.39 15.05 13.07 20.71 15.15
CodeBERT (RTD) 7.36 8.73 26.02 15.12 12.72 20.25 15.03
CodeBERT (MLM) 7.95 8.51 26.79 15.48 13.59 21.00 15.55
CodeBERT (MLM+RTD) 8.46 9.54 26.66 15.41 14.56 21.32 15.99
Table 5: Results on Code-to-Documentation generation, evaluated on CodeSearchNet with smoothed BLEU-4 score.

4.3 Code Documentation Generation

Although the pre-training objective of CodeBERT does not include generation-based objectives (Lewis et al., 2019), we would like to investigate to what extent CodeBERT performs on generation tasks. Specifically, we study code-to-NL generation and report results for the documentation generation task on the CodeSearchNet corpus for six programming languages. We use the BLEU-4 score (Papineni et al., 2002) as our evaluation metric. Since the generated documentations are short and higher-order n-grams may not overlap, we remedy this problem by using the smoothed BLEU score (Lin and Och, 2004).
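A minimal sketch of a smoothed sentence-level BLEU-4 computation, assuming NLTK's implementation as an approximation of the metric (the exact smoothing variant of the official evaluation script may differ):

```python
# Approximate smoothed BLEU-4 for a short generated documentation, using NLTK.
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

def smoothed_bleu4(reference_tokens, hypothesis_tokens):
    smooth = SmoothingFunction().method4  # one of NLTK's smoothing variants
    return sentence_bleu([reference_tokens], hypothesis_tokens,
                         weights=(0.25, 0.25, 0.25, 0.25),
                         smoothing_function=smooth)
```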

Baselines and Training Details We compare our model with several baselines, including an RNN-based sequence-to-sequence model with attention (Sutskever et al., 2014), the Transformer (Vaswani et al., 2017), RoBERTa, and the model pre-trained on code only. We use a Transformer with 6 layers, 768-dimensional hidden states, and 12 attention heads as the decoder in all settings. To demonstrate the effectiveness of CodeBERT on code-to-NL generation tasks, we adopt various pre-trained models as encoders and keep the hyperparameters consistent. We set the max lengths of input and inference to 256 and 64, respectively. We use the Adam optimizer to update model parameters, with a learning rate of 5e-5 and a batch size of 64. We tune hyperparameters and perform early stopping on the development set.

Results Table 5 shows the results of different models on the code-to-documentation generation task. Models pre-trained on programming language outperform RoBERTa, which illustrates that pre-training on programming language improves code-to-NL generation. Moreover, CodeBERT pre-trained with both the RTD and MLM objectives brings an overall gain of about 1.5 BLEU points over RoBERTa and achieves state-of-the-art performance on the majority of programming languages. These results show that our pre-training objectives (MLM and RTD) are effective for CodeBERT on code-to-NL generation tasks.

4.4 Generalization to Programming Languages NOT in Pre-training

We would like to evaluate CodeBERT on a programming language that is never seen in the pre-training step. To this end, we study the task of generating a natural language summary of a C# code snippet. We conduct experiments on the dataset of CodeNN (Iyer et al., 2016; https://github.com/sriniiyer/codenn), which consists of 66,015 pairs of questions and answers automatically collected from StackOverflow. The target documents in this task are about 10 tokens long on average. This dataset is challenging since its scale is orders of magnitude smaller than the CodeSearchNet corpus. To reliably evaluate models, the dataset extends the test set by asking humans to provide two additional titles for each code snippet in the test set, giving a total of three reference titles per snippet. We evaluate models using the smoothed BLEU-4 score and the same evaluation scripts as Iyer et al. (2016). Since state-of-the-art methods use RNNs as their decoder, we choose a 2-layer GRU (Cho et al., 2014) with an attention mechanism as our decoder for comparison. We fine-tune models using a grid search over the following hyper-parameters: batch size in {32, 64} and learning rate in {2e-5, 5e-5}. We report the numbers of the models that achieve the best performance on the development set.


Model BLEU
MOSES (Koehn et al., 2007) 11.57
IR 13.66
SUM-NN (Rush et al., 2015) 19.31
2-layer BiLSTM 19.78
Transformer (Vaswani et al., 2017) 19.68
TreeLSTM (Tai et al., 2015) 20.11
CodeNN (Iyer et al., 2016) 20.53
code2seq (Alon et al., 2019) 23.04
RoBERTa 19.81
pre-train w/ code only 20.65
CodeBERT (RTD) 22.14
CodeBERT (MLM) 22.32
CodeBERT (MLM+RTD) 22.36
Table 6: Code-to-NL generation on C# language.

Table 6 shows that our model with the MLM and RTD pre-training objectives achieves a 22.36 BLEU score, improving over RoBERTa by 2.55 points, which illustrates that CodeBERT generalizes better to a programming language that is never seen in the pre-training step. However, our model achieves slightly lower results than code2seq (Alon et al., 2019). The main reason could be that code2seq makes use of compositional paths in the abstract syntax tree (AST), while CodeBERT only takes the original code as input. We have trained a version of CodeBERT by traversing the tree structure of the AST in a certain order, but applying that model does not bring improvements on generation tasks. This points to a potential direction for improving CodeBERT by incorporating the AST.

5 Conclusion

In this paper, we present CodeBERT, which to the best of our knowledge is the first large bimodal pre-trained model for natural language and programming language. We train CodeBERT on both bimodal and unimodal data, and show that fine-tuning CodeBERT achieves state-of-the-art performance on downstream tasks including natural language code search and code-to-documentation generation. To further investigate the knowledge embodied in pre-trained models, we formulate the task of NL-PL probing and create a dataset for probing. We regard the probing task as a cloze-style answer selection problem, and curate distractors for both NL and PL parts. Results show that, with model parameters fixed, CodeBERT performs better than RoBERTa and a continuously trained model using codes only.

There are many potential directions for further research in this field. First, one could learn better generators with bimodal evidence or more complicated neural architectures to improve the replaced token detection objective. Second, the loss functions of CodeBERT mainly target NL-PL understanding tasks. Although CodeBERT achieves strong BLEU scores on code-to-documentation generation, CodeBERT itself could be further improved with generation-related learning objectives. How to successfully incorporate ASTs into the pre-training step is also an attractive direction. Third, we plan to apply CodeBERT to more NL-PL related tasks and extend it to more programming languages. Flexible and powerful domain/language adaptation methods will be necessary to generalize well.

References

  • U. Alon, S. Brody, O. Levy, and E. Yahav (2019) Code2seq: generating sequences from structured representations of code. International Conference on Learning Representations. Cited by: §4.4, Table 6.
  • K. Cho, B. Van Merriënboer, C. Gulcehre, D. Bahdanau, F. Bougares, H. Schwenk, and Y. Bengio (2014) Learning phrase representations using rnn encoder-decoder for statistical machine translation. arXiv preprint arXiv:1406.1078. Cited by: §4.1, §4.4.
  • K. Clark, M. Luong, Q. V. Le, and C. D. Manning (2020) ELECTRA: pre-training text encoders as discriminators rather than generators. In International Conference on Learning Representations. Cited by: §1, §3.4.
  • J. Devlin, M. Chang, K. Lee, and K. Toutanova (2018) Bert: pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805. Cited by: §1, §1, §2.1, §3.1, §3.4, §3.4.
  • X. Gu, H. Zhang, and S. Kim (2018) Deep code search. In 2018 IEEE/ACM 40th International Conference on Software Engineering (ICSE), pp. 933–944. Cited by: §4.1.
  • H. Husain, H. Wu, T. Gazit, M. Allamanis, and M. Brockschmidt (2019) CodeSearchNet challenge: evaluating the state of semantic code search. arXiv preprint arXiv:1909.09436. Cited by: §1, §3.3, §4.1, §4.1, footnote 1.
  • S. Iyer, I. Konstas, A. Cheung, and L. Zettlemoyer (2016) Summarizing source code using a neural attention model. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 2073–2083. Cited by: §4.4, Table 6.
  • D. Jurafsky (2000) Speech & language processing. Pearson Education India. Cited by: §3.4.
  • A. Kanade, P. Maniatis, G. Balakrishnan, and K. Shi (2019) Pre-trained contextual embedding of source code. arXiv preprint arXiv:2001.00059. Cited by: §2.2, §4.1.
  • Y. Kim (2014) Convolutional neural networks for sentence classification. arXiv preprint arXiv:1408.5882. Cited by: §4.1.
  • D. P. Kingma and J. Ba (2014) Adam: a method for stochastic optimization. arXiv preprint arXiv:1412.6980. Cited by: §3.4.
  • P. Koehn, H. Hoang, A. Birch, C. Callison-Burch, M. Federico, N. Bertoldi, B. Cowan, W. Shen, C. Moran, R. Zens, et al. (2007) Moses: open source toolkit for statistical machine translation. In Proceedings of the 45th annual meeting of the association for computational linguistics companion volume proceedings of the demo and poster sessions, pp. 177–180. Cited by: Table 6.
  • M. Lewis, Y. Liu, N. Goyal, M. Ghazvininejad, A. Mohamed, O. Levy, V. Stoyanov, and L. Zettlemoyer (2019) Bart: denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. arXiv preprint arXiv:1910.13461. Cited by: §4.3.
  • C. Lin and F. J. Och (2004) Orange: a method for evaluating automatic evaluation metrics for machine translation. In Proceedings of the 20th international conference on Computational Linguistics, pp. 501. Cited by: §4.3.
  • Y. Liu, M. Ott, N. Goyal, J. Du, M. Joshi, D. Chen, O. Levy, M. Lewis, L. Zettlemoyer, and V. Stoyanov (2019) Roberta: a robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692. Cited by: §1, §2.1, §3.1, §3.4.
  • J. Lu, D. Batra, D. Parikh, and S. Lee (2019) Vilbert: pretraining task-agnostic visiolinguistic representations for vision-and-language tasks. In Advances in Neural Information Processing Systems, pp. 13–23. Cited by: §1, §2.2.
  • B. Mitra, N. Craswell, et al. (2018) An introduction to neural information retrieval. Foundations and Trends® in Information Retrieval 13 (1), pp. 1–126. Cited by: §4.1.
  • K. Papineni, S. Roukos, T. Ward, and W. Zhu (2002) BLEU: a method for automatic evaluation of machine translation. In Proceedings of the 40th annual meeting on association for computational linguistics, pp. 311–318. Cited by: §4.3.
  • M. E. Peters, M. Neumann, M. Iyyer, M. Gardner, C. Clark, K. Lee, and L. Zettlemoyer (2018) Deep contextualized word representations. arXiv preprint arXiv:1802.05365. Cited by: §1, §2.1.
  • F. Petroni, T. Rocktäschel, P. Lewis, A. Bakhtin, Y. Wu, A. H. Miller, and S. Riedel (2019) Language models as knowledge bases?. arXiv preprint arXiv:1909.01066. Cited by: §4.2.
  • T. Pires, E. Schlinger, and D. Garrette (2019) How multilingual is multilingual bert?. arXiv preprint arXiv:1906.01502. Cited by: §1.
  • A. Radford, K. Narasimhan, T. Salimans, and I. Sutskever (2018) Improving language understanding by generative pre-training. OpenAI technical report. Cited by: §1, §2.1.
  • C. Raffel, N. Shazeer, A. Roberts, K. Lee, S. Narang, M. Matena, Y. Zhou, W. Li, and P. J. Liu (2019) Exploring the limits of transfer learning with a unified text-to-text transformer. arXiv preprint arXiv:1910.10683. Cited by: §2.1.
  • A. M. Rush, S. Chopra, and J. Weston (2015) A neural attention model for abstractive sentence summarization. arXiv preprint arXiv:1509.00685. Cited by: Table 6.
  • C. Sun, A. Myers, C. Vondrick, K. Murphy, and C. Schmid (2019) Videobert: a joint model for video and language representation learning. arXiv preprint arXiv:1904.01766. Cited by: §1, §2.2, §3.4.
  • I. Sutskever, O. Vinyals, and Q. V. Le (2014) Sequence to sequence learning with neural networks. In Advances in neural information processing systems, pp. 3104–3112. Cited by: §4.3.
  • K. S. Tai, R. Socher, and C. D. Manning (2015) Improved semantic representations from tree-structured long short-term memory networks. arXiv preprint arXiv:1503.00075. Cited by: Table 6.
  • A. Talmor, Y. Elazar, Y. Goldberg, and J. Berant (2019) OLMpics–on what language model pre-training captures. arXiv preprint arXiv:1912.13283. Cited by: §4.2.
  • A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, Ł. Kaiser, and I. Polosukhin (2017) Attention is all you need. In Advances in neural information processing systems, pp. 5998–6008. Cited by: §1, §2.1, §3.1, §4.1, §4.3, Table 6.
  • Y. Wu, M. Schuster, Z. Chen, Q. V. Le, M. Norouzi, W. Macherey, M. Krikun, Y. Cao, Q. Gao, K. Macherey, et al. (2016) Google's neural machine translation system: bridging the gap between human and machine translation. arXiv preprint arXiv:1609.08144. Cited by: §3.2.
  • Z. Yang, Z. Dai, Y. Yang, J. Carbonell, R. Salakhutdinov, and Q. V. Le (2019) XLNet: generalized autoregressive pretraining for language understanding. arXiv preprint arXiv:1906.08237. Cited by: §1, §2.1.