An Empirical Study on the Usage of BERT Models for Code Completion

03/12/2021
by   Matteo Ciniselli, et al.

Code completion is one of the main features of modern Integrated Development Environments (IDEs). It aims to speed up code writing by predicting the next code token(s) the developer is likely to write. Research in this area has substantially improved the predictive performance of these techniques. However, the support offered to developers is still limited to the prediction of the next few tokens to type. In this work, we take a step further in this direction by presenting a large-scale empirical study exploring the capabilities of state-of-the-art deep learning (DL) models in supporting code completion at different granularity levels: single tokens, one or multiple entire statements, up to entire code blocks (e.g., the iterated block of a for loop). To this aim, we train and test several adapted variants of the recently proposed RoBERTa model, and evaluate its predictions from several perspectives, including: (i) metrics usually adopted when assessing DL generative models (i.e., BLEU score and Levenshtein distance); (ii) the percentage of perfect predictions (i.e., predicted code snippets that exactly match those written by developers); and (iii) the "semantic" equivalence of the generated code as compared to that written by developers. The achieved results show that BERT models represent a viable solution for code completion, with perfect predictions ranging from roughly 7%, obtained when asking the model to guess entire blocks, up to roughly 58%, reached in the simpler scenario of a few tokens masked within the same code statement.
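To make the setup concrete, the sketch below illustrates the two ingredients the abstract mentions: masking a span of code tokens (at token, statement, or block granularity) for the model to reconstruct, and scoring predictions with perfect-prediction rate and Levenshtein distance. This is an illustrative approximation, not the authors' actual pipeline; the function names and the `<x>` mask token are assumptions for the example.

```python
# Illustrative sketch of the masked-span completion task and two of the
# evaluation metrics used in the study (perfect predictions, Levenshtein
# distance). Not the authors' code; names and mask token are invented.

def mask_span(tokens, start, end, mask_token="<x>"):
    """Replace tokens[start:end] with a single mask token, mimicking
    token-, statement-, or block-level masking."""
    return tokens[:start] + [mask_token] + tokens[end:]

def levenshtein(a, b):
    """Edit distance between two sequences (here, lists of code tokens):
    the minimum number of insertions, deletions, and substitutions
    needed to turn a into b."""
    prev = list(range(len(b) + 1))
    for i, ta in enumerate(a, 1):
        curr = [i]
        for j, tb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,          # deletion
                            curr[j - 1] + 1,      # insertion
                            prev[j - 1] + (ta != tb)))  # substitution
        prev = curr
    return prev[-1]

def perfect_prediction_rate(predictions, references):
    """Percentage of predictions that exactly match the developer-written
    reference code."""
    exact = sum(p == r for p, r in zip(predictions, references))
    return 100.0 * exact / len(references)
```

For example, masking the condition of `if ( x > 0 )` would give the model the sequence `if ( <x> )` to complete, and a prediction of `x >= 0` versus the reference `x > 0` would have a token-level Levenshtein distance of 1 but would not count as a perfect prediction.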

Related research:

- An Empirical Study on the Usage of Transformer Models for Code Completion (08/03/2021)
- An Empirical Study on Code Comment Completion (07/22/2021)
- To What Extent do Deep Learning-based Code Recommenders Generate Predictions by Cloning Code from the Training Set? (04/14/2022)
- Practitioners' Expectations on Code Completion (01/10/2023)
- Siri, Write the Next Method (03/08/2021)
- On the Robustness of Code Generation Techniques: An Empirical Study on GitHub Copilot (02/01/2023)
- Generation Probabilities Are Not Enough: Exploring the Effectiveness of Uncertainty Highlighting in AI-Powered Code Completions (02/14/2023)
