Enriching Source Code with Contextual Data for Code Completion Models: An Empirical Study

04/24/2023
by Tim van Dam, et al.

Transformer-based pre-trained models have recently achieved great results on many software engineering tasks, including automatic code completion, a staple of the developer's toolkit. While much work has striven to improve the code-understanding abilities of such models, the opposite direction, making the code itself easier to understand, has not been properly investigated. In this study, we ask whether enriching code with contextual data that makes it easier to understand improves the performance of pre-trained code language models on code completion. We consider type annotations and comments, two common forms of additional contextual information that often help developers understand code better. For the experiments, we study code completion at two granularity levels, token and line completion, using three recent large-scale language models for source code, UniXcoder, CodeGPT, and InCoder, with five evaluation metrics. Finally, we apply the Wilcoxon signed-rank test to gauge statistical significance and measure effect sizes. Contrary to our expectations, all models perform better when type annotations are removed (albeit with small effect sizes). For comments, we find that the models perform better in the presence of multi-line comments (again with small effect sizes). Based on these observations, we recommend making deliberate design choices when training, fine-tuning, or simply selecting such models, given the intended data and application. Better evaluation methods and multi-modal techniques can also be further investigated to improve the practicality and accuracy of auto-completions.
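To illustrate the kind of transformation studied here, the "type annotations removed" variant of a Python source file can be approximated with the standard `ast` module. This is a minimal sketch, not the paper's actual preprocessing pipeline; the class and variable names are illustrative.

```python
import ast

class StripTypeAnnotations(ast.NodeTransformer):
    """Remove type annotations from functions and assignments,
    approximating the 'annotations removed' code variant."""

    def visit_FunctionDef(self, node):
        self.generic_visit(node)
        node.returns = None  # drop the return-type annotation
        for arg in node.args.posonlyargs + node.args.args + node.args.kwonlyargs:
            arg.annotation = None  # drop parameter annotations
        return node

    def visit_AnnAssign(self, node):
        # Rewrite `x: int = 5` as `x = 5`; drop bare declarations like `x: int`.
        if node.value is None:
            return None
        return ast.copy_location(
            ast.Assign(targets=[node.target], value=node.value), node
        )

src = (
    "def add(a: int, b: int) -> int:\n"
    "    total: int = a + b\n"
    "    return total\n"
)
tree = StripTypeAnnotations().visit(ast.parse(src))
stripped = ast.unparse(ast.fix_missing_locations(tree))
print(stripped)
```

Running this removes every annotation while leaving the executable code intact, which is exactly the contrast (annotated vs. unannotated input) whose effect on completion quality the study measures.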

