On the Robustness of Code Generation Techniques: An Empirical Study on GitHub Copilot

02/01/2023
by Antonio Mastropaolo, et al.

Software engineering research has always been concerned with improving code completion approaches, which suggest the next tokens a developer will likely type while coding. The release of GitHub Copilot constitutes a big step forward, also because of its unprecedented ability to automatically generate entire functions from their natural language description. While the usefulness of Copilot is evident, it is still unclear to what extent it is robust. Specifically, we do not know the extent to which semantic-preserving changes in the natural language description provided to the model affect the generated code. In this paper we present an empirical study aimed at understanding whether different but semantically equivalent natural language descriptions result in the same recommended function. A negative answer would raise questions about the robustness of deep learning (DL)-based code generators, since it would imply that developers using different wordings to describe the same code would obtain different recommendations. We asked Copilot to automatically generate 892 Java methods starting from their original Javadoc description. Then, we generated different semantically equivalent descriptions for each method, both manually and automatically, and analyzed the extent to which the predictions generated by Copilot changed. Our results show that modifying the description results in different code recommendations in ~46% of cases; moreover, differences in the semantically equivalent descriptions might impact the correctness of the generated code (±28%).
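To make the study design concrete, below is a hypothetical illustration (invented for this summary, not drawn from the paper's dataset of 892 methods): two semantically equivalent Javadoc descriptions of the same target method. A robust code generator should recommend the same, or at least an equally correct, implementation for both prompts.

public class ParaphraseExample {

    /**
     * Original description:
     * Returns the largest value in the given array.
     *
     * @param values a non-empty array of integers
     * @return the maximum element of the array
     */
    public static int max(int[] values) {
        int best = values[0];
        for (int v : values) {
            if (v > best) {
                best = v;
            }
        }
        return best;
    }

    /**
     * Semantically equivalent paraphrase:
     * Given an array of integers, find and return its greatest element.
     *
     * @param values a non-empty array of integers
     * @return the greatest value contained in the array
     */
    public static int maxFromParaphrase(int[] values) {
        // Ideally the generator emits the same body as max(); per the
        // study, paraphrases changed Copilot's recommendation in ~46%
        // of cases and could affect correctness.
        return max(values);
    }
}

The method names and descriptions above are illustrative only; the study compares, for each of the 892 Java methods, the code Copilot recommends from the original Javadoc against the code it recommends from the manually and automatically produced paraphrases.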

