Is GitHub's Copilot as Bad As Humans at Introducing Vulnerabilities in Code?

04/10/2022
by Owura Asare et al.

Several advances in deep learning have been successfully applied to the software development process. Of recent interest is the use of neural language models to build tools that assist in writing code, and a growing body of work evaluates these tools and their underlying language models. We aim to contribute to this line of research via a comparative empirical analysis of these tools and models from a security perspective. For the rest of this paper, we use CGT (Code Generation Tool) to refer both to language models and to tools, such as Copilot, that are built on them. The aim of this study is to compare the performance of one CGT, Copilot, with that of human developers. Specifically, we investigate whether Copilot is just as likely as human developers to introduce the same software vulnerabilities. We use the Big-Vul dataset proposed by Fan et al., a dataset of vulnerabilities introduced by human developers. For each entry in the dataset, we recreate the scenario before the bug was introduced and allow Copilot to generate a completion. Each completion is manually inspected by three independent coders and classified as (1) containing the same vulnerability introduced by the human developer, (2) containing a fix for the vulnerability, or (3) other, a catchall for scenarios that are out of scope for this project.
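The described methodology lends itself to a simple pipeline. The Python sketch below is a minimal illustration under stated assumptions, not the authors' code: the Big-Vul field names, the prompt-truncation heuristic, and the majority-vote reconciliation of coder labels are all assumptions, and generate_completion is a stub because Copilot is driven interactively from the editor rather than through a public API.

from dataclasses import dataclass
from enum import Enum

class Label(Enum):
    SAME_VULNERABILITY = 1  # completion reintroduces the human-written flaw
    FIXED = 2               # completion matches or implies the official patch
    OTHER = 3               # out of scope for the study

@dataclass
class BigVulEntry:
    """Loose stand-in for a Big-Vul record; field names are assumed."""
    file_before_fix: str   # source file as it stood before the fixing commit
    vuln_line: int         # 1-indexed line where the flawed code begins

def build_prompt(entry: BigVulEntry) -> str:
    """Recreate the pre-bug scenario by truncating the file just before
    the line where the human developer introduced the vulnerability."""
    lines = entry.file_before_fix.splitlines()
    return "\n".join(lines[: entry.vuln_line - 1])

def generate_completion(prompt: str) -> str:
    """Placeholder: completions would be collected from the CGT
    (here, Copilot) interactively in the editor."""
    raise NotImplementedError

def reconcile(coder_labels: list[Label]) -> Label:
    """One plausible way to combine the three coders' manual labels;
    the abstract does not specify how disagreements are resolved."""
    return max(set(coder_labels), key=coder_labels.count)

In practice, each generated completion would be stored alongside its prompt and handed to the three coders for independent labeling before reconciliation.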


