Do Not Give Away My Secrets: Uncovering the Privacy Issue of Neural Code Completion Tools

09/14/2023
by Yizhan Huang, et al.

Neural Code Completion Tools (NCCTs) have reshaped software development by leveraging language modeling techniques to suggest accurate, contextually relevant code snippets. However, language models may emit their training data verbatim during inference when given appropriate prompts. This memorization property raises a privacy concern for commercial NCCTs: the leakage of hard-coded credentials, which can lead to unauthorized access to systems. To answer whether NCCTs inadvertently emit hard-coded credentials, we propose an evaluation tool called the Hard-coded Credential Revealer (HCR). HCR constructs test prompts from GitHub code files containing credentials to trigger the memorization behavior of commercial NCCTs, and then extracts credentials with pre-defined formats from the responses using four designed filters. We apply HCR to evaluate two representative commercial NCCTs, GitHub Copilot and Amazon CodeWhisperer, and successfully extract 2,702 hard-coded credentials from Copilot and 129 secrets from CodeWhisperer under the black-box setting, of which at least 3.6% originate from GitHub repositories. Moreover, two operational credentials were identified. These results raise serious privacy concerns about the potential leakage of hard-coded credentials in the training data of commercial NCCTs.
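To make the described pipeline concrete, the sketch below illustrates the two steps in miniature: building a test prompt by truncating a code file right before a known credential, and filtering the completion tool's response with regular expressions for well-known credential formats. The regex patterns, function names, and the example AWS key are illustrative assumptions for this sketch; they are not HCR's actual four filters or its exact prompt-construction logic.

```python
import re

# Illustrative credential-format patterns (assumptions, not HCR's actual filters).
# Many real credential types have documented prefixes and lengths, which is what
# makes format-based filtering of model responses feasible.
CREDENTIAL_PATTERNS = {
    "aws_access_key_id": re.compile(r"\b(?:AKIA|ASIA)[0-9A-Z]{16}\b"),
    "github_token":      re.compile(r"\bghp_[0-9A-Za-z]{36}\b"),
    "slack_token":       re.compile(r"\bxox[baprs]-[0-9A-Za-z-]{10,48}\b"),
}

def build_prompt(code_file: str, credential: str) -> str:
    """Turn a code file containing a known credential into a test prompt by
    truncating the file right before the credential, so the completion tool
    is effectively asked to fill in the secret itself."""
    cut = code_file.index(credential)
    return code_file[:cut]

def filter_credentials(completion: str) -> dict:
    """Scan a completion for strings matching the pre-defined credential
    formats; every match is a candidate hard-coded credential."""
    return {
        name: pattern.findall(completion)
        for name, pattern in CREDENTIAL_PATTERNS.items()
        if pattern.search(completion)
    }

if __name__ == "__main__":
    # AWS's public documentation example key, used here as a stand-in secret.
    original = 'client = boto3.client("s3", aws_access_key_id="AKIAIOSFODNN7EXAMPLE")\n'
    prompt = build_prompt(original, "AKIAIOSFODNN7EXAMPLE")
    # In the real evaluation the prompt is sent to a commercial NCCT; here we
    # simply pretend the tool echoed a memorized key back in its completion.
    fake_completion = 'AKIAIOSFODNN7EXAMPLE")'
    print(filter_credentials(fake_completion))  # {'aws_access_key_id': [...]}
```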


