The (ab)use of Open Source Code to Train Large Language Models

02/27/2023
by Ali Al-Kaswan et al.

In recent years, Large Language Models (LLMs) have gained significant popularity due to their ability to generate human-like text and their potential applications in various fields, such as Software Engineering. LLMs for Code are commonly trained on large unsanitized corpora of source code scraped from the Internet. The content of these datasets is memorized and emitted by the models, often verbatim. In this work, we discuss the security, privacy, and licensing implications of memorization. We argue that the use of copyleft code to train LLMs poses a legal and ethical dilemma. Finally, we provide four actionable recommendations to address this issue.
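
To make the memorization claim concrete, the sketch below shows one way to probe a code model for verbatim regurgitation: prompt it with the opening lines of a snippet suspected to be in its training corpus and check whether the greedy continuation reproduces the original source. The model name, prompt, and reference file are illustrative assumptions, not a setup prescribed by the paper.

```python
# Hypothetical probe for verbatim memorization in a code LLM.
# The model name, prompt, and reference file are assumptions for
# illustration; the paper does not prescribe this setup.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Salesforce/codegen-350M-mono"  # any causal code model works
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Prompt with the first lines of a snippet suspected to be in the
# training data, then generate a continuation.
prompt = "def quicksort(arr):\n    if len(arr) <= 1:"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(
    **inputs,
    max_new_tokens=64,
    do_sample=False,  # greedy decoding makes regurgitation easiest to see
    pad_token_id=tokenizer.eos_token_id,
)
completion = tokenizer.decode(outputs[0], skip_special_tokens=True)
continuation = completion[len(prompt):]

# "original_snippet.py" is a placeholder for the suspected source file.
reference = open("original_snippet.py").read()
print("verbatim overlap:", continuation.strip() in reference)
```

Memorization studies run such probes at scale and measure exact-match rates against the training corpus; even this single-prompt check illustrates why training on unsanitized, copyleft-licensed code raises the licensing concerns the paper describes.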

Related research

How Secure is Code Generated by ChatGPT? (04/19/2023)
In recent years, large language models have been responsible for great a...

Mini-Giants: "Small" Language Models and Open Source Win-Win (07/17/2023)
ChatGPT is phenomenal. However, it is prohibitively expensive to train a...

Tricking LLMs into Disobedience: Understanding, Analyzing, and Preventing Jailbreaks (05/24/2023)
Recent explorations with commercial Large Language Models (LLMs) have sh...

Authorship Attribution of Source Code: A Language-Agnostic Approach and Applicability in Software Engineering (01/30/2020)
Authorship attribution of source code has been an established research t...

Towards Using Data-Influence Methods to Detect Noisy Samples in Source Code Corpora (05/25/2022)
Despite the recent trend of developing and applying neural source code m...

A Hazard Analysis Framework for Code Synthesis Large Language Models (07/25/2022)
Codex, a large language model (LLM) trained on a variety of codebases, e...
