Who Wrote this Code? Watermarking for Code Generation

05/24/2023
by Taehyun Lee, et al.

Large language models for code have recently shown remarkable performance in generating executable code. However, this rapid advancement has been accompanied by many legal and ethical concerns, such as code licensing issues, code plagiarism, and malware generation, making watermarking of machine-generated code a timely problem. Despite this pressing need, we discover that existing watermarking and machine-generated text detection methods for LLMs fail to function properly on code generation tasks. Hence, in this work, we propose SWEET, a new watermarking method that significantly improves upon previous approaches for watermarking machine-generated code. Our method selectively applies the watermark only to tokens whose entropy surpasses a defined threshold. Experiments on code generation benchmarks show that our watermarked code retains higher quality than code produced by the previous state-of-the-art LLM watermarking method. Furthermore, our method also outperforms DetectGPT on the task of machine-generated code detection.
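
The abstract describes the core mechanism (watermark only the high-entropy positions) but not its implementation. Below is a minimal sketch of that idea, assuming a green-list/red-list logit-biasing scheme as the underlying watermark; the entropy threshold, the gamma and delta values, and seeding the green list from the previous token are illustrative assumptions, not the authors' released code.

```python
import torch
import torch.nn.functional as F

def selective_watermark_logits(
    logits: torch.Tensor,            # (vocab_size,) next-token logits from a code LLM
    prev_token_id: int,              # previous token id, used here to seed the green list
    entropy_threshold: float = 1.0,  # hypothetical cutoff; the actual value would be tuned
    gamma: float = 0.5,              # fraction of the vocabulary placed in the green list
    delta: float = 2.0,              # logit bias added to green-list tokens
) -> torch.Tensor:
    """Entropy-gated green-list watermarking (illustrative sketch, not the official SWEET code)."""
    probs = F.softmax(logits, dim=-1)
    entropy = -(probs * torch.log(probs + 1e-12)).sum().item()

    # Low-entropy positions (e.g. forced syntax tokens) are left untouched,
    # which is the key idea for preserving code quality.
    if entropy < entropy_threshold:
        return logits

    # Pseudo-randomly split the vocabulary into green/red lists, seeded by context.
    vocab_size = logits.shape[-1]
    rng = torch.Generator().manual_seed(prev_token_id)
    green_ids = torch.randperm(vocab_size, generator=rng)[: int(gamma * vocab_size)]

    # Bias green-list tokens so watermarked output over-uses them at high-entropy spots.
    biased = logits.clone()
    biased[green_ids] += delta
    return biased


def detection_z_score(num_green: int, num_scored: int, gamma: float = 0.5) -> float:
    """z-statistic over green-list hits, counting only positions scored as high-entropy."""
    expected = gamma * num_scored
    variance = num_scored * gamma * (1.0 - gamma)
    return (num_green - expected) / (variance ** 0.5 + 1e-12)
```

In this sketch, detection likewise scores only positions whose entropy clears the threshold, so low-entropy boilerplate code neither carries nor dilutes the watermark signal.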


Related research

07/27/2023
PanGu-Coder2: Boosting Large Language Models for Code with Ranking Feedback
Large Language Models for Code (Code LLM) are flourishing. New and power...

02/19/2023
On the Reliability and Explainability of Automated Code Generation Approaches
Automatic code generation, the task of generating new code snippets from...

07/19/2023
Code Detection for Hardware Acceleration Using Large Language Models
Large language models (LLMs) have been massively applied to many tasks, ...

04/17/2022
WhyGen: Explaining ML-powered Code Generation by Referring to Training Examples
Deep learning has demonstrated great abilities in various code generatio...

12/19/2022
Asking Clarification Questions for Code Generation in General-Purpose Programming Language
Code generation from text requires understanding the user's intent from ...

08/03/2023
ClassEval: A Manually-Crafted Benchmark for Evaluating LLMs on Class-level Code Generation
In this work, we make the first attempt to evaluate LLMs in a more chall...

05/22/2023
The "code" of Ethics: A Holistic Audit of AI Code Generators
AI-powered programming language generation (PLG) models have gained incr...
