Protecting Language Generation Models via Invisible Watermarking

02/06/2023
by Xuandong Zhao, et al.

Language generation models have been an increasingly powerful enabler for many applications. Many such models offer free or affordable API access, which makes them potentially vulnerable to model extraction attacks through distillation. To protect intellectual property (IP) and ensure fair use of these models, various techniques such as lexical watermarking and synonym replacement have been proposed. However, these methods can be nullified by obvious countermeasures such as "synonym randomization". To address this issue, we propose GINSEW, a novel method to protect text generation models from being stolen through distillation. The key idea of our method is to inject secret signals into the probability vector of the decoding steps for each target token. We can then detect the secret message by probing a suspect model to tell if it is distilled from the protected one. Experimental results show that GINSEW can effectively identify instances of IP infringement with minimal impact on the generation quality of protected APIs. Our method demonstrates an absolute improvement of 19 to 29 points on mean average precision (mAP) in detecting suspects compared to previous methods against watermark removal attacks.
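As a rough illustration of the mechanism the abstract describes, the sketch below perturbs each decoding step's probability vector by shifting a small, key-dependent amount of mass between two secret token groups, and later detects that signal by correlating a suspect model's outputs against the secret pattern. This is a minimal sketch of the general idea, not the paper's exact construction: the vocabulary split, the sinusoidal signal, and all names and constants (token_groups, SIGNAL_EPS, PERIOD, SECRET_KEY) are illustrative assumptions.

```python
# Hypothetical sketch of probability-vector watermarking and detection.
# Not the authors' exact algorithm; constants and names are assumptions.
import hashlib

import numpy as np

VOCAB_SIZE = 50_000        # assumed vocabulary size
SIGNAL_EPS = 0.01          # assumed perturbation strength
PERIOD = 64                # assumed period of the secret signal
SECRET_KEY = b"owner-key"  # hypothetical watermark key


def token_groups(vocab_size: int, key: bytes) -> np.ndarray:
    """Pseudo-randomly split the vocabulary into two groups using the key."""
    seed = int.from_bytes(hashlib.sha256(key).digest()[:8], "big")
    rng = np.random.default_rng(seed)
    return rng.integers(0, 2, size=vocab_size)  # 0/1 group label per token


def inject(probs: np.ndarray, step: int, groups: np.ndarray) -> np.ndarray:
    """Perturb one decoding step's probability vector with the secret signal.

    A small, periodic amount of probability mass moves between the two
    groups; the relative ranking of tokens within each group is unchanged,
    so any single output distribution is barely affected.
    """
    delta = SIGNAL_EPS * np.sin(2 * np.pi * step / PERIOD)
    mass1 = probs[groups == 1].sum()
    target1 = float(np.clip(mass1 + delta, 1e-9, 1.0 - 1e-9))
    out = probs.copy()
    out[groups == 1] *= target1 / max(mass1, 1e-9)
    out[groups == 0] *= (1.0 - target1) / max(1.0 - mass1, 1e-9)
    return out / out.sum()


def detect(group1_masses: np.ndarray, steps: np.ndarray) -> float:
    """Correlate observed group-1 mass with the secret signal (in [-1, 1])."""
    expected = np.sin(2 * np.pi * steps / PERIOD)
    observed = group1_masses - group1_masses.mean()
    denom = np.linalg.norm(observed) * np.linalg.norm(expected) + 1e-9
    return float(np.dot(observed, expected) / denom)
```

To probe a suspect model under these assumptions, one would estimate its group-1 probability mass across many decoding steps and feed the series to detect: a model distilled from watermarked outputs inherits the periodic bias and scores well above zero, while an independently trained model scores near zero.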


Related research

10/07/2022 · Distillation-Resistant Watermarking for Model Protection in NLP
How can we protect the intellectual property of trained NLP models? Mode...

05/31/2021 · A Protection Method of Trained CNN Model with Secret Key from Unauthorized Access
In this paper, we propose a novel method for protecting convolutional ne...

09/19/2022 · CATER: Intellectual Property Protection on Text Generation APIs via Conditional Watermarks
Previous works have validated that text generation APIs can be stolen th...

05/17/2023 · Are You Copying My Model? Protecting the Copyright of Large Language Models for EaaS via Backdoor Watermark
Large language models (LLMs) have demonstrated powerful capabilities in ...

11/24/2022 · Seeds Don't Lie: An Adaptive Watermarking Framework for Computer Vision Models
In recent years, various watermarking methods were suggested to detect c...

12/05/2021 · Protecting Intellectual Property of Language Generation APIs with Lexical Watermark
Nowadays, due to the breakthrough in natural language generation (NLG), ...

09/04/2023 · Safe and Robust Watermark Injection with a Single OoD Image
Training a high-performance deep neural network requires large amounts o...
