Adversarial Watermarking Transformer: Towards Tracing Text Provenance with Data Hiding

09/07/2020
by   Sahar Abdelnabi, et al.

Recent advances in natural language generation have introduced powerful language models with high-quality output text. However, this raises concerns about the potential misuse of such models for malicious purposes. In this paper, we study natural language watermarking as a defense to help better mark and trace the provenance of text. We introduce the Adversarial Watermarking Transformer (AWT) with a jointly trained encoder-decoder and adversarial training that, given an input text and a binary message, generates an output text that is unobtrusively encoded with the given message. We further study different training and inference strategies to achieve minimal changes to the semantics and correctness of the input text. AWT is the first end-to-end model to hide data in text by automatically learning – without ground truth – word substitutions along with their locations in order to encode the message. We show that our model is effective in largely preserving text utility and decoding the watermark while hiding its presence against adversaries. Additionally, we demonstrate that our method is robust against a range of local changes and denoising attacks.
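To make the described setup more concrete, below is a minimal, illustrative PyTorch sketch of an AWT-style arrangement, not the authors' implementation: a message-conditioned hiding network that rewrites the input tokens, a message decoder that recovers the hidden bits, and an adversarial discriminator that tries to tell watermarked text from clean text. All names (e.g. HidingNetwork), layer sizes, and the toy data are assumptions for demonstration only.

```python
# Illustrative sketch of an AWT-style training step (assumed PyTorch code,
# not the paper's implementation). Sizes and data are toy values.
import torch
import torch.nn as nn

VOCAB, DIM, MSG_BITS = 1000, 64, 4

class HidingNetwork(nn.Module):
    """Embeds the input text, conditions it on the binary message,
    and re-predicts a (possibly substituted) token at every position."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, DIM)
        self.msg_proj = nn.Linear(MSG_BITS, DIM)
        layer = nn.TransformerEncoderLayer(d_model=DIM, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.out = nn.Linear(DIM, VOCAB)

    def forward(self, tokens, message):
        x = self.embed(tokens) + self.msg_proj(message).unsqueeze(1)
        return self.out(self.encoder(x))           # logits over the vocabulary

class MessageDecoder(nn.Module):
    """Recovers the hidden bits from the watermarked tokens."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, DIM)
        self.rnn = nn.GRU(DIM, DIM, batch_first=True)
        self.out = nn.Linear(DIM, MSG_BITS)

    def forward(self, tokens):
        _, h = self.rnn(self.embed(tokens))
        return self.out(h[-1])                      # one logit per message bit

class Discriminator(nn.Module):
    """Adversary that tries to distinguish watermarked from clean text."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, DIM)
        self.rnn = nn.GRU(DIM, DIM, batch_first=True)
        self.out = nn.Linear(DIM, 1)

    def forward(self, tokens):
        _, h = self.rnn(self.embed(tokens))
        return self.out(h[-1])

hide, msg_dec, disc = HidingNetwork(), MessageDecoder(), Discriminator()

# Toy batch: 8 sentences of 16 tokens and 8 random 4-bit messages.
tokens = torch.randint(0, VOCAB, (8, 16))
message = torch.randint(0, 2, (8, MSG_BITS)).float()

logits = hide(tokens, message)
wm_tokens = logits.argmax(-1)                       # greedy decode, placeholder only

# Illustrative loss terms: keep the text close to the input, recover the
# message from the output, and make the output look "clean" to the adversary.
recon_loss = nn.functional.cross_entropy(logits.transpose(1, 2), tokens)
msg_loss = nn.functional.binary_cross_entropy_with_logits(msg_dec(wm_tokens), message)
adv_loss = nn.functional.binary_cross_entropy_with_logits(
    disc(wm_tokens), torch.zeros(8, 1))
total = recon_loss + msg_loss + adv_loss
print(total.item())
```

In an actual end-to-end setup, the greedy argmax above would be replaced by a differentiable relaxation (for example Gumbel-softmax sampling) so that gradients from the message decoder and the discriminator can reach the hiding network; the sketch only shows how the three components and their losses fit together.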


Related research:

- GRET: Global Representation Enhanced Transformer (02/24/2020)
- Neural data-to-text generation: A comparison between pipeline and end-to-end architectures (08/23/2019)
- Improving the robustness and accuracy of biomedical language models through adversarial training (11/16/2021)
- Seq2RDF: An end-to-end application for deriving Triples from Natural Language Text (07/04/2018)
- HiDDeN: Hiding Data With Deep Networks (07/26/2018)
- CLINE: Contrastive Learning with Semantic Negative Examples for Natural Language Understanding (07/01/2021)
- Tracing and Removing Data Errors in Natural Language Generation Datasets (12/21/2022)
