Generating Watermarked Adversarial Texts

10/25/2021
by Mingjie Li, et al.

Adversarial example generation has been an active research topic in recent years because adversarial examples can cause deep neural networks (DNNs) to misclassify them, revealing the vulnerability of DNNs and motivating the search for effective ways to improve the robustness of DNN models. Because natural language is pervasive and spreads rapidly across social networks, various natural-language adversarial attack algorithms have been proposed in the literature. These algorithms generate adversarial text examples with high semantic quality. However, the generated adversarial text examples may be used maliciously or illegally. To tackle this problem, we present a general framework for generating watermarked adversarial text examples. For each word in a given text, a set of candidate words is determined such that every word in the set can be used either to carry secret bits or to facilitate the construction of the adversarial example. By applying a word-level adversarial text generation algorithm, the watermarked adversarial text example is finally generated. Experiments show that the adversarial text examples generated by the proposed method not only successfully fool advanced DNN models, but also carry a watermark that can effectively verify the ownership and trace the source of the adversarial examples. Moreover, the watermark survives even when the watermarked text is attacked again with adversarial example generation algorithms, which demonstrates the applicability and superiority of the proposed method.
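As a rough illustration of the candidate-set idea described in the abstract, the sketch below combines watermark-bit embedding with word-level adversarial substitution: for each word a candidate set is built, filtered to candidates whose bit matches the next secret bit, and the most adversarial surviving candidate is chosen. The synonym source (candidates_fn), the hash-parity bit assignment, and the attack score (adv_score_fn) are illustrative assumptions, not the construction used in the paper.

```python
# Illustrative sketch only: a hypothetical way to combine watermark-bit
# embedding with word-level adversarial substitution. The caller supplies a
# synonym lookup and an adversarial scoring function (both assumed here).
import hashlib
from typing import Callable, List, Sequence


def bit_of(word: str) -> int:
    # Deterministic pseudo-random bit per word (assumed rule based on SHA-256).
    return hashlib.sha256(word.encode("utf-8")).digest()[0] & 1


def embed_and_attack(
    tokens: Sequence[str],
    secret_bits: Sequence[int],
    candidates_fn: Callable[[str], List[str]],   # hypothetical synonym lookup
    adv_score_fn: Callable[[List[str]], float],  # hypothetical attack score (higher is better)
) -> List[str]:
    out = list(tokens)
    for i, word in enumerate(tokens):
        # Candidate set: the original word plus its substitutes.
        cands = [word] + candidates_fn(word)
        # Keep only candidates that carry the secret bit assigned to this position.
        wanted = secret_bits[i % len(secret_bits)]
        matching = [c for c in cands if bit_of(c) == wanted]
        if matching:
            cands = matching  # otherwise accept a possible bit error at this position
        # Among the surviving candidates, pick the one that most helps the attack.
        out[i] = max(cands, key=lambda c: adv_score_fn(out[:i] + [c] + out[i + 1:]))
    return out


def extract_bits(tokens: Sequence[str]) -> List[int]:
    # The verifier re-applies the same bit rule to recover the embedded sequence.
    return [bit_of(w) for w in tokens]
```

A practical scheme would additionally have to agree on which positions carry bits and tolerate bit errors, for example via an error-correcting code; the sketch sidesteps both issues.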

Related research

A Geometry-Inspired Attack for Generating Natural Language Adversarial Examples (10/03/2020)
Generating adversarial examples for natural language is hard, as natural...

A Noise-Sensitivity-Analysis-Based Test Prioritization Technique for Deep Neural Networks (01/01/2019)
Deep neural networks (DNNs) have been widely used in the fields such as ...

Semantic-Preserving Adversarial Text Attacks (08/23/2021)
Deep neural networks (DNNs) are known to be vulnerable to adversarial im...

Adversarial Texts with Gradient Methods (01/22/2018)
Adversarial samples for images have been extensively studied in the lite...

AdvExpander: Generating Natural Language Adversarial Examples by Expanding Text (12/18/2020)
Adversarial examples are vital to expose the vulnerability of machine le...

A Visual Analytics Framework for Adversarial Text Generation (09/24/2019)
This paper presents a framework which enables a user to more easily make...

Generating Natural Language Adversarial Examples (04/21/2018)
Deep neural networks (DNNs) are vulnerable to adversarial examples, pert...
