Adversarial Stylometry in the Wild: Transferable Lexical Substitution Attacks on Author Profiling

01/27/2021
by Chris Emmery, et al.

Written language contains stylistic cues that can be exploited to automatically infer a variety of potentially sensitive author information. Adversarial stylometry aims to attack such models by rewriting an author's text. Our research proposes several components to facilitate deployment of these adversarial attacks in the wild, where neither data nor target models are accessible. We introduce a transformer-based extension of a lexical replacement attack and show that it achieves high transferability when trained on a weakly labeled corpus, decreasing target model performance to below chance. While not completely inconspicuous, our more successful attacks also prove notably less detectable by humans. Our framework therefore provides a promising direction for future privacy-preserving adversarial attacks.


