
Adversarial Stylometry in the Wild: Transferable Lexical Substitution Attacks on Author Profiling

by Chris Emmery et al.

Written language contains stylistic cues that can be exploited to automatically infer a variety of potentially sensitive author attributes. Adversarial stylometry aims to defeat such models by rewriting an author's text. Our research proposes several components to facilitate deploying these adversarial attacks in the wild, where neither data nor target models are accessible. We introduce a transformer-based extension of a lexical replacement attack and show that it achieves high transferability when trained on a weakly labeled corpus, decreasing target model performance to below chance. While not completely inconspicuous, our more successful attacks also prove notably less detectable by humans. Our framework therefore provides a promising direction for future privacy-preserving adversarial attacks.
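The attack described above can be sketched as a greedy lexical substitution loop: candidate replacements for each word are scored against a locally trained substitute profiler, and the substitution that most degrades its confidence is kept, in the hope that the perturbation transfers to the inaccessible target model. The sketch below is a minimal illustration under toy assumptions: the candidate table and the profiler are stand-ins (in the paper, candidates come from a transformer-based masked language model and the substitute is trained on a weakly labeled corpus).

```python
def toy_profiler(tokens):
    """Stand-in substitute model: scores how 'informal' a text reads.
    A real attack would query a classifier trained on weakly labeled data."""
    informal = {"gonna", "wanna", "kinda", "lol"}
    return sum(t in informal for t in tokens) / max(len(tokens), 1)

# Toy substitution candidates; in the paper these would be proposed
# by a transformer (masked) language model, not a hand-written table.
CANDIDATES = {
    "gonna": ["going"],
    "wanna": ["want"],
    "kinda": ["somewhat"],
}

def lexical_substitution_attack(tokens, profiler, candidates):
    """Greedily swap each word for the candidate that most lowers the
    substitute profiler's score; transfer to the real target model is
    then relied upon, as in the paper's transferability setup."""
    tokens = list(tokens)
    for i, tok in enumerate(tokens):
        best, best_score = tok, profiler(tokens)
        for cand in candidates.get(tok, []):
            trial = tokens[:i] + [cand] + tokens[i + 1:]
            score = profiler(trial)
            if score < best_score:
                best, best_score = cand, score
        tokens[i] = best
    return tokens

original = "i am gonna wanna write kinda casually".split()
attacked = lexical_substitution_attack(original, toy_profiler, CANDIDATES)
```

In this sketch the attacked text reads "i am going want write somewhat casually", driving the toy profiler's score to zero; a real deployment would additionally constrain substitutions for fluency so the rewrite stays inconspicuous to human readers.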


