CAT-Gen: Improving Robustness in NLP Models via Controlled Adversarial Text Generation
NLP models have been shown to suffer from robustness issues: a model's prediction can be changed by small perturbations to its input. In this work, we present a Controlled Adversarial Text Generation (CAT-Gen) model that, given an input text, generates adversarial texts by varying controllable attributes that are known to be invariant to the task label. For example, to attack a model for sentiment classification over product reviews, we can use the product category as the controllable attribute, since changing it does not change the sentiment of a review. Experiments on real-world NLP datasets demonstrate that our method generates more diverse and fluent adversarial texts than many existing adversarial text generation approaches. We further use the generated adversarial examples to improve models through adversarial training, and we demonstrate that our generated attacks remain effective under model re-training and across different model architectures.
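The attack loop described in the abstract can be sketched in a few lines. This is a minimal, self-contained illustration, not the authors' implementation: the real CAT-Gen uses a learned conditional text generator and a trained classifier, whereas here a toy template "generator" and a deliberately brittle keyword "classifier" (with a spurious correlation on the product category) stand in so the control flow is runnable. All names are illustrative.

```python
from typing import Callable, List, Optional

def toy_sentiment_classifier(text: str) -> str:
    """Stand-in for the model under attack. It has learned a spurious
    correlation: any mention of 'camera' is predicted negative."""
    words = text.lower().split()
    if "camera" in words:  # spurious, label-irrelevant feature
        return "negative"
    return "positive" if any(w in {"great", "love", "excellent"} for w in words) else "negative"

def toy_controlled_generator(text: str, attribute: str) -> str:
    """Stand-in for a generator conditioned on a label-invariant attribute.
    Swapping the product category should not change the review's sentiment."""
    return text.replace("speaker", attribute)

def cat_gen_attack(
    text: str,
    classifier: Callable[[str], str],
    generator: Callable[[str, str], str],
    attribute_values: List[str],
) -> Optional[str]:
    """Return the first attribute-controlled rewrite that flips the
    classifier's prediction, or None if no candidate succeeds."""
    original_label = classifier(text)
    for value in attribute_values:
        candidate = generator(text, value)
        if classifier(candidate) != original_label:
            return candidate  # adversarial: sentiment preserved, prediction flipped
    return None

review = "I love this speaker"
adv = cat_gen_attack(
    review,
    toy_sentiment_classifier,
    toy_controlled_generator,
    attribute_values=["blender", "camera"],
)
```

Here the rewrite "I love this camera" keeps the positive sentiment but flips the toy classifier's prediction, mirroring the paper's point: attributes that are invariant to the task label can still expose a model's reliance on spurious features, and such examples can then be fed back for adversarial training.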