Robust Textual Embedding against Word-level Adversarial Attacks

02/28/2022
by Yichen Yang, et al.

We attribute the vulnerability of natural language processing models to the fact that similar inputs are mapped to dissimilar representations in the embedding space, leading to inconsistent outputs, and we propose a novel robust training method, termed Fast Triplet Metric Learning (FTML). Specifically, we argue that for better robustness an original sample should have a representation similar to those of its adversarial counterparts and distinguishable from those of other samples. To this end, we incorporate triplet metric learning into standard training to pull each word closer to its positive samples (i.e., synonyms) and push it away from its negative samples (i.e., non-synonyms) in the embedding space. Extensive experiments demonstrate that FTML can significantly improve model robustness against various advanced adversarial attacks while maintaining competitive classification accuracy on original samples. Moreover, our method is efficient: it only needs to adjust the embedding layer and adds very little overhead to standard training. Our work shows the great potential of improving textual robustness through robust word embeddings.
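As a concrete illustration, the sketch below implements a synonym-based triplet objective over a word-embedding table in PyTorch. It is a minimal reconstruction from the abstract's description, not the authors' released code: the function name triplet_word_loss, the margin value, and the toy vocabulary are all illustrative assumptions.

# Minimal sketch of triplet metric learning over word embeddings, in the
# spirit of FTML: pull a word toward its synonyms (positives) and away
# from non-synonyms (negatives). All names here are illustrative, not
# the authors' released implementation.
import torch
import torch.nn.functional as F

def triplet_word_loss(embedding: torch.nn.Embedding,
                      anchor_ids: torch.Tensor,
                      positive_ids: torch.Tensor,
                      negative_ids: torch.Tensor,
                      margin: float = 1.0) -> torch.Tensor:
    """Hinge-style triplet loss: d(anchor, synonym) should be smaller
    than d(anchor, non-synonym) by at least `margin`."""
    a = embedding(anchor_ids)      # (batch, dim) anchor word vectors
    p = embedding(positive_ids)    # synonyms of the anchors
    n = embedding(negative_ids)    # non-synonyms of the anchors
    d_pos = F.pairwise_distance(a, p)
    d_neg = F.pairwise_distance(a, n)
    return F.relu(d_pos - d_neg + margin).mean()

# Toy usage: in training, this term would be added to the task loss.
vocab_size, dim = 10_000, 300
emb = torch.nn.Embedding(vocab_size, dim)
anchor = torch.tensor([12, 47])     # hypothetical word ids, e.g. "good", "fast"
synonym = torch.tensor([88, 301])   # e.g. "great", "quick"
other = torch.tensor([5, 999])      # sampled non-synonyms
loss = triplet_word_loss(emb, anchor, synonym, other, margin=0.5)
loss.backward()  # gradients flow only into the embedding table

Because the triplet term touches only the embedding table, the extra cost per step is one embedding lookup per triplet and a pairwise-distance computation, which is consistent with the abstract's claim of very little overhead beyond standard training.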


Related research

07/31/2023
Text-CRS: A Generalized Certified Robustness Framework against Textual Adversarial Attacks
The language models, especially the basic text classification models, ha...

09/03/2019
Metric Learning for Adversarial Robustness
Deep networks are well-known to be fragile to adversarial attacks. Using...

09/09/2020
Deep Metric Learning Meets Deep Clustering: An Novel Unsupervised Approach for Feature Embedding
Unsupervised Deep Distance Metric Learning (UDML) aims to learn sample s...

05/16/2022
Robust Representation via Dynamic Feature Aggregation
Deep convolutional neural network (CNN) based models are vulnerable to t...

02/06/2023
Less is More: Understanding Word-level Textual Adversarial Attack via n-gram Frequency Descend
Word-level textual adversarial attacks have achieved striking performanc...

09/13/2021
Randomized Substitution and Vote for Textual Adversarial Example Detection
A line of work has shown that natural text processing models are vulnera...

01/01/2022
Generating Adversarial Samples For Training Wake-up Word Detection Systems Against Confusing Words
Wake-up word detection models are widely used in real life, but suffer f...
