Robust Encodings: A Framework for Combating Adversarial Typos

05/04/2020
by Erik Jones, et al.

Despite excellent performance on many tasks, NLP systems are easily fooled by small adversarial perturbations of inputs. Existing procedures to defend against such perturbations either (i) are heuristic in nature and susceptible to stronger attacks or (ii) provide guaranteed robustness to worst-case attacks but are incompatible with state-of-the-art models like BERT. In this work, we introduce robust encodings (RobEn): a simple framework that confers guaranteed robustness without making compromises on model architecture. The core component of RobEn is an encoding function, which maps sentences to a smaller, discrete space of encodings. Systems using these encodings as a bottleneck confer guaranteed robustness with standard training, and the same encodings can be used across multiple tasks. We identify two desiderata for constructing robust encoding functions: perturbations of a sentence should map to a small set of encodings (stability), and models using encodings should still perform well (fidelity). We instantiate RobEn to defend against a large family of adversarial typos. Across six tasks from GLUE, our instantiation of RobEn paired with BERT achieves an average robust accuracy of 71.3% against all adversarial typos in the family considered, while previous work using a typo-corrector achieves only 35.3%.
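To make the encoding-bottleneck idea concrete, here is a toy sketch in Python. It is not the paper's clustering-based construction; it uses a simple hypothetical encoding (keep a word's first and last characters, sort the interior) so that many internal-character typos of a word collapse to the same encoding (stability) while distinct words usually remain distinct (fidelity). A downstream model trained on these encodings would then be unaffected by any typo that leaves the encoding unchanged.

```python
def encode_word(word: str) -> str:
    # Toy encoding: keep the first and last characters, sort the interior.
    # Any typo that only permutes a word's internal characters maps to the
    # same encoding, so a model reading encodings never sees the perturbation.
    if len(word) <= 3:
        return word
    return word[0] + "".join(sorted(word[1:-1])) + word[-1]


def encode_sentence(sentence: str) -> str:
    # The encoding is applied word by word; the model only ever sees
    # the encoded sentence, which acts as the robustness bottleneck.
    return " ".join(encode_word(w) for w in sentence.split())


# A clean word and an internal-swap typo share one encoding:
print(encode_word("perturbations") == encode_word("pertrubations"))  # True
```

In the paper's actual instantiation, the encoding instead maps each word (and its typo neighborhood) to a cluster representative chosen to balance stability against fidelity; the sketch above only illustrates why a many-to-one encoding yields guarantees under standard training.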


