
Certified Robustness to Programmable Transformations in LSTMs

by Yuhao Zhang et al.

Deep neural networks for natural language processing are fragile in the face of adversarial examples: small input perturbations, such as synonym substitutions or word duplications, that cause a neural network to change its prediction. We present an approach to certifying the robustness of LSTMs (and extensions of LSTMs) and to training models that can be efficiently certified. Our approach can certify robustness to intractably large perturbation spaces defined programmatically in a language of string transformations. The key insight of our approach is an application of abstract interpretation that exploits the recursive LSTM structure to incrementally propagate symbolic sets of inputs, compactly representing a large perturbation space. Our evaluation shows that (1) our approach can train models that are more robust to combinations of string transformations than those produced using existing techniques, and (2) our approach yields high certification accuracy on the resulting models.
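To make the abstract-interpretation idea concrete, the sketch below propagates interval bounds through a single LSTM step: each input coordinate is known only up to a lower/upper bound (standing in for a set of perturbed inputs), and the cell's monotone nonlinearities let the bounds pass through soundly. This is a generic interval-domain illustration, not the paper's actual abstract domain or implementation; all function names and the weight layout are hypothetical.

```python
import numpy as np

def interval_matmul(lo, hi, W):
    """Sound bounds on x @ W when each x[i] lies in [lo[i], hi[i]]."""
    Wp, Wn = np.maximum(W, 0), np.minimum(W, 0)
    return lo @ Wp + hi @ Wn, hi @ Wp + lo @ Wn

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def imul(a_lo, a_hi, b_lo, b_hi):
    """Elementwise interval product: min/max over the four corner products."""
    cands = np.stack([a_lo * b_lo, a_lo * b_hi, a_hi * b_lo, a_hi * b_hi])
    return cands.min(axis=0), cands.max(axis=0)

def lstm_step_interval(x_lo, x_hi, h_lo, h_hi, c_lo, c_hi, Wx, Wh, b):
    """Propagate interval bounds through one LSTM cell.

    Assumed (hypothetical) layout: Wx is (d_in, 4*d), Wh is (d, 4*d),
    b is (4*d,), with gates stacked in the order i, f, g, o.
    Sigmoid and tanh are monotone, so applying them to the bound
    endpoints preserves soundness.
    """
    d = Wh.shape[0]
    zx_lo, zx_hi = interval_matmul(x_lo, x_hi, Wx)
    zh_lo, zh_hi = interval_matmul(h_lo, h_hi, Wh)
    z_lo, z_hi = zx_lo + zh_lo + b, zx_hi + zh_hi + b

    i_lo, f_lo = sigmoid(z_lo[:d]), sigmoid(z_lo[d:2 * d])
    g_lo, o_lo = np.tanh(z_lo[2 * d:3 * d]), sigmoid(z_lo[3 * d:])
    i_hi, f_hi = sigmoid(z_hi[:d]), sigmoid(z_hi[d:2 * d])
    g_hi, o_hi = np.tanh(z_hi[2 * d:3 * d]), sigmoid(z_hi[3 * d:])

    # c' = f * c + i * g, computed as an interval sum of interval products.
    fc_lo, fc_hi = imul(f_lo, f_hi, c_lo, c_hi)
    ig_lo, ig_hi = imul(i_lo, i_hi, g_lo, g_hi)
    c_lo2, c_hi2 = fc_lo + ig_lo, fc_hi + ig_hi

    # h' = o * tanh(c'); tanh is monotone, so bounds pass through.
    h_lo2, h_hi2 = imul(o_lo, o_hi, np.tanh(c_lo2), np.tanh(c_hi2))
    return h_lo2, h_hi2, c_lo2, c_hi2
```

Iterating this step over a sentence propagates the symbolic set forward through the recurrence; certification then checks that every output in the final bound has the same predicted label. The paper's domain is more precise than plain intervals, but the recursive propagation pattern is the same.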




Related research:

- Robustness to Programmable String Transformations via Augmented Abstract Training: Deep neural networks for natural language processing tasks are vulnerabl...
- LSTMs Exploit Linguistic Attributes of Data: While recurrent neural networks have found success in a variety of natur...
- RoMA: a Method for Neural Network Robustness Measurement and Assessment: Neural network models have become the leading solution for a large varie...
- Using Videos to Evaluate Image Model Robustness: Human visual systems are robust to a wide range of image transformations...
- Towards Robustness Against Natural Language Word Substitutions: Robustness against word substitutions has a well-defined and widely acce...
- Adaptive Gradient Refinement for Adversarial Perturbation Generation: Deep Neural Networks have achieved remarkable success in computer vision...
- Refactoring = Substitution + Rewriting: We present an approach to describing refactorings that abstracts away fr...