POPQORN: Quantifying Robustness of Recurrent Neural Networks

05/17/2019
by Ching-Yun Ko, et al.

The vulnerability to adversarial attacks has been a critical issue for deep neural networks. Addressing this issue requires a reliable way to evaluate the robustness of a network. Recently, several methods have been developed to quantify robustness for neural networks, namely, to compute certified lower bounds on the minimum adversarial perturbation. Such methods, however, were devised for feed-forward networks, e.g., multi-layer perceptrons or convolutional networks. It remains an open problem to quantify robustness for recurrent networks, especially LSTMs and GRUs. For such networks, there exist additional challenges in computing the robustness quantification, such as handling the inputs at multiple steps and the interaction between gates and states. In this work, we propose POPQORN (Propagated-output Quantified Robustness for RNNs), a general algorithm to quantify robustness of RNNs, including vanilla RNNs, LSTMs, and GRUs. We demonstrate its effectiveness on different network architectures and show that the robustness quantification on individual steps can lead to new insights.
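To make the notion of a "certified lower bound" concrete, the sketch below is a minimal illustration and not the POPQORN algorithm itself: it uses plain interval bound propagation (IBP) through a hypothetical vanilla RNN to check whether any l-infinity perturbation of radius eps on the input sequence could change the predicted class. POPQORN derives tighter certificates by propagating linear bounds through the recurrent nonlinearities and gate-state interactions; all weight names and shapes here are assumptions for illustration only.

```python
# Minimal sketch (NOT the POPQORN algorithm): interval bound propagation through
# a vanilla tanh RNN. If the true-class logit's lower bound stays above every
# other logit's upper bound, the prediction is certified robust at radius eps.
import numpy as np

def interval_affine(lo, hi, W, b):
    """Bound W @ x + b elementwise when x is only known to lie in [lo, hi]."""
    W_pos, W_neg = np.maximum(W, 0.0), np.minimum(W, 0.0)
    return W_pos @ lo + W_neg @ hi + b, W_pos @ hi + W_neg @ lo + b

def certify_rnn(xs, eps, W_x, W_h, b_h, W_out, b_out, true_class):
    """Return True if no l_inf perturbation of radius eps on the inputs can flip the prediction."""
    h_lo = h_hi = np.zeros(W_h.shape[0])
    for x in xs:  # propagate bounds through every time step of the sequence
        a_lo, a_hi = interval_affine(x - eps, x + eps, W_x, np.zeros_like(b_h))
        r_lo, r_hi = interval_affine(h_lo, h_hi, W_h, b_h)
        h_lo, h_hi = np.tanh(a_lo + r_lo), np.tanh(a_hi + r_hi)  # tanh is monotone
    y_lo, y_hi = interval_affine(h_lo, h_hi, W_out, b_out)
    return all(y_lo[true_class] > y_hi[c] for c in range(len(y_lo)) if c != true_class)
```

Given such a certification routine, a certified lower bound on the minimum adversarial perturbation can be obtained by, for example, binary searching over eps for the largest radius at which the check still passes; the paper's per-step quantification additionally assigns a separate certified radius to each input step.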

