RNN-Guard: Certified Robustness Against Multi-frame Attacks for Recurrent Neural Networks

04/17/2023
by Yunruo Zhang, et al.

It is well known that recurrent neural networks (RNNs), although widely used, are vulnerable to adversarial attacks, including one-frame attacks and multi-frame attacks. Although a few certified defenses provide guaranteed robustness against one-frame attacks, defending against multi-frame attacks remains challenging because of their far larger perturbation space. In this paper, we propose RNN-Guard, the first certified defense against multi-frame attacks for RNNs. To address this challenge, RNN-Guard adopts a perturb-all-frame strategy that constructs perturbation spaces consistent with those used in multi-frame attacks. However, the perturb-all-frame strategy introduces a precision issue in linear relaxations. To address this issue, we introduce a novel abstract domain called InterZono and design tighter relaxations. We prove that InterZono is more precise than Zonotope while having the same time complexity. Experiments across various datasets and model structures show that the certified robust accuracy computed by RNN-Guard with InterZono is up to 2.18 times higher than with Zonotope. In addition, we extend RNN-Guard into the first certified training method against multi-frame attacks, which directly improves RNNs' robustness: the certified robust accuracy of models trained with RNN-Guard against multi-frame attacks is 15.47 to 67.65 percentage points higher than that of models trained with other methods.
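As a rough illustration of the two ideas named in the abstract, the sketch below first builds a perturb-all-frame input region as an interval box over every frame of a sequence, then models one plausible reading of InterZono: a zonotope paired with an interval box, concretized to the element-wise tighter of the two bounds. The class names, the intersection-based construction, and all numbers are illustrative assumptions based only on the abstract, not the paper's actual definitions.

```python
# Minimal sketch (not the authors' code) of a perturb-all-frame region and a
# zonotope-plus-interval abstract domain. Everything here is an assumption
# inferred from the abstract, not RNN-Guard's real implementation.
import numpy as np

# Perturb-all-frame input region: every frame of a length-T sequence may move
# by up to eps (L-infinity), unlike one-frame attacks, which perturb one frame.
T, d, eps = 3, 2, 0.1
x_seq = np.zeros((T, d))
input_lo, input_hi = x_seq - eps, x_seq + eps  # box over all T*d coordinates

class Zonotope:
    """Set {c + G @ e : e in [-1, 1]^k} with center c and generator matrix G."""
    def __init__(self, center, generators):
        self.c = np.asarray(center, dtype=float)      # shape (n,)
        self.G = np.asarray(generators, dtype=float)  # shape (n, k)

    def concretize(self):
        # Interval hull of a zonotope: c +/- row-wise sum of |generators|.
        r = np.abs(self.G).sum(axis=1)
        return self.c - r, self.c + r

class InterZono:
    """Assumed form: a zonotope intersected with an interval box [lo, hi]."""
    def __init__(self, zono, lo, hi):
        self.zono, self.lo, self.hi = zono, np.asarray(lo), np.asarray(hi)

    def concretize(self):
        # Intersection can only shrink the set, so take the element-wise
        # tighter of the zonotope's interval hull and the box.
        zlo, zhi = self.zono.concretize()
        return np.maximum(zlo, self.lo), np.minimum(zhi, self.hi)

# Toy example: the box clips the zonotope's loose interval hull.
z = Zonotope(center=[0.0, 0.0], generators=[[1.0, 0.5], [0.5, 1.0]])
iz = InterZono(z, lo=[-1.0, -1.0], hi=[1.0, 1.0])
print(z.concretize())   # (array([-1.5, -1.5]), array([1.5, 1.5]))
print(iz.concretize())  # (array([-1., -1.]), array([1., 1.]))
```

Under this reading, the extra work over a plain zonotope is only an element-wise min/max at concretization, so the pairing keeps the same asymptotic time complexity while never yielding looser bounds, which is consistent with the abstract's precision and complexity claims.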


