Defense Methods Against Adversarial Examples for Recurrent Neural Networks

01/28/2019
by   Ishai Rosenberg, et al.

Adversarial examples are known to mislead deep learning models into classifying inputs incorrectly, even in domains where such models achieve state-of-the-art performance. Until recently, research on both attack and defense methods focused on image recognition, mostly using convolutional neural networks. In recent years, adversarial example generation methods for recurrent neural networks (RNNs) have been published, making RNN classifiers vulnerable as well. In this paper, we present four novel defense methods that make RNN classifiers more robust against such attacks, in contrast to previous defense methods, which were designed only for non-sequence-based models. We evaluate our methods against state-of-the-art attacks in the cyber-security domain, where real adversaries (malware developers) exist. Using our methods, we decrease the effectiveness of such attacks from 99.9% …
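One defense family commonly used for robustness is adversarial training: crafting perturbed inputs during training and optimizing the model on them. The sketch below is only an illustration of that general idea on a toy linear classifier over averaged sequence features (a stand-in for an RNN); the model, data, and FGSM-style perturbation are assumptions for this example, not the paper's actual methods.

```python
import numpy as np

# Toy sketch of adversarial training as a defense. A linear classifier
# over fixed-length sequence-feature vectors stands in for an RNN; the
# perturbation is a fast-gradient-sign step in feature space. Everything
# here is illustrative, not the defense proposed in the paper.

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def grad_wrt_input(w, x, y):
    # Gradient of binary cross-entropy w.r.t. the input of a linear
    # model: (p - y) * w, where p is the predicted probability.
    p = sigmoid(x @ w)
    return (p - y) * w

def fgsm(w, x, y, eps=0.1):
    # Fast-gradient-sign perturbation of the input features.
    return x + eps * np.sign(grad_wrt_input(w, x, y))

# Synthetic data: 200 "sequences" summarized as 20-dim feature vectors.
X = rng.normal(size=(200, 20))
true_w = rng.normal(size=20)
y = (X @ true_w > 0).astype(float)

w = np.zeros(20)
lr = 0.5
for _ in range(300):
    grad = np.zeros_like(w)
    for xi, yi in zip(X, y):
        x_adv = fgsm(w, xi, yi)      # craft an adversarial variant
        p = sigmoid(x_adv @ w)
        grad += (p - yi) * x_adv     # train on the perturbed input
    w -= lr * grad / len(X)

acc = np.mean((sigmoid(X @ w) > 0.5) == (y == 1))
print(f"clean accuracy after adversarial training: {acc:.2f}")
```

The key design point the sketch shows is that each training step sees the worst-case (gradient-direction) perturbation of the current model, so the learned decision boundary keeps a margin against small input changes; for real RNN inputs such as API-call sequences, the perturbation would instead operate on discrete tokens.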


