Defense Methods Against Adversarial Examples for Recurrent Neural Networks

01/28/2019
by Ishai Rosenberg, et al.

Adversarial examples are known to mislead deep learning models into misclassifying them, even in domains where such models achieve state-of-the-art performance. Until recently, research on both attack and defense methods focused on image recognition, mostly using convolutional neural networks. In recent years, adversarial example generation methods for recurrent neural networks (RNNs) have been published, making RNN classifiers vulnerable as well. In this paper, we present four novel defense methods that make RNN classifiers more robust against such attacks, in contrast to previous defense methods, which were designed only for non-sequence-based models. We evaluate our methods against state-of-the-art attacks in the cyber-security domain, where real adversaries (malware developers) exist. Using our methods, we decrease the effectiveness of such attacks from 99.9%
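The abstract does not spell out the attack or defense mechanics, but the threat model it refers to can be illustrated with a generic gradient-based perturbation of a sequence classifier. The sketch below is a minimal, hypothetical example, not the paper's attack or any of its four defenses: it assumes a simple PyTorch LSTM classifier over continuous input features and applies a one-step FGSM-style perturbation. All names (`LSTMClassifier`, `fgsm_perturb`) and parameters are illustrative.

```python
# Minimal sketch: FGSM-style perturbation of an RNN classifier's inputs.
# Illustrative only; not the method from the paper.
import torch
import torch.nn as nn

class LSTMClassifier(nn.Module):
    def __init__(self, input_dim=32, hidden_dim=64, num_classes=2):
        super().__init__()
        self.lstm = nn.LSTM(input_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, num_classes)

    def forward(self, x):                 # x: (batch, seq_len, input_dim)
        _, (h_n, _) = self.lstm(x)        # h_n: (1, batch, hidden_dim)
        return self.head(h_n.squeeze(0))  # logits: (batch, num_classes)

def fgsm_perturb(model, x, y, eps=0.1):
    """One-step gradient-sign perturbation of a continuous sequence input."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x_adv), y)
    loss.backward()
    # Shift every timestep's features in the direction that increases the loss.
    return (x_adv + eps * x_adv.grad.sign()).detach()

model = LSTMClassifier()
x = torch.randn(4, 20, 32)            # 4 sequences, 20 timesteps each
y = torch.zeros(4, dtype=torch.long)  # assumed ground-truth labels
x_adv = fgsm_perturb(model, x, y)
print(model(x).argmax(1), model(x_adv).argmax(1))  # predictions may flip
```

Against perturbations of this kind, generic defenses such as adversarial training (retraining on perturbed inputs) are the usual baseline for non-sequence models; the four RNN-specific defenses this paper proposes are described in the full text.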
