On Lyapunov exponents and adversarial perturbation

02/20/2018
by   Vinay Uday Prabhu, et al.

In this paper, we would like to disseminate a serendipitous discovery involving Lyapunov exponents of a 1-D time series and their use as a filtering defense against a specific kind of deep adversarial perturbation. To this end, we use the state-of-the-art CleverHans library to generate adversarial perturbations against a standard Convolutional Neural Network (CNN) architecture trained on the MNIST and Fashion-MNIST datasets. We empirically demonstrate that Lyapunov exponents computed on the flattened 1-D vector representations of the images serve as highly discriminative features that can be used to pre-classify images as adversarial or legitimate before feeding them into the CNN for classification. We also explore the issue of possible false alarms when the input images are noisy in a non-adversarial sense.
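The defense described above rests on estimating the largest Lyapunov exponent of an image flattened into a 1-D series. The abstract does not specify the estimator used; the sketch below is a minimal NumPy-only implementation of Rosenstein's method (delay embedding, nearest-neighbour search with a temporal exclusion window, slope of the mean log-divergence curve). The function name and default parameters are our assumptions for illustration, not the authors' choices.

```python
import numpy as np

def largest_lyapunov(series, emb_dim=4, lag=1, min_tsep=10, traj_len=20):
    """Estimate the largest Lyapunov exponent of a 1-D series
    via Rosenstein's method (a common practical estimator)."""
    x = np.asarray(series, dtype=float)
    n = len(x) - (emb_dim - 1) * lag
    # Delay embedding: each row is one point in reconstructed phase space.
    emb = np.array([x[i:i + (emb_dim - 1) * lag + 1:lag] for i in range(n)])
    m = n - traj_len  # only points whose trajectory fits in the series
    # Pairwise distances between embedded points.
    dists = np.linalg.norm(emb[:m, None] - emb[None, :m], axis=2)
    # Exclude temporally close points from the neighbour search.
    for i in range(m):
        dists[i, max(0, i - min_tsep):min(m, i + min_tsep + 1)] = np.inf
    nbrs = np.argmin(dists, axis=1)
    # Mean log divergence between each point and its neighbour over time.
    div = np.zeros(traj_len)
    idx = np.arange(m)
    for k in range(traj_len):
        d = np.linalg.norm(emb[idx + k] - emb[nbrs + k], axis=1)
        d = d[d > 0]  # skip coincident pairs to avoid log(0)
        div[k] = np.mean(np.log(d))
    # Slope of the divergence curve approximates the largest exponent.
    slope, _ = np.polyfit(np.arange(traj_len), div, 1)
    return slope

# Usage on a flattened 28x28 image (hypothetical `img` array):
# exponent = largest_lyapunov(img.reshape(-1))
```

A positive estimate indicates exponential divergence of nearby states; the paper's claim is that this statistic distributes differently for legitimate and adversarially perturbed images, so a simple threshold (or a shallow classifier on the exponent) can pre-screen inputs before the CNN.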


