PPD: Permutation Phase Defense Against Adversarial Examples in Deep Learning

12/25/2018
by Mehdi Jafarnia-Jahromi, et al.

Deep neural networks have demonstrated cutting-edge performance on various tasks, including classification. However, it is well known that adversarially designed, imperceptible perturbations of the input can mislead advanced classifiers. In this paper, Permutation Phase Defense (PPD) is proposed as a novel method to resist adversarial attacks. PPD combines a random permutation of the image with the phase component of its Fourier transform. The basic idea behind this approach is to treat the adversarial defense problem analogously to symmetric cryptography, which relies solely on safekeeping of the keys for security. In PPD, safekeeping of the selected permutation ensures the effectiveness of the defense against adversarial attacks. Testing PPD on the MNIST and CIFAR-10 datasets yielded state-of-the-art robustness against the strongest adversarial attacks currently available.
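As a rough illustration of the preprocessing the abstract describes, the sketch below applies a secret pixel permutation and then keeps only the phase of the 2D Fourier transform. This is a minimal sketch based on the abstract alone; the function name, image size, and exact ordering of steps are assumptions, and the paper's actual pipeline may differ in detail.

```python
import numpy as np

def ppd_transform(image, perm):
    """PPD-style preprocessing sketch (hypothetical helper, not the
    authors' code): permute pixels with a secret permutation, then
    keep only the phase of the 2D Fourier transform. `perm` plays
    the role of the secret key and must stay fixed per model."""
    h, w = image.shape
    # Apply the secret permutation to the flattened pixel array.
    permuted = image.reshape(-1)[perm].reshape(h, w)
    # Phase component of the 2D DFT, values in [-pi, pi].
    return np.angle(np.fft.fft2(permuted))

# Usage: draw one secret permutation per trained model and keep it private.
rng = np.random.default_rng(seed=0)          # seed shown only for reproducibility
perm = rng.permutation(28 * 28)              # assuming MNIST-sized 28x28 images
x = rng.random((28, 28)).astype(np.float32)  # stand-in for a real image
features = ppd_transform(x, perm)            # would be fed to the classifier
```

Under this reading, the permutation acts as the key: an attacker who cannot recover it cannot craft perturbations in the transformed domain the classifier actually sees.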

Related research

04/26/2020 · Harnessing adversarial examples with a surprisingly simple defense
I introduce a very simple method to defend against adversarial examples...

08/29/2021 · Searching for an Effective Defender: Benchmarking Defense against Adversarial Word Substitution
Recent studies have shown that deep neural networks are vulnerable to in...

06/25/2022 · Defense against adversarial attacks on deep convolutional neural networks through nonlocal denoising
Despite substantial advances in network architecture performance, the su...

03/02/2019 · PuVAE: A Variational Autoencoder to Purify Adversarial Examples
Deep neural networks are widely used and exhibit excellent performance i...

02/05/2021 · Optimal Transport as a Defense Against Adversarial Attacks
Deep learning classifiers are now known to have flaws in the representat...

06/21/2023 · Adversarial Attacks Neutralization via Data Set Randomization
Adversarial attacks on deep-learning models pose a serious threat to the...

10/26/2021 · Frequency Centric Defense Mechanisms against Adversarial Examples
Adversarial example (AE) aims at fooling a Convolution Neural Network by...
