Universal Adversarial Perturbations: Efficiency on a small image dataset

10/10/2022
by Waris Radji, et al.

Although neural networks perform very well on image classification, they remain vulnerable to adversarial perturbations that can fool a network without visibly changing the input image. Prior work has shown the existence of Universal Adversarial Perturbations: a single perturbation that, when added to almost any image, fools the network with very high probability. In this paper we reproduce the experiments of the Universal Adversarial Perturbations paper on a smaller neural network architecture and training set, in order to study the efficiency of the computed perturbation.
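The abstract describes the iterative procedure of the original paper: loop over the training images, and whenever the current perturbation fails to fool the classifier on an image, add the minimal extra perturbation that pushes that image across a decision boundary, then project back onto a norm ball. A minimal sketch of this loop, using NumPy and a toy linear classifier in place of the paper's CNN (the model, dimensions, and radius `xi` here are illustrative assumptions, not the paper's setup):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear classifier standing in for a trained network (assumption:
# the paper attacks a CNN; a linear model keeps the sketch self-contained).
W = rng.normal(size=(3, 8))                   # 3 classes, 8-dim "images"
predict = lambda X: (X @ W.T).argmax(axis=1)

def minimal_step(x, v):
    """DeepFool-style minimal step pushing x+v across the nearest boundary
    (closed form for a linear classifier)."""
    scores = (x + v) @ W.T
    k = scores.argmax()
    best = None
    for j in range(W.shape[0]):
        if j == k:
            continue
        w_diff = W[j] - W[k]
        f_diff = scores[j] - scores[k]
        r = abs(f_diff) / (np.linalg.norm(w_diff) ** 2 + 1e-12) * w_diff
        if best is None or np.linalg.norm(r) < np.linalg.norm(best):
            best = r
    return (1 + 1e-3) * best                  # slight overshoot to cross

def universal_perturbation(X, xi=2.0, max_epochs=10, target_rate=0.8):
    """Accumulate one perturbation v that fools the classifier on most samples."""
    v = np.zeros(X.shape[1])
    clean = predict(X)                        # labels before perturbing
    fooled = 0.0
    for _ in range(max_epochs):
        for i in range(len(X)):
            # only update v on images it does not yet fool
            if predict((X[i] + v)[None])[0] == clean[i]:
                v = v + minimal_step(X[i], v)
                n = np.linalg.norm(v)
                if n > xi:                    # project onto the l2 ball
                    v = v * (xi / n)
        fooled = (predict(X + v) != clean).mean()
        if fooled >= target_rate:
            break
    return v, fooled

X = rng.normal(size=(50, 8))
v, rate = universal_perturbation(X)
```

The key property, reflected in the loop, is that `v` is image-agnostic: it is built once over the dataset and then applied unchanged to every input, with the projection step keeping it small enough to be quasi-imperceptible.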


Related research:

12/10/2018  Defending against Universal Perturbations with Shared Adversarial Training
Classifiers such as deep neural networks have been shown to be vulnerabl...

05/16/2020  Universal Adversarial Perturbations: A Survey
Over the past decade, Deep Learning has emerged as a useful and efficien...

04/19/2017  Universal Adversarial Perturbations Against Semantic Image Segmentation
While deep learning is remarkably successful on perceptual tasks, it was...

12/08/2020  Locally optimal detection of stochastic targeted universal adversarial perturbations
Deep learning image classifiers are known to be vulnerable to small adve...

11/18/2021  Attacking Deep Learning AI Hardware with Universal Adversarial Perturbation
Universal Adversarial Perturbations are image-agnostic and model-indepen...

09/11/2017  Art of singular vectors and universal adversarial perturbations
Vulnerability of Deep Neural Networks (DNNs) to adversarial attacks has ...

04/07/2020  Universal Adversarial Perturbations Generative Network for Speaker Recognition
Attacking deep learning based biometric systems has drawn more and more ...
