Adversarial Attacks on Image Classification Models: FGSM and Patch Attacks and their Impact

07/05/2023
by Jaydip Sen, et al.

This chapter introduces the concept of adversarial attacks on image classification models built on convolutional neural networks (CNNs). CNNs are widely used deep-learning models for image classification tasks. However, even powerful pre-trained CNN models that classify images very accurately can fail disastrously when placed under adversarial attack. This work discusses two well-known adversarial attacks, the fast gradient sign method (FGSM) and the adversarial patch attack, and analyzes their impact on the performance of image classifiers. The attacks are launched against three powerful pre-trained image classifier architectures: ResNet-34, GoogleNet, and DenseNet-161. The classification accuracy of each model, with and without the two attacks, is computed on images from the publicly accessible ImageNet dataset, and the results are analyzed to evaluate the impact of the attacks on the image classification task.
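
To make the FGSM step concrete, a minimal sketch is shown below (assuming PyTorch with a recent torchvision; this is illustrative, not the chapter's actual experimental code, and the image tensor, class index, and epsilon value are placeholders). FGSM perturbs an input x in the direction of the sign of the loss gradient: x_adv = x + epsilon * sign(grad_x L(x, y)).

import torch
import torch.nn.functional as F
from torchvision import models

def fgsm_attack(model, x, label, epsilon=0.03):
    # Compute the loss gradient w.r.t. the input and step in its sign direction.
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), label)
    loss.backward()
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()   # keep pixels in the valid [0, 1] range

# Usage with one of the pre-trained architectures named in the abstract
# (normalization is omitted here for brevity).
model = models.resnet34(weights=models.ResNet34_Weights.IMAGENET1K_V1).eval()
x = torch.rand(1, 3, 224, 224)      # placeholder for an ImageNet image scaled to [0, 1]
label = torch.tensor([207])         # placeholder ground-truth class index
x_adv = fgsm_attack(model, x, label)
print(model(x).argmax(dim=1), model(x_adv).argmax(dim=1))

Even a small epsilon typically flips the predicted class while leaving the perturbation nearly imperceptible, which is the degradation the chapter quantifies on ResNet-34, GoogleNet, and DenseNet-161.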

