Kryptonite: An Adversarial Attack Using Regional Focus

08/23/2021
by Yogesh Kulkarni, et al.

With the rise of adversarial machine learning and increasingly robust adversarial attacks, the security of applications built on machine learning has come into question. Over the past few years, deep learning applications using deep neural networks (DNNs) in fields such as medical diagnosis, security systems, and virtual assistants have become commonplace, and hence more exposed and susceptible to attack. In this paper, we present a novel study analyzing the weaknesses in the security of deep learning systems. We propose 'Kryptonite', an adversarial attack on images. We explicitly extract the Region of Interest (RoI) for each image and use it to add imperceptible adversarial perturbations that fool the DNN. We test our attack on several DNNs and compare our results with state-of-the-art adversarial attacks: Fast Gradient Sign Method (FGSM), DeepFool (DF), Momentum Iterative Fast Gradient Sign Method (MIFGSM), and Projected Gradient Descent (PGD). Our attack causes the largest drop in network accuracy while yielding the smallest perturbation, and takes considerably less time per sample. We thoroughly evaluate our attack against three adversarial defence techniques, and the promising results showcase its efficacy.
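The paper's exact RoI extraction and perturbation procedure is not detailed in this abstract. A minimal sketch of the core idea, restricting an FGSM-style perturbation to a region-of-interest mask so that pixels outside the RoI are left untouched, might look like the following (the linear softmax "model" and all parameter names are illustrative assumptions, not the paper's method):

```python
import numpy as np

def softmax(z):
    z = z - z.max()  # numerical stability
    e = np.exp(z)
    return e / e.sum()

def roi_fgsm(W, b, x, y, roi_mask, eps=0.05):
    """FGSM-style step confined to a region-of-interest mask.

    Illustrative sketch for a linear softmax classifier (logits = W @ x + b);
    the gradient of cross-entropy w.r.t. the input x is W.T @ (p - onehot(y)).
    Only pixels where roi_mask == 1 are perturbed; the rest stay unchanged.
    This is NOT the paper's Kryptonite algorithm, just the masking idea.
    """
    p = softmax(W @ x + b)
    onehot = np.zeros_like(p)
    onehot[y] = 1.0
    grad = W.T @ (p - onehot)              # gradient of the loss w.r.t. x
    x_adv = x + eps * np.sign(grad) * roi_mask
    return np.clip(x_adv, 0.0, 1.0)        # keep pixels in valid range

# Toy demo: a flattened 8x8 single-channel image with a central RoI.
rng = np.random.default_rng(0)
W = rng.normal(size=(2, 64))
b = np.zeros(2)
x = rng.uniform(size=64)
mask = np.zeros(64)
mask[18:46] = 1.0                          # hypothetical RoI band
x_adv = roi_fgsm(W, b, x, 0, mask)
assert np.allclose(x_adv[mask == 0], x[mask == 0])  # untouched outside RoI
```

Confining the perturbation budget to the RoI is what lets an attack of this shape keep the total distortion small while still targeting the pixels the classifier relies on most.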


Related research

- AdvEst: Adversarial Perturbation Estimation to Classify and Detect Adversarial Attacks against Speaker Identification (04/08/2022)
- Adversarial Attack On Yolov5 For Traffic And Road Sign Detection (05/27/2023)
- Towards Privacy and Security of Deep Learning Systems: A Survey (11/28/2019)
- A Generative Model based Adversarial Security of Deep Learning and Linear Classifier Models (10/17/2020)
- Combatting Adversarial Attacks through Denoising and Dimensionality Reduction: A Cascaded Autoencoder Approach (12/07/2018)
- Security Analysis and Enhancement of Model Compressed Deep Learning Systems under Adversarial Attacks (02/14/2018)
- Adversarial Defense via Data Dependent Activation Function and Total Variation Minimization (09/23/2018)
