BTech thesis report on adversarial attack detection and purification of adversarially attacked images

05/09/2022
by Dvij Kalaria, et al.

This is a BTech thesis report on the detection and purification of adversarially attacked images. A deep learning model is trained on certain examples for tasks such as classification or regression. Through training, the weights are adjusted so that the model not only performs the task well on the training examples, as judged by a chosen metric, but also generalizes to unseen examples, typically called the test data. Despite the huge success of machine learning models on a wide range of tasks, their security has received far less attention over the years. Robustness against potential cyber attacks should also be a metric when evaluating machine learning models. Such attacks can have a variety of negative impacts in sensitive real-world applications of machine learning, such as medical and transportation systems, so it is necessary to secure these systems against them. In this report, I focus on a class of these cyber attacks called adversarial attacks, in which the original input sample is modified by small perturbations so that it still looks visually the same to human beings, yet fools the machine learning model. I discuss two novel ways to counter adversarial attacks using autoencoders: 1) detecting the presence of adversarial examples, and 2) purifying these examples to make the target classification models robust against such attacks.
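The detection idea sketched in the abstract rests on a common observation: an autoencoder trained on clean data reconstructs clean inputs with low error, while adversarially perturbed inputs, which lie off the learned data manifold, reconstruct poorly. The toy sketch below illustrates this with a linear autoencoder (PCA) on synthetic data and a reconstruction-error threshold; it is an illustration of the general principle only, not the thesis's trained model or datasets, and every function name and the threshold rule are assumptions of this sketch.

```python
import numpy as np

def fit_linear_autoencoder(X, k):
    """Learn a rank-k linear autoencoder (equivalent to PCA) from clean data X (n, d)."""
    mu = X.mean(axis=0)
    # Top-k right singular vectors span the clean-data subspace.
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    W = Vt[:k].T  # (d, k): shared encoder/decoder weights
    return mu, W

def reconstruction_error(x, mu, W):
    """L2 distance between x and its reconstruction through the bottleneck."""
    z = (x - mu) @ W       # encode into k dimensions
    x_rec = mu + z @ W.T   # decode back to d dimensions
    return np.linalg.norm(x - x_rec)

def is_adversarial(x, mu, W, tau):
    """Flag an input whose reconstruction error exceeds threshold tau."""
    return reconstruction_error(x, mu, W) > tau

rng = np.random.default_rng(0)
# Synthetic "clean" data lying near a 2-D subspace of 10-D space.
Z = rng.normal(size=(500, 2))
A = rng.normal(size=(2, 10))
X = Z @ A + 0.01 * rng.normal(size=(500, 10))

mu, W = fit_linear_autoencoder(X, k=2)
# Calibrate the threshold on clean-data errors (99th percentile is one
# illustrative choice; a real system would tune this on validation data).
errors = [reconstruction_error(x, mu, W) for x in X]
tau = np.quantile(errors, 0.99)

clean = Z[0] @ A                          # on-manifold sample
adv = clean + 0.5 * rng.normal(size=10)   # off-manifold perturbation
print(is_adversarial(clean, mu, W, tau))
print(is_adversarial(adv, mu, W, tau))
```

The purification variant follows the same pipeline but, instead of thresholding the error, feeds the reconstruction itself to the classifier, since projecting back onto the clean-data manifold removes much of the off-manifold perturbation.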
