
An integrated Auto Encoder-Block Switching defense approach to prevent adversarial attacks

03/11/2022
by   Anirudh Yadav, et al.

According to recent studies, state-of-the-art neural networks have become increasingly vulnerable to adversarial input samples. A neural network is a technique by which a computer learns to perform tasks using machine-learning algorithms. Machine Learning and Artificial Intelligence models have become a fundamental part of everyday life, powering systems such as self-driving cars [1] and smart home devices, so any vulnerability is a significant concern. The smallest input deviations can fool these extremely literal systems and deceive users as well as administrators into precarious situations. This article proposes a defense algorithm that combines an auto-encoder [3] with a block-switching architecture. The auto-encoder is intended to remove perturbations from input images, while the block-switching method makes the model more robust against white-box attacks. Attacks are generated using the Fast Gradient Sign Method (FGSM) [9], and the proposed architecture's defense against them demonstrates the feasibility and security delivered by the algorithm.
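To make the attack side concrete, here is a minimal sketch of FGSM on a toy logistic-regression "model" (an assumption for illustration; the paper attacks a trained neural network, and all names here are hypothetical). FGSM perturbs the input one step along the sign of the loss gradient with respect to the input: x_adv = x + eps * sign(dL/dx).

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def loss(x, w, y):
    # Binary cross-entropy for a single example under a linear-logistic model.
    p = sigmoid(w @ x)
    return -(y * np.log(p) + (1 - y) * np.log(1 - p))

def fgsm(x, w, y, eps):
    # For logistic regression, the input gradient has the closed form
    # dL/dx = (p - y) * w, so no autodiff framework is needed here.
    p = sigmoid(w @ x)
    grad = (p - y) * w
    # One signed step of size eps, clipped back to the valid pixel range [0, 1].
    return np.clip(x + eps * np.sign(grad), 0.0, 1.0)

rng = np.random.default_rng(0)
w = rng.normal(size=8)
x = rng.uniform(0.2, 0.8, size=8)  # kept away from the clip bounds
y = 1.0

x_adv = fgsm(x, w, y, eps=0.1)
```

Because the perturbation moves each coordinate by at most eps, it stays visually small, yet it moves the input in the locally worst direction for the loss; this is the attack the auto-encoder stage is meant to undo by reconstructing a clean input.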
