Memory Defense: More Robust Classification via a Memory-Masking Autoencoder

02/05/2022
by Eashan Adhikarla, et al.

Many deep neural networks are susceptible to minute, carefully crafted perturbations of input images that cause misclassification. Ideally, a robust classifier would be immune to small variations in its inputs, and a number of defensive approaches have been developed toward that goal. One approach is to learn a latent representation that ignores small changes to the input. However, typical autoencoders easily mingle inter-class latent representations when classes are strongly similar, making it harder for a decoder to accurately project the image back to the original high-dimensional space. We propose Memory Defense, a novel framework that augments a classifier with a memory-masking autoencoder to counter this challenge. By masking out the memory slots of all other classes, the autoencoder learns class-specific, independent latent representations. We test the model's robustness against four widely used attacks. Experiments on the Fashion-MNIST and CIFAR-10 datasets demonstrate the superiority of our model. Our source code is available at: https://github.com/eashanadhikarla/MemDefense
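To make the masking idea concrete, here is a minimal sketch (not the authors' implementation; the memory layout, slot counts, and attention rule are assumptions for illustration) of a class-conditional memory read in which slots belonging to other classes are masked out, so the reconstruction is a convex combination of same-class prototypes only:

```python
import numpy as np

rng = np.random.default_rng(0)

num_classes, slots_per_class, latent_dim = 10, 4, 16
# Hypothetical memory bank: a contiguous block of slots per class.
memory = rng.normal(size=(num_classes * slots_per_class, latent_dim))
slot_class = np.repeat(np.arange(num_classes), slots_per_class)

def memory_read(z, target_class):
    """Attend over the memory, masking out slots of all other classes."""
    scores = memory @ z                       # similarity of z to every slot
    mask = slot_class == target_class         # keep only same-class slots
    scores = np.where(mask, scores, -np.inf)  # masked slots get zero weight
    weights = np.exp(scores - scores[mask].max())
    weights /= weights.sum()
    return weights @ memory                   # lies in the span of same-class slots

z = rng.normal(size=latent_dim)  # stand-in for an encoder output
z_hat = memory_read(z, target_class=3)
```

Because the softmax weights of masked slots are exactly zero, a decoder fed `z_hat` can only draw on class-3 prototypes, which is the mechanism that keeps inter-class latent representations from mingling.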


