A Perturbation Resistant Transformation and Classification System for Deep Neural Networks

08/25/2022
by   Nathaniel Dean, et al.

Deep convolutional neural networks accurately classify a diverse range of natural images, but they are easily deceived when deliberately crafted, imperceptible perturbations are embedded in the images. In this paper, we design a multi-pronged training, input-transformation, and image-ensemble system that is attack-agnostic and not easily estimated. Our system incorporates two novel features. The first is a transformation layer that computes feature-level polynomial kernels from class-level training samples and, at inference time, iteratively updates copies of the input image based on their feature-kernel differences to create an ensemble of transformed inputs. The second is a classification system that combines the prediction of the undefended network with a hard vote over the ensemble of transformed images. Our evaluations on the CIFAR10 dataset show that our system improves the robustness of an undefended network against a variety of bounded and unbounded white-box attacks under different distance metrics, while sacrificing little accuracy on clean images. Against adaptive, full-knowledge attackers mounting end-to-end attacks, our system successfully augments the existing robustness of adversarially trained networks, to which our methods are most effectively applied.
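The abstract describes the pipeline only at a high level. As a rough sketch of how the two components could fit together, the PyTorch snippet below implements a polynomial-kernel transformation loop and a hard-vote classifier; the feature extractor `feature_fn`, the per-class reference features `class_refs`, and all hyperparameters (kernel degree, step count, learning rate) are illustrative assumptions, not the authors' implementation.

```python
import torch

def poly_kernel(a, b, degree=2, coef0=1.0):
    # Polynomial kernel between two feature matrices: k(a, b) = (a @ b^T + c)^d
    return (a @ b.t() + coef0) ** degree

def transform_ensemble(feature_fn, x, class_refs, steps=10, lr=0.05):
    """For each class's reference features, nudge a copy of x so that its
    feature-level kernel response moves toward that class's kernel statistic,
    yielding an ensemble of transformed inputs (assumed procedure)."""
    copies = []
    for refs in class_refs:                      # refs: (n_refs, d) features of one class
        target = poly_kernel(refs, refs).mean()  # class-level kernel statistic
        x_t = x.clone().detach().requires_grad_(True)
        opt = torch.optim.SGD([x_t], lr=lr)
        for _ in range(steps):
            feats = feature_fn(x_t).flatten(1)   # (batch, d) feature embedding
            diff = poly_kernel(feats, refs).mean() - target
            loss = diff ** 2                     # penalize the feature-kernel difference
            opt.zero_grad()
            loss.backward()
            opt.step()
            x_t.data.clamp_(0.0, 1.0)            # stay in the valid image range
        copies.append(x_t.detach())
    return copies

def classify(model, x, copies):
    """Combine the undefended network's prediction with a hard (majority)
    vote over the transformed copies, as the abstract describes."""
    votes = [model(x).argmax(dim=1)]             # undefended prediction counts as one vote
    votes += [model(c).argmax(dim=1) for c in copies]
    return torch.stack(votes).mode(dim=0).values # per-example majority class
```

At inference time one would call `copies = transform_ensemble(feature_fn, x, class_refs)` followed by `classify(model, x, copies)`; plausibly, the vote over differently transformed copies is what makes the defense hard for a gradient-based attacker to estimate end-to-end.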

