Adversarial Defense by Restricting the Hidden Space of Deep Neural Networks

04/01/2019
by   Aamir Mustafa, et al.

Deep neural networks are vulnerable to adversarial attacks, which can fool them by adding minuscule perturbations to the input images. The robustness of existing defenses suffers greatly under white-box attack settings, where an adversary has full knowledge of the network and can iterate several times to find strong perturbations. We observe that the main reason such perturbations exist is the close proximity of different class samples in the learned feature space, which allows a model's decision to be completely changed by an imperceptible perturbation of the input. To counter this, we propose to disentangle the intermediate feature representations of deep networks class-wise. Specifically, we force the features of each class to lie inside a convex polytope that is maximally separated from the polytopes of other classes. In this manner, the network is forced to learn distinct and distant decision regions for each class. We observe that this simple constraint on the features greatly enhances the robustness of learned models, even against the strongest white-box attacks, without degrading classification performance on clean images. We report extensive evaluations in both black-box and white-box attack scenarios and show significant gains over state-of-the-art defenses.

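To make the idea concrete, below is a minimal, hypothetical sketch of an auxiliary loss in the spirit of this constraint: each class gets a learnable prototype in feature space, features are pulled toward their own class prototype, and prototypes of different classes are pushed apart by a margin. This is an illustrative approximation only, not the authors' exact formulation; the class name ClassSeparationLoss, the margin value, and the use of PyTorch are assumptions.

import torch
import torch.nn as nn


class ClassSeparationLoss(nn.Module):
    """Hypothetical auxiliary loss: pull each feature toward a learnable
    per-class prototype and push prototypes of different classes apart.
    Illustrative sketch only -- not the paper's exact objective."""

    def __init__(self, num_classes, feat_dim, margin=10.0):
        super().__init__()
        # One learnable prototype (class center) in feature space per class.
        self.prototypes = nn.Parameter(torch.randn(num_classes, feat_dim))
        self.margin = margin

    def forward(self, features, labels):
        # Pull term: mean squared distance of each feature to its class prototype.
        centers = self.prototypes[labels]                      # (batch, feat_dim)
        pull = ((features - centers) ** 2).sum(dim=1).mean()

        # Push term: penalize prototype pairs that are closer than the margin,
        # encouraging distinct and distant class regions.
        dists = torch.cdist(self.prototypes, self.prototypes)  # (C, C)
        off_diag = ~torch.eye(dists.size(0), dtype=torch.bool, device=dists.device)
        push = torch.relu(self.margin - dists[off_diag]).mean()

        return pull + push

In training, such a term would typically be added to the standard cross-entropy objective with a weighting factor, e.g. loss = ce_loss + lam * sep_loss(features, labels), where features come from an intermediate layer of the network.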
