Polarizing Front Ends for Robust CNNs

02/22/2020
by Soorya Gopalakrishnan et al.

The vulnerability of deep neural networks to small, adversarially designed perturbations can be attributed to their "excessive linearity." In this paper, we propose a bottom-up strategy for attenuating adversarial perturbations using a nonlinear front end that polarizes and quantizes the data. We observe that ideal polarization can be used to eliminate perturbations entirely, develop algorithms to learn approximately polarizing bases for data, and investigate the effectiveness of the proposed strategy on the MNIST and Fashion-MNIST datasets.
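
The core idea lends itself to a short sketch. Below is a minimal NumPy illustration of a polarizing, quantizing front end: inputs are projected onto a basis, and each coefficient is reduced to its sign, so a perturbation is erased whenever its projection is smaller than the coefficient's distance from the threshold. The function name, the random unit-norm basis W, and the threshold are illustrative assumptions; the paper instead learns approximately polarizing bases from data.

    import numpy as np

    def polarizing_front_end(x, W, threshold=0.0):
        # Project the input onto the rows of a basis matrix W, then keep
        # only the polarity of each coefficient. If the clean coefficients
        # are polarized (bimodal, far from the threshold), a perturbation
        # delta is wiped out whenever |W @ delta| < |W @ x - threshold|
        # coordinatewise, since no sign flips.
        coefficients = W @ x
        return np.sign(coefficients - threshold)

    # Hypothetical usage with a random unit-norm basis (a stand-in for
    # the learned bases described in the paper).
    rng = np.random.default_rng(0)
    W = rng.standard_normal((64, 784))
    W /= np.linalg.norm(W, axis=1, keepdims=True)
    x = rng.standard_normal(784)             # stand-in for a flattened image
    delta = 1e-3 * rng.standard_normal(784)  # small adversarial-style noise
    print(np.array_equal(polarizing_front_end(x, W),
                         polarizing_front_end(x + delta, W)))  # True w.h.p.

Because the quantizer discards everything except the polarity of each coefficient, the downstream CNN sees identical codes for the clean and perturbed inputs whenever the polarization margin exceeds the perturbation's projection.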

Related research

09/11/2017: Ensemble Methods as a Defense to Adversarial Perturbations Against Deep Neural Networks
Deep learning has become the state of the art approach in many machine l...

03/08/2021: Improving Global Adversarial Robustness Generalization With Adversarially Trained GAN
Convolutional neural networks (CNNs) have achieved beyond human-level ac...

06/12/2022: An Efficient Method for Sample Adversarial Perturbations against Nonlinear Support Vector Machines
Adversarial perturbations have drawn great attention in various machine...

08/05/2019: A principled approach for generating adversarial images under non-smooth dissimilarity metrics
Deep neural networks are vulnerable to adversarial perturbations: small ...

03/14/2022: Adversarial amplitude swap towards robust image classifiers
The vulnerability of convolutional neural networks (CNNs) to image pertu...

09/19/2020: SecDD: Efficient and Secure Method for Remotely Training Neural Networks
We leverage what are typically considered the worst qualities of deep le...

10/24/2018: Toward Robust Neural Networks via Sparsification
It is by now well-known that small adversarial perturbations can induce ...
