Robustness from Simple Classifiers

02/21/2020
by Sharon Qian, et al.

Despite the vast success of Deep Neural Networks in numerous application domains, it has been shown that such models are not robust, i.e., they are vulnerable to small adversarial perturbations of the input. While extensive work has been done on why such perturbations occur and how to defend against them, we still do not have a complete understanding of robustness. In this work, we investigate the connection between robustness and simplicity. We find that simpler classifiers, formed by reducing the number of output classes, are less susceptible to adversarial perturbations. Consequently, we demonstrate that decomposing a complex multiclass model into an aggregation of binary models enhances robustness. This behavior is consistent across different datasets and model architectures, and it can be combined with known defense techniques such as adversarial training. Moreover, we provide further evidence of a disconnect between standard and robust learning regimes. In particular, we show that elaborate label information can help standard accuracy but harm robustness.
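To make the decomposition idea concrete, here is a minimal sketch, not the paper's exact construction: it trains one independent binary (one-vs-rest) classifier per class and aggregates them by predicting the class whose binary model is most confident. The dataset (scikit-learn's digits), the base model (logistic regression), and the argmax aggregation rule are all illustrative assumptions.

```python
# Illustrative sketch: decompose a K-class problem into K binary models,
# then aggregate their confidences into a multiclass prediction.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

classes = np.unique(y_train)
binary_models = {}
for k in classes:
    # Each "simple classifier" only distinguishes class k from everything else.
    clf = LogisticRegression(max_iter=1000)
    clf.fit(X_train, (y_train == k).astype(int))
    binary_models[k] = clf

# Aggregate: score each input with every binary model, predict the argmax.
scores = np.column_stack(
    [binary_models[k].predict_proba(X_test)[:, 1] for k in classes]
)
y_pred = classes[scores.argmax(axis=1)]
print("multiclass accuracy from binary aggregation:", (y_pred == y_test).mean())
```

In the paper's setting, each binary model could itself be a deep network and could be adversarially trained; the aggregation step is what recovers multiclass predictions from the simpler, more robust components.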

Related research

- 12/10/2018 · Defending against Universal Perturbations with Shared Adversarial Training
  Classifiers such as deep neural networks have been shown to be vulnerabl...
- 11/19/2022 · Towards Adversarial Robustness of Deep Vision Algorithms
  Deep learning methods have achieved great success in solving computer vi...
- 05/30/2018 · There Is No Free Lunch In Adversarial Robustness (But There Are Unexpected Benefits)
  We provide a new understanding of the fundamental nature of adversariall...
- 09/07/2018 · A Deeper Look at 3D Shape Classifiers
  We investigate the role of representations and architectures for classif...
- 11/15/2018 · A Spectral View of Adversarially Robust Features
  Given the apparent difficulty of learning models that are robust to adve...
- 06/13/2022 · Pixel to Binary Embedding Towards Robustness for CNNs
  There are several problems with the robustness of Convolutional Neural N...
- 12/07/2017 · A Rotation and a Translation Suffice: Fooling CNNs with Simple Transformations
  Recent work has shown that neural network-based vision classifiers exhib...
