Robustness and invariance properties of image classifiers

08/30/2022
by Apostolos Modas, et al.

Deep neural networks have achieved impressive results in many image classification tasks. However, since their performance is usually measured in controlled settings, it is important to ensure that their decisions remain correct when they are deployed in noisy environments. In fact, deep networks are not robust to a large variety of semantics-preserving image modifications, including imperceptible image changes known as adversarial perturbations. The poor robustness of image classifiers to small data distribution shifts raises serious concerns about their trustworthiness. To build reliable machine learning models, we must design principled methods to analyze and understand the mechanisms that shape robustness and invariance. This is exactly the focus of this thesis. First, we study the problem of computing sparse adversarial perturbations. We exploit the geometry of the decision boundaries of image classifiers to compute sparse perturbations very efficiently, and we reveal a qualitative connection between adversarial examples and the data features that image classifiers learn. Then, to better understand this connection, we propose a geometric framework that relates the distance of data samples from the decision boundary to the features present in the data. We show that deep classifiers have a strong inductive bias towards invariance to non-discriminative features, and that adversarial training exploits this property to confer robustness. Finally, we focus on the challenging problem of generalization to unforeseen corruptions of the data, and we propose a novel data augmentation scheme that achieves state-of-the-art robustness to common image corruptions. Overall, our results contribute to the understanding of the fundamental mechanisms of deep image classifiers, and they pave the way for building more reliable machine learning systems that can be deployed in real-world environments.
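To make the sparse-perturbation idea concrete, here is a minimal, hedged sketch of one linearized step in the spirit of the geometry described above: the local decision boundary between two classes is approximated by a hyperplane, and the sparsest (minimal L1) move that crosses a hyperplane changes only the coordinate where the gradient magnitude is largest. This is an illustrative PyTorch sketch, not the thesis's exact algorithm; `model`, `x`, and `target` are placeholder names.

```python
import torch

def sparse_boundary_step(model, x, target, overshoot=0.02):
    """One linearized step across the boundary between the current prediction
    and class `target`, perturbing a single input coordinate.

    Locally, the boundary is approximated by w^T (x' - x) + f = 0, where
    f = logit_target - logit_pred and w is the gradient of f. The minimal-L1
    perturbation crossing this hyperplane changes only the coordinate where
    |w_i| is largest.
    """
    x = x.clone().detach().requires_grad_(True)
    logits = model(x)
    pred = logits.argmax(dim=1).item()
    f = logits[0, target] - logits[0, pred]      # negative until the label flips
    (w,) = torch.autograd.grad(f, x)
    w = w.flatten()
    i = w.abs().argmax()                         # coordinate with most leverage
    delta = torch.zeros_like(w)
    delta[i] = -(1.0 + overshoot) * f.item() / w[i]   # step just past the hyperplane
    return (x.detach().flatten() + delta).view_as(x)
```

In practice, steps of this kind are iterated (and projected onto valid pixel ranges) until the predicted label actually changes, since the boundary is only locally linear.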
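A quantity at the core of the geometric framework mentioned above is the margin of a sample along a given input direction: how far one can travel along a feature direction before the prediction flips. A large margin along a direction indicates that the classifier is (locally) invariant to that feature. The following is a hedged sketch of measuring such a margin by bisection; `model`, `x`, and the direction `v` are placeholders, and bisection is just one simple way to locate the flip point.

```python
import torch

@torch.no_grad()
def margin_along(model, x, v, max_dist=100.0, iters=30):
    """Smallest t >= 0 such that the label of x + t * v differs from that of x."""
    v = v / v.norm()                              # unit direction in input space
    label = model(x).argmax(dim=1).item()
    lo, hi = 0.0, max_dist
    if model(x + hi * v).argmax(dim=1).item() == label:
        return float("inf")                       # no boundary within max_dist
    for _ in range(iters):                        # bisect on the flip point
        mid = 0.5 * (lo + hi)
        if model(x + mid * v).argmax(dim=1).item() == label:
            lo = mid
        else:
            hi = mid
    return hi
```

Comparing these margins along discriminative versus non-discriminative directions is one way to probe the inductive bias towards invariance that the abstract describes.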
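Finally, for the data augmentation part, a common template is to sample a few randomly parameterized primitive distortions of a training image and mix them convexly with the clean image. The sketch below uses deliberately simple, illustrative primitives (smoothed noise, per-channel gain, gamma); it is a stand-in that conveys the structure of such schemes, not the thesis's actual augmentation.

```python
import torch
import torch.nn.functional as F

def random_primitive(x):
    """One randomly parameterized distortion of an image batch in [0, 1].
    The primitives here (smoothed noise, channel gain, gamma) are
    illustrative choices only."""
    op = torch.randint(0, 3, (1,)).item()
    if op == 0:                                   # spatially smooth additive noise
        noise = F.avg_pool2d(torch.randn_like(x), 5, stride=1, padding=2)
        return (x + 0.2 * noise).clamp(0.0, 1.0)
    if op == 1:                                   # random per-channel gain
        gain = 0.5 + torch.rand(x.size(0), x.size(1), 1, 1)
        return (x * gain).clamp(0.0, 1.0)
    return x.clamp(min=1e-4) ** (0.5 + torch.rand(1).item())  # random gamma

def augment(x, n_ops=3):
    """Convex mixture of the clean image and n_ops random distortions of it."""
    w = torch.distributions.Dirichlet(torch.ones(n_ops + 1)).sample()
    out = w[0] * x
    for k in range(n_ops):
        out = out + w[k + 1] * random_primitive(x)
    return out
```

The intuition behind this template is that training on a broad, continuously varying family of distortions, rather than a fixed corruption set, is what helps generalization to unforeseen corruptions.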
