High Dimensional Spaces, Deep Learning and Adversarial Examples

01/02/2018
by Simant Dube, et al.

In this paper, we analyze deep learning from a mathematical point of view and derive several novel results. The results are based on intriguing mathematical properties of high-dimensional spaces. We first look at perturbation-based adversarial examples and show how they can be understood using topological arguments in high dimensions. We point out a fallacy in an argument presented in a 2015 paper by Goodfellow et al. (see reference) and present a more rigorous, general, and correct mathematical result to explain adversarial examples in terms of image manifolds. Second, we look at the optimization landscapes of deep neural networks and examine the number of saddle points relative to that of local minima. Third, we show how the multi-resolution nature of images explains perturbation-based adversarial examples in the form of a stronger result: the L_2-norm of adversarial perturbations shrinks to 0 as the image resolution becomes arbitrarily large. Finally, by incorporating the parts-whole manifold learning hypothesis for natural images, we investigate the working of deep neural networks and the causes of adversarial examples, and discuss how future improvements can be made and how adversarial examples can be eliminated.
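The abstract's claim that the L_2-norm of adversarial perturbations shrinks as resolution grows can be illustrated with a minimal numeric sketch of the linearity argument the paper revisits (this is an illustration under simplifying assumptions, not the paper's own proof): a sign-based perturbation with per-pixel budget eps changes a linear model's output by eps times the L_1-norm of the weights, which grows with dimension n, so holding that output change fixed lets eps, and hence the perturbation's L_2-norm eps * sqrt(n), shrink as n grows.

```python
import numpy as np

# Sketch of the scaling argument for FGSM-style perturbations.
# For a linear model with weights w, the perturbation eta = eps * sign(w)
# shifts the output by w . eta = eps * ||w||_1. Fixing that shift to a
# constant lets eps shrink roughly like 1/n, so ||eta||_2 = eps * sqrt(n)
# shrinks roughly like 1/sqrt(n) as the image resolution n grows.

rng = np.random.default_rng(0)
target_logit_change = 1.0  # fixed effect on the classifier output

for n in [64, 1024, 16384]:  # e.g. 8x8, 32x32, 128x128 images
    w = rng.normal(size=n)                        # stand-in linear weights
    eps = target_logit_change / np.abs(w).sum()   # per-pixel budget
    eta = eps * np.sign(w)                        # sign-based perturbation
    logit_change = np.dot(w, eta)                 # equals eps * ||w||_1
    print(f"n={n:6d}  logit change={logit_change:.3f}  "
          f"||eta||_2={np.linalg.norm(eta):.4f}")
```

Running this shows the logit change staying constant while the L_2-norm of the perturbation decreases with n, consistent with the abstract's multi-resolution result.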

