High Dimensional Spaces, Deep Learning and Adversarial Examples

01/02/2018
by Simant Dube et al.

In this paper, we analyze deep learning from a mathematical point of view and derive several novel results based on intriguing properties of high-dimensional spaces. First, we look at perturbation-based adversarial examples and show how they can be understood using topological arguments in high dimensions. We point out a fallacy in an argument presented in a 2015 paper by Goodfellow et al. and present a more rigorous, general, and correct mathematical result that explains adversarial examples in terms of image manifolds. Second, we look at the optimization landscapes of deep neural networks and examine the number of saddle points relative to the number of local minima. Third, we show how the multi-resolution nature of images explains perturbation-based adversarial examples in the form of a stronger result: the L_2-norm of adversarial perturbations shrinks to 0 as image resolution becomes arbitrarily large. Finally, by incorporating the parts-whole manifold learning hypothesis for natural images, we investigate how deep neural networks work and what causes adversarial examples, and we discuss how future improvements can be made and how adversarial examples can be eliminated.
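For intuition behind the headline scaling claim, the following minimal numpy sketch (our illustration, not code from the paper) uses the linear model underlying Goodfellow et al.'s original argument: for a score s(x) = w . x with weights of O(1) magnitude, the smallest L_2 perturbation that shifts the score by a fixed margin c has norm c / ||w||_2, roughly c / sqrt(d), which tends to 0 as the pixel count d grows.

```python
# Toy illustration (assumption: a linear score s(x) = w . x, not the
# paper's actual model). The minimal-L2 perturbation achieving a fixed
# score shift c is eta = c * w / ||w||_2^2, so ||eta||_2 = c / ||w||_2.
# With O(1) weights, ||w||_2 ~ sqrt(d), and the norm decays like 1/sqrt(d).
import numpy as np

rng = np.random.default_rng(0)
c = 1.0  # fixed change in the classifier score we want to induce

for d in (10**2, 10**4, 10**6):          # d = number of pixels
    w = rng.choice([-1.0, 1.0], size=d)  # weights of unit magnitude
    eta = c * w / np.dot(w, w)           # smallest-L2 sufficient perturbation
    print(f"d={d:>7}  ||eta||_2 = {np.linalg.norm(eta):.4f}")
```

Running this prints perturbation norms of 0.1, 0.01, and 0.001 for d = 10^2, 10^4, and 10^6, matching the 1/sqrt(d) decay: higher-resolution images admit adversarial perturbations that are vanishingly small in L_2.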
