1. Introduction
In the last decade, there has been a proliferation of applications of deep neural networks in the general field of Artificial Intelligence (AI), and specifically in computer vision, speech recognition and natural language understanding. In the AI community there has been debate centered around the need for greater mathematical rigor and understanding of deep learning. Though there has been significant progress in practical techniques based on impressive trial-and-error empirical work, theory has been lagging behind practice. Can there be simple mathematical results which shed light on how deep learning works? How can one understand the shortcomings of present-day deep learning in a way that paves the way for future work?
In machine learning theory, there are several well-known fundamental theoretical results. In this paper, we present several novel results on adversarial examples, the optimization landscape, local minima and image manifolds, which bring mathematical rigor to deep neural networks operating in very high dimensions. The primary goal of the paper is to apply results from very high-dimensional mathematical spaces to deep learning.
There has already been significant research into unraveling how and why deep learning works. In [4], under certain assumptions and using results from random matrix theory applied to spin-glasses, the authors evaluate the loss surfaces of multilayer feed-forward neural networks. Their results indicate a layered structure of critical points. Near the global minimum, most critical points are local minima. In higher bands, we start seeing saddle points of increasing index, and the probability of finding local minima decreases exponentially. In [7], based on evidence from several directions, such as statistical physics, random matrix theory and experimental work, a strong thesis is presented stating that deep networks do not suffer from a local minima problem but instead from saddle points, which can give the illusory appearance of local minima. In [19], under certain assumptions, which include having very large and wide neural networks, it is shown that local minima are almost always close to the global minimum. In [22], empirical work on the distribution of eigenvalues of the Hessian matrix of the loss function is presented on simple examples, and it is observed that the Hessian is very singular for them. In [14, 20], singularity of the Hessian is shown for underdetermined overparameterized systems.

A lot of interesting work has been done in the area of adversarial examples, see [10, 13, 18, 27], and the survey in [1]. There are two kinds of adversarial examples. The first kind is perturbation based, in which an imperceptible perturbation is added to an image to change the output of the deep network, see [27]. In the second kind, images which are unrecognizable by humans are classified by deep networks with high probability, see
[18]. See Figure 1 for adversarial examples. Adversarial examples tell us something fundamental about the way present-day deep networks work. The hypothesis presented in [12], which states that CNNs are learning superficial cues rather than high-level semantic abstractions, matches the conclusions in this paper. Adversarial examples are of great practical significance too, as they can pose risks to real-world applications of deep learning, see [13]. For theoretical results on adversarial examples see [11], where instance-specific lower bounds on the norm of the input manipulation required to change the classifier decision are given using theorems from calculus, under the assumption that the classifier is continuously differentiable. In this paper, the approach is quite different: we give geometrical proofs for arbitrary manifold geometries which show a direct relationship of the norm to the input dimension, using properties of high-dimensional spaces in the most general case and without any differentiability assumptions.

In this paper, we also point out a mistake in an argument presented in a paper published in 2015 by Goodfellow et al., see reference [10], where the authors present an argument to explain adversarial examples for a linear model. The problem in the reasoning behind their argument has been independently highlighted earlier in [28]. The authors in [10] argue that as the feature dimensionality $n$ of the linear model increases, one can make the total perturbation amount larger and larger, while keeping its norm constant and imperceptibly small, till it reaches a desired target activation value that flips the classification result of the linear model. The problem in this argument is the assumption that the desired target value remains constant and is independent of $n$. But as the dimensionality increases, we are working in different spaces and the norms of all vectors change, so it becomes a moving target. The norms of the weight vector $w$ and of the sample $x$ increase too. Therefore increasing $n$ does not help in finding adversarial examples for most samples, as they are far away from the decision boundary. In order to generate adversarial examples, we will be forced to relax the norm constraint to a higher value which will depend on the distance of the sample from the decision boundary; these distances either have no upper bound or can be very large (depending on the feature value range) in general, and therefore the perturbation will mostly not be small. Consequently, the generalization of the argument in [10] to deep networks is not valid. In Theorem 5, we state that the linear model does not suffer from adversarial examples.

In this paper, we present general, rigorous and correct mathematical results which work on any bounded image manifold. They work on manifolds carved out by deep networks through piecewise linear approximation as a special case, provided they are bounded manifolds.
We present our results restricting ourselves to the image classification problem in computer vision. The overview of our paper is as follows. Our first result, in Section 2, explains why one can generate perturbation-based adversarial examples in deep learning. In Section 3 we look at the question of why deep learning often works well in practice without getting stuck in local minima when using stochastic gradient descent (SGD). To understand that, we make use of results bounding the number of critical points of normal random polynomials. In Section 4 we use the Manifold Learning hypothesis and statistics of natural images to understand the nature of high-dimensional image manifolds, which allows us to make our mathematical results stronger. This also provides insights into the second kind of adversarial examples, which are random-looking noisy images giving high-confidence outputs. In Section 5, we present results to solve the problem of adversarial examples. We first investigate the complexity of surfaces of image manifolds and apply empirical results from [15], along with the theoretical results in this paper, to understand the root causes of adversarial examples. Based on recent work in [21] on capsule networks, we consider the Parts-Whole Manifold Learning Hypothesis to understand the limitations of present-day deep learning, which shows the way toward elimination of adversarial examples and future improvements in deep learning and in the general field of AI.

2. Adversarial Examples
It has been shown that deep learning suffers from the problem of adversarial examples, see Figure 1. One can slightly perturb a sample in such a way that the output of the deep network changes. In the case of images, the perturbed image is visually indistinguishable from the original.
First let us define the notion of an image and that of a deep learning system.
Definition: An infinite resolution grayscale image is a function $I$ on the 2D unit square, $I : [0,1]^2 \to [0,1]$. A finite resolution version of the image is obtained by approximating $I$ on a grid of $n$ pixels. An image class is a set of infinite resolution images which belong to a semantic category.
In practice, we will have finite resolution versions of the image class, where the particular resolution is constrained by memory and computing power. As technology progresses, images will have increasing resolutions. This has significant impact on the mathematical analysis of deep learning, as will be shown in this paper.
Definition: A deep learning image classification system is a 4-tuple $(N, T, M, \hat{M})$ where:

$N$ is the deep neural network.

$T$ is the training data for the $C$ positive classes and the negative class. $T$ consists of images at some finite resolution $n$. The negative class contains sample images belonging to none of the positive classes.

$M = \{M_1, \ldots, M_C\}$ is the set of ground truth compact manifolds for the $C$ classes, where for all $i$, $M_i$ contains all images at resolution $n$ belonging to class $i$.

$\hat{M} = \{\hat{M}_1, \ldots, \hat{M}_C\}$ is the set of trained compact manifolds for the $C$ classes. Once $N$ has been trained using $T$, for all $i$, $\hat{M}_i$ is the trained manifold for class $i$, and the goal of the training is to make it approximate $M_i$ as closely as is practically possible.
The type of set is chosen to be a manifold rather than an arbitrary set to emphasize that we are dealing with semantically meaningful natural image classes. The idea of image manifolds has been found useful in computer vision, see [24, 16]. Given a natural image, the assumption that there is a locally Euclidean neighborhood around it in the set is a reasonable one [16]. The bounded dynamic range of an image class implies that the manifold is bounded; the greyness value of an image pixel cannot shoot off to infinity. And if there is a convergent sequence of images, then the limiting image should be included in the set too, making the manifold compact. Though the mathematical results can hold even for arbitrary sets, we will see in later sections that restricting to manifolds is conceptually helpful in understanding natural images.
Let $x$ and $\hat{x}$ be a randomly selected image and its adversarial example, respectively. Let $\hat{x} = x + p$, where $p$ is the perturbation image. We prove our first result, which shows why it becomes easier to create a visually indistinguishable adversarial example as the resolution of images increases: the expected value of the norm of the perturbation becomes very small for high-dimensional images.
It should be noted that the following results are applicable to any machine learning model, provided the conditions for the theorems are met. In classical machine learning too, which approximates ground truth manifolds by trained manifolds, as the dimensionality of input features increases, so will the problem of adversarial examples. Since deep learning takes raw, high-dimensional data as input, it is particularly relevant to deep neural networks.
We have defined an adversarial example in terms of any $x$ which gets positively classified. One could have focused only on correctly and positively classified samples. Since the underlying practical assumption is that we work with deep networks which yield high accuracy, we restrict to the mathematically simpler case of positively classified samples; a positively classified sample is very likely to be correctly classified in such high-accuracy networks. An adversarial example is one which makes the output of the deep network change with a visually insignificant perturbation.
Even if the trained manifolds are identical to the ground truth manifolds, note that there will always be images at the surface of these manifolds for which the output of the deep network changes on slight perturbation, and they will always have adversarial examples by this definition. But we expect this to be true only for borderline images and not for almost every image, see Figure 2. What would be truly intriguing is if we could mathematically prove that almost every image has this property of having an adversarial example. We will show that this indeed follows from intriguing properties of high-dimensional spaces.
2.1. Image Manifolds which are balls
Suppose one can perturb images very slightly to get adversarial examples, and suppose we bound this perturbation amount. An interesting question to ask is: given an image, how probable is it that one can perturb it within this bound and successfully get an adversarial example? For that, we first need the following lemma.
Lemma 1.
Consider any image classification problem at any finite resolution $n$. For any random sample $x$ which is classified positively by the deep network for one of the classes, let $\hat{x}$ be the closest adversarial example, where $\hat{x} \notin \hat{M}$, and let $\hat{x} = x + p$, where $p$ is the perturbation image. Denote the perturbation norm for $x$ by $\delta(x) = \|p\|$.

Further, assume that the trained manifold $\hat{M}$ for the image class which $x$ belongs to is topologically an $n$-ball of radius $r$.

Let $\epsilon r$, where $0 < \epsilon \le 1$, be the relative perturbation bound with respect to the radius $r$.

Then,

$$P(\delta(x) \le \epsilon r) = 1 - (1 - \epsilon)^n .$$
Proof.
Let the surface area of the ball $\hat{M}$ of radius $r$ be $S$. The closest adversarial example for $x$ will be just outside the surface of $\hat{M}$, and its distance from $x$ will be just above the shortest distance of $x$ from the surface of the ball.

For any given $\epsilon$ where $0 < \epsilon \le 1$, points in the ball will have adversarial examples within distance $\epsilon r$ if they are within the outermost spherical shell of width $\epsilon r$. We calculate the ratio of the volume contained in the outermost spherical shell to the total volume,

$$P(\delta(x) \le \epsilon r) = \frac{V(r) - V(r - \epsilon r)}{V(r)} = 1 - \frac{(r - \epsilon r)^n}{r^n} = 1 - (1 - \epsilon)^n .$$

If $x$ is at distance $d$ from the center, then $r - d$ is its distance to the surface, and $x$ lies in the outermost shell of width $\epsilon r$ exactly when $r - d \le \epsilon r$.
∎
We have a straightforward corollary.

Corollary 2.

For any $\epsilon$ with $0 < \epsilon \le 1$,

$$\lim_{n \to \infty} P(\delta(x) \le \epsilon r) = 1 .$$

Furthermore,

$$E[\delta(x)] = \frac{r}{n+1} .$$
See Figure 3. Note that we cannot yet conclude that the expectation of $\delta(x)$ tends to 0 as $n$ approaches $\infty$, since we do not know how the radius $r$ of the image manifold will increase with $n$. There are two cases.

For 8-bit images, pixel values lie in $[0, 255]$ and $\hat{M}$ is surrounded by a hypercube of diameter $255\sqrt{n}$. Thus, $r = O(\sqrt{n})$, and $E[\delta(x)] = O(1/\sqrt{n})$ shrinks to 0.

For idealized images with arbitrary real pixel values, $r$ is unbounded. See Section 4, where we show that $E[\delta(x)]$ does approach 0 for natural image manifolds.
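The shell-volume formula in Lemma 1 is easy to check by simulation. The following sketch (the sampler, seed and names are our own, not from the paper) draws uniform points in an $n$-ball and compares the empirical probability of lying within distance $\epsilon r$ of the surface against $1 - (1 - \epsilon)^n$:

```python
import numpy as np

# Monte Carlo check of the shell-volume formula: for x uniform in an n-ball of
# radius r, the distance to the surface is r - ||x||, and
# P(distance <= eps * r) = 1 - (1 - eps)^n.
rng = np.random.default_rng(0)

def sample_in_ball(num, n, r=1.0):
    """Uniform samples in the n-ball: random direction, radius ~ r * U**(1/n)."""
    d = rng.standard_normal((num, n))
    d /= np.linalg.norm(d, axis=1, keepdims=True)
    return r * d * rng.random((num, 1)) ** (1.0 / n)

eps, r = 0.1, 1.0
for n in (2, 10, 100):
    x = sample_in_ball(200_000, n, r)
    dist_to_surface = r - np.linalg.norm(x, axis=1)
    empirical = np.mean(dist_to_surface <= eps * r)
    theoretical = 1 - (1 - eps) ** n
    print(n, round(empirical, 4), round(theoretical, 4))
```

Already at $n = 100$, a perturbation budget of one tenth of the radius suffices for virtually every point.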
2.2. Image Manifolds with Arbitrary Geometries
Now we present the main result on adversarial examples. We first define the concept of the radius of a manifold.

Definition: Let $M$ be an $n$-dimensional bounded manifold with finite volume $V(M)$. Then, its radius $r(M)$ is the radius of the $n$-ball having the same volume $V(M)$.
The diameter of the manifold, which is the maximum pairwise distance, can be viewed as the dynamic range of the corresponding image class. We want to bound the perturbation image relative to the radius. See Figure 4.
Theorem 3.
Consider any image classification problem at infinite resolution, and consider any solution of this problem by a deep learning image classification system for a finite resolution $n$. For any random sample $x$ which is classified positively for one of the image classes, let $\hat{x}$ be the closest adversarial example, where $\hat{x} \notin \hat{M}$, and let $\hat{x} = x + p$, where $p$ is the perturbation image. Denote the perturbation norm for $x$ by $\delta(x) = \|p\|$. Denote the trained manifold for the image class for which $x$ was positively classified by $\hat{M}$, with radius $r = r(\hat{M})$.

Assume that all trained manifolds are $n$-dimensional sets. Let $\epsilon r$, where $0 < \epsilon \le 1$, be the perturbation bound.

Then, over all $x$,

$$P(\delta(x) \le \epsilon r) \ge 1 - (1 - \epsilon)^n ,$$

and

$$E[\delta(x)] \le \frac{r}{n+1} .$$
Proof.
This is the general case, when the manifolds are arbitrary. For a given finite volume, the $n$-ball minimizes the surface area and therefore maximizes the average distance of a point to the surface, which follows from the isoperimetric inequality [8]. The ball is the worst-case manifold for the proof: if the manifolds are all balls, then the theorem is proved by Lemma 1. For any other geometry, the probability of being close to the surface of the object will be strictly higher than in the case of the ball, and the average distance to the surface will be smaller.

Without loss of generality, consider the case when there is a single positive image class and a negative image class. Let the trained manifold be $\hat{M}$ in $\mathbb{R}^n$. Let $B$ be the $n$-ball such that

$$V(B) = V(\hat{M}) .$$

Then, by the isoperimetric inequality,

$$S(B) \le S(\hat{M}) ,$$

and for all $\epsilon$ with $0 < \epsilon \le 1$, writing $\delta_B$ for the perturbation norm when the manifold is $B$,

$$P(\delta(x) \le \epsilon r) \ge P(\delta_B(x) \le \epsilon r) = 1 - (1 - \epsilon)^n ,$$

and

$$E[\delta(x)] \le E[\delta_B(x)] = \frac{r}{n+1} .$$

Therefore, for $0 < \epsilon \le 1$,

$$\lim_{n \to \infty} P(\delta(x) \le \epsilon r) = 1 ,$$

and the limit follows for manifolds with arbitrary geometries.

The proof of the second part, for the expectation, follows using similar steps. ∎
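The isoperimetric step can also be illustrated numerically: among bodies of equal volume, the ball keeps a uniform point farthest, on average, from the surface. A sketch (the dimension, sample count and names are our choices) comparing the unit-volume cube with the ball of the same volume:

```python
import math
import numpy as np

rng = np.random.default_rng(1)
n = 10

# Ball of volume 1: radius from the volume of the unit n-ball, omega_n.
omega_n = math.pi ** (n / 2) / math.gamma(n / 2 + 1)
r_ball = omega_n ** (-1.0 / n)
# For a uniform point in the ball, E[distance to surface] = r / (n + 1).
mean_ball = r_ball / (n + 1)

# Cube [0,1]^n (also volume 1): distance to boundary is min_i min(x_i, 1 - x_i).
x = rng.random((200_000, n))
mean_cube = np.minimum(x, 1 - x).min(axis=1).mean()

print(mean_ball, mean_cube)  # the ball keeps points farther from its surface
```

Any non-ball geometry of the same volume concentrates more probability mass near its surface, which is what the proof exploits.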
For $b$-bit images, pixel values lie in a bounded range and $\hat{M}$ is surrounded by a hypercube of diameter of the order of $\sqrt{n}$. Thus, $r = O(\sqrt{n})$ and $E[\delta(x)] = O(1/\sqrt{n})$. Therefore we have the following theorem.
Theorem 4.
Let the pixel values be in a finite bounded range. Then,

$$\lim_{n \to \infty} E[\delta(x)] = 0 .$$
Note that for the proof to work, we implicitly require that each positive image manifold is surrounded completely by its complement, which will be the case if it is compact and the space is $\mathbb{R}^n$. For 8-bit images the total space is $[0, 255]^n$, and as long as the positive manifolds are surrounded by their respective complements, so that one can move out of the surfaces into the complement regions, the proof will hold.
2.3. Case of Linear Models
In Section 3 of reference [10], it is argued that even simple linear models can suffer from adversarial negatives; the problem in this argument has been independently highlighted earlier in [28]. The authors consider a simple linear model with activation $w^{T}x$. Let $\hat{x} = x + \eta$ be the adversarial negative, with $\eta$ the perturbation. See Figure 6.

Then,

$$w^{T}\hat{x} = w^{T}x + w^{T}\eta .$$

In [10], the max norm of the perturbation $\eta$ is constrained to a small value $\epsilon$, which is fixed to be the smallest greyness granularity, i.e., 1/255 of the dynamic range for 8-bit images. It is first argued that if $m$ is the average magnitude of the coefficients of the weight vector $w$ and we choose $\eta$ to be closely aligned with $w$, then for $n$ dimensions, $w^{T}\eta$ will grow with $n$ (as $\epsilon m n$) to reach any target, which will push the point to the other side of the decision hyperplane. We want $w^{T}\eta$ to reach an activation level equal to the negative of $w^{T}x$. It is argued in [10] that this is possible with increasing $n$. However, with increasing $n$, the target also increases, because $w$ and $x$ increase in their magnitudes as they have more elements in higher-dimensional spaces. Therefore, if we keep the norm of the perturbation constrained as in [10] to some small $\epsilon$, we cannot generate adversarial examples using this argument for the linear model. In order to generate adversarial examples, we will have to set the norm constraint to a higher value, which will depend on the distance of $x$ from the decision boundary. But then there is no upper bound on the max norm, due to the fact that there is no upper bound on the distance of a feature vector to the decision hyperplane. Even if we constrain the feature value range to be finite, such as $[0, 255]$ for 8-bit images, the distances can still be fairly large in the general case. Therefore, the generalization of the argument in [10] to deep networks is invalid, and we can state the following.
Theorem 5.
The linear model does not suffer from adversarial examples under a norm constraint $\|\eta\|_{\infty} \le \epsilon$ for any fixed $\epsilon > 0$.
Proof.
The proof follows from the fact that for any resolution $n$, the distance of a sample from the decision hyperplane is unbounded,

$$\sup_{x \in \mathbb{R}_{+}^{n}} d(x, \{y : w^{T}y = 0\}) = \infty ,$$

where $\mathbb{R}_{+}^{n}$ is the set of non-negative reals. ∎
At the same time, Theorem 3 in this paper holds for all bounded manifolds, because image manifolds are assumed to be bounded and surrounded by their complement. As $n$ increases, the surface grows rapidly, with more volume close to it, thereby bringing points closer to some surface where adversarial examples exist. The size of the manifold also increases, and therefore the perturbation bound in Theorem 3 is expressed relative to this size. The size of the manifold does not increase that fast, and therefore perturbations become smaller even in an absolute sense. This will also hold if the manifold is bounded on all sides by hyperplanes as a special case, as shown in Figure 6. Deep networks perform this piecewise linear approximation of functions with ReLU activation, see [17], so our results apply to deep networks as a special case.

3. Stochastic Gradient Descent and Optimization Landscape
Let us consider the question of why deep learning works well. Why has there been success in training deep networks? One of the reasons is that training does not get stuck in local minima, see [7]. Neural networks of smaller sizes do get trapped in local minima, but this seems to be unlikely in the case of deep networks. It has been empirically shown that most critical points encountered during stochastic gradient descent are saddle points, as they are far more numerous than local minima; see [4, 7] for these results and for how random matrix theory can be applied to investigate this.
3.1. Optimization Landscape Polynomials
We adopt a related but different approach and use results from the theory of random polynomials, which give explicit bounds on the number of critical points. To understand this well, we will bound the number of saddle points and local minima by approximating loss surfaces, also referred to as optimization landscapes, with polynomials. For polynomial approximation of deep networks, see [14, 20], where ReLU is approximated by a polynomial and results on the singularity of the Hessian are derived for overparameterized systems.
The key observation is that all the operations used in deep learning are continuous, even though they may not be differentiable.
Theorem 6.
The optimization landscape of a deep learning network can be approximated by a polynomial.
Proof.
The proof follows from the Stone-Weierstrass approximation theorem, which states that any continuous function on a compact set can be approximated arbitrarily well by a polynomial [25]. ∎
The continuity of these functions is obvious for convolution layers, fully connected layers and differentiable loss functions. It is also true of nonlinear layers such as ReLU and max pooling: though they are not differentiable, they are continuous, as their graphs have no breaks for continuous inputs. ReLU is piecewise linear. To see immediately how a ReLU activation layer can be approximated by a polynomial, notice that it can be written as

$$\mathrm{ReLU}(x) = \max(x, 0) = \frac{x + |x|}{2}$$

in terms of the absolute value function, which is known to have a polynomial approximation. Max pooling is continuous for continuous inputs, and smooth versions of max are well known. Notice that max can be expressed as

$$\max(a, b) = \frac{a + b + |a - b|}{2} .$$
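The two identities, together with a polynomial fit of $|x|$, can be sketched as follows (the fit degree and interval are our illustrative choices):

```python
import numpy as np

relu = lambda x: (x + np.abs(x)) / 2             # ReLU(x) = (x + |x|)/2
pmax = lambda a, b: (a + b + np.abs(a - b)) / 2  # max(a,b) = (a+b+|a-b|)/2

# Least-squares polynomial approximation of |x| on [-1, 1], which in turn
# yields a polynomial surrogate for ReLU via the identity above.
xs = np.linspace(-1, 1, 1001)
coeffs = np.polyfit(xs, np.abs(xs), deg=10)
poly_relu = (xs + np.polyval(coeffs, xs)) / 2

max_err = np.max(np.abs(poly_relu - relu(xs)))
print(max_err)  # small uniform error on [-1, 1]
```

Increasing the degree of the fit shrinks the error, which is the mechanism behind the Stone-Weierstrass argument above.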
In fact, we have two kinds of polynomials:

Optimization Landscape Polynomials. We fix the image pixel values; the neural network parameters are the variables.

Image Polynomials. We fix the neural network parameters; the image pixel values are the variables.
In both cases, we will have a very large polynomial. The degree of the polynomial goes up as the number of layers in the network increases, and the number of variables goes up as the number of network parameters and the image resolution increase.
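The degree growth with depth can be seen in a toy one-variable sketch in which ReLU is replaced by a degree-2 polynomial activation (both the activation and the depth are our illustrative assumptions): each layer composes into the previous one, so the degree doubles per layer.

```python
from numpy.polynomial import Polynomial

act = Polynomial([0, 0, 1])     # stand-in activation p(u) = u**2

f = Polynomial([0, 1])          # start with the identity f(x) = x
for _ in range(4):              # 4 layers: affine map, then activation
    affine = 0.5 * f + 0.1      # affine map keeps the degree
    f = act(affine)             # composition multiplies the degrees

print(f.degree())               # 2**4 = 16
```

With higher-degree activation surrogates and multivariate inputs, the same compounding produces the very large polynomials discussed above.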
In this section, we are concerned with optimization landscape polynomials. In Section 4, we will discuss image polynomials.
What can we say about the number of saddle points and local minima of deep network polynomials? Since deep learning is a practical field, one has to make assumptions based on empirically derived statistics of the coefficients of the polynomials for particular applications; one has to endow the space of deep network polynomials with a probability measure in order to derive mathematical results. We make use of results from the theory of random polynomials endowed with a Gaussian probability measure, with certain assumptions on the variances, see [6, 5], which provide evidence for why deep learning works well in practice.

Theorem 7 (Critical points of random polynomials, see [6]).

Let $C(d,n)$ denote the expected number of critical points of a random polynomial of degree at most $d$ in $n$ variables, and $M(d,n)$ the expected number of minima. Let $p(n) = M(d,n)/C(d,n)$ be the probability that a critical point is a local minimum. Then,

$$\lim_{n \to \infty} p(n) = 0 ,$$

and

$$p(n) \le e^{-c n^{2}}$$

for some positive constant $c$.
Corollary 8.
For some positive constant $c$,

$$M(d,n) \le e^{-c n^{2}} \, C(d,n) .$$
For the proof, see [6, 5]. This result shows that large random polynomials have mostly saddle points, and local minima become increasingly rare. Assuming this result holds for most practical problems solved by deep learning, it implies that the number of local minima becomes arbitrarily small compared to the number of saddle points as the resolution of images and the size and depth of deep networks increase. The probability that all eigenvalues of the Hessian matrix of these polynomials are positive falls off very rapidly [5].
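This eigenvalue claim is easy to probe by simulation under the Gaussian (GOE-like) Hessian model assumed in [6, 5]; the sketch below (matrix sizes, trial count and names are ours) estimates how rarely a random symmetric matrix is positive definite:

```python
import numpy as np

rng = np.random.default_rng(2)

def frac_positive_definite(n, trials=4000):
    """Fraction of random symmetric Gaussian matrices with all eigenvalues > 0."""
    count = 0
    for _ in range(trials):
        a = rng.standard_normal((n, n))
        h = (a + a.T) / np.sqrt(2)       # GOE-like symmetric matrix
        count += np.all(np.linalg.eigvalsh(h) > 0)
    return count / trials

fracs = [frac_positive_definite(n) for n in (2, 4, 6)]
print(fracs)  # falls off rapidly with n: minima become rare among critical points
```

Even at these tiny dimensions the drop is steep; at the dimensions of real networks, a positive-definite Hessian at a random critical point is essentially never observed under this model.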
Under the assumption that the loss surfaces of deep neural networks are normal random polynomials as in [6, 5], we can state the following.
Proposition 1.
Let $\rho(s)$ denote the ratio of the expected number of minima to the expected number of critical points of the optimization landscape of a deep neural network of size $s$. Then,

$$\lim_{s \to \infty} \rho(s) = 0 .$$
3.2. Optimization on Loss Surfaces
It has been shown in [3, 4, 7], both theoretically under certain assumptions and empirically, that local minima are concentrated towards the bottom of the optimization landscape. Therefore, during SGD we are likely to encounter local minima only towards the end of training, where they correspond to good enough solutions to practical problems. Earlier in the training, it is quite likely that any slowing down is occurring because of a saddle point.
The above results also indicate why SGD and various tricks and techniques work in deep learning. Assume that, for a particular minibatch of training images, the deep network is at a local minimum or a saddle point. As new minibatches arrive with their varying image statistics, the optimization landscape changes, and it becomes less likely that we will remain at the same critical point and more likely that we will escape from it, despite a slowdown in training. This provides justification for stochastic algorithms.
One can also ask why empirically tested techniques such as ReLU activation, batch normalization and dropout have been effective in making training faster and more robust. Consider ReLU, in which a dead neuron can become alive or vice versa, thereby having ripple effects on the neurons it is connected with in higher layers and altering the optimization landscape significantly. This would probabilistically help SGD escape from a critical point. Let the index of a critical point be the fraction of negative eigenvalues of the Hessian matrix. Any empirically discovered technique which results in more than a 50% chance of a reduction in the index will help the training; the higher the chance, the more effective it will be. These techniques perturb the optimization landscape in a statistically beneficial way, and therefore they help make training faster with better solutions.

Proposition 2.

Any technique which speeds up escape from a critical point will speed up deep learning training. Furthermore, any technique which speeds up reduction of the Hessian index will speed up deep learning training.
4. Statistics of Natural Images, Adversarial Examples and Manifold Learning Hypothesis
Natural images have their own particular statistical characteristics. We now derive stronger results for them.
4.1. Perturbation Adversarial Examples
It is well known that natural signals follow a $1/f^{\beta}$ power spectrum process. For natural images, which are 2D signals, $\beta$ is around 2 [23]: the power in different 2D frequencies in natural images is inversely proportional to the square of the frequency. Let us formulate this in terms of a discrete wavelet multiresolution representation of natural images. A 2D image consists of a low-frequency version of the image combined with high frequencies; by adding higher frequencies, the image resolution is doubled. Wavelets decompose the image into frequency subbands: the LL subband corresponds to low frequencies, and the LH, HL and HH subbands to high frequencies of different orientations, see [26] and Figure 7. LH frequencies are the cross-product of 1D low frequencies in the vertical direction and 1D high frequencies in the horizontal direction; HL is the other way round; HH contains high frequencies in both directions. The norm of the image therefore increases as the resolution is doubled. It increases primarily because there are more pixels in the image. Besides the upsampling of existing low frequencies, it increases due to the addition of higher frequencies in the next octave, but this increase follows a decreasing geometric sequence, as per the $1/f^{2}$ process for LH and HL frequencies and the $1/f^{4}$ process for HH frequencies.
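The $1/f^{2}$ power law itself can be sketched with a synthetic random-phase image whose amplitude spectrum falls as $1/f$ (the grid size and frequency bands are our illustrative choices); doubling the frequency should quarter the mean power:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 256
fy = np.fft.fftfreq(n)[:, None]
fx = np.fft.fftfreq(n)[None, :]
f = np.hypot(fy, fx)                         # radial 2D frequency
f[0, 0] = 1.0                                # avoid division by zero at DC

amp = 1.0 / f                                # amplitude ~ 1/f, so power ~ 1/f^2
phase = np.exp(2j * np.pi * rng.random((n, n)))
power = np.abs(amp * phase) ** 2

band_mean = lambda lo, hi: power[(f >= lo) & (f < hi)].mean()
ratio = band_mean(0.05, 0.1) / band_mean(0.1, 0.2)
print(ratio)  # close to 4: doubling frequency quarters the power
```

The inverse FFT of such a spectrum yields cloud-like textures with roughly natural-image statistics, which is why this power law is a reasonable model for the analysis below.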
This allows us to bound the radius of the manifolds as the resolution increases. The underlying assumption is that we are working with high-accuracy deep neural networks, so the trained and ground truth manifolds have approximately the same radii; in fact, it suffices for the proof that they are of the same order of magnitude.
Theorem 9.
For a semantic class of natural images, let $M_n$ be the ground truth image manifold in the space $\mathbb{R}^n$ for finite resolution $n$, where $n = 2^{k} \times 2^{k}$, and let the images at different resolutions follow a multiresolution orthonormal wavelet representation which obeys the $1/f^{2}$ and $1/f^{4}$ power spectrum processes. Let $\hat{M}_n$ be the trained manifold and assume $r(\hat{M}_n) \approx r(M_n)$. Then,

$$r(\hat{M}_n) = O(\sqrt{n}) .$$
Proof.
Consider a $2^{k} \times 2^{k}$ image with power (energy per pixel) $P_{LL}$ in its low frequencies and $P_{LH}$, $P_{HL}$ and $P_{HH}$ in its high frequencies. Consider its next higher resolution, $2^{k+1} \times 2^{k+1}$. The power will now be $P_{LL} + P_{LH} + P_{HL} + P_{HH}$ in the low frequencies, while the new high-frequency octave contributes power reduced by constant factors, as per the $1/f^{2}$ and $1/f^{4}$ processes. In the limit, the added power per octave forms a convergent geometric series, so the total power per pixel is bounded by a constant, and the energy of an $n$-pixel image is bounded by $cn$ for some constant $c$. The orthonormal property ensures that the energies in image space and in frequency space are the same. Therefore the norm of any image will be bounded,

$$\|x\| \le c' \sqrt{n}$$

for some constant $c'$. Therefore,

$$r(M_n) = O(\sqrt{n}) ,$$

and since the radius of $\hat{M}_n$ is approximately very close to that of $M_n$,

$$r(\hat{M}_n) = O(\sqrt{n}) .$$
∎
For natural images, we can now improve the result on perturbation adversarial examples which we derived in Theorem 3.

Theorem 10.

For natural image classes satisfying the assumptions of Theorem 9,

$$\lim_{n \to \infty} E[\delta(x)] = 0 .$$

Proof.
The proof is obvious for $n$-balls, using Lemma 1 and Theorem 9, since $E[\delta(x)] \le r/(n+1) = O(1/\sqrt{n})$. For manifolds with arbitrary geometries, balls are the worst-case scenario by the same arguments as in Theorem 3: for arbitrary manifolds, more volume is concentrated near the surface than for balls of the same volume, as per the isoperimetric inequality, see [8]. ∎
4.2. Unrecognizable Adversarial Examples
Besides power spectrum properties, we can apply the Manifold Learning Hypothesis to understand the geometry of image manifolds. The Manifold Hypothesis states that most natural image classes at large enough resolution $n$ form manifolds which are embedded in a topological subspace of dimensionality $k \ll n$, see [9, 16]. We can consider this subspace of much lower dimensionality as a pose space, where each point corresponds to a pose of the image as determined by some pose parameters. See Figure 8.

In addition to the adversarial examples discussed in Theorems 3 and 10, it has been shown that it is easy to generate artificial images, some of which visually look like random noise and are unrecognizable by humans, for which the deep network returns a positive class with high probability. See Figure 1. One starts with some completely random image, which could be just pure noise, and performs gradient ascent on the image pixels to maximize the output probability for some class. Soon the algorithm converges to a fake, unrecognizable image with a high enough output probability.

This is easy to understand if there is no negative class and there are only positive classes: discriminative-loss-based training does not care what happens to the negative space. If there is a negative class, then we need to understand the phenomenon better, which we can do using the Manifold Hypothesis and Theorem 7. As per the Manifold Hypothesis, the manifolds of positive image classes occupy very small volume in high-dimensional spaces compared with the complement corresponding to the negative class.
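The fooling procedure can be sketched on a stand-in model (a random sigmoid-linear classifier, not a real trained network; all names and constants below are ours): gradient ascent on the pixels of a pure-noise image drives the class probability up while the image stays noise-like.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 64 * 64                                # a 64x64 "image" flattened
w = rng.standard_normal(n) / np.sqrt(n)    # random stand-in classifier weights

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
prob = lambda x: sigmoid(w @ x)            # class probability of the model

x = rng.random(n)                          # start from pure noise
p_start = prob(x)
for _ in range(200):
    p = prob(x)
    grad = p * (1 - p) * w                 # d prob / d pixels for this model
    x = np.clip(x + 5.0 * grad, 0, 1)      # ascend, staying in pixel range

print(p_start, prob(x))                    # confidence rises; x is still noise
```

Against a real deep network the gradient comes from backpropagation rather than a closed form, but the mechanism (ascent through poorly constrained negative space) is the same.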
Proposition 3.
Consider 8-bit grayscale images. Assume the Manifold Learning Hypothesis is true. Let image manifolds be $k$-dimensional topological objects in the $n$-dimensional space $[0, 255]^{n}$, with $k \ll n$, and let the manifolds have finite $k$-dimensional volume. Then, as $n$ becomes arbitrarily large, the volume of the positive image manifolds becomes arbitrarily small compared to that of the surrounding negative space.
To prove the above proposition for a simple case, consider a resolution $n$ for 8-bit images. Let $U_n$ be the number of all possible images in the universe, and let $C_n$ be the number of images in a particular semantic class at this resolution. Volume computation reduces to counting the number of images in the discrete domain. Consider the next resolution, $4n$. For each pixel of each image at resolution $n$, we have 3 new pixels (in general, 3 degrees of freedom) as it is upsampled to a $2 \times 2$ region. There are $256^{3}$ possible choices of values for these pixels. Assume that for each pixel of any image in the class, there are only $c \ll 256^{3}$ choices for these 3 pixels, due to semantic constraints which force neighboring pixels to have strong correlation with each other in natural images. Then,

$$\frac{U_{4n}}{C_{4n}} \ge \frac{U_n}{C_n} \left( \frac{256^{3}}{c} \right)^{n} ,$$

and $U_n$ will become arbitrarily large compared with $C_n$ as $n$ increases. In fact, because of the $1/f^{2}$ process, $c$ will become smaller with $n$, though for the proof we just need $c < 256^{3}$.
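The counting argument reduces to simple arithmetic. In the sketch below, the value of $c$ is an arbitrary illustrative assumption (any $c < 256^{3}$ works); it computes the $\log_{10}$ of the factor by which the class fraction shrinks after one upsampling step of an $n$-pixel image:

```python
import math

c = 10 ** 5        # assumed number of semantically valid choices per 2x2 step
total = 256 ** 3   # all possible choices for the 3 new pixels

# log10 of (C_{4n}/U_{4n}) / (C_n/U_n): how much the class fraction shrinks.
shrink_log10 = lambda n: n * (math.log10(c) - math.log10(total))

for n in (16, 256, 4096):          # 4x4, 16x16 and 64x64 images
    print(n, shrink_log10(n))      # ever more negative: the fraction collapses
```

Even for a generous $c$, a single doubling of resolution shrinks the class's share of image space by hundreds or thousands of orders of magnitude.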
An object with finite $k$-dimensional measure will have zero $n$-dimensional measure if $k < n$. And as $n$ tends to $\infty$, it is not possible to have enough training data for the negative class, due to this curse of dimensionality. In order to get enough training data for the negative class, we would have to sample points from a volume just outside the surface of the manifold. This volume is an $n$-dimensional space surrounding a $k$-dimensional object. Even if the manifold were $n$-dimensional, since most of the volume is concentrated near the surface, this volume for negative samples would be very large, see Theorem 11 in a later section.

Consider the case when we are not able to completely surround the manifold with negative training samples, and there are some gaps. Training of deep networks does not guarantee anything in large spaces which are not covered by the training data. Therefore, depending upon the local geometry of the manifold, the training may not close these gaps under a discriminative loss function. Fake, unrecognizable adversarial examples will then be found in the negative space through such a gap. See Figure 9, which shows why, under discriminative loss, the deep network may not invest in neurons to close such gaps, thereby creating a fake adversarial space.
At the same time, the number of critical points increases as the underlying Image Polynomials become larger, see Theorem 7. The goal of training is to approximate the characteristic functions of the image manifolds (probability value 1 inside an image manifold and 0 elsewhere). The final Image Polynomials for different classes approximate these functions under discriminative loss optimization. Note that in Image Polynomials, the variables are image pixels and the deep network parameters contribute to the coefficients of the polynomials. Therefore it becomes easier to find these random looking adversarial samples from the negative space, which has much larger volume, using a gradient ascent algorithm. See Figure 10.

Proposition 4.
Fake unrecognizable adversarial examples correspond to critical points of Image Polynomials in an increasingly large negative space; they become more and more numerous with increasing resolution of images and increasing size of networks trained under a discriminative loss function, as given by Theorem 7 and Theorem 6.
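As a toy illustration of this mechanism, the following sketch performs gradient ascent on a class score starting from random noise. The "network" here is only a logistic model on a random weight vector, an assumption for illustration rather than a trained deep network; fooling images as in [18] apply the same idea to the gradient of a real network's class score:

```python
# Minimal sketch (toy logistic "class score", not a real deep network) of
# gradient ascent in input space finding a high-confidence point in the
# negative space, starting from near-random noise.
import numpy as np

rng = np.random.default_rng(0)
n = 28 * 28                       # input dimensionality
w = rng.normal(size=n)            # stand-in for a trained class-score direction

def score(x):
    """Logistic 'probability of class' for the toy model."""
    return 1.0 / (1.0 + np.exp(-(w @ x)))

x = rng.normal(scale=0.01, size=n)   # start from near-random noise
for _ in range(200):                 # gradient ascent on the class score
    p = score(x)
    grad = p * (1 - p) * w           # d score / d x for the logistic model
    x = np.clip(x + 0.5 * grad, -1.0, 1.0)

print(f"class confidence on noise-derived input: {score(x):.3f}")
```

Even though the final input remains unrecognizable noise, the class score is driven arbitrarily close to 1, mirroring how such examples are found in the large negative volume.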
Another application of the Manifold Hypothesis is in providing the following justification for deep learning. In deep learning, feature engineering is done automatically. A deep neural network can be understood as a feature extractor followed by a classifier. The first part transforms the input of dimensionality $n$ to a feature of dimensionality $d$, which ideally should correspond to the subspace dimensionality of the image manifold, and this feature extractor is then used for subsequent classification (or regression) tasks.
5. Eliminating Adversarial Examples and Discussion
How can we eliminate adversarial examples? Is it a curse of dimensionality that image classes will always have a border and every sample eventually happens to be close to the border in high dimensions? In this section, we dive into the problem, which also suggests a way for future improvements of deep learning.
5.1. Training Data
A natural idea to mitigate the problem of adversarial examples is to include them in the training data. However, no matter how many of these cases you include, you cannot eliminate the problem, due to Theorems 3 and 10.
As long as there is a surface, the problem persists. One can question whether the concept of a hard borderline for ground truth manifolds is robust. Take an image of a cat and start modifying it. When does it stop being a cat? It could very well be a subjective opinion. One can consider dilating the manifold by including all those images which are visually somewhat close to cat images and to which human judgment assigns a probability less than 1 of being a cat. This creates a halo around the manifold. Hopefully this will mitigate the problem of adversarial examples when we perturb only those images for which the ground truth probability is 1. If the original ground truth manifold is a subset of the trained manifold obtained by training on the expanded manifold, then there won't be any adversarial examples as per the original ground truth.
Let's quantify the need for additional training data for this purpose, some of which can be created using data augmentation techniques. Define the dilation of a ground truth manifold $M$ by a ball of radius $\epsilon$ to be the set
$$D(M, \epsilon) = \bigcup_{x \in M} B(x, \epsilon),$$
where $B(x, \epsilon)$ is the ball centered at $x$ of radius $\epsilon$.
What should be the value of the dilation $\epsilon$? That could be dependent on the image class and its radius $R$. We will consider two cases. The first case is optimistic and we use the bound from Lemma 1. The second case is pessimistic, where we estimate $\epsilon$ to be a fixed fraction of $R$.

Theorem 11.
Let $B$ be an $n$-ball of radius $R$. Dilate $B$ by an $n$-ball of radius $\epsilon$; note that $D(B, \epsilon) = B(R + \epsilon)$. Let $V = \mathrm{vol}(D(B, \epsilon)) / \mathrm{vol}(B)$. Then,

1. If $\epsilon = R/n$, then $V = (1 + 1/n)^n$, which is bounded above by $e$.

2. If $\epsilon = cR$ for a fixed fraction $c > 0$, then $V = (1 + c)^n$, which grows exponentially with $n$.
Proof.
The proof follows from the computation of volumes using integration as in Lemma 1. Dilating a ball of radius $R$ by $\epsilon$ yields the ball of radius $R + \epsilon$, so evaluate
$$V = \frac{\mathrm{vol}(B(R+\epsilon))}{\mathrm{vol}(B(R))} = \left(1 + \frac{\epsilon}{R}\right)^n,$$
which is $(1 + 1/n)^n \le e$ in the first case and $(1 + c)^n$ in the second. ∎
Thus in the worst case, we may need much more training data, which may be practically infeasible.
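A quick numeric check of the two cases in Theorem 11, under the assumption that dilating a ball of radius $R$ by $\epsilon$ yields the ball of radius $R + \epsilon$:

```python
# Sketch of the volume ratio in Theorem 11: vol ratio = (1 + eps/R)**n.
import math

def dilation_ratio(n, eps_over_R):
    """Extra-volume factor of the dilated n-ball over the original ball."""
    return (1.0 + eps_over_R) ** n

n = 10_000
print(dilation_ratio(n, 1.0 / n))   # optimistic eps = R/n: stays below e
print(dilation_ratio(100, 0.1))     # pessimistic eps = 0.1 R: explodes with n
```

The optimistic case asks for only a small constant factor of extra training data, while the pessimistic fixed-fraction case demands an exponentially large factor, which is why it is practically infeasible.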
The concept of a borderline halo makes us consider new definitions of what the ground truth is and what the test error is. We can define the dilated test error of a deep network to be 0 on a test image if the image lies in $D(M_t, \epsilon)$, where $M_t$ is the trained manifold. And by defining the dilation $D(M, \epsilon)$ of the ground truth manifold $M$ as the dilated ground truth, we can develop intuition behind the statement that almost everything is at the surface. What is really true then is that almost everything is an augmented and borderline image.
The above theorem can also be applied to the case of fake unrecognizable adversarial images, as it quantifies the need for training data for negative samples which surround the manifold in a similar fashion.
In high dimensions, training data of any practical size will be sparse. And with more diversity in images due to perturbations, we will need larger capacity networks. This limits the practical applicability of this solution, which attempts to create a distance from the surface of the manifold. The simplistic brute force manifold dilation is a subjective reinterpretation of the ground truth, and the need for larger training data is independent of that. One can consider more sophisticated methods to get additional training data, but one has to ask whether the neural networks will be able to use it effectively. Training data always helps, but the roots of the problem may be deeper than just a lack of enough training data, see Figure 11.
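The sparsity claim can be checked directly. The following toy experiment (assuming uniformly distributed data in the unit cube, purely for illustration) shows nearest-neighbor distances growing with dimension even for a fixed sample budget:

```python
# Toy illustration of training-set sparsity in high dimensions:
# with uniform samples in the unit cube, nearest-neighbor gaps grow with n.
import numpy as np

rng = np.random.default_rng(3)
for n in (2, 20, 200):
    pts = rng.random((2000, n))                       # 2000 "training" samples
    dist = np.linalg.norm(pts[0] - pts[1:], axis=1)   # distances to one sample
    print(f"n={n:4d}: nearest neighbor at distance {dist.min():.3f}")
```

At n = 2 the nearest neighbor is very close; at n = 200 even 2000 samples leave every point far from all others, so no practical dataset densely covers the space.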
5.2. Surfaces of Image Manifolds
So far, the assumption was made that we are interested in ever increasing dimensionality $n$, as in Theorems 3 and 10. But the Manifold Hypothesis indicates that for an image class, there exists a finite topological dimension $d$ of the underlying manifold. So the assumption of arbitrarily large dimensionality is wrong, and therefore in principle we should be able to eliminate adversarial examples, as we are not interested in the limiting case in the theorems, but in the perturbation bound for the finite case.
Even though $d$ may be finite, the image manifolds have complex geometries, see Figure 11. Though $d \ll n$, the $d$-dimensional image manifold can still twist and turn around in the whole $n$-dimensional space, very much like a fractal curve does. A fractal curve can turn around so much that it results in an everywhere continuous curve which is nowhere differentiable, or even fills up the entire space. We will build visual intuition behind the complexity of manifold surfaces.
In fact, objects with sharp boundaries belong to manifolds which are nondifferentiable everywhere, see [29].
To build further intuition behind the complexity of manifold surfaces, consider the multiresolution family of manifolds for an image class and how one obtains the manifold at resolution $2r \times 2r$ from the one at $r \times r$ by image zooming. For an image at resolution $r \times r$, the corresponding images at $2r \times 2r$ are obtained by following the stochastic $1/f$ process in the frequency domain and self-similarity in the space domain. If an image is close to the surface at resolution $r \times r$, we can expect that this upsampling will result in roughness in the surface of the higher-resolution manifold. As we add details to blades of grass, the fur of a cat or the outline of clouds, while obeying the characteristics of natural images, those properties will manifest in terms of mathematical properties of the surfaces of the manifolds. See Figure 12. Note that here we are zooming into manifolds in a dimensional sense, where the underlying dimension of the spaces becomes bigger, from $r^2$ to $4r^2$. This is different from the standard 2D image zoom, in which the dimension of the space remains fixed.
Finally, fix the dimension and make the object change its pose. If the object is complex, with several degrees of freedom in its transformations, that will reveal itself in twists and turns of the manifold.
Now we will use some mathematical concepts to make the above intuition rigorous.
The Minkowski-Bouligand dimension of a set $S$ is defined as
$$\dim_{\mathrm{box}}(S) = \lim_{\epsilon \to 0} \frac{\log N(\epsilon)}{\log(1/\epsilon)}$$
by computing the number $N(\epsilon)$ of boxes of side length $\epsilon$ required to cover $S$, and can be viewed as a way to compute the fractal dimension of $S$. This can be interpreted as $N(\epsilon) \sim \epsilon^{-D}$, where $D$ is the fractal dimension. Consider a ball $B$ of radius $r$ and let the volume of the set contained inside $B$ be $V(r)$. Then, we can interpret the above as $V(r) \sim r^{D}$, where $D$ is the expansion dimension. Therefore, using two different radii $r_1$ and $r_2$, one can compute $D$ as
$$D = \frac{\log V(r_2) - \log V(r_1)}{\log r_2 - \log r_1}.$$
One can generalize this to probability distributions and make it local at a point $x$, see [2, 15]. One considers the probability distribution of distances of points from $x$ in a local neighborhood of $x$ and takes the probability mass as analogous to volume. Then $D$ can be used as a measure of Local Intrinsic Dimensionality (LID), see [2, 15]. If the surface of the image manifolds is more complex than the interior, then we expect LID to be higher on the surface. It has been empirically determined in [15] that the LID of an adversarial example $x + \delta$, for some perturbation $\delta$, is significantly higher than for a point $x$ inside the manifold. This empirically shows that the statistics of natural images lead to complexity of the surfaces of image manifolds, see Figure 13. If the dimension also happens to be fractional, then it will indicate roughness in the fractal sense. A significant increase in dimension means that geometrically some fundamental changes are occurring on the surface. As the local dimension moves from $d$ towards $n$, the manifold starts topologically filling up the space. It is very likely that LID goes through fractional values in this transition, which makes one generalize manifolds to arbitrary fractal sets. Even if LID were always integral, the surfaces are geometrically complex. Exact characterization of the surfaces of image manifolds remains an open problem.
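The following is a sketch of a maximum-likelihood LID estimate of the kind used in [2, 15], applied to a toy dataset; the flat-subspace construction and all parameters are assumptions for illustration, not the experimental setup of those papers:

```python
# Toy LID estimation: points sampled from a d-dimensional subspace embedded
# in an n-dimensional ambient space; the estimator should recover d locally.
import numpy as np

def lid_mle(x, data, k=100):
    """Maximum-likelihood LID at x from its k nearest neighbors:
    LID ~ -1 / mean_i log(r_i / r_k)."""
    dist = np.linalg.norm(data - x, axis=1)
    r = np.sort(dist)[1:k + 1]        # drop the zero distance to x itself
    return -1.0 / np.mean(np.log(r / r[-1]))

rng = np.random.default_rng(1)
n, d, m = 50, 5, 20000                # ambient dim, manifold dim, samples
basis = np.linalg.qr(rng.normal(size=(n, d)))[0]   # random d-dim subspace
data = rng.normal(size=(m, d)) @ basis.T           # points on a flat "manifold"

est = lid_mle(data[0], data)
print(f"estimated LID ~ {est:.1f} (true manifold dimension {d})")
```

On a smooth region of a manifold the estimate stays near the intrinsic dimension $d$; on complex surfaces, and at adversarial points, [15] observes markedly larger values.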
Theorems 3 and 10 indicate that we have to worry about the surfaces of manifolds, as almost everything is close to the surface, and therefore the root cause of adversarial examples seems to be the complexity of surfaces. Theorems 3 and 10 in this paper, along with the empirical results in [15], explain why adversarial examples exist if the complex surfaces cannot be carved out by deep neural networks. We now state our proposition for future work in deep learning, which is self-evident.
Proposition 5.
A machine learning system which can carve out complex nondifferentiable manifolds in high-dimensional spaces, approximating very closely the ground truth manifolds, will rarely suffer from adversarial examples. If the trained manifold is identical to the ground truth manifold, then there will be no adversarial examples.
An adversarial example indicates a failure of the network to generalize, and therefore reducing the generalization error to 0 eliminates them by definition. Why are we not able to accurately approximate surfaces with deep networks despite their great success? Deep networks do perform piecewise linear approximation of functions much better than shallow networks due to their depth, see [17], but this can be improved further. Theoretically, one cannot exactly carve out nowhere differentiable manifolds with finitely many neurons. Even if we were to consider very close approximation, the surfaces of the manifolds are still too complex, and piecewise linear approximation by neural networks with ReLU activation of sizes which are practically feasible at present is not sufficient, as evidenced by adversarial examples. The choice of ReLU as activation function is not important in this observation, and the problem will persist irrespective of the choice.
In deep networks, we have the following difficulties if we want to overcome the problem of adversarial examples.

The first problem we face is the explosion of the size of training data needed in order to include all possible poses and variations of objects. Very large training datasets will be needed by present-day deep learning.

Even if we can overcome the practical difficulty of getting training data covering all possible poses and variations, to carve out an accurate manifold, we need very large deep networks as the convolutional and max pooling layers provide only limited translation invariance. This is an inherent inefficiency in deep networks in approximating ground truth manifolds [21].

We employ discriminative approaches rather than generative approaches for classification. Therefore, the goal is to separate out classes based on the training data rather than understanding their poses. There is no concept of poses and other generative parameters.

Though deep learning makes use of the hierarchical nature of natural images and learns features from low level to high level, there is no explicit way of implementing a parts-whole hierarchy.
This makes us suspect that adversarial examples may be the result of the above shortcomings of present-day deep networks.
5.3. Parts-Whole Manifold Learning Hypothesis
How can we potentially do better? In Section 4, we discussed the Manifold Learning Hypothesis. We generalize the hypothesis, inspired by deep learning and by the recently proposed Capsule Networks [21], to include another important feature of natural images, their hierarchical nature, in which a scene is made of objects, which are made of parts, which are made of subparts, and so forth. Furthermore, different types of objects can share similar parts. This hierarchical nature of visual scenes manifests itself in the manifold structure. We need a better way to implement pose invariance, and for that we need to be able to learn geometric constraints between subparts, parts and objects in a data-driven manner. Not only are the image manifolds embedded in low-dimensional subspaces, there is an inherent structure in them which must be algorithmically utilized to make neurons more powerful than the neurons in conventional deep networks. This enables us to deal with complex manifolds. Image manifolds are complex, but this complexity can be managed effectively through the use of hierarchical interrelationships between manifolds. We call this the Parts-Whole Manifold Learning Hypothesis.
With capsule networks and an iterative voting algorithm, we can achieve greater pose invariance by exploiting parts-whole relationships in a more robust manner, see [21], which is a promising step in improving deep learning further. The discussion in this section is inspired by this recent work by Sabour et al.
An approach like that of capsule networks should potentially allow us to work in a smaller number of dimensions. Small parts can be detected using deep networks of low dimensionality. Besides detection, these neurons are trained to predict pose parameters. These parts can be put together into a whole object more efficiently, incorporating voting based geometric verification of the pose of the whole object by its parts. This allows us to work in a number of dimensions which is closer to the theoretical subspace dimensionality as per the Manifold Learning Hypothesis. See Figure 14 to get intuition behind how this approach allows us to carve out complex manifolds.
Consider the example of an image of a cat which has a background of trees, sky, clouds, a house and other objects. Where does it exist as a point in the cat image manifold, and how can we find that point? Consider a generative model, as in computer graphics, which maps model parameters of each object to a rendering of the object in the scene. As these parameters change continuously, one traverses the image manifold of that object, see Figure 8.
For a cat, consider its parts such as eyes, ears, and mouth. Each part has an image manifold of small dimensionality. Detection of each part and its pose parameters identifies a particular point in its manifold. The next higher layer of the neural network is trained to map this point to a particular pose of the cat, which is a point on the cat manifold. When points on the parts manifolds vote for the same neighborhood on the cat manifold, we can fuse all those evidences into a combined evidence on the presence and pose of a cat. It can be implemented as an iterative algorithm in which the probability of detecting a cat is refined as one retains only those parts which vote consistently towards one pose and prunes away the spurious and inconsistent ones.
See Figure 15 for an illustration of this approach, which can be considered as a conceptual manifold formalization of the ideas behind capsule networks [21]. The parameters for the parts are mapped to the parameters of the cat by the neural network,
$$(p_{\mathrm{eyes}}, p_{\mathrm{ears}}, p_{\mathrm{mouth}}, \ldots) \mapsto p_{\mathrm{cat}}.$$
All the parts vote for a consistent pose of the cat and therefore there is redundancy between them. If there is significant redundancy and mutual information in terms of entropy, we will have
$$\dim(p_{\mathrm{cat}}) \ll \dim(p_{\mathrm{eyes}}) + \dim(p_{\mathrm{ears}}) + \dim(p_{\mathrm{mouth}}) + \cdots.$$
This keeps the dimensionality of the whole objects in check, though it will increase gradually as their complexity increases. This efficient approach of keeping dimensionality low in every layer of the neural network can be considered as very effective compression of the scene and as combining the generative approach with the discriminative approach. The generative model ensures that pose parameters are semantically meaningful. Low dimensionality assures us that we are getting close to the dimensionality of the underlying image manifolds.
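A conceptual toy of this iterative agreement follows; the vote geometry, the Gaussian weighting, and the shrinking bandwidths are assumptions for illustration, and the actual routing algorithm of [21] differs in detail:

```python
# Toy parts-voting sketch: consistent part votes for a whole-object pose are
# kept, spurious votes are iteratively down-weighted as agreement sharpens.
import numpy as np

rng = np.random.default_rng(2)
true_pose = np.array([1.0, 2.0])                         # pose of the "cat"
votes = true_pose + rng.normal(scale=0.05, size=(8, 2))  # 8 consistent part votes
votes = np.vstack([votes, rng.uniform(-5, 5, size=(4, 2))])  # 4 spurious parts

weights = np.ones(len(votes))
for sigma in (3.0, 1.0, 0.3, 0.1):    # progressively shrink agreement bandwidth
    consensus = (weights[:, None] * votes).sum(0) / weights.sum()
    dist = np.linalg.norm(votes - consensus, axis=1)
    weights = np.exp(-dist ** 2 / (2 * sigma ** 2))   # down-weight disagreement

print("recovered pose:", consensus.round(2))
print("consistent vote mass:", round(weights.sum() / len(votes), 2))
```

The consensus converges to the pose supported by the consistent parts while the spurious votes are pruned away, which is the manifold-level intuition behind the iterative refinement described above.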
Even separate objects which have statistically consistent poses can all work together in agreeing on the correct scene interpretation. For example, in a traffic scene for self-driving cars, different objects can consistently vote for the pose of the road.
Objects can be totally unrelated, with no mutual information, and in that case dimensions will just add. To have both a house and a cat, the embedded subspace dimensionality will be
$$\dim(p_{\mathrm{house}}) + \dim(p_{\mathrm{cat}}),$$
which renders the joint scene of the cat with the background of the house. Note that the parameters will also include extra scene parameters, such as viewpoint and lighting, to generatively create a 2D image from the 3D world, which may be the same for the cat and the house. So even for seemingly unrelated objects there may be some redundancy, as they are part of the same scene.
For future improvements, we should be able to infer these generative parameters and mappings of points from one manifold to another manifold for complex scenes. This can be done using both explicit and implicit approaches.

We train neurons for a hierarchy of parts in which the training data has explicit ground truth for poses of parts and objects. Neurons have a regression loss function for these pose parameters, along with a discriminative loss and a generative loss.

The ground truth is simpler, without detailed annotations of parts and relationships. We estimate the potential number of parts and pose parameters and design the architecture of the network accordingly. We let the network learn these implicitly through a combination of discriminative loss and generative loss, as is done in capsule networks, see [21].
Such networks will have the following strengths.

The need for getting training data for all poses and variations gets restricted to the smallest parts, which have low dimensionality, and therefore it is practically feasible.

The initial neuron layer needs to predict pose parameters of only the fundamental parts from regions of raw images, and therefore the number of parameters remains in check, once again due to low dimensionality. For higher layers, neurons have to map points from one manifold to another, and with ingenious future work it will hopefully be practically feasible, as we keep dimensionality in check at each layer.

The neural network has a generative component in the form of pose parameters, defined either explicitly or implicitly.

The neural network is a better implementation of the parts-whole manifold learning hypothesis. Therefore, it learns semantically meaningful high-level abstractions rather than superficial and coarse geometries of manifolds.
A successful outcome of the above work should eliminate adversarial examples. If human vision does not suffer from adversarial examples, computer vision should not either.
6. Conclusion
In the AI community, there has been debate about deep learning. The need for greater rigor in and understanding of deep learning has been emphasized. In this paper, we have presented results in the direction of building this rigor and understanding. We have shown how the nature of high dimensional spaces explains the working of deep neural networks. We have pointed out a fallacy in an argument, published in prior literature, which explained adversarial examples. We rigorously explain adversarial examples using properties of image manifolds in high dimensional spaces. We have presented several novel mathematical results explaining adversarial examples, local minima, the optimization landscape, image manifolds and properties of natural images. Our mathematical results show that we have to worry about the surfaces of image manifolds, as almost everything is close to the surface in high dimensions, and the root solution to adversarial examples lies in handling the complexity of these surfaces. Exact characterization of the surfaces of image manifolds is a topic of further research, and it seems that local dimension goes through a continuum of values, including fractional values, indicating roughness and space-filling properties of surfaces. We also discussed how deep learning can make progress in the future. Though high dimensions pose a challenge, we can meet these challenges using novel ways of exploiting the characteristics of natural images, which will eliminate adversarial examples and overcome the current shortcomings of deep neural networks.
References
 [1] Naveed Akhtar and Ajmal Mian, Threat of adversarial attacks on deep learning in computer vision: a survey, https://arxiv.org/abs/1801.00553v1 (2018).
 [2] Laurent Amsaleg, James Bailey, Dominique Barbe, Sarah Erfani, Michael E. Houle, Vinh Nguyen, and Milos Radovanovic, The vulnerability of learning to adversarial perturbation increases with intrinsic dimensionality, Proc. of IEEE WIFS (2017).
 [3] A. J. Bray and D. S. Dean, Statistics of critical points of gaussian fields on large-dimensional spaces, Physical Review Letters, 98, 150201 (2007).
 [4] Anna Choromanska, Mikael Henaff, Michael Mathieu, Gérard Ben Arous and Yann LeCun, The loss surfaces of multilayer networks, Proc. of 18th Intnl. Conf. on AI and Statistics, San Diego (2015), JMLR: W & CP, Volume 18, https://arxiv.org/abs/1412.0233 (2014).

 [5] David S. Dean and Satya N. Majumdar, Large deviations of extreme eigenvalues of random matrices, Phys. Rev. Lett. 97, 160201, https://doi.org/10.1103/PhysRevLett.97.160201 (2006).
 [6] Jean-Pierre Dedieu and Gregorio Malajovich, On the number of minima of a random polynomial, Journal of Complexity, Volume 24, Issue 2 (April), Pages 89-108, https://doi.org/10.1016/j.jco.2007.09.003 (2008).
 [7] Yann N. Dauphin, Razvan Pascanu, Caglar Gulcehre, Kyunghyun Cho, Surya Ganguli and Yoshua Bengio, Identifying and attacking the saddle point problem in highdimensional nonconvex optimization, https://arxiv.org/pdf/1406.2572.pdf (2014).
 [8] Herbert Federer, Geometric measure theory, Springer-Verlag (1969).
 [9] Charles Fefferman, Sanjoy Mitter and Hariharan Narayanan, Testing the manifold hypothesis, Journal of the American Mathematical Society, Volume 29, Number 4, October, Pages 983-1049, http://dx.doi.org/10.1090/jams/852 (2016).
 [10] Ian J. Goodfellow, Jonathon Shlens and Christian Szegedy, Explaining and harnessing adversarial examples, Proc. of ICLR 2015, https://arxiv.org/pdf/1412.6572.pdf (2015).
 [11] Matthias Hein and Maksym Andriushchenko, Formal Guarantees on the Robustness of a Classifier against Adversarial Manipulation, arxiv preprint arXiv:1705.08475 (2017).
 [12] Jason Jo and Yoshua Bengio, Measuring the tendency of CNNs to learn surface statistical regularities, arXiv preprint arXiv:1711.11561 (2017).
 [13] Alexey Kurakin, Ian J. Goodfellow and Samy Bengio, Adversarial examples in the physical world, Proc. of ICLR 2017, https://arxiv.org/pdf/1607.02533.pdf (2017).
 [14] Qianli Liao and Tomaso Poggio, Theory II: Landscape of the empirical risk in deep learning, https://arxiv.org/abs/1703.09833v2 (2017).
 [15] Xingjun Ma, Bo Li, Yisen Wang, Sarah M. Erfani, Sudanthi Wijewickrema, Michael E. Houle, Grant Schoenebeck, Dawn Song, and James Bailey, Characterizing adversarial subspaces using local intrinsic dimensionality, https://arxiv.org/abs/1801.02613v1 (2018).
 [16] Manifold learning theory and applications, CRC Press, Boca Raton, FL. Edited by Yunqian Ma and Yun Fu (2018).
 [17] Guido Montúfar, Razvan Pascanu, Kyunghyun Cho and Yoshua Bengio, On the number of linear regions of deep neural networks, Proc. of NIPS 2014, Pages 2924-2932 (2014).
 [18] A. Nguyen, J. Yosinski and J. Clune, Deep neural networks are easily fooled: High confidence predictions for unrecognizable images, Proc. of CVPR 2015, https://arxiv.org/pdf/1412.1897.pdf (2015).
 [19] Quynh Nguyen and Matthias Hein, The loss surface of deep and wide neural networks, https://arxiv.org/abs/1704.08045 (2017).
 [20] Tomaso Poggio, Kenji Kawaguchi, Qianli Liao, Brando Miranda, Lorenzo Rosasco, Xavier Boix, Jack Hidary and Hrushikesh Mhaskar, Theory of deep learning III: explaining the nonoverfitting puzzle, https://arxiv.org/abs/1801.00173v1 (2017).
 [21] Sara Sabour, Nicholas Frosst and Geoffrey E Hinton, Dynamic routing between capsules, https://arxiv.org/abs/1710.09829 (2017).
 [22] Levent Sagun, Leon Bottou and Yann LeCun, Eigenvalues and the Hessian in deep learning: Singularity and beyond, https://openreview.net/pdf?id=B186cP9gx (2017).
 [23] A. van der Schaaf and J.H. van Hateren, Modelling the power spectra of natural images: Statistics and information, Vision Research, Volume 36, Issue 17 (September), Pages 2759-2770 (1996).
 [24] A. Srivastava, A. Lee, E. Simoncelli and S.C. Zhu, On advances in statistical modeling of natural images, Journal of Mathematical Imaging and Vision 18: 17. https://doi.org/10.1023/A:1021889010444 (2003).
 [25] M. Stone, The generalized Weierstrass approximation theorem, Mathematics Magazine 21 (4), 167-184 and 21 (5), 237-254 (1948).
 [26] Gilbert Strang and Truong Nguyen, Wavelets and filter banks, WellesleyCambridge Press, 2nd edition (1996).
 [27] Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian J. Goodfellow and Rob Fergus, Intriguing properties of neural networks, Proc. of ICLR 2014, http://arxiv.org/abs/1312.6199 (2014).
 [28] Thomas Tanay and Lewis Griffin, A boundary tilting perspective on the phenomenon of adversarial examples, https://arxiv.org/abs/1608.07690 (2016).
 [29] M. B. Wakin, D. L. Donoho, H. Choi, and R. G. Baraniuk, The multiscale structure of non-differentiable image manifolds, In SPIE Optics and Photonics, volume 5914, pages 413-429 (2005).