It is now well established through the vast literature on adversarial attacks [7, 8, 17, 19, 20, 21, 29, 31, 33, 36] (we can only cite a small subset here) on neural networks for image classification that deep learning provides highly successful, yet incredibly unstable, neural networks for classification problems. Moreover, the instability phenomenon has recently also been shown for deep learning in image reconstruction and inverse problems [1, 13, 15, 18, 30, 32, 40]. Thus, the instability phenomenon appears to be universal. What is fascinating is that, despite intense research trying to solve the instability issue [6, 10, 14, 16, 23, 37], the problem remains open. As a result, there is growing concern regarding the consequences of the instabilities of deep learning methods in the sciences. Indeed, Science recently reported on researchers warning about potentially fatal consequences in medicine due to the instabilities. Hence, we are left with the following fundamental question:
Why are current deep learning methods so successful in image classification, yet universally unstable and, hence, vulnerable to adversarial attacks?
In this paper we provide a radical conjecture answering the above question in classification and explaining why this problem will not be solved unless there is a fundamental rethink of how to approach learning. We provide the first steps towards such a theory.
Conjecture 1.1 (False structures in classification).
The current training process in deep learning for classification forces the neural network to learn a different (false) structure and not the actual structure of the classification problem. There are three main components:
(Success) The false structure correlates well with the original structure, hence one gets a high success rate.
(Instability) The false structure is unstable, and thus the network is susceptible to adversarial attacks.
(Simplicity) The false structure is simpler than the desired structure, and hence easier to learn: for example, less data is needed, and the numerical algorithm used in the training easily converges to the neural network that captures the false structure.
Remark 1.2 (Structure).
One can think of the word structure as meaning the concept that describes a classification problem. Considering an image, we could think of what makes humans recognise a cat or a fire truck. In particular, a structure describing a cat would encompass all the features that make humans recognise cats. However, classification problems extend beyond image recognition. For example, one may want to classify sound patterns, or patterns in meteorological data, seismic data, etc. Thus, we need a proper mathematical definition of what we mean by a structure and also by a false structure.
1.1 Smale’s 18th problem
Based on a request from V. Arnold, inspired by Hilbert's list of mathematical problems for the 20th century, S. Smale created a list of mathematical problems for the 21st century. The last problem on this list is more relevant than ever and echoes Turing's 1950 paper on the question of artificial intelligence. Turing asks if a computer can think, and suggests the imitation game as a test for his question about artificial intelligence. Smale takes the question even further and asks, in his 18th problem, the following:
“ What are the limits of intelligence, both artificial and human?”
— Smale’s 18th problem (from the mathematical problems for the 21st century).
Smale formulates the question in connection to foundations of computational mathematics and discusses different models of computation appropriate for the problem [5, 34]. The results in this paper should be viewed in connection with Smale’s 18th problem and the foundations of computation (see §2.2). Our contribution is part of a larger program on foundations of computational mathematics and the Solvability Complexity Index hierarchy [3, 4, 11, 12, 22, 25] established to determine the boundaries of computational mathematics and scientific computing.
2 Why the concept of false structures is needed
To illustrate the concept of false structure and the conjecture we continue with a thought experiment.
Example 2.1 (A thought experiment explaining false structures and Conjecture 1.1).
Suppose a human is put in a room in a foreign country with an unknown language. The person is to be trained to recognise the label “fire truck”, and in order to do so, she is given a large complex collection of images with different types of fire trucks and images without fire trucks. Each time an image with a fire truck is shown to the person, she hears the foreign language word for fire truck. In particular, the person is trying to learn the function f : M → {0, 1}, where f(x) = 1 means that there is a fire truck in the image x, and M denotes a large set of images. We will refer to f as the original structure. However, on the set of images shown in the training process there is a small, but clearly visible, horizontal blue line on each of the images showing a fire truck (this is visualised in Figure 1). On the images without fire trucks there is no line. The question is: will the person think that the foreign language word for fire truck means horizontal blue line, blue line, line, or actual fire truck? This is an example of the original structure (describing fire trucks) and three false structures: horizontal blue line; blue line (not necessarily horizontal); and line (where colour and geometry are irrelevant). All of them could have been learned from the same data. However, the structure or false structure that the person has learned will yield wildly different results on different tests:
Suppose the person learns the false structure describing the horizontal blue line. Suppose also that the test set of images is chosen such that every image containing a fire truck also has a horizontal blue line. On this test set one will have full success, yet the false structure will give incredible instabilities in several different ways. First, a tiny perturbation in terms of removing the blue line will yield a misclassification. Second, a slight rotation of the blue line would mean a wrong output, and a slight change in colour will result in an incorrect decision. Thus, there are at least three types of adversarial attacks that would succeed in dramatically reducing the success rate on the test set.
Suppose instead that it is just the line structure (colour and geometry irrelevant) that is learned. Repeating the experiment in (i), with all images in the test set containing a fire truck also having a small visible green line, would yield the same success as in (i) as well as instabilities; however, now rotations of the line would have no effect, nor would changes in the colour. Thus, two fewer adversarial attacks would be successful. If we did the experiment with the false structure describing a blue line of arbitrary orientation, there would have been at least two forms of adversarial attacks available.
If the actual fire truck structure was learned, one would be as successful and stable as would be expected from a human when given any test set. Note that the original structure is much more complex and likely harder to learn than the false structures.
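To make the thought experiment concrete, here is a minimal Python sketch (purely illustrative; the attributes and the sets below are our assumptions, not part of the formal setup). A classifier implementing the horizontal-blue-line structure scores perfectly on a test set where the line and the fire truck co-occur, and fails on the fire-truck images once the line is removed:

```python
# Toy model of Example 2.1: images are records with two attributes.
def f(image):
    return image["fire_truck"]            # original structure

def g1(image):
    return image["blue_horizontal_line"]  # false structure: "has blue line"

# A test set in which every fire-truck image also carries the blue line,
# so the false structure agrees with the original structure everywhere.
test_set = [
    {"fire_truck": 1, "blue_horizontal_line": 1},
    {"fire_truck": 1, "blue_horizontal_line": 1},
    {"fire_truck": 0, "blue_horizontal_line": 0},
    {"fire_truck": 0, "blue_horizontal_line": 0},
]

accuracy = sum(g1(x) == f(x) for x in test_set) / len(test_set)

# Adversarial "perturbation": erase the blue line from every image.
attacked = [dict(x, blue_horizontal_line=0) for x in test_set]
attacked_accuracy = sum(g1(x) == f(x) for x in attacked) / len(attacked)

print(accuracy)           # 1.0 on the correlated test set
print(attacked_accuracy)  # 0.5: every fire-truck image is now misclassified
```

The same perfect training data thus supports a rule that collapses under a perturbation that leaves the actual fire trucks untouched.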
Motivated by the above thought experiment we can now formally define the original structure and false structures.
Definition 2.2 (The original structure and false structures).
Consider a set M and a string of unique (see Remark 2.3) predicates P1, …, Pn on M, with n ≥ 2, such that for each x ∈ M there is a unique j such that Pj(x) is true. For such x define f(x) = j. We say that the pair ({Pj}, f) is the original structure on M. A false structure for ({Pj}, f) relative to a subset M̃ ⊂ M is a pair ({Qj}, g), where {Qj} is a string of unique predicates with g : M → {1, …, n} such that g(x) = j iff Qj(x) is true. Moreover,

g(x) = f(x) for all x ∈ M̃, (1)

as well as

{x ∈ M : g(x) ≠ f(x)} ≠ ∅. (2)
We say that ({Qj}, g) is a partial false structure if the agreement in (1) holds for at least two different classes j (as opposed to all).
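A toy sanity check of the two conditions in Definition 2.2, on an assumed ten-element domain with two classes (the predicates and sets below are illustrative choices, not from the paper):

```python
# Toy check of Definition 2.2 on M = {0, ..., 9} with two classes.
# f labels x by parity; g labels x by "x < 5". The restricted set M_tilde
# is chosen so that f and g agree on it, while they disagree elsewhere on M.
M = list(range(10))

def f(x):                 # original structure: class 1 if odd, class 2 if even
    return 1 if x % 2 == 1 else 2

def g(x):                 # candidate false structure: class 1 if x < 5
    return 1 if x < 5 else 2

M_tilde = [x for x in M if f(x) == g(x)]   # where the structures coincide

agrees_on_M_tilde = all(f(x) == g(x) for x in M_tilde)   # condition (1)
disagrees_somewhere = any(f(x) != g(x) for x in M)       # condition (2)

# Severity of the false structure under the uniform measure on M,
# in the spirit of Remark 2.4: the mass of the disagreement set.
severity = sum(f(x) != g(x) for x in M) / len(M)

print(M_tilde)                                 # [1, 3, 6, 8]
print(agrees_on_M_tilde, disagrees_somewhere)  # True True
print(severity)                                # 0.6
```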
Remark 2.3 (Unique predicates).
By unique predicates we mean that the supports of the characteristic functions induced by the predicates do not intersect.
Remark 2.4 (How bad is the false structure?).
Note that Definition 2.2 does not consider how 'far' the false structure is from the original structure. This is beyond the scope of this paper; however, it can easily be done. For example, suppose M is equipped with some probability measure P. Then, assuming the set {x ∈ M : g(x) ≠ f(x)} from (2) is measurable, the quantity P({x ∈ M : g(x) ≠ f(x)}) would indicate how severe it would be to learn the false structure g instead of the original structure f.
The motivation behind the idea of a false structure can be understood as follows. Suppose one is interested in learning the original structure as in Definition 2.2. In Example 2.1 the predicates are of the form “the image demonstrates a fire truck” and “the image does not demonstrate a fire truck”. In order to learn the original structure we have a training set. However, if there is a false structure relative to the training domain, as in Example 2.1, given by predicates such as “the image demonstrates a horizontal blue line”, how do we know that we have not learned the false structure instead? In Example 2.1 there are three different false structures, each with its own instability issues.
Remark 2.5 (Formulation of the predicate).
By “x demonstrates a b” (where b was a horizontal blue line above) in the previous predicate we mean that the main object shown in x is b, and that there is only one main object. The word demonstrates is slightly ambiguous; however, for simplicity we use this formulation.
Example 2.1 gives a very simple illustration of the issues in Conjecture 1.1. Indeed, a simple false structure could give great performance yet incredible instabilities. Let us continue with the thought experiment; however, now we replace the human in Example 2.1 with a machine, and in particular, we consider the deep learning technique.
We consider the same problem as in Example 2.1; however, we replace the human by a neural network that we shall train. Indeed, we let f : M → {0, 1} be the function deciding if there is a fire truck in the image, where M is the set of images from Example 2.1. The training set and test set consist of images with and without fire trucks. However, all fire truck images also contain a small blue horizontal line, and there is no blue line in the images without fire trucks. We choose a cost function C, a class NN of neural networks, and approximate the optimisation problem

min_{φ ∈ NN} Σ_j C(φ(x_j), f(x_j)),

where the x_j denote the training samples.
Why should we think that the trained neural network has picked up the correct structure, and not any of the false structures?
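The danger can be reproduced in miniature. In the following sketch (our illustration: a plain logistic model trained by gradient descent stands in for a deep network, and the data are invented), a spurious second coordinate plays the role of the blue line; the trained model relies on it and collapses when it is removed:

```python
import math

# x1 is the "real" but non-separating signal; x2 equals the label exactly,
# playing the role of the blue line in Example 2.1.
data = [  # (x1, x2, label); x1 values overlap between the classes
    (0.2, 1.0, 1), (0.7, 1.0, 1), (0.8, 1.0, 1), (0.9, 1.0, 1),
    (0.1, 0.0, 0), (0.2, 0.0, 0), (0.3, 0.0, 0), (0.8, 0.0, 0),
]

def sigmoid(t):
    return 1.0 / (1.0 + math.exp(-t))

w1 = w2 = b = 0.0
lr = 0.5
for _ in range(5000):  # plain gradient descent on the cross-entropy loss
    g1 = g2 = gb = 0.0
    for x1, x2, y in data:
        p = sigmoid(w1 * x1 + w2 * x2 + b)
        g1 += (p - y) * x1
        g2 += (p - y) * x2
        gb += (p - y)
    n = len(data)
    w1 -= lr * g1 / n
    w2 -= lr * g2 / n
    b -= lr * gb / n

def predict(x1, x2):
    return round(sigmoid(w1 * x1 + w2 * x2 + b))

train_acc = sum(predict(x1, x2) == y for x1, x2, y in data) / len(data)
# "Remove the blue line": zero out the spurious coordinate at test time.
attacked_acc = sum(predict(x1, 0.0) == y for x1, x2, y in data) / len(data)

print(train_acc)     # 1.0: the spurious coordinate separates perfectly
print(attacked_acc)  # strictly below 1.0: the model relied on it
```

Since x1 alone does not separate the classes, any minimiser of the cost must lean on x2, which is exactly the false structure.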
Remark 2.7 (Simplicity).
2.1 Support for the conjecture and how to establish it
Unlike common conjectures in mathematics, Conjecture 1.1 can never be proven with standard mathematical tools. The issue is that all neural networks that are created are produced with inexact computations. Thus, actual minimisers are rarely, if ever, found; rather one finds approximations in one form or another. Hence Conjecture 1.1 should be treated more like a postulate in theoretical physics, such as 'the speed of light is constant'. One can never establish this with a mathematical proof; however, mathematical theory and experiments can help support the postulate.
Note that there is already an overwhelming amount of numerical evidence supporting Conjecture 1.1, based on the myriad of experiments done over the last five years. Indeed, we have the following documented cases: (I) Unrecognizable and bizarre structures are labeled as natural images. Trained, successful neural nets classify unrecognizable and bizarre structures as natural images with standard labels with high confidence. Such mistakes would not be possible if the neural network actually captured the correct structure that allows for image recognition in the human brain. (II) Perturbing one pixel changes the label [31, 36]. It has been verified that trained and successful networks change the label of the classification even when only one pixel is perturbed. Clearly, the structures in an image that allow for recognition by humans are not affected by a change in a single pixel. (III) Universal invisible perturbations change a large proportion of the labels [19, 20]. The DeepFool software demonstrates how a single almost invisible perturbation applied to the whole test set dramatically changes the failure rate. The structures in images that allow for successful human recognition are clearly not susceptible to misclassification by a single near-invisible perturbation. However, the false structure learned through training clearly is unstable.
There is quite a bit of work on establishing which parts of the data are crucial for the decision of the classifier [24, 26, 27, 38, 39]. This is a rather different program from establishing our conjecture. Indeed, our conjecture is about the unstable false structures. However, one should not rule out connections that could help in detecting and understanding the false structures.
2.2 Consequences of Conjecture 1.1
The correctness of Conjecture 1.1 may have several consequences, both negative and positive. Negative consequences:
The success of deep learning in classification is not due to networks learning the structures that humans associate with image recognition, but rather that the network picks up unstable false structures in images that are potentially impossible for humans to detect. This means that instability, and hence vulnerability to adversarial attacks, can never be removed until one guarantees that no false structure is learned. This means a potential complete overhaul of modern AI.
The success is dependent on simple yet unstable structures; thus the AI does not capture the intelligence of a human.
Since one does not know which structure the network picks up, it becomes hard to conclude what the neural network actually learns, and thus harder to trust its predictions. What if the false structure gives wrong predictions?

Positive consequences:
Deep learning captures structures that humans cannot detect, and these structures require very little data and computing power in comparison to the true original structures, however, they generalise rather well compared to the original structure. Thus, from an efficiency point of view, the human brain may be a complete overkill for certain classification problems, and deep learning finds a mysterious effective way of classifying.
The structure learned by deep learning may contain information that the human may not capture. This structure could be useful if characterised properly. For example, what if there is structural information in the data that allows for accurate predictions that the original structure could not provide?

Consequences for Smale's 18th problem:
Conjecture 1.1 suggests that there is a fundamental difference between state-of-the-art AI and human intelligence, as neural networks based on deep learning learn completely different structures compared to what humans learn. Hence, in view of Smale's 18th problem, the correctness of Conjecture 1.1 implies limitations of both AI and human intelligence. Indeed, the false unstable structures learned by modern AI limit its ability to match human intelligence regarding stability. However, in view of the positive consequences mentioned above, the correctness of Conjecture 1.1 implies that there is a limitation to human intelligence when it comes to detecting other structures that may provide different information than the structure detected by humans.
3 Establishing Conjecture 1.1 - Do false structures exist in practice?
Our starting point for establishing Conjecture 1.1 is Theorem 3.1 below, whose proof captures all three components of the conjecture. We will demonstrate how this happens in actual computations. To introduce some notation, we let NN_{N,L}, with N = (N_L, N_{L−1}, …, N_0), denote the set of all L-layer neural networks; that is, all mappings of the form

φ(x) = W_L σ(W_{L−1} σ(⋯ σ(W_1 x))), x ∈ ℝ^{N_0},

where the W_j are affine maps with dimensions given by N, and σ : ℝ → ℝ is a fixed non-linear function acting component-wise on a vector. We consider a binary classifier f : M → {0, 1}, where M ⊂ ℝ^{N_0} is some subset. To make sure that we consider stable problems we define the family of well separated and stable sets with separation at least 2δ:

S_δ = { S ⊂ M : ‖x − y‖ ≥ 2δ whenever x, y ∈ S and f(x) ≠ f(y) }. (4)
Moreover, the cost function used in the training is in
Theorem 3.1 (Bastounis, Hansen, Vlacic ).
There is an uncountable family of classification functions such that for each and neural network dimensions with , any , and any integers with , there exist uncountably many non-intersecting training sets of size (where ) and uncountably many non-intersecting classification sets of size such that we have the following. For every there is a neural net
such that However, there exist uncountably many such that
Moreover, there exists a stable neural network
where denotes the neighbourhood, in the norm, of .
The successful neural network learns a false structure that correlates well with the true structure of the problem, hence the great success (100% success on an arbitrarily large test set).
The false structure is completely unstable despite the original problem being stable (see (4)). Because the training process seeks a minimiser of the optimisation problem, the neural network learns the false structure, and hence becomes completely unstable. Indeed, it will fail on uncountably many instances that are close to the training set, where the distance can be made arbitrarily small.
The false structure is very simple and easy to learn. Moreover, paradoxically, there exists another neural network, with different dimensions, that is stable and has the same success rate as the unstable network; however, there is no known way to construct this network.

Theorem 3.1 and its proof provide the starting point of the program on establishing Conjecture 1.1. However, the missing part is to show that the false structure is learned in practice, that it is much easier to learn than the original structure, and that the original structure is very unlikely to be learned in the presence of the false structure in the training set. This is done in the next section.
3.1 Establishing the conjecture: Case 1
for and . Consider the predicates
Let then is the original structure, as in Definition 2.2. We note that is constant on each of the intervals where and hence may be viewed as a very simple classification problem: given , is or ? To simplify the learning and analysis further we will assume that , for some chosen , and we let . This means that has jump discontinuities on the interval . To ensure is stable with respect to perturbations of size on its input, we will ensure that each of our samples of lies at least away from each of these jump discontinuities. Hence, we define
and choose the samples from this set. This is similar to (4) used in Theorem 3.1. To avoid this set being a union of empty intervals we will always assume that the parameters allow it to be non-empty. It will not be a goal in itself to learn the function everywhere, but rather to learn the right value within the stable region. Indeed, given that this is a decision problem, inputs close to the boundary are always going to be hard to classify. However, the decision problem stays stable on the stable region.
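As an illustration of the stable region, assume for the sake of the sketch that the target is f(x) = ⌊Kx⌋ mod 2 on [0, 1] (an assumption for this example; the paper's exact parameters are not reproduced here). A sample is stable if it lies at least δ away from every jump discontinuity, so perturbations of size below δ cannot change its class:

```python
# Sketch of the delta-stable region for a piecewise-constant target on [0, 1]
# with jump discontinuities at the points k/K. We need delta <= 1/(2K) for
# the stable region to be non-empty.
K = 10
delta = 0.02

def f(x):
    return int(K * x) % 2

def in_stable_region(x):
    # distance from x to the nearest jump point k/K
    dist_to_jump = min(abs(x - k / K) for k in range(K + 1))
    return dist_to_jump >= delta

# A stable sample keeps its label under any perturbation smaller than delta.
x = 0.35                      # midpoint of an interval: stable
assert in_stable_region(x)
assert f(x - delta + 1e-9) == f(x + delta - 1e-9) == f(x)

print(in_stable_region(0.35), in_stable_region(0.301))  # True False
```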
For the learning task we now consider two sets of size ,
where . Note that this choice gives rise to a false structure, as the next proposition shows.
Proposition 3.2 (Existence of a false structure).
Consider the predicates and on defined by
Define by when and when . Let . Then is a false structure for relative to .
Note that for small parameter values the false structure becomes unstable. Moreover, the false structure appears much simpler than the original structure. In order to train a neural network to learn the original structure we choose the set of networks (the architecture) to be that of all fully connected 2-layer networks with the given dimensions and non-linear function
being the ReLU function. What is crucial is that the set is rich enough to predict the value of the classifier. By prediction we here mean that, for a network, the rounded value of the sigmoid of its output agrees with the true label, where
is the sigmoid function, and the rounding means rounding to the nearest integer. For the function class above, it is indeed possible to find such a network whose prediction agrees with the classifier on the stable region in (10). This is formalised in the following statement.
Proposition 3.3 (Existence of stable and accurate network).
Let σ be the sigmoid function, and let C be the cross entropy cost function for binary classification, that is,

C(v, y) = −( y log(v) + (1 − y) log(1 − v) ), v ∈ (0, 1), y ∈ {0, 1},
where denotes rounding to the closest integer, and for any subset , and we have .
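Under standard conventions (an assumption, since the display is not reproduced here), the binary cross-entropy, the sigmoid, and rounding interact as follows: a per-sample cost below log 2 forces the rounded prediction to match the label.

```python
import math

def sigmoid(t):
    return 1.0 / (1.0 + math.exp(-t))

def cross_entropy(p, y):
    # y in {0, 1}, p in (0, 1)
    return -(y * math.log(p) + (1 - y) * math.log(1 - p))

# If the cost at a sample with label 1 is below log(2), then p > 1/2,
# so p rounds to the correct label:  -log(p) < log(2)  iff  p > 1/2.
p = sigmoid(3.0)                       # confident output for label 1
assert cross_entropy(p, 1) < math.log(2)
assert round(p) == 1

q = sigmoid(-0.1)                      # barely on the wrong side for label 1
assert cross_entropy(q, 1) > math.log(2)

print(cross_entropy(p, 1))
```

This is why an arbitrarily small cost on the stable region forces the network's prediction to agree with the classifier there.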
In particular, Proposition 3.3 states, similarly to the last statement in Theorem 3.1, that there is a stable and accurate network approximating the classifier that provides arbitrarily small values of the cost function. The problem, however, as we will see in the next experiment, is that this network is very unlikely to be found in the presence of the false structure.
3.1.1 Experiment I
The experiment is done as follows. We fixed the parameters and trained four neural networks with the architecture above and the ReLU activation function, each on one of four training sets. For the first set, we ensured that the first components were located in separate intervals of the stable region; otherwise it would be infeasible to learn from so few samples. For the other sets we distributed the first components approximately equally between the disjoint intervals of the stable region. To investigate which structure the networks had learned, we define two probe sets.
We have plotted the network outputs on both probe sets. The results are displayed in Figure 2 and give the following conclusions. If a network has learned the false structure, then its output should agree with the false structure on the first probe set, while it should be all zero on the second. On the other hand, if the network has not learned the false structure, then it should have the same output on both sets. If the network has learned the original structure, its output should agree with it on both sets.
Conclusion (Exp I): The first network, trained on very few samples, learns the false structure. The second network learns neither the original structure nor the false structure. The third network learns the false structure. The fourth network learns the original structure. Note that the conclusion supports Conjecture 1.1: the false structure is unstable and simple, and is learned from only a handful of samples. It also gives fantastic success on the matching test set. The original structure is difficult to learn (many samples are needed to succeed). Moreover, when training on the sets containing the false structure, it is impossible to learn the original structure, even with an excessive number of samples, and the false structure is always learned. In particular, despite Proposition 3.3 assuring that the chosen architecture is rich enough to include stable networks that approximate the classifier well, and that such a network should be found when the cost function is small, in nearly all cases the good network is not found.
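The diagnostic logic of Experiment I can be sketched generically: evaluate a trained classifier on a probe set consistent with the suspected false structure and on one with the cue removed (the 0.9/0.1 thresholds and the toy classifier below are illustrative assumptions):

```python
# Generic probe: compare accuracy on a set where the suspected cue is
# present and agrees with the label, versus a set where it is removed.
def diagnose(classifier, probe_with, probe_without, true_label):
    acc_with = sum(classifier(x) == true_label(x)
                   for x in probe_with) / len(probe_with)
    acc_without = sum(classifier(x) == true_label(x)
                      for x in probe_without) / len(probe_without)
    if acc_with > 0.9 and acc_without < 0.1:
        return "false structure"
    if acc_with > 0.9 and acc_without > 0.9:
        return "original structure"
    return "neither"

# Toy check: a classifier that only reads the spurious coordinate x[1].
spurious = lambda x: x[1]
truth = lambda x: x[0]
with_cue = [(0, 0), (1, 1)]        # spurious cue agrees with the label
without_cue = [(0, 1), (1, 0)]     # cue removed / reversed

print(diagnose(spurious, with_cue, without_cue, truth))  # "false structure"
```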
3.2 Establishing the conjecture: Case 2
Consider the collection of grey scale images with a light stripe of fixed pixel width, either horizontal or vertical, on a dark background, as shown in Figure 3. The colour code is as follows: 0 is black and 1 is white. Hence, numbers close to 0 yield variations of black and numbers close to 1 give variations of white. Thus, there are slight differences in the black and white colours; however, they are typically not visible to the human eye.
Define the original structure on with , where
and note that the original structure is very robust to any small perturbations.
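A sketch of how such stripe images can be generated (the one-pixel stripe width and the intensity values below are illustrative assumptions):

```python
# n-by-n grey scale image: dark background, one light stripe, either
# horizontal (a constant row) or vertical (a constant column).
def stripe_image(n, position, light, dark, horizontal=True):
    img = [[dark for _ in range(n)] for _ in range(n)]
    for k in range(n):
        if horizontal:
            img[position][k] = light
        else:
            img[k][position] = light
    return img

img = stripe_image(8, position=3, light=0.8, dark=0.1, horizontal=True)
assert all(v == 0.8 for v in img[3])   # the stripe row is light
assert img[0][0] == 0.1                # the background is dark
# Tiny intensity changes (e.g. 0.8 -> 0.75) leave the geometry, and hence
# the original structure (the stripe orientation), unchanged.
```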
3.2.1 Experiment II
The experiment is inspired by an example from the literature, and is done as follows. Define the two sets
and notice that both are non-intersecting subsets of the collection of stripe images. Next, let two sample sets each contain 1000 elements from their respective sets. Each element is chosen with an equal probability of being a vertical or horizontal stripe, and the remaining intensity value is chosen from the uniform distribution on the admissible range.
We have trained a neural network on a training set which contains exactly the unique elements of the first set. The network has a very high success rate on the corresponding test samples, yet its success rate on the second set is very low. As is evident from Figure 3, the network misclassifies images that look exactly the same as the ones that it successfully classifies. Hence, it is completely unstable and has clearly not learned the original structure. Thus, a pertinent question is:
What is the false structure that the network has learned?
To answer the question we begin with the following proposition.
Proposition 3.4 (A false structure for the stripe problem).
Consider the predicates and on defined by
Let the value be 1 when the first predicate holds, and 0 when the second holds. Then this pair is a false structure for the original structure relative to the restricted set.
Note that, contrary to the original structure, which considers the geometry of the problem, the false structure is clearly completely unstable. Indeed, a tiny perturbation of the pixel values will change the label. The question is whether it is this false structure that is actually learned by the network. This turns out to be a rather delicate question. Indeed, the high success rate on the first set and the low success rate on the second suggest that the network learns the false structure. However, by choosing smaller values of the intensity parameters, we see from Table 1 that the success rate on the first set decreases, whereas the success rate on the second increases. This implies that this is not entirely the case; if the network had learned the false structure exactly, it would have a 100% success rate on the first set and a 0% success rate on the second for all parameter values. This example illustrates how delicate the task of determining the false structure exactly actually is, even in the simplest examples. The false structure actually learned is likely not far from the one in Proposition 3.4, but making this statement mathematically rigorous, as well as a full test, is beyond the scope of this paper.
Conclusion (Exp II): The above numerical examples suggest that the network learns a false structure; however, it may not always be exactly the one from Proposition 3.4. It should also be noted that if the experiments are done with larger test sets, the conclusion stays the same. The experiments support Conjecture 1.1, as the false structures are simple to learn (only few samples are needed) and completely unstable. Indeed, tiny perturbations make the network change its label. Moreover, the networks that learned the false structures become successful on large test sets.
4 Final conclusion
The correctness of Conjecture 1.1 may have far-reaching consequences for how we understand modern AI, and in particular for how to get to the heart of the problem of universal instability in neural networks based on deep learning. The conjecture is inspired by Theorem 3.1 and its proof, in addition to the many numerical examples demonstrating instabilities in deep learning and suggesting the learning of false structures. This paper provides the foundations for a larger program to establish the conjecture fully. However, as we have demonstrated in this paper, Conjecture 1.1 appears to be true even in the simplest cases.
-  J. Adler and O. Öktem. Solving ill-posed inverse problems using iterative deep neural networks. Inverse Problems, 33(12):124007, 2017.
-  V. Antun, F. Renna, C. Poon, B. Adcock, and A. C. Hansen. On instabilities of deep learning in image reconstruction – Does AI come at a cost? arXiv:1902.05300, 2019.
-  A. Bastounis, A. C. Hansen, and V. Vlacic. On computational barriers and paradoxes in estimation, regularisation, learning and computer assisted proofs. Preprint, 2019.
-  J. Ben-Artzi, A. C. Hansen, O. Nevanlinna, and M. Seidel. New barriers in complexity theory: On the solvability complexity index and the towers of algorithms. Comptes Rendus Mathematique, 353(10):931 – 936, 2015.
-  L. Blum, F. Cucker, M. Shub, and S. Smale. Complexity and Real Computation. Springer-Verlag New York, Inc., Secaucus, NJ, USA, 1998.
-  E. D. Cubuk, B. Zoph, D. Mane, V. Vasudevan, and Q. V. Le. Autoaugment: Learning augmentation policies from data. arXiv preprint arXiv:1805.09501, 2018.
-  K. Eykholt, I. Evtimov, E. Fernandes, B. Li, A. Rahmati, C. Xiao, A. Prakash, T. Kohno, and D. Song. Robust physical-world attacks on deep learning visual classification. In IEEE Conference on Computer Vision and Pattern Recognition, pages 1625–1634, 2018.
-  A. Fawzi, S. Moosavi-Dezfooli, and P. Frossard. The robustness of deep networks: A geometrical perspective. IEEE Signal Processing Magazine, 34:50–62, 2017.
-  S. G. Finlayson, J. D. Bowers, J. Ito, J. L. Zittrain, A. L. Beam, and I. S. Kohane. Adversarial attacks on medical machine learning. Science, 363(6433):1287–1289, 2019.
-  I. J. Goodfellow, J. Shlens, and C. Szegedy. Explaining and harnessing adversarial examples. International conference on learning representations, 2015.
-  A. C. Hansen. On the approximation of spectra of linear operators on Hilbert spaces. J. Funct. Anal., 254(8):2092–2126, 2008.
-  A. C. Hansen. On the solvability complexity index, the n-pseudospectrum and approximations of spectra of operators. J. Amer. Math. Soc., 24(1):81–124, 2011.
-  K. H. Jin, M. T. McCann, E. Froustey, and M. Unser. Deep convolutional neural network for inverse problems in imaging. IEEE Transactions on Image Processing, 26(9):4509–4522, 2017.
-  J. Lu, T. Issaranon, and D. Forsyth. Safetynet: Detecting and rejecting adversarial examples robustly. In IEEE International Conference on Computer Vision, pages 446–454, 2017.
-  A. Lucas, M. Iliadis, R. Molina, and A. K. Katsaggelos. Using deep neural networks for inverse problems in imaging: beyond analytical methods. IEEE Signal Processing Magazine, 35(1):20–36, 2018.
-  X. Ma, B. Li, Y. Wang, S. M. Erfani, S. Wijewickrema, G. Schoenebeck, M. E. Houle, D. Song, and J. Bailey. Characterizing adversarial subspaces using local intrinsic dimensionality. In International Conference on Learning Representations, 2018.
-  A. Madry, A. Makelov, L. Schmidt, D. Tsipras, and A. Vladu. Towards deep learning models resistant to adversarial attacks. In International Conference on Learning Representations, 2018.
-  M. T. McCann, K. H. Jin, and M. Unser. Convolutional neural networks for inverse problems in imaging: A review. IEEE Signal Processing Magazine, 34(6):85–95, Nov 2017.
-  S. Moosavi-Dezfooli, A. Fawzi, O. Fawzi, and P. Frossard. Universal adversarial perturbations. In IEEE Conference on computer vision and pattern recognition, pages 86–94, 07 2017.
-  S. M. Moosavi Dezfooli, A. Fawzi, and P. Frossard. Deepfool: a simple and accurate method to fool deep neural networks. In IEEE Conference on Computer Vision and Pattern Recognition, pages 2574–2582, 2016.
-  A. Nguyen, J. Yosinski, and J. Clune. Deep neural networks are easily fooled: High confidence predictions for unrecognizable images. IEEE Conference on Computer Vision and Pattern Recognition, pages 427–436, 2015.
-  P. Odifreddi. Classical Recursion Theory (Volume I). North–Holland Publishing Co., Amsterdam, 1989.
-  N. Papernot, P. McDaniel, X. Wu, S. Jha, and A. Swami. Distillation as a defense to adversarial perturbations against deep neural networks. In IEEE Symposium on Security and Privacy, pages 582–597. IEEE, 2016.
-  M. T. Ribeiro, S. Singh, and C. Guestrin. “Why should I trust you?”: Explaining the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 1135–1144. ACM, 2016.
-  H. Rogers, Jr. Theory of recursive functions and effective computability. MIT Press, Cambridge, MA, USA, 1987.
-  R. R. Selvaraju, M. Cogswell, A. Das, R. Vedantam, D. Parikh, and D. Batra. Grad-cam: Visual explanations from deep networks via gradient-based localization. In IEEE International Conference on Computer Vision, pages 618–626, 2017.
-  K. Simonyan, A. Vedaldi, and A. Zisserman. Deep inside convolutional networks: Visualising image classification models and saliency maps. International conference on learning representations, 2013.
-  S. Smale. Mathematical problems for the next century. Mathematical Intelligencer, 20:7–15, 1998.
-  D. Song, K. Eykholt, I. Evtimov, E. Fernandes, B. Li, A. Rahmati, F. Tramer, A. Prakash, and T. Kohno. Physical adversarial examples for object detectors. In 12th USENIX Workshop on Offensive Technologies (WOOT 18), 2018.
-  R. Strack. AI transforms image reconstruction. Nature Methods, 15:309, 04 2018.
-  J. Su, D. V. Vargas, and K. Sakurai. One pixel attack for fooling deep neural networks. IEEE Transactions on Evolutionary Computation, 2019.
-  J. Sun, H. Li, Z. Xu, et al. Deep ADMM-Net for compressive sensing MRI. In Advances in Neural Information Processing Systems, pages 10–18, 2016.
-  C. Szegedy, W. Zaremba, I. Sutskever, J. Bruna, D. Erhan, I. J. Goodfellow, and R. Fergus. Intriguing properties of neural networks. In International conference on learning representations, 2014.
-  A. M. Turing. On Computable Numbers, with an Application to the Entscheidungsproblem. Proc. London Math. Soc., S2-42(1):230, 1936.
-  A. M. Turing. I.-Computing machinery and intelligence. Mind, LIX(236):433–460, 1950.
-  D. V. Vargas and J. Su. Understanding the one-pixel attack: Propagation maps and locality analysis. arXiv preprint arXiv:1902.02947, 2019.
-  H. Wang and C.-N. Yu. A direct approach to robust deep learning using adversarial networks. In International Conference on Learning Representations, 2019.
-  M. D. Zeiler and R. Fergus. Visualizing and understanding convolutional networks. In European Conference on Computer Vision, pages 818–833, 2014.
-  B. Zhou, A. Khosla, A. Lapedriza, A. Oliva, and A. Torralba. Learning deep features for discriminative localization. In IEEE Conference on Computer Vision and Pattern Recognition, pages 2921–2929, 2016.
-  B. Zhu, J. Z. Liu, S. F. Cauley, B. R. Rosen, and M. S. Rosen. Image reconstruction by domain-transform manifold learning. Nature, 555(7697):487, 03 2018.
5.1.1 Proof of Proposition 3.2
Proof of Proposition 3.2.
First, we notice that the predicates in are unique, i.e. is either or not , and hence the classification is unambiguous. Next, we want to see that on . We have that for all . Hence, in the first case, we get that when , which means that since . The other case follows from the same argument. Last, we see that is non-empty. Let , and choose such that for . Such an will always exist since . Then and . ∎
5.1.2 Proof of Proposition 3.3
Proof of Proposition 3.3.
Let , and let
where and , and is the ReLU activation function. We start by noticing that is a two-layer neural network lying in with , where the coefficients in front of are all zero. Hence the support of equals . We also have that for and that the range of is . Next, let
be a sum of smaller networks with non-overlapping support in the first variable. We note that the coefficients in front of some of the functions are zero and could therefore be removed, but we do not do so. We notice that with , where the inclusion holds since .
Let and notice that if is odd, then , and if is even, then . Hence we conclude that , with , is a neural network such that for all .
Next, let be a constant so that , and let be a constant so that . Furthermore, let . Then
is a neural network in , . Finally, notice that for any we have . Hence, if , then , and if , then . Letting , we readily see that
moreover, by the same argument as above, we see that for all . ∎
5.1.3 Proof of Proposition 3.4
Proof of Proposition 3.4.
The proof is similar to the proof of Proposition 3.2. We again start with the uniqueness of . It is clear that the number of pixels is either larger than , or smaller than or equal to it. Hence, is well defined by the predicates and . Next, we recognise that for images in , the images with a vertical line sum up to , and those with a horizontal line sum up to . Hence, for all images, those with horizontal lines have values , and those with vertical lines have , and therefore coincide with the evaluation of the structure . Last, we see that is non-empty. All images in with vertical lines have pixel sum . Therefore, they also get classified as having a horizontal line, which differs from the classification by . ∎
5.2 Description of training procedures
In this section we describe the training procedures in detail, so that all the experiments become reproducible. A complete overview of the code, and the weights of the trained networks described in this paper, can be downloaded from https://github.com/vegarant/false_structures. Before we start, we would also like to point out that in each of the experiments there is an inherent randomness, due to the random initialization of the network weights and the “on the fly” generation of some of the training and test sets. Hence, rerunning the code might result in slightly different results. It is beyond the scope of this paper to investigate how often, and under what circumstances, a false structure is learned.
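To make individual runs repeatable despite this randomness, the pseudo-random seeds can be fixed before data generation and training. A minimal NumPy sketch of the idea (the seed value is an arbitrary illustration, not one used in our experiments):

```python
import numpy as np

# Fixing the seed makes "on the fly" data generation repeatable:
np.random.seed(0)
first_draw = np.random.rand(3)

np.random.seed(0)          # reset to the same seed ...
second_draw = np.random.rand(3)

# ... and the generated "data" is identical across runs.
assert np.array_equal(first_draw, second_draw)
```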
All the code has been implemented in TensorFlow version 1.13.1, and all layers use TensorFlow's default weight initializer, the so-called “glorot_uniform” initializer. If not otherwise stated, the default options for all other parameters are always assumed.
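For completeness, the “glorot_uniform” rule draws each weight uniformly from [-limit, limit] with limit = sqrt(6 / (fan_in + fan_out)). The following NumPy sketch mirrors this definition (an illustration of the rule, not TensorFlow's implementation; the dimensions are arbitrary):

```python
import numpy as np

def glorot_uniform(fan_in, fan_out, seed=0):
    """Sample a (fan_in, fan_out) weight matrix from U(-limit, limit),
    with limit = sqrt(6 / (fan_in + fan_out)) (Glorot & Bengio, 2010)."""
    limit = np.sqrt(6.0 / (fan_in + fan_out))
    rng = np.random.default_rng(seed)
    return rng.uniform(-limit, limit, size=(fan_in, fan_out))

W = glorot_uniform(2, 32)
assert W.shape == (2, 32)
# All entries respect the Glorot bound:
assert np.all(np.abs(W) <= np.sqrt(6.0 / (2 + 32)))
```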
5.2.1 Experiment I
Network architecture. We considered two-layer neural networks with a ReLU activation function between the two layers. The output dimension of the first layer was set to , and the output dimension of the final layer was 1. An observant reader of the proof of Proposition 3.3 might notice that it would be possible to decrease the output dimension of the first layer. In our initial tests we did this, but doing so made it substantially harder to learn the true structure .
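For concreteness, the forward pass of such a two-layer ReLU network can be sketched as follows; the input and hidden dimensions below are illustrative placeholders, not the ones used in the experiment:

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def two_layer_net(x, W1, b1, W2, b2):
    """Affine layer -> ReLU -> affine layer, with a scalar output (logit)."""
    return relu(x @ W1 + b1) @ W2 + b2

# Illustrative dimensions (placeholders, not those of the experiment):
d_in, d_hidden = 2, 32
rng = np.random.default_rng(0)
W1 = rng.standard_normal((d_in, d_hidden))
b1 = np.zeros(d_hidden)
W2 = rng.standard_normal((d_hidden, 1))
b2 = np.zeros(1)

x = rng.standard_normal((5, d_in))       # a batch of 5 inputs
logits = two_layer_net(x, W1, b1, W2, b2)
assert logits.shape == (5, 1)            # one logit per input
```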
Training parameters. We trained the networks using the cross-entropy loss function for binary classification, as described in Proposition 3.3. All networks were trained using the ADAM optimizer, running 30000 epochs. The network trained on the set containing only 7 samples used a batch size of 7; the three other networks used a batch size of 50, with a shuffling of the data samples in each epoch.
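The loss referred to above is the standard sigmoid cross-entropy for binary labels. A minimal NumPy version of the quantity being minimized (TensorFlow computes the same value in a numerically stable form; the logits and labels below are arbitrary illustrations):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def binary_cross_entropy(logits, labels):
    """Mean cross-entropy between sigmoid(logits) and 0/1 labels."""
    p = sigmoid(logits)
    eps = 1e-12  # guard against log(0)
    return -np.mean(labels * np.log(p + eps)
                    + (1 - labels) * np.log(1 - p + eps))

logits = np.array([2.0, -1.0, 0.0])
labels = np.array([1.0, 0.0, 1.0])
loss = binary_cross_entropy(logits, labels)  # approx. 0.3778
```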
Training data. The training sets considered are and , and are described in the main document. We point out that for each of the training steps, each of these sets was drawn at random. Hence there is some randomness in the experiment itself.
5.2.2 Experiment II
Network architecture. The trained network had the following architecture.
where the two 2D convolutional layers (Conv2D) had a kernel size of 5, strides equal to , and padding equal to “same”. The first of the convolutional layers had 24 filters, whereas the last had 48 filters. After each of the convolutional layers we used a ReLU activation function. For the max pooling layers a pool size of was used, with padding equal to “same”. After the final max pool layer, the output was reshaped into a vector and fed to a dense layer. The output dimension of the first dense layer was 10, whereas for the last it was 1.
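With “same” padding, each layer maps a spatial size n to ceil(n / stride). The following sketch traces the spatial shapes through a conv/pool stack of this form; the 28×28 input size and all stride values are assumptions for illustration, not parameters taken from the experiment:

```python
import math

def same_out(n, stride):
    """Spatial output size of a layer with 'same' padding."""
    return math.ceil(n / stride)

# Assumed: 28x28 input, convolutions with stride 1, max pools with stride 2.
n = 28
n = same_out(n, 1)  # Conv2D, 24 filters -> 28
n = same_out(n, 2)  # MaxPool            -> 14
n = same_out(n, 1)  # Conv2D, 48 filters -> 14
n = same_out(n, 2)  # MaxPool            -> 7
flat = n * n * 48   # flattened vector fed to the dense layers
assert (n, flat) == (7, 2352)
```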
Training parameters. We trained the network using the cross-entropy loss function for binary classification, as described in Proposition 3.3. The network was trained using the ADAM optimizer for 10 epochs, with a batch size of 60. We ran the code a few times to capture the exact false structure described in the main document, as the training sometimes captures a slightly different false structure, which does not have exactly 0% success rate on and .
Training data. The network was trained on the training set which contains exactly the 60 unique elements in for which .