1 Introduction
Deep learning at scale has led to breakthroughs on important problems in computer vision (Krizhevsky et al. (2012)), natural language processing (Wu et al. (2016)), and robotics (Levine et al. (2015)). Shortly thereafter, the intriguing phenomenon of adversarial examples was observed: a seemingly ubiquitous property of machine learning models where perturbations of the input that are imperceptible to humans reliably lead to confident incorrect classifications (Szegedy et al. (2013), Goodfellow et al. (2014)). What has ensued is a standard story from the security literature: a game of cat and mouse where defenses are proposed only to be quickly defeated by stronger attacks (Athalye et al. (2018)). This has led researchers to develop methods that are provably robust under specific attack models (Wong and Kolter (2018), Sinha et al. (2018), Raghunathan et al. (2018), Mirman et al. (2018)) as well as empirically strong heuristics (Madry et al. (2018)). As machine learning proliferates into society, including security-critical settings like health care (Esteva et al. (2017)) or autonomous vehicles (Codevilla et al. (2018)), it is crucial to develop methods that allow us to understand the vulnerability of our models and design appropriate countermeasures.
In this paper, we propose a geometric framework for analyzing the phenomenon of adversarial examples. We leverage the observation that datasets encountered in practice exhibit low-dimensional structure despite being embedded in very high-dimensional input spaces. This property is colloquially referred to as the "Manifold Hypothesis": the idea that low-dimensional structure of 'real' data leads to tractable learning. We model data as being sampled from class-specific low-dimensional manifolds embedded in a high-dimensional space. We consider a threat model where an adversary may choose any point on the data manifold to perturb by $\epsilon$ in order to fool a classifier. To be robust to such an adversary, a classifier must be correct everywhere in an $\epsilon$-tube around the data manifold. Observe that, even though the data manifold is a low-dimensional object, this tube has the same dimension as the entire space in which the manifold is embedded. Our analysis argues that adversarial examples are a natural consequence of learning a decision boundary that classifies all points on a low-dimensional data manifold correctly, but classifies many points near the manifold incorrectly. The high codimension, the difference between the dimension of the data manifold and the dimension of the embedding space, is a key source of the pervasiveness of adversarial examples.
Our paper makes the following contributions. First, we develop a geometric framework, inspired by the manifold reconstruction literature, that formalizes the manifold hypothesis described above and our attack model. Second, we highlight the role codimension plays in vulnerability to adversarial examples: as the codimension increases, there are an increasing number of directions off the data manifold in which to construct adversarial perturbations. Prior work has instead attributed vulnerability to adversarial examples to input dimension (Gilmer et al. (2018), Shafahi et al. (2019)). Third, we apply this framework to analyze the standard approach to adversarial training. We define a theoretical model of adversarial training (see Definition 1), which guarantees correctness in the $\epsilon$-balls centered on the training data, and prove that this model is insufficient to learn robust decision boundaries with realistic amounts of data. We show that nearest neighbor classifiers do not suffer from this insufficiency, due to geometric properties of their decision boundary away from the data. Fourth, we propose a modification to the standard paradigm of adversarial training: we replace the $\epsilon$-ball constraint with the Voronoi cells of the training data, which have several advantages detailed in Section 5. In particular, we need not set the maximum perturbation size $\epsilon$ as part of the training procedure. In Section 6 we show that adversarial training with Voronoi constraints gives state-of-the-art robustness results on MNIST and competitive results on CIFAR-10.
2 Related Work
2.1 Adversarial Examples
Some previous work has considered the relationship between adversarial examples and high-dimensional geometry. Franceschi et al. (2018) explore the robustness of classifiers to random noise in terms of distance to the decision boundary, under the assumption that the decision boundary is locally flat. Gilmer et al. (2018) experimentally evaluated the setting of two concentric under-sampled spheres embedded in $\mathbb{R}^d$, and concluded that adversarial examples occur on the data manifold. In contrast, we present a geometric framework for proving robustness guarantees for learning algorithms that makes no assumptions on the decision boundary. We carefully sample the data manifold in order to highlight the importance of codimension; adversarial examples exist even when the manifold is perfectly classified. Additionally, we explore the importance of the spacing between the constituent data manifolds, sampling requirements for learning algorithms, and the relationship between model complexity and robustness.
Wang et al. (2018) explore the robustness of $k$-nearest neighbor classifiers to adversarial examples. In the setting where the Bayes optimal classifier is uncertain about the true label of each point, they show that $k$-nearest neighbors is not robust if $k$ is a small constant. They also show that nearest neighbors is robust if $k$ is allowed to grow with the size of the dataset. Using our geometric framework we show a complementary result: in the setting where each point is certain of its label, nearest neighbors is robust to adversarial examples.
The decision and medial axes defined in Section 3 are maximum margin decision boundaries. Hard-margin SVMs define a linear separator with maximum margin, that is, maximum distance from the training data (Cortes and Vapnik (1995)). Kernel methods allow for maximum margin decision boundaries that are nonlinear by using additional features to project the data into a higher-dimensional feature space (Shawe-Taylor and Cristianini (2004)). The decision and medial axes generalize the notion of maximum margin to account for the arbitrary curvature of the data manifolds. There have been attempts to incorporate maximum margins into deep learning (Sun et al. (2016), Liu et al. (2016), Liang et al. (2017), Elsayed et al. (2018)), often by designing loss functions that encourage large margins at either the output (Sun et al. (2016)) or at any layer (Elsayed et al. (2018)). In contrast, the decision axis is defined on the input space, and we use it as an analysis tool for proving guarantees.
2.2 Manifold Reconstruction
Manifold reconstruction is the problem of discovering the structure of a $k$-dimensional manifold embedded in $\mathbb{R}^d$, given only a set of points sampled from the manifold. A large vein of research in manifold reconstruction develops algorithms that are provably good: if the points sampled from the underlying manifold are sufficiently dense, these algorithms are guaranteed to produce a geometrically accurate representation of the unknown manifold with the correct topology. The output of these algorithms is often a simplicial complex, a set of simplices such as triangles, tetrahedra, and higher-dimensional variants, that approximates the unknown manifold. In particular, these algorithms output subsets of the Delaunay triangulation, which, along with its geometric dual the Voronoi diagram, has properties that aid in proving geometric and topological guarantees (Edelsbrunner and Shah (1997)).
The field first focused on curve reconstruction in $\mathbb{R}^2$ (Amenta et al. (1998)) and subsequently in $\mathbb{R}^3$ (Dey and Kumar (1999)). Soon after, algorithms were developed for surface reconstruction in $\mathbb{R}^3$, both in the noise-free setting (Amenta and Bern (1999), Amenta et al. (2002)) and in the presence of noise (Dey and Goswami (2004)). We borrow heavily from the analysis tools of these early works, including the medial axis and the reach. However, we emphasize that we have adapted these tools to the learning setting. To the best of our knowledge, our work is the first to consider the medial axis under different norms.
In higher-dimensional embedding spaces (large $d$), manifold reconstruction algorithms face the curse of dimensionality. In particular, the Delaunay triangulation, which forms the bedrock of algorithms in low dimensions, can have up to $\Theta(n^{\lceil d/2 \rceil})$ simplices on $n$ vertices in $\mathbb{R}^d$. To circumvent the curse of dimensionality, algorithms were proposed that compute subsets of the Delaunay triangulation restricted to the $k$-dimensional tangent spaces of the manifold at each sample point (Boissonnat and Ghosh (2014)). Unfortunately, progress on higher-dimensional manifolds has been limited due to the presence of so-called "sliver" simplices, poorly shaped simplices that cause inconsistencies between the local triangulations constructed in each tangent space (Cheng et al. (2005), Boissonnat and Ghosh (2014)). Techniques that provably remove sliver simplices have prohibitive sampling requirements (Cheng et al. (2000), Boissonnat and Ghosh (2014)). Even in the special case of surfaces ($k = 2$) embedded in high dimensions (large $d$), algorithms with practical sampling requirements have only recently been proposed (Khoury and Shewchuk (2016)). Our use of tubular neighborhoods as a tool for analysis is borrowed from Dey et al. (2005) and Khoury and Shewchuk (2016).
In this paper we are interested in learning robust decision boundaries, not reconstructing the underlying data manifolds, and so we avoid the use of Delaunay triangulations and their difficulties entirely. In Section 4 we present robustness guarantees for two learning algorithms in terms of a sampling condition on the underlying manifold. These sampling requirements scale with the dimension $k$ of the underlying manifold, not with the dimension $d$ of the embedding space.
3 The Geometry of Data
We model data as being sampled from a set of low-dimensional manifolds (with or without boundary) embedded in a high-dimensional space $\mathbb{R}^d$. We use $k$ to denote the dimension of a manifold $\mathcal{M}$. The special case of a $1$-manifold is called a curve, and a $2$-manifold is a surface. The codimension of $\mathcal{M}$ is $d - k$, the difference between the dimension of the manifold and the dimension of the embedding space. The "Manifold Hypothesis" is the observation that in practice, data is often sampled from manifolds, usually of high codimension.
In this paper we are primarily interested in the classification problem. Thus we model data as being sampled from $C$ class manifolds $\mathcal{M}_1, \ldots, \mathcal{M}_C$, one for each class. When we wish to refer to the entire space from which a dataset is sampled, we refer to the data manifold $\mathcal{M} = \mathcal{M}_1 \cup \cdots \cup \mathcal{M}_C$. We often work with a finite sample of points $X \subset \mathcal{M}$, and we write $X_i$ for the samples drawn from $\mathcal{M}_i$. Each sample point has an accompanying class label indicating which manifold the point is sampled from.
Consider a ball $B$ centered at some point $q \in \mathbb{R}^d$, and imagine growing $B$ by increasing its radius starting from zero. For nearly all starting points $q$, the ball eventually intersects one, and only one, of the $\mathcal{M}_i$'s. Thus the nearest point to $q$ on $\mathcal{M}$, in the norm $\|\cdot\|_p$, lies on that $\mathcal{M}_i$.
The decision axis $\Delta$ of $\mathcal{M}$ is the set of points $q$ such that the boundary of the ball $B$ centered at $q$ intersects two or more of the $\mathcal{M}_i$, but the interior of $B$ does not intersect $\mathcal{M}$ at all. In other words, the decision axis is the set of points that have two or more closest points, in the norm $\|\cdot\|_p$, on distinct class manifolds. See Figure 1. The decision axis is inspired by the medial axis, which was first proposed by Blum (1967) in the context of image analysis and subsequently modified for the purposes of curve and surface reconstruction by Amenta et al. (1998; 2002). We have modified the definition to account for multiple class manifolds and have renamed our variant in order to avoid confusion in the future.
The decision axis $\Delta$ can intuitively be thought of as a decision boundary that is optimal in the following sense. First, $\Delta$ separates the class manifolds when they do not intersect. Second, each point of $\Delta$ is as far away from the class manifolds as possible in the norm $\|\cdot\|_p$. As shown in the leftmost example in Figure 1, in the case of two linearly separable circles of equal radius, the decision axis is exactly the line that separates the data with maximum margin. For arbitrary manifolds, $\Delta$ generalizes the notion of maximum margin to account for the curvature of the class manifolds.
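The equidistance characterization of the decision axis can be checked numerically on finite samples. The following sketch is purely illustrative (the sample sizes, radii, and tolerance are our own choices, not values from the paper): it labels a query point as approximately on the decision axis when its distances to the two nearest class manifolds agree.

```python
import numpy as np

def near_decision_axis(q, class_samples, tol=1e-2):
    """A query point lies (approximately) on the decision axis when its
    two smallest distances to distinct class manifolds are equal."""
    dists = sorted(np.min(np.linalg.norm(S - q, axis=1)) for S in class_samples)
    return abs(dists[0] - dists[1]) < tol

# Dense samples of two concentric circles with radii 1 and 3.
theta = np.linspace(0.0, 2.0 * np.pi, 2000, endpoint=False)
circle = np.stack([np.cos(theta), np.sin(theta)], axis=1)
c1, c2 = 1.0 * circle, 3.0 * circle
# For concentric circles the decision axis is the circle of radius (1 + 3) / 2 = 2.
```

For two concentric circles the test accepts points on the midway circle and rejects points closer to one class, mirroring the maximum margin intuition above.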
Let $S \subseteq \mathbb{R}^d$ be any set. The reach of $S$ is defined as $\mathrm{rch}_p(S) = \inf_{x \in S,\, y \in \Delta} \|x - y\|_p$, the distance from $S$ to the decision axis. When $S$ is compact, the reach is achieved by the point on $S$ that is closest to $\Delta$ under the $\ell_p$ norm. We will drop $p$ from the notation when it is understood from context.
Finally, an $\epsilon$-tubular neighborhood of $\mathcal{M}$ is defined as $\mathcal{M}^{\epsilon,p} = \{x \in \mathbb{R}^d : \inf_{y \in \mathcal{M}} \|x - y\|_p < \epsilon\}$. That is, $\mathcal{M}^{\epsilon,p}$ is the set of all points whose distance to $\mathcal{M}$, under the metric induced by $\|\cdot\|_p$, is less than $\epsilon$. Note that while $\mathcal{M}$ is $k$-dimensional, $\mathcal{M}^{\epsilon,p}$ is always $d$-dimensional. Tubular neighborhoods are how we rigorously define adversarial examples. Consider a classifier $f$ for $\mathcal{M}$. An adversarial example is a point $\hat{x} \in \mathcal{M}^{\epsilon,p}$ such that $f(\hat{x})$ differs from the label of the point of $\mathcal{M}$ nearest to $\hat{x}$. A classifier $f$ is robust to all such adversarial examples when $f$ correctly classifies not only $\mathcal{M}$, but all of $\mathcal{M}^{\epsilon,p}$. In this paper we will be primarily concerned with exploring the conditions under which we can provably learn a decision boundary that correctly classifies $\mathcal{M}^{\epsilon,p}$. When $\epsilon < \mathrm{rch}_p(\mathcal{M})$, the decision axis $\Delta$ is one decision boundary that correctly classifies $\mathcal{M}^{\epsilon,p}$. Throughout the remainder of the paper we will drop the $p$ in $\mathcal{M}^{\epsilon,p}$ from the notation, instead writing $\mathcal{M}^{\epsilon}$; the norm will always be clear from context.
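To make the definitions concrete, the following sketch searches for an adversarial example for a toy classifier whose decision boundary hugs a one-dimensional manifold (the $x$-axis in $\mathbb{R}^2$): the classifier is correct on the manifold itself yet misclassifies points in the $\epsilon$-tube just above it. The classifier, the random search, and all parameters are our own illustrative assumptions, not constructions from the paper.

```python
import numpy as np

def random_linf_attack(f, x, label, eps, trials=1000, seed=0):
    """Search the l-infinity ball of radius eps around x for a point
    that the classifier f labels differently from `label`."""
    rng = np.random.default_rng(seed)
    for _ in range(trials):
        x_hat = x + rng.uniform(-eps, eps, size=x.shape)
        if f(x_hat) != label:
            return x_hat            # an adversarial example
    return None

# Toy classifier whose decision boundary hugs the manifold (the x-axis):
# it is correct on the manifold but wrong immediately above it.
f = lambda p: 0 if p[1] <= 0.0 else 1
x = np.array([0.5, 0.0])            # a point on the class-0 manifold
adv = random_linf_attack(f, x, label=0, eps=0.1)
```

Even though `f` classifies every point of the manifold correctly, roughly half of the tube around `x` is misclassified, so the search succeeds almost immediately.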
4 Limitations of Adversarial Training
Adversarial training, the process of training on adversarial examples generated in $\epsilon$-balls around the training data, is a very natural approach to constructing robust models (Goodfellow et al. (2014), Madry et al. (2018)). In our notation this corresponds to training on samples drawn from $\bigcup_{x \in X} B_p(x, \epsilon)$ for some $\epsilon > 0$. Despite its simplicity, adversarial training has proven to be one of the most successful approaches to training robust deep networks. While natural, we show that there are simple settings where this approach is much less sample-efficient than other classification algorithms, if the only guarantee is correctness in $\bigcup_{x \in X} B_p(x, \epsilon)$.
Definition 1 (Adversarial Training).
Let $X \subset \mathcal{M}$ be a finite training set. Define an adversarial training algorithm $\mathcal{A}_{\epsilon,p}$ as a learning algorithm that, given $X$, outputs a model $f_{\mathcal{A}}$ such that for every $x \in X$ with label $c_x$, and every $\hat{x} \in B_p(x, \epsilon)$, we have $f_{\mathcal{A}}(\hat{x}) = c_x$. Here $B_p(x, \epsilon)$ denotes the ball centered at $x$ of radius $\epsilon$ in the $\ell_p$ norm.
$\mathcal{A}_{\epsilon,p}$ is our theoretical model of the standard approach to adversarial training (Goodfellow et al. (2014), Madry et al. (2018)). In words, $\mathcal{A}_{\epsilon,p}$ learns a model that outputs for any perturbation of $x$ up to $\epsilon$ the same label it outputs for $x$. We will use $\mathcal{A}_{\epsilon,p}$ to analyze the limitations of the standard approach to adversarial training; in particular we will show that $\mathcal{A}_{\epsilon,p}$ is much less sample-efficient at learning a robust decision boundary than other classification algorithms.
Theorem 1.
There exists a classification algorithm that, for a particular choice of $\mathcal{M}$, correctly classifies $\mathcal{M}^\epsilon$ using exponentially fewer samples than are required for $\mathcal{A}_{\epsilon,p}$ to correctly classify $\mathcal{M}^\epsilon$.
The reason for the sample inefficiency of $\mathcal{A}_{\epsilon,p}$ is the use of the $\epsilon$-balls centered on the data to propagate labels. As we will show below, the union of the $\epsilon$-balls around the data covers a vanishingly small fraction of $\mathcal{M}^\epsilon$ in high codimension settings. Thus the adversary is restricted to constructing adversarial examples in a negligible fraction of the neighborhood around the data manifold. In contrast, other algorithms, such as nearest neighbor classifiers, propagate labels using different geometric regions, such as the Voronoi cells, which we will define in Section 5. The main takeaway of this paper is that the use of $\epsilon$-balls centered on the data leads to suboptimal results both in theory and, as we will show in Section 6, in practice.
Theorem 1 follows from Theorems 2 and 3, in which we prove that a nearest neighbor classifier is one such classification algorithm. Nearest neighbor classifiers are naturally robust in high codimensions because the Voronoi cells of $X$ are elongated in the directions normal to $\mathcal{M}$ when $X$ is dense (Dey (2007)).
Before we state Theorem 2 we must introduce a sampling condition on $X$. A $\delta$-cover of a manifold $\mathcal{M}$ in the $\ell_p$ norm is a finite set of points $X$ such that for every $x \in \mathcal{M}$ there exists $\hat{x} \in X$ with $\|x - \hat{x}\|_p \le \delta$. Theorem 2 gives a sufficient sampling condition for $\mathcal{A}_{\epsilon,p}$ to correctly classify $\mathcal{M}^\epsilon$ for all manifolds $\mathcal{M}$. Theorem 2 also provides a sufficient sampling condition for a nearest neighbor classifier to correctly classify $\mathcal{M}^\epsilon$, which is substantially less dense than that of $\mathcal{A}_{\epsilon,p}$. Thus different classification algorithms have different sampling requirements in high codimensions.
Theorem 2.
Let $\mathcal{M} \subset \mathbb{R}^d$ be a $k$-dimensional manifold and let $\epsilon = \mathrm{rch}(\Delta) - \delta$ for any $0 < \delta < \mathrm{rch}(\Delta)$. Let $f_{nn}$ be a nearest neighbor classifier and let $f_{\mathcal{A}}$ be the output of a learning algorithm $\mathcal{A}_{\epsilon,p}$ as described above. Let $X_{nn}$ and $X_{\mathcal{A}}$ denote the training sets for $f_{nn}$ and $f_{\mathcal{A}}$, respectively. We have the following sampling guarantees:

1. If $X_{nn}$ is a $\delta$-cover of $\mathcal{M}$, then $f_{nn}$ correctly classifies $\mathcal{M}^\epsilon$.

2. If $X_{\mathcal{A}}$ is a $(\delta/2)$-cover of $\mathcal{M}$, then $f_{\mathcal{A}}$ correctly classifies $\mathcal{M}^\epsilon$.
The bounds on $\delta$ in Theorem 2 are sufficient, but they are not always necessary. There exist manifolds where the bounds in Theorem 2 are pessimistic, and less dense samples, corresponding to larger values of $\delta$, would suffice.
Next we will show a setting where bounds on $\delta$ similar to those in Theorem 2 are necessary. In this setting, the difference of a constant factor in $\delta$ between the sampling requirements of $f_{nn}$ and $\mathcal{A}_{\epsilon,p}$ leads to an exponential gap between the sizes of $X_{nn}$ and $X_{\mathcal{A}}$ necessary to achieve identical robustness.
Define $\mathcal{M}_1$ as a bounded region of a $k$-dimensional plane in $\mathbb{R}^d$, with the remaining $d - k$ coordinates fixed; similarly define $\mathcal{M}_2$ as a parallel copy of $\mathcal{M}_1$, translated along one of the normal coordinates. Note that the decision axis $\Delta$ of $\mathcal{M} = \mathcal{M}_1 \cup \mathcal{M}_2$ lies in the parallel $k$-dimensional plane midway between them, so $\mathrm{rch}(\Delta)$ is half the separation between the two planes. In the $\ell_\infty$ norm we can show that the gap in Theorem 2 is necessary for $\mathcal{M}$. Furthermore the bounds we derive on covers of $\mathcal{M}$, for both $f_{nn}$ and $\mathcal{A}_{\epsilon,\infty}$, are tight. Combined with well-known properties of covers, we get that the ratio $|X_{\mathcal{A}}| / |X_{nn}|$ is exponential in $k$.
Theorem 3.
Let $\mathcal{M} = \mathcal{M}_1 \cup \mathcal{M}_2$ be as described above. Let $X_{nn}$ and $X_{\mathcal{A}}$ be minimum-size training sets necessary to guarantee that $f_{nn}$ and $f_{\mathcal{A}}$, respectively, correctly classify $\mathcal{M}^\epsilon$. Then we have that
(1) 
We have shown that both $\mathcal{A}_{\epsilon,p}$ and nearest neighbor classifiers learn robust decision boundaries when provided sufficiently dense samples of $\mathcal{M}$. However, there are settings where nearest neighbors is exponentially more sample-efficient than $\mathcal{A}_{\epsilon,p}$ in achieving the same amount of robustness.
To shed light on why the ball-based learning algorithm $\mathcal{A}_{\epsilon,p}$ is so much less sample-efficient than nearest neighbor classifiers, we show that the volume of $\bigcup_{x \in X} B(x, \epsilon)$ is often a vanishingly small percentage of the volume of $\mathcal{M}^\epsilon$. For our theoretical model this means that only a vanishingly small fraction of $\mathcal{M}^\epsilon$ is guaranteed to have the correct label, and in practice it means that the adversary in adversarial training does not have the freedom to generate adversarial examples in the entirety of $\mathcal{M}^\epsilon$. For the remainder of this section we will consider the $\ell_2$ norm.
Theorem 4.
In high codimension, even moderate undersampling of $\mathcal{M}$ leads to a significant loss of coverage of $\mathcal{M}^\epsilon$, because the volume of the union of balls centered at the samples shrinks faster than the volume of $\mathcal{M}^\epsilon$. Theorem 4 states that in high codimensions the fraction of $\mathcal{M}^\epsilon$ covered by $\bigcup_{x \in X} B(x, \epsilon)$ goes to $0$. Almost nothing is covered by the union of balls for training set sizes that are realistic in practice. Thus $\bigcup_{x \in X} B(x, \epsilon)$ is a poor model of $\mathcal{M}^\epsilon$, and high classification accuracy on the former does not imply high accuracy in the latter.
Approaches that produce robust classifiers by generating adversarial examples in the $\epsilon$-balls centered on the training set do not accurately model $\mathcal{M}^\epsilon$, and it would take many more samples to do so. If a method behaves arbitrarily outside of the balls that define it, adversarial examples will still exist, and it will likely be easy to find them. The reason deep learning has performed so well on a variety of tasks, in spite of the brittleness made apparent by adversarial examples, is that it is much easier to perform well on $\mathcal{M}$ than it is to perform well on $\mathcal{M}^\epsilon$.
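The coverage collapse described above is easy to observe numerically. The following Monte Carlo sketch is our own illustration (the grid spacing, radii, and dimensions are arbitrary choices, not the paper's experimental values): it estimates the fraction of the $\epsilon$-tube around a $k$-dimensional plane covered by $\epsilon$-balls centered on a fixed grid of samples, at low and at high codimension.

```python
import numpy as np

def tube_coverage(k, d, eps=0.5, spacing=0.5, n_mc=2000, seed=0):
    """Monte Carlo estimate of the fraction of the eps-tube around a
    k-dimensional plane in R^d that is covered by eps-balls centered
    on a regular grid of samples with the given spacing."""
    rng = np.random.default_rng(seed)
    grid_1d = np.arange(0.0, 2.0 + 1e-9, spacing)
    mesh = np.meshgrid(*([grid_1d] * k), indexing="ij")
    X = np.zeros((grid_1d.size ** k, d))
    X[:, :k] = np.stack([m.ravel() for m in mesh], axis=1)
    covered = 0
    for _ in range(n_mc):
        p = np.zeros(d)
        p[:k] = rng.uniform(0.0, 2.0, size=k)            # point on the plane
        normal = rng.normal(size=d - k)                  # random normal direction
        normal /= np.linalg.norm(normal)
        radius = eps * rng.uniform() ** (1.0 / (d - k))  # uniform in the normal ball
        p[k:] = radius * normal                          # push into the eps-tube
        if np.min(np.linalg.norm(X - p, axis=1)) < eps:
            covered += 1
    return covered / n_mc

low = tube_coverage(k=2, d=3)      # codimension 1
high = tube_coverage(k=2, d=53)    # codimension 51
```

Because the volume of each normal fiber concentrates near radius $\epsilon$ as $d - k$ grows, the same grid of samples covers far less of the tube in high codimension.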
5 Adversarial Training with Voronoi Constraints
Madry et al. (2018) formalize adversarial training by introducing the robust objective

$$\min_\theta \; \mathbb{E}_{(x,y) \sim \mathcal{D}} \left[ \max_{\hat{x} \in B(x, \epsilon)} L(\theta, \hat{x}, y) \right], \qquad (3)$$

where $\mathcal{D}$ is the data distribution, $L$ is the training loss, and $B(x, \epsilon)$ is a ball centered at $x$ with radius $\epsilon$. Their main contribution was the use of a strong adversary which uses projected gradient descent to solve the inner optimization problem.
In Section 4, we showed that adversarial training formalized using the $\epsilon$-ball constraint is sample inefficient, because the adversary is restricted to a negligible fraction of the tubular neighborhood around the data distribution. To remedy this we replace the $\epsilon$-ball constraint with a different geometric constraint, namely the Voronoi cell at $x$. That is, we formalize the adversarial training objective as

$$\min_\theta \; \mathbb{E}_{(x,y) \sim \mathcal{D}} \left[ \max_{\hat{x} \in \mathrm{Vor}^p(x)} L(\theta, \hat{x}, y) \right], \qquad (4)$$

where

$$\mathrm{Vor}^p(x) = \{ \hat{x} \in \mathbb{R}^d : \|\hat{x} - x\|_p \le \|\hat{x} - z\|_p \ \text{for all } z \in X \setminus \{x\} \}. \qquad (5)$$

In words, the Voronoi cell $\mathrm{Vor}^p(x)$ of $x$ is the set of all points in $\mathbb{R}^d$ that are at least as close to $x$ as to any other sample in $X$.
The Voronoi cell constraint has many advantages over the $\epsilon$-ball constraint. First, the Voronoi cells partition the entirety of $\mathbb{R}^d$, and so the interiors of Voronoi cells generated by samples from different classes do not intersect. This is in contrast to $\epsilon$-balls, which may intersect for sufficiently large $\epsilon$ and cause problems for optimization. Furthermore, for dense samples the Voronoi cells are elongated in the directions normal to the data manifold, and thus are well suited to high codimension settings. Second, the size of each Voronoi cell adapts to the data distribution: a cell generated by a sample close to samples from a different class manifold is smaller, while cells further away are larger. Thus we do not need to set a value for $\epsilon$ in the optimization procedure; the constraint naturally adapts to the largest perturbation size allowable locally on the data manifold. Third, the Voronoi cells enjoy the sample efficiency of nearest neighbor classifiers established in Theorem 2, because the Voronoi cells define the nearest neighbor decision boundary. In summary, the Voronoi constraint gives the adversary the freedom to explore the entirety of the tubular neighborhood around $\mathcal{M}$.
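Membership in a Voronoi cell, as in Equation 5, reduces to pairwise distance comparisons against the other training samples. A minimal sketch in the Euclidean norm (the sample coordinates below are illustrative):

```python
import numpy as np

def in_voronoi_cell(p, i, X):
    """Equation 5 as a membership test: p lies in the (Euclidean)
    Voronoi cell of X[i] iff X[i] is among the nearest samples to p."""
    dists = np.linalg.norm(X - p, axis=1)
    return dists[i] <= dists.min() + 1e-12

X = np.array([[0.0, 0.0],   # class A
              [2.0, 0.0],   # class B
              [1.0, 2.0]])  # class B
```

Points equidistant from two samples belong to both closed cells; their interiors, however, are disjoint, which is what makes the constraint safe for samples of different classes.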
At each iteration we must solve the inner optimization problem

$$\max_{\hat{x}} \; L(\theta, \hat{x}, y) \quad \text{subject to} \quad \|\hat{x} - x\|_p \le \|\hat{x} - z\|_p \ \text{for all } z \in X \setminus \{x\}. \qquad (6)$$
When $p = 2$ the Voronoi cells are convex, and so we can project a point onto a Voronoi cell by solving a quadratic program. Thus we can solve Problem 6 using projected gradient descent, as in Madry et al. (2018). When $p \neq 2$ the Voronoi cells are not necessarily convex. In this setting there are many approaches, such as barrier and penalty methods, that one might employ to approximately solve Problem 6 (Boyd and Vandenberghe (2004)). However, we found that the following heuristic is both fast and works well in practice. At each iteration of the outer training loop, for each training sample $x$ in a batch, we generate an adversarial example by taking iterative steps in the direction of the gradient starting from $x$. Instead of projecting onto a constraint after each iterative step, we check whether any of the Voronoi constraints of $x$ shown in Equation 5 are violated. If no constraint is violated we perform the iterative update; otherwise we simply stop performing updates for $x$.
Problem 6 has $|X| - 1$ constraints, one for each other sample in $X$. In practice, however, very few samples contribute to the Voronoi cell of $x$. At each iteration, we perform a nearest neighbor search query to find the $m$ nearest samples to $x$ in each other class; that is, we search for $m(C - 1)$ samples, where $C$ is the number of classes. We do not impose constraints from samples in the same class as $x$; there is no benefit to restricting the adversary's movement within the tubular neighborhood around the class manifold of $x$. In our experiments we fix $m$ to a small constant.
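The heuristic inner loop described above can be sketched as follows. This is a simplified illustration: `grad_loss` stands in for backpropagation through the actual model, and the signed-gradient step, step size, and iteration count are our own choices, not the paper's training configuration.

```python
import numpy as np

def voronoi_constrained_attack(x, grad_loss, other_class_samples,
                               step=0.1, iters=40):
    """Take signed-gradient ascent steps from x, stopping before any
    Voronoi constraint (Equation 5) against the retained other-class
    samples would be violated. `grad_loss` maps a point to the gradient
    of the training loss there."""
    p = x.copy()
    for _ in range(iters):
        q = p + step * np.sign(grad_loss(p))
        # q must remain at least as close to x as to every retained
        # sample from the other classes.
        if np.any(np.linalg.norm(other_class_samples - q, axis=1)
                  < np.linalg.norm(q - x)):
            break                   # the next step would leave the cell
        p = q
    return p

# One retained other-class sample at (2, 0); the loss gradient pushes the
# perturbation toward it, and the iterates stop at the cell boundary.
adv = voronoi_constrained_attack(np.array([0.0, 0.0]),
                                 lambda p: np.array([1.0, 0.0]),
                                 np.array([[2.0, 0.0]]))
```

Note that no maximum perturbation size appears anywhere: the stopping condition is supplied entirely by the Voronoi constraints.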
6 Experiments
Datasets. To investigate how the codimension of a dataset influences robustness, we introduce two synthetic datasets, Circles and Planes, which allow us to carefully vary the codimension while maintaining dense samples. The Circles dataset consists of two concentric circles in the plane, the first with radius $r_1$ and the second with a larger radius $r_2$, embedded in $\mathbb{R}^d$. We densely sample random points on each circle for both the training and the test sets. The Planes dataset consists of two $2$-dimensional planes, the first in the subspace $x_d = 0$ and the second in a parallel subspace $x_d = h$, embedded in $\mathbb{R}^d$. The first two axes of both planes are bounded, while the remaining coordinates are $0$. We sample the training set at the vertices of a regular grid, and the test set at the centers of the grid cells. We also evaluate on MNIST and CIFAR-10.
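Sampling routines in the spirit of the two synthetic datasets might look as follows. This is a sketch under assumed parameters: the radii, plane separation, and coordinate bounds are illustrative defaults, not the paper's values.

```python
import numpy as np

def circles(n, d, r_inner=1.0, r_outer=2.0, seed=0):
    """Sample n points per class from two concentric circles embedded
    in R^d (codimension d - 2); the extra d - 2 coordinates are 0."""
    rng = np.random.default_rng(seed)
    theta = rng.uniform(0.0, 2.0 * np.pi, size=2 * n)
    r = np.repeat([r_inner, r_outer], n)
    X = np.zeros((2 * n, d))
    X[:, 0] = r * np.cos(theta)
    X[:, 1] = r * np.sin(theta)
    y = np.repeat([0, 1], n)
    return X, y

def planes(n, d, sep=1.0, seed=0):
    """Sample n points per class from two axis-aligned 2D planes in R^d,
    separated by `sep` along the last coordinate (an assumed layout)."""
    rng = np.random.default_rng(seed)
    X = np.zeros((2 * n, d))
    X[:, :2] = rng.uniform(-1.0, 1.0, size=(2 * n, 2))
    X[n:, -1] = sep
    y = np.repeat([0, 1], n)
    return X, y

X_circ, y_circ = circles(n=100, d=10)
X_pl, y_pl = planes(n=50, d=20)
```

Varying `d` while holding the intrinsic geometry fixed is what lets the codimension be controlled independently of the sampling density.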
Models. Our controlled experiments on synthetic data use a fully connected network with 1 hidden layer, 100 hidden units, and ReLU activations, trained with Adam (Kingma and Ba (2015)). Our experimental results are averaged over 20 retrainings. For a fair comparison, our experiments on MNIST and CIFAR-10 use the same model architectures as in Madry et al. (2018). We train the MNIST model using Adam for 100 epochs and the CIFAR-10 model using SGD for 250 epochs.
Attacks. We consider two attacks: the fast gradient sign method (FGSM) (Goodfellow et al. (2014)) and the basic iterative method (BIM) (Kurakin et al. (2016)). We use the implementations provided in the cleverhans library (Papernot et al. (2018)).
Accuracy measures. We plot the minimum classification accuracy across our suite of attacks as a function of the perturbation size $\epsilon$, for each of our datasets. Additionally we report the normalized area under the curve,

$$\mathrm{NAUC} = \frac{1}{\epsilon_{\max}} \int_0^{\epsilon_{\max}} \mathrm{acc}(\epsilon)\, d\epsilon, \qquad (7)$$

where $\mathrm{acc}(\epsilon)$ measures the classification accuracy under perturbations of size at most $\epsilon$ and $\epsilon_{\max}$ is the largest perturbation considered. Note that $\mathrm{NAUC} \in [0, 1]$, with higher values corresponding to more robust models.
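Given accuracy measurements on a grid of perturbation sizes, the normalized area under the curve can be computed with the trapezoid rule. A minimal sketch, assuming Equation 7 normalizes the area under $\mathrm{acc}(\epsilon)$ by $\epsilon_{\max}$:

```python
import numpy as np

def nauc(eps_grid, accuracies):
    """Normalized area under the accuracy-vs-perturbation curve of
    Equation 7: trapezoid-rule integral of acc(eps) over [0, eps_max],
    divided by eps_max so that the result lies in [0, 1]."""
    eps_grid = np.asarray(eps_grid, dtype=float)
    acc = np.asarray(accuracies, dtype=float)
    area = np.sum(0.5 * (acc[1:] + acc[:-1]) * np.diff(eps_grid))
    return area / (eps_grid[-1] - eps_grid[0])
```

A perfectly robust model (constant accuracy 1 across all $\epsilon$) gets NAUC 1, and a linear decay from 1 to 0 gets NAUC 0.5.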
6.1 High Codimension Reduces Robustness
6.2 Adversarial Training in High Codimensions
Figure 2 (Top Right, Bottom Left, Bottom Right) explores the use of adversarial training to improve robustness in high codimension settings for the Planes dataset. As shown in Figure 2 (Top Right), as the codimension increases, the adversarial training approach of Madry et al. (2018) becomes less robust. This is because the $\epsilon$-balls around the training set cover a smaller fraction of the tubular neighborhood around $\mathcal{M}$, as predicted by Theorem 4. In Appendix B.1 we show that even significantly increasing the sampling density does not notably improve robustness in high codimensions.
Replacing the $\epsilon$-ball constraint with the Voronoi cells improves robustness in high codimension settings, on average. In both codimension 10 (Figure 2, Bottom Left) and codimension 500 (Figure 2, Bottom Right), our approach achieves a higher NAUC than the approach of Madry et al. (2018).
6.3 MNIST and CIFAR-10
To explore the performance of adversarial training with Voronoi constraints on more realistic datasets, we evaluate on MNIST and CIFAR-10 and compare against the robust pretrained models of Madry et al. (2018).
Figure 3 (Left) shows that our model maintains near-identical robustness to the Madry model on MNIST at small perturbations, and significantly outperforms the Madry model at perturbations larger than the $\epsilon$ for which the Madry model was explicitly trained. We emphasize that one advantage of our approach is that we did not need to set a value for the maximum perturbation size $\epsilon$: the Voronoi cells adapt to the maximum size allowable locally on the data distribution. Our model maintains substantially higher accuracy than the Madry model at large perturbations and achieves a higher NAUC. To our knowledge, this is the most robust MNIST model to $\ell_\infty$ attacks.
Figure 3 (Right) shows the results of our approach on CIFAR-10. Both our model and the Madry model achieve the same NAUC. However, our approach trades natural accuracy for increased robustness against larger perturbations. This trade-off is well known and is explored in Tsipras et al. (2019).
A natural approach to improving the robustness of models produced by the adversarial training paradigm of Madry et al. (2018) is to simply increase the maximum allowable perturbation size $\epsilon$ of the norm-ball constraint. As shown in Figure 4, increasing $\epsilon$ beyond the value with which Madry et al. (2018) originally trained, while training for only a limited number of epochs, produces a model that exhibits significantly worse robustness at small perturbations than the pretrained model. If we increase the number of training epochs, the approach of Madry et al. (2018) with the larger $\epsilon$ produces a model with improved robustness at intermediate perturbations, but one that still exhibits a sharp drop in accuracy at larger perturbations, and that performs worse than both the pretrained model and our model at small perturbations. Our model also achieves a higher NAUC than the model retrained with the larger $\epsilon$. We emphasize that our approach does not require us to set $\epsilon$, which is particularly important in practice, where the maximum amount of robustness achievable may not be known a priori.
7 Conclusions
The $\epsilon$-ball constraint for describing adversarial perturbations has been a productive formalization for designing robust deep networks. However, the use of $\epsilon$-balls has significant drawbacks in high codimension settings and leads to suboptimal results in practice. Adversarial training with Voronoi constraints improves robustness by giving the adversary the freedom to explore and generate adversarial examples close to $\mathcal{M}$.
References
 Amenta and Bern (1999) N. Amenta and M. W. Bern. Surface reconstruction by Voronoi filtering. Discrete & Computational Geometry, 1999.
 Amenta et al. (1998) N. Amenta, M. W. Bern, and D. Eppstein. The crust and the β-skeleton: Combinatorial curve reconstruction. Graphical Models and Image Processing, 1998.
 Amenta et al. (2002) N. Amenta, S. Choi, T. K. Dey, and N. Leekha. A simple algorithm for homeomorphic surface reconstruction. International Journal of Computational Geometry and Applications, 2002.
 Athalye et al. (2018) A. Athalye, N. Carlini, and D. A. Wagner. Obfuscated gradients give a false sense of security: Circumventing defenses to adversarial examples. In ICML, 2018.
 Blum (1967) H. Blum. A transformation for extracting new descriptors of shape. Models for Perception of Speech and Visual Forms, 1967.
 Boissonnat and Ghosh (2014) J. Boissonnat and A. Ghosh. Manifold reconstruction using tangential Delaunay complexes. Discrete & Computational Geometry, 51, 2014.
 Boyd and Vandenberghe (2004) S. Boyd and L. Vandenberghe. Convex optimization. Cambridge university press, 2004.
 Cheng et al. (2000) S. Cheng, T. K. Dey, H. Edelsbrunner, M. A. Facello, and S. Teng. Sliver exudation. Journal of the ACM, 47, 2000.
 Cheng et al. (2005) S. Cheng, T. K. Dey, and E. A. Ramos. Manifold reconstruction from point samples. In Proceedings of the Symposium on Discrete Algorithms (SODA), 2005.

 Codevilla et al. (2018) F. Codevilla, M. Müller, A. Dosovitskiy, A. López, and V. Koltun. End-to-end driving via conditional imitation learning. In ICRA, 2018.
 Cortes and Vapnik (1995) C. Cortes and V. Vapnik. Support-vector networks. Machine Learning, 20, 1995.
 Dey (2007) T. K. Dey. Curve and Surface Reconstruction: Algorithms with Mathematical Analysis. Cambridge University Press, 2007.
 Dey and Goswami (2004) T. K. Dey and S. Goswami. Provable surface reconstruction from noisy samples. In Proceedings of the Symposium on Computational Geometry (SoCG), 2004.
 Dey and Kumar (1999) T. K. Dey and P. Kumar. A simple provable algorithm for curve reconstruction. In Proceedings of the Symposium on Discrete Algorithms (SODA), 1999.
 Dey et al. (2005) T. K. Dey, J. Giesen, E. A. Ramos, and B. Sadri. Critical points of the distance to an epsilonsampling of a surface and flowcomplexbased surface reconstruction. In Proceedings of the Symposium on Computational Geometry (SoCG), 2005.
 Edelsbrunner and Shah (1997) H. Edelsbrunner and N. R. Shah. Triangulating Topological Spaces. International Journal of Computational Geometry and Applications, Aug. 1997.
 Elsayed et al. (2018) G. F. Elsayed, D. Krishnan, H. Mobahi, K. Regan, and S. Bengio. Large margin deep networks for classification. CoRR, abs/1803.05598, 2018. URL http://arxiv.org/abs/1803.05598.

 Esteva et al. (2017) A. Esteva, B. Kuprel, R. A. Novoa, J. Ko, S. M. Swetter, H. M. Blau, and S. Thrun. Dermatologist-level classification of skin cancer with deep neural networks. Nature, 2017.
 Franceschi et al. (2018) J. Franceschi, A. Fawzi, and O. Fawzi. Robustness of classifiers to uniform $\ell_p$ and Gaussian noise. In AISTATS, 2018.
 Gilmer et al. (2018) J. Gilmer, L. Metz, F. Faghri, S. S. Schoenholz, M. Raghu, M. Wattenberg, and I. J. Goodfellow. Adversarial spheres. CoRR, abs/1801.02774, 2018. URL http://arxiv.org/abs/1801.02774.
 Goodfellow et al. (2014) I. J. Goodfellow, J. Shlens, and C. Szegedy. Explaining and harnessing adversarial examples. In ICLR, 2014.
 Khoury and Shewchuk (2016) M. Khoury and J. R. Shewchuk. Fixed points of the restricted delaunay triangulation operator. In Proceedings of the Symposium on Computational Geometry (SoCG), 2016.
 Kingma and Ba (2015) D. Kingma and J. Ba. Adam: A method for stochastic optimization. In ICLR, 2015.
 Krizhevsky et al. (2012) A. Krizhevsky, I. Sutskever, and G. E. Hinton. Imagenet classification with deep convolutional neural networks. In NIPS, 2012.
 Kurakin et al. (2016) A. Kurakin, I. Goodfellow, and S. Bengio. Adversarial examples in the physical world. In ICLR Workshop Track, 2016.
 Levine et al. (2015) S. Levine, N. Wagener, and P. Abbeel. Learning contactrich manipulation skills with guided policy search. In ICRA, 2015.
 Liang et al. (2017) X. Liang, X. Wang, Z. Lei, S. Liao, and S. Z. Li. Softmargin softmax for deep classification. In ICONIP, 2017.
 Liu et al. (2016) W. Liu, Y. Wen, Z. Yu, and M. Yang. Largemargin softmax loss for convolutional neural networks. In ICML, 2016.
 Madry et al. (2018) A. Madry, A. Makelov, L. Schmidt, D. Tsipras, and A. Vladu. Towards deep learning models resistant to adversarial attacks. In ICLR, 2018.
 Mirman et al. (2018) M. Mirman, T. Gehr, and M. T. Vechev. Differentiable abstract interpretation for provably robust neural networks. In ICML, 2018.
 Papernot et al. (2018) N. Papernot, F. Faghri, N. Carlini, I. Goodfellow, R. Feinman, A. Kurakin, C. Xie, Y. Sharma, T. Brown, A. Roy, A. Matyasko, V. Behzadan, K. Hambardzumyan, Z. Zhang, Y.L. Juang, Z. Li, R. Sheatsley, A. Garg, J. Uesato, W. Gierke, Y. Dong, D. Berthelot, P. Hendricks, J. Rauber, and R. Long. Technical report on the cleverhans v2.1.0 adversarial examples library. arXiv preprint arXiv:1610.00768, 2018.
 Raghunathan et al. (2018) A. Raghunathan, J. Steinhardt, and P. Liang. Certified defenses against adversarial examples. In ICLR, 2018.
 Shafahi et al. (2019) A. Shafahi, W. R. Huang, C. Studer, S. Feizi, and T. Goldstein. Are adversarial examples inevitable? In ICLR, 2019.
 ShaweTaylor and Cristianini (2004) J. ShaweTaylor and N. Cristianini. Kernel Methods for Pattern Analysis. Cambridge University Press, 2004.
 Sinha et al. (2018) A. Sinha, H. Namkoong, and J. Duchi. Certifying some distributional robustness with principled adversarial training. In ICLR, 2018.
 Sun et al. (2016) S. Sun, W. Chen, L. Wang, X. Liu, and T.Y. Liu. On the depth of deep neural networks: A theoretical view. In AAAI, 2016.
 Szegedy et al. (2013) C. Szegedy, W. Zaremba, I. Sutskever, J. Bruna, D. Erhan, I. J. Goodfellow, and R. Fergus. Intriguing properties of neural networks. CoRR, abs/1312.6199, 2013. URL http://arxiv.org/abs/1312.6199.

Tsipras et al. (2019)
D. Tsipras, S. Santurkar, L. Engstrom, A. Turner, and A. Madry.
Robustness may be at odds with accuracy.
In ICLR, 2019.  Wang et al. (2018) Y. Wang, S. Jha, and K. Chaudhuri. Analyzing the robustness of nearest neighbors to adversarial examples. In ICML, 2018.
 Wong and Kolter (2018) E. Wong and J. Z. Kolter. Provable defenses against adversarial examples via the convex outer adversarial polytope. In ICML, 2018.
 Wu et al. (2016) Y. Wu, M. Schuster, Z. Chen, Q. V. Le, M. Norouzi, W. Macherey, M. Krikun, Y. Cao, Q. Gao, K. Macherey, J. Klingner, A. Shah, M. Johnson, X. Liu, L. Kaiser, S. Gouws, Y. Kato, T. Kudo, H. Kazawa, K. Stevens, G. Kurian, N. Patil, W. Wang, C. Young, J. Smith, J. Riesa, A. Rudnick, O. Vinyals, G. Corrado, M. Hughes, and J. Dean. Google’s neural machine translation system: Bridging the gap between human and machine translation. CoRR, abs/1609.08144, 2016. URL http://arxiv.org/abs/1609.08144.
Appendix A Omitted Proofs
A.1 Proof of Theorem 2
Proof.
Here we use to denote the metric induced by the norm. We begin by proving (1). Let be any point in . Suppose without loss of generality that for some class . The distance from to any other data manifold , and thus any sample on , is lower bounded by . See Figure 5. It is then both necessary and sufficient that there exists a such that for . (Necessary since a properly placed sample on can achieve the lower bound on .) The distance from to the nearest sample on is for some . The question is how large can we allow to be and still guarantee that correctly classifies ? We need
which implies that . It follows that a cover with is sufficient, and in some cases necessary, to guarantee that correctly classifies .
Next we prove (2). As before let . It is both necessary and sufficient for for some sample to guarantee that , by definition of . The distance to the nearest sample on is for some . Thus it suffices that . ∎
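The cover-based argument above can be checked numerically on a toy instance (a sketch under assumed geometry and constants of our own choosing, not those of the theorem): two one-dimensional class manifolds embedded in the plane, each sampled with a δ-cover, and a nearest-neighbor classifier evaluated on points perturbed by ε off the manifold.

```python
import numpy as np

# Two 1-D class manifolds embedded in R^2: horizontal segments at
# y = 0 (class 0) and y = 2 (class 1); the gap between them is 2.
delta = 0.1   # cover radius: every manifold point has a sample within delta
eps = 0.5     # perturbation budget (well below half the gap)

xs = np.arange(0.0, 1.0 + delta, delta)               # delta-cover of each segment
samples = np.vstack([np.c_[xs, np.zeros_like(xs)],    # class 0 samples
                     np.c_[xs, 2 * np.ones_like(xs)]])  # class 1 samples
labels = np.r_[np.zeros_like(xs), np.ones_like(xs)]

def nearest_neighbor(p):
    return labels[np.argmin(np.linalg.norm(samples - p, axis=1))]

# Probe the eps-tube: manifold point plus a perturbation of norm eps
# in a random direction.
rng = np.random.default_rng(0)
correct = True
for label, y in [(0, 0.0), (1, 2.0)]:
    for x in np.linspace(0, 1, 200):
        v = rng.normal(size=2)
        v *= eps / np.linalg.norm(v)
        correct &= nearest_neighbor(np.array([x, y]) + v) == label
print(correct)  # prints True: this delta-cover suffices for this eps
```

A perturbed point lies within eps + delta/2 of a correct-class sample but at least 2 − eps from any sample of the other class, so nearest neighbor never errs here; shrinking the gap or growing eps past half the gap breaks the guarantee, matching the trade-off in the proof.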
A.2 Proof of Theorem 3
Proof.
Let . Since is flat, the distance from to the nearest sample is bounded as . For we need that , and so it suffices that . In this setting, this is also necessary; should be any larger, a properly placed sample on can claim in its Voronoi cell.
Similarly for we need that , and so it suffices that . In this setting, this is also necessary; should be any larger, lies outside of every ball and so is free to learn a decision boundary that misclassifies .
Let denote the size of the minimum cover of . Since is flat (has no curvature) and since the intersection of with a ball centered at a point on is a ball, a standard volume argument can be applied in the affine subspace to conclude that . So we have
∎
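The volume argument behind the cover-size bound can be illustrated numerically (a sketch with a grid construction and constants that are our own assumptions, not the theorem's): the number of points needed to δ-cover a k-dimensional unit cube in the ℓ∞ sense is ⌈1/(2δ)⌉^k, which grows exponentially in the manifold dimension k but is independent of the ambient dimension.

```python
import math

def grid_cover_size(k, delta):
    """Size of a grid with spacing 2*delta that delta-covers [0,1]^k
    in the l-infinity norm: ceil(1/(2*delta)) points per axis, raised
    to the k-th power."""
    return math.ceil(1 / (2 * delta)) ** k

delta = 0.05
for k in [1, 2, 4, 8]:
    print(k, grid_cover_size(k, delta))
# 1 10
# 2 100
# 4 10000
# 8 100000000
```

The count depends only on the intrinsic dimension k, which is why the bound in the proof is stated in terms of the manifold dimension rather than the embedding dimension.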
A.3 Proof of Theorem 4
Proof.
Assuming the balls centered on the samples in are disjoint we get the upper bound
(8) 
The medial axis of is defined as the closure of the set of all points in that have two or more closest points on in the norm . The medial axis is similar to the decision axis , except that the nearest points do not need to be on distinct class manifolds. For , we have the lower bound
(9) 
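The role of codimension in these bounds can be made concrete with a small numerical illustration (our own toy computation, not part of the proof): the volume of a d-dimensional ε-ball, V_d(ε) = π^{d/2} ε^d / Γ(d/2 + 1), occupies a super-exponentially vanishing fraction of its enclosing cube as d grows, so balls centered on training samples cover almost none of a full-dimensional neighborhood of the data manifold in high codimension.

```python
import math

def ball_volume(d, r):
    # Volume of a d-dimensional Euclidean ball of radius r:
    # V_d(r) = pi^(d/2) * r^d / Gamma(d/2 + 1)
    return math.pi ** (d / 2) * r ** d / math.gamma(d / 2 + 1)

eps = 1.0
for d in [2, 10, 50, 100]:
    # fraction of the enclosing cube [-eps, eps]^d filled by the ball
    frac = ball_volume(d, eps) / (2 * eps) ** d
    print(d, frac)
# the fraction shrinks rapidly with d: a fixed number of eps-balls
# occupies a vanishing share of any high-dimensional neighborhood
```

This is consistent with the message of Theorem 4: guaranteeing correctness only inside balls around the training data leaves nearly all of the ε-tube unconstrained once the codimension is large.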
Appendix B Additional Experiments
B.1 Increasing Sampling Density
The Planes dataset is sampled so that the training set is a cover of the underlying planes, which requires 450 sample points. Figure 6 shows the results of increasing the sampling density to a cover (1682 samples) and a cover (6498 samples). In low codimension, increasing the sampling density improves the robustness of adversarial training. However, in high codimension, even a substantial increase in the number of samples gives only a small improvement in robustness.