Adversarial Training with Voronoi Constraints

05/02/2019 ∙ by Marc Khoury, et al. ∙ UC Berkeley

Adversarial examples are a pervasive phenomenon of machine learning models where seemingly imperceptible perturbations to the input lead to misclassifications for otherwise statistically accurate models. We propose a geometric framework, drawing on tools from the manifold reconstruction literature, to analyze the high-dimensional geometry of adversarial examples. In particular, we highlight the importance of codimension: for low-dimensional data manifolds embedded in high-dimensional space there are many directions off the manifold in which an adversary could construct adversarial examples. Adversarial examples are a natural consequence of learning a decision boundary that classifies the low-dimensional data manifold well, but classifies points near the manifold incorrectly. Using our geometric framework we prove that adversarial training is sample inefficient, and show sufficient sampling conditions under which nearest neighbor classifiers and ball-based adversarial training are robust. Finally we introduce adversarial training with Voronoi constraints, which replaces the norm ball constraint with the Voronoi cell for each point in the training set. We show that adversarial training with Voronoi constraints produces robust models which significantly improve over the state-of-the-art on MNIST and are competitive on CIFAR-10.



1 Introduction

Deep learning at scale has led to breakthroughs on important problems in computer vision (Krizhevsky et al. (2012)), natural language processing (Wu et al. (2016)), and robotics (Levine et al. (2015)). Shortly thereafter, the intriguing phenomenon of adversarial examples was observed: a seemingly ubiquitous property of machine learning models where perturbations of the input that are imperceptible to humans reliably lead to confident incorrect classifications (Szegedy et al. (2013), Goodfellow et al. (2014)). What has ensued is a standard story from the security literature: a game of cat and mouse where defenses are proposed only to be quickly defeated by stronger attacks (Athalye et al. (2018)). This has led researchers to develop methods which are provably robust under specific attack models (Wong and Kolter (2018), Sinha et al. (2018), Raghunathan et al. (2018), Mirman et al. (2018)) as well as empirically strong heuristics (Madry et al. (2018)). As machine learning proliferates into society, including security-critical settings like health care (Esteva et al. (2017)) or autonomous vehicles (Codevilla et al. (2018)), it is crucial to develop methods that allow us to understand the vulnerability of our models and design appropriate counter-measures.

In this paper, we propose a geometric framework for analyzing the phenomenon of adversarial examples. We leverage the observation that datasets encountered in practice exhibit low-dimensional structure despite being embedded in very high-dimensional input spaces. This property is colloquially referred to as the “Manifold Hypothesis”: the idea that low-dimensional structure of ‘real’ data leads to tractable learning. We model data as being sampled from class-specific low-dimensional manifolds embedded in a high-dimensional space. We consider a threat model where an adversary may choose any point on the data manifold to perturb by ε in order to fool a classifier. To be robust to such an adversary, a classifier must be correct everywhere in an ε-tube around the data manifold. Observe that, even though the data manifold is a low-dimensional object, this tube has the same dimension as the entire space in which the manifold is embedded. Our analysis argues that adversarial examples are a natural consequence of learning a decision boundary that classifies all points on a low-dimensional data manifold correctly, but classifies many points near the manifold incorrectly. The high codimension, the difference between the dimension of the data manifold and the dimension of the embedding space, is a key source of the pervasiveness of adversarial examples.

Our paper makes the following contributions. First, we develop a geometric framework, inspired by the manifold reconstruction literature, that formalizes the manifold hypothesis described above and our attack model. Second, we highlight the role codimension plays in vulnerability to adversarial examples: as the codimension increases, there is an increasing number of directions off the data manifold in which to construct adversarial perturbations. Prior work has attributed vulnerability to adversarial examples to input dimension (Gilmer et al. (2018), Shafahi et al. (2019)). Third, we apply this framework to analyze the standard approach to adversarial training. We define a theoretical model of adversarial training (see Definition 1), which guarantees correctness in the ε-balls centered on the training data, and prove that it is insufficient to learn robust decision boundaries with realistic amounts of data. We show that nearest neighbor classifiers do not suffer from this insufficiency, due to geometric properties of their decision boundary away from the data. Fourth, we propose a modification to the standard paradigm of adversarial training: we replace the ε-ball constraint with the Voronoi cells of the training data, which have several advantages detailed in Section 5. In particular, we need not set the maximum perturbation size ε as part of the training procedure. In Section 6 we show that adversarial training with Voronoi constraints gives state-of-the-art robustness results on MNIST and competitive results on CIFAR-10.

2 Related Work

2.1 Adversarial Examples

Some previous work has considered the relationship between adversarial examples and high-dimensional geometry. Franceschi et al. (2018) explore the robustness of classifiers to random noise in terms of distance to the decision boundary, under the assumption that the decision boundary is locally flat. Gilmer et al. (2018) experimentally evaluated the setting of two concentric, under-sampled spheres embedded in a high-dimensional space, and concluded that adversarial examples occur on the data manifold. In contrast, we present a geometric framework for proving robustness guarantees for learning algorithms that makes no assumptions on the decision boundary. We carefully sample the data manifold in order to highlight the importance of codimension; adversarial examples exist even when the manifold is perfectly classified. Additionally we explore the importance of the spacing between the constituent data manifolds, sampling requirements for learning algorithms, and the relationship between model complexity and robustness.

Wang et al. (2018) explore the robustness of k-nearest neighbor classifiers to adversarial examples. In the setting where the Bayes optimal classifier is uncertain about the true label of each point, they show that k-nearest neighbors is not robust if k is a small constant. They also show that k-nearest neighbors is robust if k is allowed to grow sufficiently quickly with the size of the training set. Using our geometric framework we show a complementary result: in the setting where each point is certain of its label, nearest neighbor classification is robust to adversarial examples.

The decision and medial axes defined in Section 3 are maximum margin decision boundaries. Hard margin SVMs define a linear separator with maximum margin, the maximum distance from the training data (Cortes and Vapnik (1995)). Kernel methods allow for maximum margin decision boundaries that are non-linear by using additional features to project the data into a higher-dimensional feature space (Shawe-Taylor and Cristianini (2004)). The decision and medial axes generalize the notion of maximum margin to account for the arbitrary curvature of the data manifolds. There have been attempts to incorporate maximum margins into deep learning (Sun et al. (2016), Liu et al. (2016), Liang et al. (2017), Elsayed et al. (2018)), often by designing loss functions that encourage large margins at either the output (Sun et al. (2016)) or at any layer (Elsayed et al. (2018)). In contrast, the decision axis is defined on the input space, and we use it as an analysis tool for proving guarantees.

2.2 Manifold Reconstruction

Manifold reconstruction is the problem of discovering the structure of a k-dimensional manifold embedded in ℝ^d, given only a set of points sampled from the manifold. A large vein of research in manifold reconstruction develops algorithms that are provably good: if the points sampled from the underlying manifold are sufficiently dense, these algorithms are guaranteed to produce a geometrically accurate representation of the unknown manifold with the correct topology. The output of these algorithms is often a simplicial complex, a set of simplices such as triangles, tetrahedra, and higher-dimensional variants, that approximates the unknown manifold. In particular, these algorithms output subsets of the Delaunay triangulation, which, along with its geometric dual the Voronoi diagram, has properties that aid in proving geometric and topological guarantees (Edelsbrunner and Shah (1997)).

The field first focused on curve reconstruction in the plane (Amenta et al. (1998), Dey and Kumar (1999)). Soon after, algorithms were developed for surface reconstruction in ℝ³, both in the noise-free setting (Amenta and Bern (1999), Amenta et al. (2002)) and in the presence of noise (Dey and Goswami (2004)). We borrow heavily from the analysis tools of these early works, including the medial axis and the reach. However, we emphasize that we have adapted these tools to the learning setting. To the best of our knowledge, our work is the first to consider the medial axis under different norms.

In higher-dimensional embedding spaces (large d), manifold reconstruction algorithms face the curse of dimensionality. In particular, the Delaunay triangulation, which forms the bedrock of algorithms in low dimensions, can have up to n^⌈d/2⌉ simplices on n vertices in ℝ^d. To circumvent the curse of dimensionality, algorithms were proposed that compute subsets of the Delaunay triangulation restricted to the k-dimensional tangent spaces of the manifold at each sample point (Boissonnat and Ghosh (2014)). Unfortunately, progress on higher-dimensional manifolds has been limited due to the presence of so-called “sliver” simplices, poorly shaped simplices that cause inconsistencies between the local triangulations constructed in each tangent space (Cheng et al. (2005), Boissonnat and Ghosh (2014)). Techniques that provably remove sliver simplices have prohibitive sampling requirements (Cheng et al. (2000), Boissonnat and Ghosh (2014)). Even in the special case of surfaces (k = 2) embedded in high dimensions (d > 3), algorithms with practical sampling requirements have only recently been proposed (Khoury and Shewchuk (2016)). Our use of tubular neighborhoods as a tool for analysis is borrowed from Dey et al. (2005) and Khoury and Shewchuk (2016).

In this paper we are interested in learning robust decision boundaries, not reconstructing the underlying data manifolds, and so we avoid the use of Delaunay triangulations and their difficulties entirely. In Section 4 we present robustness guarantees for two learning algorithms in terms of a sampling condition on the underlying manifold. These sampling requirements scale with the dimension k of the underlying manifold, not with the dimension d of the embedding space.

3 The Geometry of Data

We model data as being sampled from a set of low-dimensional manifolds (with or without boundary) embedded in a high-dimensional space ℝ^d. We use k to denote the dimension of a manifold M. The special case of a 1-manifold is called a curve, and a 2-manifold is a surface. The codimension of M is d − k, the difference between the dimension of the manifold and the dimension of the embedding space. The “Manifold Hypothesis” is the observation that in practice, data is often sampled from low-dimensional manifolds, usually of high codimension.

In this paper we are primarily interested in the classification problem. Thus we model data as being sampled from C class manifolds M_1, …, M_C, one for each class. When we wish to refer to the entire space from which a dataset is sampled, we refer to the data manifold M = ∪_j M_j. We often work with a finite sample of points X_j ⊂ M_j, and we write X = ∪_j X_j. Each sample point has an accompanying class label indicating which manifold the point is sampled from.

Consider an ℓ_p-ball B centered at some point z ∈ ℝ^d and imagine growing B by increasing its radius starting from zero. For nearly all starting points z, the ball eventually intersects one, and only one, of the M_j's. Thus the nearest point to z on M, in the ℓ_p norm, lies on that M_j.

The decision axis Σ_p of M is the set of points z such that the boundary of B intersects two or more of the M_j, but the interior of B does not intersect M at all. In other words, the decision axis is the set of points that have two or more closest points, in the ℓ_p norm, on distinct class manifolds. See Figure 1. The decision axis is inspired by the medial axis, which was first proposed by Blum (1967) in the context of image analysis and subsequently modified for the purposes of curve and surface reconstruction by Amenta et al. (1998; 2002). We have modified the definition to account for multiple class manifolds and have renamed our variant in order to avoid confusion in the future.

Figure 1: Examples of the decision axis Σ, shown here in green, for different data manifolds. Intuitively, the decision axis captures an optimal decision boundary between the data manifolds. It is optimal in the sense that each point on the decision axis is as far away from each data manifold as possible. Notice that in the first example, the decision axis coincides with the maximum margin line.

The decision axis Σ can intuitively be thought of as a decision boundary that is optimal in the following sense. First, Σ separates the class manifolds when they do not intersect. Second, each point of Σ is as far away from the class manifolds as possible in the ℓ_p norm. As shown in the leftmost example in Figure 1, in the case of two linearly separable circles of equal radius, the decision axis is exactly the line that separates the data with maximum margin. For arbitrary manifolds, Σ generalizes the notion of maximum margin to account for the curvature of the class manifolds.
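The equidistance property of the decision axis can be checked numerically. The following is a minimal sketch (the points and offsets are illustrative, not from the paper): for two single-point class "manifolds", the decision axis is the perpendicular bisector of the segment joining them, and every point on it is equidistant from the two classes.

```python
import numpy as np

# Two single-point class "manifolds"; their decision axis is the
# perpendicular bisector of the segment ab (illustrative values).
a, b = np.array([0.0, 0.0]), np.array([4.0, 0.0])

def on_decision_axis(z, tol=1e-9):
    """True if z is equidistant from the two class manifolds {a} and {b}."""
    return abs(np.linalg.norm(z - a) - np.linalg.norm(z - b)) < tol

mid = (a + b) / 2
print(on_decision_axis(mid))                         # the midpoint lies on the axis
print(on_decision_axis(mid + np.array([0.0, 7.0])))  # so does any vertical offset
```

Every point of this bisector is as far from both classes as possible, matching the maximum-margin line in the leftmost example of Figure 1.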

Let d_p(x, A) denote the distance from a point x to a set A in the ℓ_p norm. The reach of M is defined as rch_p M = inf_{x ∈ M} d_p(x, Σ_p), the least distance from the data manifold to the decision axis. When M is compact, the reach is achieved by the point on M that is closest to Σ_p under the ℓ_p norm. We will drop p from the notation when it is understood from context.

Finally, an ε-tubular neighborhood of M is defined as M^ε = { z ∈ ℝ^d : d_p(z, M) < ε }. That is, M^ε is the set of all points whose distance to M under the metric induced by ℓ_p is less than ε. Note that while M is k-dimensional, M^ε is always d-dimensional. Tubular neighborhoods are how we rigorously define adversarial examples. Consider a classifier f for M. An ε-adversarial example is a point ẑ ∈ M^ε to which f assigns a different label than the nearest point on M. A classifier f is robust to all ε-adversarial examples when f correctly classifies not only M, but all of M^ε. In this paper we will be primarily concerned with exploring the conditions under which we can provably learn a decision boundary that correctly classifies M^ε. When ε < rch M, the decision axis Σ is one decision boundary that correctly classifies M^ε. Throughout the remainder of the paper we will drop the p in d_p from the notation, instead writing d; the norm will always be clear from context.
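To make the codimension point concrete, the sketch below (the circle manifold, dimensions, and ε are illustrative choices, not the paper's) measures the ℓ2 distance to a 1-dimensional circle manifold embedded in ℝ^d: stepping off a sample along any of the d − 2 coordinate directions orthogonal to the circle's plane lands inside the ε-tube, so the number of such off-manifold directions grows with the codimension.

```python
import numpy as np

def dist_to_circle(z, r=1.0):
    """l2 distance from z in R^d to the radius-r circle in the first two coords."""
    planar = np.hypot(z[0], z[1])                 # distance within the circle's plane
    return np.hypot(planar - r, np.linalg.norm(z[2:]))

d, eps = 100, 0.1
x = np.zeros(d)
x[0] = 1.0                  # a point on the circle

z = x.copy()
z[5] = eps                  # step along one of the d - 2 normal directions
print(dist_to_circle(x))    # 0.0: x lies on the manifold
print(dist_to_circle(z))    # 0.1: z lies in the eps-tube, off the manifold
```

Any of the 98 coordinate directions beyond the first two would serve equally well here, which is the geometric source of the adversary's freedom in high codimension.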

4 Limitations of Adversarial Training

Adversarial training, the process of training on adversarial examples generated in ε-balls around the training data, is a very natural approach to constructing robust models (Goodfellow et al. (2014), Madry et al. (2018)). In our notation this corresponds to training on samples drawn from X^ε = ∪_{x ∈ X} B_p(x, ε) for some ε > 0. Despite its simplicity, adversarial training has proven to be one of the most successful approaches to training robust deep networks. While natural, we show that there are simple settings where this approach is much less sample-efficient than other classification algorithms, if the only guarantee is correctness in X^ε.

Definition 1 (Adversarial Training).

Let X ⊂ M be a finite training set. Define an adversarial training algorithm A_{ε,p} as a learning algorithm that, given X, outputs a model f_{ε,p} such that for every x ∈ X with label y, and every z ∈ B_p(x, ε), f_{ε,p}(z) = y. Here B_p(x, ε) denotes the ball centered at x of radius ε in the ℓ_p norm.

A_{ε,p} is our theoretical model of the standard approach to adversarial training (Goodfellow et al. (2014), Madry et al. (2018)). In words, A_{ε,p} learns a model that outputs the same label for any ℓ_p-perturbation of a training point x up to size ε as it outputs for x. We will use A_{ε,p} to analyze the limitations of the standard approach to adversarial training; in particular we will show that A_{ε,p} is much less sample-efficient at learning a robust decision boundary than other classification algorithms.

Theorem 1.

There exists a classification algorithm that, for a particular choice of M, correctly classifies a tubular neighborhood of M using exponentially fewer samples than are required for A_{ε,p} to do the same.

The reason for the sample inefficiency of A_{ε,p} is the use of the ε-balls centered on the data to propagate the labels. As we will show below, the union of the balls around the data covers a vanishingly small fraction of M^ε in high-codimension settings. Thus the adversary is restricted to constructing adversarial examples in a negligible fraction of the neighborhood around the data manifold. In contrast, other algorithms, such as nearest neighbor classifiers, propagate labels using different geometric regions, such as the Voronoi cells which we will define in Section 5. The main takeaway of this paper is that the use of ε-balls centered on the data leads to sub-optimal results both in theory and, as we will show in Section 6, in practice.

Theorem 1 follows from Theorems 2 and 3, in which we prove that a nearest neighbor classifier is one such classification algorithm. Nearest neighbor classifiers are naturally robust in high codimensions because the Voronoi cells of the samples X are elongated in the directions normal to M when X is dense (Dey (2007)).

Before we state Theorem 2 we must introduce a sampling condition on X. A δ-cover of a manifold M in the ℓ_p norm is a finite set of points X such that for every z ∈ M there exists x ∈ X with d_p(z, x) ≤ δ. Theorem 2 gives a sufficient sampling condition for A_{ε,p} to correctly classify a tubular neighborhood M^ε̂ of M, for all manifolds M. Theorem 2 also provides a sufficient sampling condition for a nearest neighbor classifier f_nn to correctly classify M^ε̂, which is substantially less dense than that of A_{ε,p}. Thus different classification algorithms have different sampling requirements in high codimensions.

Theorem 2.

Let M be a k-dimensional manifold and let ε̂ = γ rch M for any γ ∈ (0, 1). Let f_nn be a nearest neighbor classifier and let f_{ε,p} be the output of a learning algorithm A_{ε,p} as described above, with ε ≤ rch M. Let X_nn and X_{ε,p} denote the training sets for f_nn and f_{ε,p} respectively. We have the following sampling guarantees:

  1. If X_nn is a δ-cover for M with δ < 2(rch M − ε̂), then f_nn correctly classifies M^ε̂.

  2. If X_{ε,p} is a δ-cover for M with δ ≤ ε − ε̂, then f_{ε,p} correctly classifies M^ε̂.

The bounds on δ in Theorem 2 are sufficient, but they are not always necessary. There exist manifolds where the bounds in Theorem 2 are pessimistic, and less dense samples corresponding to larger values of δ would suffice.

Next we will show a setting where bounds on δ similar to those in Theorem 2 are necessary. In this setting, the difference of a factor of 2 in δ between the sampling requirements of f_nn and A_{ε,p} leads to an exponential gap between the sizes of X_nn and X_{ε,p} necessary to achieve identical robustness.

Define M_1 as an axis-aligned k-dimensional rectangle lying in the subspace spanned by the first k coordinate axes, bounded between fixed coordinates along each of those axes. Similarly define M_2 as a copy of M_1 translated along the (k+1)-st coordinate axis. Note that M_2 lies in a parallel affine subspace; thus the decision axis Σ of M = M_1 ∪ M_2 lies midway between the two rectangles, and rch M is half the distance between them. In this setting we can show that the gap in Theorem 2 is necessary for M. Furthermore, the bounds we derive on δ-covers of M for both f_nn and A_{ε,p} are tight. Combined with well-known properties of covers, we get that the ratio |X_{ε,p}| / |X_nn| is exponential in k.

Theorem 3.

Let M = M_1 ∪ M_2 as described above. Let X_nn and X_{ε,p} be minimum-size training sets necessary to guarantee that f_nn and f_{ε,p} correctly classify M^ε̂. Then we have that

(1)   |X_{ε,p}| / |X_nn| = 2^{Ω(k)}.

We have shown that both A_{ε,p} and nearest neighbor classifiers learn robust decision boundaries when provided sufficiently dense samples of M. However there are settings where nearest neighbors is exponentially more sample-efficient than A_{ε,p} in achieving the same amount of robustness.

To shed light on why the ball-based learning algorithm is so much less sample-efficient than nearest neighbor classifiers, we show that the volume of X^ε = ∪_{x ∈ X} B(x, ε) is often a vanishingly small percentage of the volume of M^ε. For our theoretical model this means that only a vanishingly small fraction of M^ε is guaranteed to have the correct label, and in practice this means that the adversary in adversarial training does not have the freedom to generate adversarial examples in the entirety of M^ε. For the remainder of this section we will consider the ℓ_2 norm.

Theorem 4.

Let M be a k-dimensional manifold embedded in ℝ^d with k < d. Let X be a finite set of points sampled from M. Suppose that ε is less than the distance from M to its medial axis, defined as in Dey (2007). Then the fraction of M^ε covered by X^ε = ∪_{x ∈ X} B_2(x, ε) is upper bounded by

(2)   |X| · (π^{k/2} ε^k / vol_k M) · Γ((d − k)/2 + 1) / Γ(d/2 + 1).

As the codimension d − k increases, the bound in Equation 2 approaches 0, for any fixed |X| and ε.

In high codimension, even moderate under-sampling of M leads to a significant loss of coverage of M^ε, because the volume of the union of balls centered at the samples shrinks faster than the volume of M^ε. Theorem 4 states that in high codimensions the fraction of M^ε covered by X^ε goes to 0. Almost nothing is covered by X^ε for training set sizes that are realistic in practice. Thus X^ε is a poor model of M^ε, and high classification accuracy on X^ε does not imply high accuracy in M^ε.
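The coverage collapse described above is easy to observe empirically. The following Monte Carlo sketch (the circle manifold, sample counts, and radius ε are illustrative choices, not the paper's experimental setup) estimates the fraction of random points in the ε-tube around a circle embedded in ℝ^d that fall inside the union of ε-balls centered on a fixed set of samples; the fraction drops sharply as the codimension grows.

```python
import numpy as np

rng = np.random.default_rng(0)

def coverage_fraction(d, n_samples=50, eps=0.3, n_trials=2000):
    """Estimate the fraction of the eps-tube around the unit circle in R^d
    that is covered by eps-balls centered on n_samples circle points."""
    theta = np.linspace(0, 2 * np.pi, n_samples, endpoint=False)
    X = np.zeros((n_samples, d))
    X[:, 0], X[:, 1] = np.cos(theta), np.sin(theta)
    # random tube points: a point on the circle plus a random offset in the eps-ball
    t = rng.uniform(0, 2 * np.pi, size=n_trials)
    P = np.zeros((n_trials, d))
    P[:, 0], P[:, 1] = np.cos(t), np.sin(t)
    U = rng.normal(size=(n_trials, d))
    U *= (eps * rng.uniform(size=n_trials) ** (1.0 / d)
          / np.linalg.norm(U, axis=1))[:, None]
    Z = P + U
    # squared distances from each tube point to each sample
    sq = (Z ** 2).sum(1)[:, None] + (X ** 2).sum(1)[None, :] - 2.0 * Z @ X.T
    return float((np.sqrt(np.maximum(sq, 0.0)).min(axis=1) <= eps).mean())

low, high = coverage_fraction(d=3), coverage_fraction(d=500)
print(low > high)   # coverage shrinks as the codimension grows
```

The same sample set that covers most of the tube in ℝ³ covers only a small fraction of it in ℝ⁵⁰⁰, which is the phenomenon Theorem 4 quantifies.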

Approaches that produce robust classifiers by generating adversarial examples in the ε-balls centered on the training set do not accurately model M^ε, and it will take many more samples to do so. If the method behaves arbitrarily outside of the ε-balls that define X^ε, adversarial examples will still exist and it will likely be easy to find them. The reason deep learning has performed so well on a variety of tasks, in spite of the brittleness made apparent by adversarial examples, is because it is much easier to perform well on X^ε than it is to perform well on M^ε.

5 Adversarial Training with Voronoi Constraints

Madry et al. (2018) formalize adversarial training by introducing the robust objective

(3)   min_θ E_{(x,y)∼D} [ max_{δ ∈ B(x, ε)} L(θ, x + δ, y) ]

where D is the data distribution and B(x, ε) is an ℓ_p-ball centered at x with radius ε. Their main contribution was the use of a strong adversary which uses projected gradient descent to solve the inner optimization problem.

In Section 4, we showed that adversarial training formalized using the ε-ball constraint is sample inefficient, because the adversary is restricted to a negligible fraction of the ε-tubular neighborhood around the data distribution. To remedy this we replace the ε-ball constraint with a different geometric constraint, namely the Voronoi cell at x. That is, we formalize the adversarial training objective as

(4)   min_θ E_{(x,y)∼D} [ max_{x̂ ∈ Vor_p(x)} L(θ, x̂, y) ]

where

(5)   Vor_p(x) = { z ∈ ℝ^d : ‖z − x‖_p ≤ ‖z − x′‖_p for all x′ ∈ X }

In words, the Voronoi cell Vor_p(x) of x is the set of all points in ℝ^d that are closer to x than to any other sample in X.
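A hypothetical membership check for this constraint set is a one-liner: a point lies in the Voronoi cell of X[i] exactly when no other sample is strictly closer (shown here for the ℓ2 norm; the sample coordinates are illustrative).

```python
import numpy as np

def in_voronoi_cell(z, i, X, tol=1e-12):
    """True if z satisfies every Voronoi constraint of sample X[i] (l2 norm)."""
    d = np.linalg.norm(X - z, axis=1)
    return bool(d[i] <= d.min() + tol)

X = np.array([[0.0, 0.0], [2.0, 0.0]])
print(in_voronoi_cell(np.array([0.4, 0.3]), 0, X))  # True: closer to X[0]
print(in_voronoi_cell(np.array([1.6, 0.0]), 0, X))  # False: X[1] is closer
print(in_voronoi_cell(np.array([1.0, 5.0]), 0, X))  # True: on the shared boundary
```

Points on the bisector satisfy the constraint for both cells, reflecting that Voronoi cells share boundaries but have disjoint interiors.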

The Voronoi cell constraint has many advantages over the ε-ball constraint. First, the Voronoi cells partition the entirety of ℝ^d, and so the interiors of Voronoi cells generated by samples from different classes do not intersect. This is in contrast to ε-balls, which may intersect for sufficiently large ε and cause problems for optimization. Furthermore, for dense samples the Voronoi cells are elongated in the directions normal to the data manifold, and so they are well suited to high-codimension settings. Second, the size of the Voronoi cells adapts to the data distribution: a Voronoi cell generated by a sample which is close to samples from a different class manifold is smaller, while cells further away are larger. Thus we do not need to set a value for ε in the optimization procedure; the constraint naturally adapts to the largest perturbation size possible locally on the data manifold. Third, the Voronoi cells enjoy the sample efficiency of nearest neighbor classifiers in Theorem 2, because the Voronoi cells define the nearest neighbor decision boundary. In summary, the Voronoi constraint gives the adversary the freedom to explore the entirety of the tubular neighborhood around M.

At each iteration we must solve the inner optimization problem

(6)   max_{x̂} L(θ, x̂, y)   subject to   x̂ ∈ Vor_p(x)

When p = 2 the Voronoi cells are convex, and so we can project a point onto a Voronoi cell by solving a quadratic program. Thus we can solve Problem 6 using projected gradient descent, as in Madry et al. (2018). When p ≠ 2 the Voronoi cells are not necessarily convex. In this setting there are many approaches, such as barrier and penalty methods, that one might employ to approximately solve Problem 6 (Boyd and Vandenberghe (2004)). However, we found that the following heuristic is both fast and works well in practice. At each iteration of the outer training loop, for each training sample x in a batch, we generate adversarial examples by taking iterative steps in the direction of the gradient starting from x. Instead of projecting onto a constraint after each iterative step, we check whether any of the Voronoi constraints of x shown in Equation 5 are violated. If no constraint is violated we perform the iterative update; otherwise we simply stop performing updates for x.
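A minimal sketch of this heuristic follows. The loss gradient is abstracted as a caller-supplied `grad_fn`, and the step size, iteration count, and sign-of-gradient ascent step are illustrative choices rather than the paper's settings; the essential part is the stopping rule, which halts as soon as an iterate would violate a Voronoi constraint of its seed point.

```python
import numpy as np

def voronoi_constrained_ascent(x, i, X, grad_fn, step=0.1, n_iter=40):
    """Iterative ascent on the loss starting from x = X[i], stopping when the
    next iterate would leave the Voronoi cell of X[i]."""
    z = x.copy()
    for _ in range(n_iter):
        z_next = z + step * np.sign(grad_fn(z))    # ascent step on the loss
        d = np.linalg.norm(X - z_next, axis=1)
        if d[i] > d.min() + 1e-12:                 # a Voronoi constraint is violated
            break                                  # keep the last feasible iterate
        z = z_next
    return z

# toy example: two 1-D samples; the gradient always pushes z to the right,
# so the iterates stop near the shared cell boundary at 0.5
X = np.array([[0.0], [1.0]])
adv = voronoi_constrained_ascent(X[0], 0, X, grad_fn=lambda z: np.ones_like(z))
print(adv)   # ~0.5: the attack halts at the boundary of the Voronoi cell
```

Unlike a ball projection, no radius ε appears anywhere in this loop; the attack's range is bounded only by the neighboring samples.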

Problem 6 has one constraint for each other sample in X. In practice, however, very few samples contribute to the Voronoi cell of x. At each iteration, we perform a nearest neighbor search query to find the m nearest samples to x in each other class, for a total of m(C − 1) constraints, where C is the number of classes. We do not impose constraints from samples in the same class as x; there is no benefit to restricting the adversary's movement within the tubular neighborhood around the class manifold of x. In our experiments we set m to a small constant.

6 Experiments

Datasets. To investigate how the codimension of a dataset influences robustness we introduce two synthetic datasets, Circles and Planes, which allow us to carefully vary the codimension while maintaining dense samples. The Circles dataset consists of two concentric circles of distinct radii in the x1–x2-plane, embedded in ℝ^d. We densely sample random points on each circle for both the training and the test sets. The Planes dataset consists of two parallel 2-dimensional planes embedded in ℝ^d, offset from one another along a coordinate axis. The first two coordinates of both planes are bounded, while the remaining coordinates are fixed. We sample the training set at the vertices of a regular grid, and the test set at the centers of the grid cells. We also evaluate on MNIST and CIFAR-10.
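As an illustration of how such datasets can be built, the sketch below generates a Circles-style dataset in which the codimension is a parameter; the radii and sample counts are hypothetical stand-ins, since the paper's exact values are not reproduced here.

```python
import numpy as np

def make_circles(n_per_class, codim, r_inner=1.0, r_outer=3.0, seed=0):
    """Two concentric circles in the x1-x2 plane, embedded in R^(2 + codim)
    by padding all remaining coordinates with zeros."""
    rng = np.random.default_rng(seed)
    d = 2 + codim
    y = np.repeat([0, 1], n_per_class)
    radius = np.where(y == 0, r_inner, r_outer)
    theta = rng.uniform(0.0, 2.0 * np.pi, size=2 * n_per_class)
    X = np.zeros((2 * n_per_class, d))
    X[:, 0] = radius * np.cos(theta)
    X[:, 1] = radius * np.sin(theta)
    return X, y

X, y = make_circles(n_per_class=100, codim=10)
print(X.shape)   # (200, 12): each circle is a 1-manifold of codimension 11
```

Increasing `codim` changes only the number of zero-padded coordinates, so the data manifold itself is identical across codimensions, isolating the geometric effect under study.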

Models. Our controlled experiments on synthetic data consider a fully connected network with 1 hidden layer, 100 hidden units, and ReLU activations, trained with Adam (Kingma and Ba (2015)). Our experimental results are averaged over 20 retrainings. For a fair comparison, our experiments on MNIST and CIFAR-10 use the same model architectures as in Madry et al. (2018). We train the MNIST model using Adam for 100 epochs and the CIFAR-10 model using SGD for 250 epochs.

Attacks. We consider two attacks, the fast gradient sign method (FGSM) (Goodfellow et al. (2014)) and the basic iterative method (BIM) (Kurakin et al. (2016)). We use the implementations provided in the cleverhans library (Papernot et al. (2018)).

Accuracy measures. We plot the minimum classification accuracy across our suite of attacks as a function of the perturbation size ε, for each of our datasets. Additionally we report the normalized area under the curve (NAUC), defined as

(7)   NAUC = (1/ε_max) ∫_0^{ε_max} acc(ε) dε

where acc(ε) measures the classification accuracy under perturbations of size at most ε and ε_max is the largest perturbation considered. Note that NAUC ∈ [0, 1], with higher values corresponding to more robust models.
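Computing NAUC from an empirical accuracy curve amounts to a short numerical integral; the sketch below uses the trapezoid rule and assumes accuracies sampled on an increasing grid of perturbation sizes starting at ε = 0 (the grid values are illustrative).

```python
import numpy as np

def nauc(eps, acc):
    """Normalized area under the accuracy-vs-perturbation curve:
    (1 / eps_max) * integral_0^eps_max acc(e) de, by the trapezoid rule."""
    eps, acc = np.asarray(eps, float), np.asarray(acc, float)
    area = (0.5 * (acc[1:] + acc[:-1]) * np.diff(eps)).sum()
    return float(area / (eps[-1] - eps[0]))

# a model whose accuracy decays linearly from 1 to 0 has NAUC ~0.5,
# while a perfectly robust model over the same range has NAUC 1.0
print(nauc([0.0, 0.1, 0.2], [1.0, 0.5, 0.0]))
print(nauc([0.0, 0.1, 0.2], [1.0, 1.0, 1.0]))
```

The normalization by ε_max is what keeps NAUC in [0, 1] and comparable across datasets with different perturbation scales.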

6.1 High Codimension Reduces Robustness

Section 4 suggests that as the codimension increases it should become easier to find adversarial examples, and Figure 2 (Top Left) confirms this on the Circles dataset: we see a steady decrease in robustness as we increase the codimension.

6.2 Adversarial Training in High Codimensions

Figure 2 (Top Right, Bottom Left, Bottom Right) explores the use of adversarial training to improve robustness in high-codimension settings for the Planes dataset. As shown in Figure 2 (Top Right), as the codimension increases, the adversarial training approach of Madry et al. (2018) becomes less robust. This is because the ε-balls around the training set cover a smaller fraction of the tubular neighborhood around the data manifold, as predicted by Theorem 4. In Appendix B.1 we show that even significantly increasing the sampling density does not notably improve robustness in high codimensions.

Replacing the ball constraint with the Voronoi cells improves robustness in high-codimension settings, on average. In both codimension 10 (Figure 2, Bottom Left) and codimension 500 (Figure 2, Bottom Right), our approach achieves a higher NAUC than the approach of Madry et al. (2018).

Figure 2: Top Left: As the codimension increases, the robustness of decision boundaries learned by Adam on naturally trained networks for Circles decreases steadily. Top Right: Training using the adversarial training procedure of Madry et al. (2018) is no guarantee of robustness; as the codimension increases it becomes easier to find adversarial examples for Planes. Bottom: Training using adversarial training with Voronoi constraints offers improved robustness in high-codimension settings, on average.

6.3 MNIST and CIFAR-10

To explore the performance of adversarial training with Voronoi constraints on more realistic datasets, we evaluate on MNIST and CIFAR-10 and compare against the robust pretrained models of Madry et al. (2018).

Figure 3 (Left) shows that our model maintains near identical robustness to the Madry model on MNIST up to the perturbation size for which the Madry model was trained, after which our model significantly outperforms the Madry model. The Madry model was explicitly trained for ε = 0.3 perturbations. We emphasize that one advantage of our approach is that we did not need to set a value for the maximum perturbation size ε; the Voronoi cells adapt to the maximum size allowable locally on the data distribution. Our model maintains substantially higher accuracy than the Madry model at perturbation sizes beyond 0.3, and achieves a higher NAUC. To our knowledge, this is the most robust MNIST model to ℓ∞ attacks.

Figure 3 (Right) shows the results of our approach on CIFAR-10. Both our model and the Madry model achieve the same NAUC. However, our approach trades natural accuracy for increased robustness against larger perturbations. This tradeoff is well known and explored in Tsipras et al. (2019).

Figure 3: Left: Adversarial training with Voronoi constraints on MNIST. Our model maintains high classification accuracy at perturbation sizes well beyond the ε = 0.3 for which the Madry model was trained, where the Madry model's accuracy has dropped sharply. Right: On CIFAR-10, both models achieve the same NAUC, but our model trades natural accuracy for robustness to larger perturbations.

A natural approach to improving the robustness of models produced by the adversarial training paradigm of Madry et al. (2018) is to simply increase the maximum allowable perturbation size ε of the norm ball constraint. As shown in Figure 4, increasing ε beyond the ε = 0.3 with which Madry et al. (2018) originally trained, while training for only a modest number of epochs, produces a model which exhibits significantly worse robustness at small perturbations than the pretrained model. If we increase the number of training epochs, the approach of Madry et al. (2018) with the larger ε produces a model with improved robustness at small perturbations, but one that still exhibits a sharp drop in accuracy beyond its training ε. Additionally, the longer-trained model performs worse than both the pretrained model and our model at small perturbations, and achieves a lower NAUC than our model. We emphasize that our approach does not require us to set ε, which is particularly important in practice where the maximum amount of robustness achievable may not be known a priori.

Figure 4: The adversarial training of Madry et al. (2018) with (shown in green) produces a model with significantly reduced robustness in the range . Increasing the number of epochs to , the resulting model (shown in purple) does exhibit improved robustness in the range , at the expense of some robustness in the range , and still exhibits a sharp drop in accuracy after . The purple model achieves NAUC of , while our model achieves NAUC .

7 Conclusions

The -ball constraint for describing adversarial perturbations has been a productive formalization for designing robust deep networks. However, the use of -balls has significant drawbacks in high-codimension settings and leads to sub-optimal results in practice. Adversarial training with Voronoi constraints improves robustness by giving the adversary the freedom to explore and generate adversarial examples close to .

References

Appendix A Omitted Proofs

A.1 Proof of Theorem 2

Proof.

Here we use to denote the metric induced by the norm. We begin by proving (1). Let be any point in . Suppose without loss of generality that for some class . The distance from to any other data manifold , and thus any sample on , is lower bounded by . See Figure 5. It is then both necessary and sufficient that there exists a such that for . (Necessary since a properly placed sample on can achieve the lower bound on .) The distance from to the nearest sample on is for some . The question is how large we can allow to be while still guaranteeing that correctly classifies . We need

which implies that . It follows that a -cover with is sufficient, and in some cases necessary, to guarantee that correctly classifies .

Next we prove (2). As before let . By definition of , it is both necessary and sufficient that for some sample in order to guarantee that . The distance to the nearest sample on is for some . Thus it suffices that . ∎
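The nearest neighbor classifier analyzed in this theorem can be stated concretely. The following is a plain 1-NN sketch in the Euclidean norm (illustrative only, since the proof's norm and symbols are elided in this excerpt): with a sufficiently dense cover of each class manifold, the nearest sample to an on-manifold query shares its class, which is what the covering condition above guarantees.

```python
def nearest_neighbor_label(query, samples, labels):
    """1-NN classification over a finite labeled sample set.

    If the samples form a sufficiently dense cover of each class
    manifold (in the sense of the theorem), the nearest sample to any
    on-manifold query point carries the correct class label.
    Illustrative sketch in the Euclidean norm.
    """
    best, best_d = None, float("inf")
    for s, y in zip(samples, labels):
        d = sum((q - s_k) ** 2 for q, s_k in zip(query, s)) ** 0.5
        if d < best_d:
            best, best_d = y, d
    return best
```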

Figure 5: Proof of Theorem 2. The distance from a query point to , and thus the closest incorrectly labeled sample, is lower bounded by the distance necessary to reach the medial axis plus the distance from to .

A.2 Proof of Theorem 3

Proof.

Let . Since is flat, the distance from to the nearest sample is bounded as . For we need that , and so it suffices that . In this setting, this is also necessary; should be any larger, a properly placed sample on can claim in its Voronoi cell.

Similarly for we need that , and so it suffices that . In this setting, this is also necessary; should be any larger, lies outside of every -ball and so is free to learn a decision boundary that misclassifies .

Let denote the size of the minimum -cover of . Since is flat (has no curvature) and since the intersection of with a -ball centered at a point on is a -ball, a standard volume argument can be applied in the affine subspace to conclude that . So we have

A.3 Proof of Theorem 4

Proof.

Assuming the balls centered on the samples in are disjoint, we get the upper bound

(8)

The medial axis of is defined as the closure of the set of all points in that have two or more closest points on in the norm . The medial axis is similar to the decision axis , except that the nearest points do not need to be on distinct class manifolds. For , we have the lower bound

(9)

Combining Equations 8 and 9 gives the result. To get the asymptotic result, we apply Stirling's approximation to get

The last step follows from the fact that , where is the base of the natural logarithm. ∎
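The elementary bound presumably invoked here (consistent with the mention of the base of the natural logarithm; the exact inequality is elided in this excerpt) is the standard Stirling-type estimate:

```latex
k! \;\ge\; \left(\frac{k}{e}\right)^{k},
\qquad\text{equivalently}\qquad
\frac{1}{k!} \;\le\; \left(\frac{e}{k}\right)^{k},
```

which follows from the Taylor series $e^{k} = \sum_{i \ge 0} k^{i}/i! \ge k^{k}/k!$.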

Appendix B Additional Experiments

B.1 Increasing Sampling Density

The Planes dataset is sampled so that the training set is a -cover of the underlying planes, which requires 450 sample points. Figure 6 shows the results of increasing the sampling density to a -cover (1682 samples) and a -cover (6498 samples). In low-codimension, increasing the sampling density improves the robustness of adversarial training. However, in high-codimension, even a substantial increase in the number of samples gives only a small improvement in robustness.
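The covering radius of a sample set can be checked empirically. The sketch below (function and variable names are illustrative, and the manifold is approximated by a finite point set) computes the smallest radius for which the samples form a cover of the given manifold points:

```python
def cover_radius(samples, manifold_points):
    """Empirical covering radius: the largest Euclidean distance from
    any manifold point to its nearest sample. The samples form a
    delta-cover of the (discretized) manifold for any delta at least
    this value. Illustrative sketch for synthetic datasets such as
    the Planes dataset.
    """
    worst = 0.0
    for p in manifold_points:
        d = min(
            sum((a - b) ** 2 for a, b in zip(p, s)) ** 0.5
            for s in samples
        )
        worst = max(worst, d)
    return worst
```

Halving the covering radius of a k-dimensional manifold requires roughly 2^k times as many samples, which is consistent with the rapid growth in sample counts reported above.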

Figure 6: Adversarial training of Madry et al. (2018) on the Planes dataset with a -cover (left), consisting of samples, a -cover (center), samples, and a -cover (right), samples. Increasing the sampling density improves robustness at the same codimension, but the improvement is much less notable in high-codimension.