
Smoothed Embeddings for Certified Few-Shot Learning

02/02/2022
by   Mikhail Pautov, et al.

Randomized smoothing is considered to be the state-of-the-art provable defense against adversarial perturbations. However, it heavily exploits the fact that classifiers map input objects to class probabilities and does not focus on models that learn a metric space in which classification is performed by computing distances to the embeddings of class prototypes. In this work, we extend randomized smoothing to few-shot learning models that map inputs to normalized embeddings. We provide an analysis of the Lipschitz continuity of such models and derive a robustness certificate against ℓ₂-bounded perturbations that may be useful in few-shot learning scenarios. Our theoretical results are confirmed by experiments on different datasets.



1 Introduction

In the regime of scarce data, or when new classes emerge constantly, such as in face recognition, few-shot learning is required. Modern computer vision methods for learning from a few images are based on deep neural networks. These neural networks are intriguingly vulnerable to adversarial perturbations (Szegedy et al., 2013; Goodfellow et al., 2014) – accurately crafted small modifications of the input that may significantly alter the model's prediction. For safety-critical scenarios, these perturbations present a serious threat. Hence, it is important to investigate ways to protect neural networks from such threats.

Several works studied this phenomenon in different applications of neural networks – image classification (Carlini & Wagner, 2017; Moosavi-Dezfooli et al., 2016, 2017; Su et al., 2019), object detection (Kaziakhmedov et al., 2019; Li et al., 2021a; Wu et al., 2020; Xie et al., 2017b), face recognition (Komkov & Petiushko, 2021; Dong et al., 2019; Zhong & Deng, 2020), and semantic segmentation (Fischer et al., 2017; Hendrik Metzen et al., 2017; Xie et al., 2017b). This shows how easy it is for adversaries to maliciously force a model to behave in a desired way. As a result, defenses, both empirical (Dhillon et al., 2018; Zhou et al., 2020; Jang et al., 2019) and provable (Yang et al., 2020; Lecuyer et al., 2019; Cohen et al., 2019; Wong & Kolter, 2018; Zhang et al., 2020; Jia et al., 2019; Weng et al., 2019; Pautov et al., 2021), were proposed recently. Although empirical defenses can be (and often are) broken by more powerful attacks, the provable ones are of great interest, since they make it possible to guarantee the correct behavior of a model under certain assumptions and, thus, possibly broaden the scope of tasks that may be entrusted to neural networks.

Randomized smoothing (Lecuyer et al., 2019; Cohen et al., 2019; Li et al., 2018) is the state-of-the-art approach for constructing classification models that are provably robust against small-norm additive adversarial perturbations. This approach is scalable to large datasets and can be applied to any classifier, since it makes no assumptions about the model's architecture. Generally, the idea is the following. Suppose we are given a base neural network classifier f that maps an input image to a fixed number of class probabilities. Its smoothed version with the Gaussian smoothing distribution N(0, σ²I) is:

(1)   g(x) = E_{ε ∼ N(0, σ²I)} [ f(x + ε) ].

Interestingly, as shown in (Cohen et al., 2019), the new (smoothed) classifier g is provably robust at x to ℓ₂-bounded perturbations if the base classifier f is confident enough at x. However, the proof of certification heavily exploits the fact that the classifiers are restricted to map an input to a fixed number of class probabilities. Thus, directly applying randomized smoothing to classifiers operating in a metric space, such as those used in few-shot learning, is a challenging task.

There are several works that aim at improving the robustness for few-shot classification (Kumar & Goldstein, 2021; Goldblum et al., 2020; Zhang et al., 2019; Liu et al., 2021). However, the focus in such works is either on the improvement of empirical robustness or probabilistic guarantees of certified robustness; none of them provide theoretical guarantees on the worst-case model behavior.

Figure 1: Illustration of the certification pipeline for a single image x. Given a model f and n realisations of zero-mean Gaussian noise, an estimate ĝ(x) of g(x) is computed. Note that g is L-Lipschitz with L = √(2/(πσ²)) according to Theorem 3.1. The procedure is repeated until the adversarial embedding risk r(x) from Theorem 3.3 is computed with Algorithms 1 and 2 and the certified radius R(x) is determined. The model is treated as certified at x for all additive perturbations δ with ‖δ‖₂ < R(x).

In this work, we fill this gap by generalizing and theoretically justifying the idea of randomized smoothing for few-shot learning. In this scenario, provable certification needs to be obtained not in the space of output class probabilities, but in the space of descriptive embeddings. To our knowledge, this is the first work that provides theoretical robustness guarantees for the few-shot scenario.

Our contributions are summarized as follows:

  • We provide the first theoretical robustness guarantee for the few-shot learning classification task.

  • We analyze the Lipschitz continuity of smoothed embedding models and provide robustness certificates against ℓ₂-bounded perturbations for few-shot learning scenarios.

  • We propose to estimate confidence intervals not for the distances between the approximation of the smoothed embedding and a class prototype, but for a dot product of vectors whose expectation is equal to the squared distance between the actual smoothed embedding and the class prototype.

2 Problem statement

2.1 Notation

We consider a few-shot classification problem in which we are given a set of labeled objects S = {(x₁, y₁), …, (x_N, y_N)}, where x_i ∈ ℝᵈ are input objects and y_i are the corresponding labels. We follow the notation from (Snell et al., 2017) and denote by S_k the set of objects of class k.

2.2 Few-shot learning classification

Suppose we have a function f : ℝᵈ → ℝᵐ that maps input objects to the space of normalized embeddings. Then, the m-dimensional prototypes of the classes are computed as follows (the expression is given for the prototype c_k of class k):

(2)   c_k = (1/|S_k|) Σ_{x_i ∈ S_k} f(x_i).

In order to classify a sample, one should compute the distances between its embedding and the class prototypes – the sample is assigned to the class with the closest prototype. Namely, given a distance function d(·, ·), the class of a sample x is computed as below:

(3)   ŷ(x) = argmin_k d(f(x), c_k).
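As an illustration of Equations 2 and 3, the following sketch (a minimal Python example, not the authors' code; the embedding function embed and the support arrays are placeholder assumptions) computes class prototypes and assigns a query to the nearest one under the ℓ₂ distance:

  import numpy as np

  def compute_prototypes(embed, support_x, support_y, num_classes):
      # Eq. (2): the prototype of class k is the mean embedding of its support examples.
      support_y = np.asarray(support_y)
      embs = embed(support_x)                       # (N, m) array of normalized embeddings
      return np.stack([embs[support_y == k].mean(axis=0) for k in range(num_classes)])

  def classify(embed, x, prototypes):
      # Eq. (3): assign x to the class whose prototype is closest in the l2 distance.
      e = embed(x[None])[0]                         # embed is assumed to accept a batch
      return int(np.argmin(np.linalg.norm(prototypes - e, axis=1)))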

Given an embedding function f, our goal is to construct a classifier provably robust to additive perturbations of a small norm. In other words, we want to find a norm threshold R such that the equality

(4)   ŷ(x + δ) = ŷ(x)

will be satisfied for all δ with ‖δ‖₂ < R.

In this paper, the problem of constructing a classifier that satisfies the condition in Equation 4 is approached by extending the analysis of the robustness of smoothed classifiers described in Equation 1 to the case of vector functions. The choice of the distance metric in Equation 4 is motivated by the analysis of Lipschitz continuity given in the next section.

3 Randomized smoothing

3.1 Background

In the original literature (Lecuyer et al., 2019; Cohen et al., 2019), randomized smoothing is described as a technique of convolving a base classifier f with isotropic Gaussian noise, so that the new classifier returns the most probable prediction of f over the random variable x + ε, ε ∼ N(0, σ²I), where the choice of the Gaussian distribution is motivated by the requirement that the smoothed classifier be robust against additive perturbations of bounded ℓ₂ norm. In this case, given a classifier f and smoothing distribution N(0, σ²I), the smoothed classifier looks as follows:

(5)   g(x) = E_{ε ∼ N(0, σ²I)} [ f(x + ε) ].

One can show via Stein's lemma that, given that the function in Equation 5 is bounded (namely, its components lie in [0, 1]), the smoothed function g is Lipschitz:

(6)   ‖g(x) − g(y)‖₂ ≤ L ‖x − y‖₂,

with a constant L that depends only on σ, which immediately produces a theoretical robustness guarantee for g.

Although this approach is simple and effective, it has a serious drawback: in practice, it is impossible to compute the expectation in Equation 5 exactly and, thus, impossible to compute the prediction of the smoothed function at any point. Instead, the integral is approximated by Monte-Carlo sampling with n samples of noise to obtain the prediction with an arbitrary level of confidence. Notably, to achieve an appropriate accuracy of the Monte-Carlo approximation, the number of samples has to be large, which may dramatically affect inference time.
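As a concrete illustration of the Monte-Carlo approximation discussed above, the sketch below (a simplification; the embedding function embed, the sample count n and the random generator are assumptions) averages the base embeddings of n noisy copies of the input:

  import numpy as np

  def smoothed_embedding(embed, x, sigma, n, rng):
      # Monte-Carlo estimate of g(x): average the base embeddings of n noisy copies of x,
      # with noise drawn from N(0, sigma^2 I) as in Equation 1.
      noise = rng.normal(scale=sigma, size=(n,) + x.shape)
      return embed(x[None] + noise).mean(axis=0)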

In this work, we generalize the analysis of Lipschitz-continuity to the case of vector functions and provide robustness guarantees for classification performed in the space of embeddings. The certification pipeline is illustrated in Figure 1.

Figure 2: Illustration of Theorem 3.3, one-shot case. The direction of the adversarial embedding risk in the space of embeddings is always parallel to the vector connecting the two closest prototypes; the same holds when there are more than two prototypes.

3.2 Randomized smoothing for vector functions

Lipschitz-continuity of vector function.

In the work of (Salman et al., 2019), the robustness guarantee from (Cohen et al., 2019) is proved by estimating the Lipschitz constant of the smoothed classifier. Unfortunately, a straightforward generalization of this approach to the case of vector functions leads to the estimation of the expectation of the norm of a multivariate Gaussian, which is known to depend on the dimension of the space. Instead, we show that a simple adjustment of this technique allows the estimate of the Lipschitz constant to remain the same as for the function in Equation 5. Our results are formulated in the theorems below, whose proofs are deferred to the Appendix in order not to distract the reader.

Theorem 3.1.

(Lipschitz-continuity of smoothed vector function) Suppose that f : ℝᵈ → ℝᵐ is a deterministic function and g(x) = E_{ε ∼ N(0, σ²I)} [ f(x + ε) ] is continuously differentiable for all x. If ‖f(x)‖₂ ≤ 1 for all x, then g is Lipschitz in the ℓ₂ norm with L = √(2/(πσ²)).

Remark 3.2.

We perform the analysis of Lipschitz continuity in Theorem 3.1 in the ℓ₂ norm, so the distance metric in Equation 4 is the ℓ₂ distance. We do not consider other norms in this paper.

Robust classification in the space of embeddings.

To provide certification for classification in the space of embeddings, one should estimate the maximum deviation of the classified embedding that does not change the closest class prototype. In Theorem 3.3, we show how this deviation is connected with the mutual arrangement of the embedding and the class prototypes.

  Function CLOSEST(f, σ, x, n, {c_1, …, c_K}, N_max, α)
     N ← 0, gs ← ∅, Z_k ← ∅ for all k ∈ {1, …, K}
     while N < N_max do
        N ← N + 1
        sample ε_1, …, ε_n and ε'_1, …, ε'_n from N(0, σ²I)
        ĝ ← (1/n) Σ_i f(x + ε_i),  ĝ' ← (1/n) Σ_i f(x + ε'_i)
        gs ← gs ∪ {ĝ, ĝ'}
        for all k ∈ {1, …, K} do
           Z_k ← Z_k ∪ {⟨ĝ − c_k, ĝ' − c_k⟩}
           [l_k, u_k] ← TwoSidedConfInt(Z_k, α)
        end for
        if the two leftmost of the intervals [l_1, u_1], …, [l_K, u_K] do not overlap then
           Return A, gs   {A is the index of the leftmost interval}
        else
           continue   {Continue to compute approximations until the number of observations is large enough to determine the two leftmost intervals or until N = N_max}
        end if
     end while
     Return ABSTAIN
  EndFunction
Algorithm 1 Closest prototype computation algorithm. Given base classifier f, noise level σ, object x, number of samples n for computing a single observation, set of prototypes {c_1, …, c_K}, threshold N_max on the number of observations of ĝ, and confidence level α, returns the index of the prototype closest to g(x).
Theorem 3.3.

(Adversarial embedding risk) Given an input image x with smoothed embedding g(x), the closest point to g(x) on the decision boundary in the embedding space (see Figure 2) is located at the distance (defined as the adversarial embedding risk)

(7)   r(x) = (‖g(x) − c₂‖₂² − ‖g(x) − c₁‖₂²) / (2 ‖c₁ − c₂‖₂),

where c₁ and c₂ are the two closest prototypes. The value of r(x) is the distance between the classified embedding g(x) and the decision boundary between the classes represented by c₁ and c₂. Note that this is the minimum distortion in the embedding space required to change the prediction for x.

The two previous results combined give a robustness guarantee for few-shot classification, formulated as follows:

Theorem 3.4.

(Robustness guarantee) The ℓ₂-robustness guarantee for an input image x in the d-dimensional input space under classification by the smoothed classifier from Theorem 3.1 is

(8)   R(x) = r(x) / L,

where L is the Lipschitz constant from Theorem 3.1 and r(x) is the adversarial embedding risk from Theorem 3.3. The value of R(x) is the certified radius at x, or, in other words, the minimum distortion in the input space required to change the prediction of the smoothed classifier. The proof of this fact straightforwardly follows from the definition in Equation 6 and the results of Theorems 3.1 and 3.3.
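To make the combination of Theorems 3.1, 3.3 and 3.4 concrete, the sketch below computes the adversarial embedding risk and the resulting certified radius from an already estimated smoothed embedding and the two closest prototypes. It is a simplification that ignores the confidence-interval machinery of Section 4 and assumes the Lipschitz constant stated in Theorem 3.1:

  import numpy as np

  def embedding_risk(g_x, c1, c2):
      # Theorem 3.3: distance from g(x) to the boundary between c1 (closest) and c2.
      d1, d2 = np.linalg.norm(g_x - c1), np.linalg.norm(g_x - c2)
      return (d2**2 - d1**2) / (2.0 * np.linalg.norm(c1 - c2))

  def certified_radius(g_x, c1, c2, sigma):
      # Theorem 3.4: R(x) = r(x) / L with L = sqrt(2 / (pi * sigma^2)) from Theorem 3.1.
      L = np.sqrt(2.0 / (np.pi * sigma**2))
      return embedding_risk(g_x, c1, c2) / L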

4 Certification protocol

In this section, we describe the numerical implementation of our approach and estimate the failure probability of the numerical procedures used.

4.1 Estimation of prediction of smoothed classifier

As mentioned in the previous sections, in the few-shot setting, classification is performed by assigning an object to the closest class prototype. Unfortunately, given the smoothed function g in the form from Theorem 3.1 and the class prototypes from Equation 2, it is impossible to compute the value of g(x) explicitly, and hence to determine the closest prototype exactly, since the expectation over the noise cannot be evaluated in closed form. In our work, we propose both

  • to estimate the closest prototype for classification and

  • to estimate the distance to the closest decision boundary from Theorem 3.3 as the largest class-preserving perturbation in the space of embeddings

by computing two-sided confidence intervals for random variables

(9)

where

(10)   ĝ(x) = (1/n) Σ_{i=1}^{n} f(x + ε_i),   ε_i ∼ N(0, σ²I),

is the estimation of g(x) computed as an empirical mean over n samples of noise, and a one-sided confidence interval for the adversarial risk r(x) from Theorem 3.3, respectively. Pseudo-code for both procedures is presented in Algorithms 1 and 2.

  Function EMBEDDING-RISK(f, σ, x, n, {c_1, …, c_K}, N_max, α)
     A, gs_A ← CLOSEST(f, σ, x, n, {c_1, …, c_K}, N_max, α)
     B, gs_B ← CLOSEST(f, σ, x, n, {c_1, …, c_K} \ {c_A}, N_max, α)
     gs ← gs_A ∪ gs_B
     R ← ∅
     for all ĝ ∈ gs do
        r̂ ← (‖ĝ − c_B‖₂² − ‖ĝ − c_A‖₂²) / (2 ‖c_A − c_B‖₂)
        R ← R ∪ {r̂}
     end for
     r ← LowerConfBound(R, α)
     Return r
  EndFunction
Algorithm 2 Adversarial embedding risk computation algorithm. Given base classifier f, noise level σ, object x, number of samples n for computing a single observation, set of prototypes {c_1, …, c_K}, threshold N_max on the number of observations of ĝ, and confidence level α, returns a lower bound for the adversarial risk r(x) from Theorem 3.3.

Algorithm 1 describes the inference procedure for the smoothed classifier from Theorem 3.1; Algorithm 2 uses Algorithm 1 and, given the input parameters, estimates the adversarial risk from Theorem 3.3 in the following way:

  • Firstly, Algorithm 1 determines the prototype c_A closest to the smoothed embedding g(x) among all prototypes and returns the computed approximate smoothed embeddings used in the computation of the closest prototype;

  • Secondly, the second closest prototype c_B and the corresponding approximations are computed in a similar way;

  • Thirdly, given c_A, c_B and the set of approximate smoothed embeddings, an empirical adversarial risk from Theorem 3.3 is computed for each approximation;

  • Finally, given the observations from the previous step, a lower confidence bound for the adversarial risk is computed.

Combined with the analysis from Theorem 3.1, this provides the certified radius for a sample – the smallest norm of a perturbation in the input space required to change the prediction of the smoothed classifier.

In the next subsection, we discuss in detail the procedure of computing confidence intervals in Algorithms 2 and 1.

4.2 Analysis of applicability of algorithms

The computations of the smoothed function and of the distances to the class prototypes and to the decision boundary in Algorithm 1 and Algorithm 2, respectively, are numerical and operate with estimates of random variables; thus, it is necessary to analyze their applicability. In this section, we propose a way to compute confidence intervals for the squares of the distances between the estimates of embeddings in the form from Equation 10 and the class prototypes.

Computation of confidence intervals for the squares of distances. Recall that one way to estimate the value of a parameter of a random variable is to compute a confidence interval for the corresponding statistic. In this work, we construct intervals by applying the well-known Hoeffding inequality (Hoeffding, 1994) in the form

(11)   P( |X̄ − μ| ≥ t ) ≤ 2 exp( −2 N² t² / Σ_{i=1}^{N} (b_i − a_i)² ),

where X̄ and μ are the sample mean and the population mean of the random variable X, respectively, N is the number of samples and the numbers a_i, b_i are such that a_i ≤ X_i ≤ b_i.
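Below is a minimal sketch of a Hoeffding-based two-sided confidence interval for a sample of bounded observations. The uniform bound value_range and the confidence parameter alpha are illustrative, and this is only one possible form of the TwoSidedConfInt routine, not necessarily the implementation used by the authors:

  import numpy as np

  def hoeffding_interval(samples, value_range, alpha):
      # Two-sided (1 - alpha) confidence interval for the mean of a bounded random variable.
      # Inverting Eq. (11) with identical bounds a <= X_i <= b gives the half-width
      # t = (b - a) * sqrt(log(2 / alpha) / (2 * N)).
      a, b = value_range
      x = np.asarray(samples, dtype=float)
      t = (b - a) * np.sqrt(np.log(2.0 / alpha) / (2.0 * len(x)))
      m = x.mean()
      return m - t, m + t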

However, a confidence interval for the distance d(ĝ(x), c_k) with a certain confidence covers the expectation of the distance, E[ d(ĝ(x), c_k) ], and not the distance computed for the expectation, d( E[ĝ(x)], c_k ) = d( g(x), c_k ).

To solve this problem, we propose to compute confidence intervals for a dot product of vectors. Namely, given the quantity

(12)   Δ_k = ‖g(x) − c_k‖₂²,

we sample N of its unbiased estimates, each computed with n samples of noise (here we have to mention that the number of samples from Algorithm 1 actually doubles, since we need a pair of independent estimates of the smoothed embedding):

(13)   ĝ⁽¹⁾(x) = (1/n) Σ_{i=1}^{n} f(x + ε_i),   ĝ⁽²⁾(x) = (1/n) Σ_{i=1}^{n} f(x + ε'_i),
(14)   Ẑ_k = ⟨ ĝ⁽¹⁾(x) − c_k, ĝ⁽²⁾(x) − c_k ⟩,

and compute a confidence interval

(15)   [l_k, u_k]

such that, given a confidence level

(16)   α ∈ (0, 1),

the population mean is most probably located within it:

(17)   P( E[Ẑ_k] ∈ [l_k, u_k] ) ≥ 1 − α.

Note that the population mean is exactly Δ_k = ‖g(x) − c_k‖₂², since

(18)   E[Ẑ_k] = E ⟨ ĝ⁽¹⁾(x) − c_k, ĝ⁽²⁾(x) − c_k ⟩
(19)          = ⟨ E[ĝ⁽¹⁾(x)] − c_k, E[ĝ⁽²⁾(x)] − c_k ⟩
(20)          = ‖g(x) − c_k‖₂²,

since ĝ⁽¹⁾(x) and ĝ⁽²⁾(x) are independent random variables. Finally, note that the confidence interval [l_k, u_k] for the quantity ‖g(x) − c_k‖₂² implies the confidence interval

(21)   [ √max(l_k, 0), √max(u_k, 0) ]

for the quantity ‖g(x) − c_k‖₂. Thus, the procedures TwoSidedConfInt and LowerConfBound from the algorithms return the interval from Equation 21 and its left bound for the random variable representing the corresponding distance, respectively.
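The following sketch illustrates the proposed estimator: two independent Monte-Carlo approximations of the smoothed embedding are paired so that the dot product of their deviations from a prototype is an unbiased observation of the squared distance between the true smoothed embedding and that prototype (function and parameter names are illustrative):

  import numpy as np

  def squared_distance_observation(embed, x, c, sigma, n, rng):
      # One unbiased observation of ||g(x) - c||^2 via two independent noise batches.
      g1 = embed(x[None] + rng.normal(scale=sigma, size=(n,) + x.shape)).mean(axis=0)
      g2 = embed(x[None] + rng.normal(scale=sigma, size=(n,) + x.shape)).mean(axis=0)
      return float(np.dot(g1 - c, g2 - c))   # E[<g1-c, g2-c>] = ||g(x)-c||^2 by independence

Collecting such observations and feeding them into a Hoeffding interval as sketched earlier yields, after taking square roots, an interval of the form from Equation 21.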

Figure 6: Dependency of certified accuracy on attack radius for different values of the noise level σ, 1-shot case. Panels: (a) Cub-200-2011, (b) CIFAR-FS, (c) miniImageNet.
Figure 10: Dependency of certified accuracy on attack radius for different values of the noise level σ, 5-shot case. Panels: (a) Cub-200-2011, (b) CIFAR-FS, (c) miniImageNet.

5 Experiments

5.1 Datasets

For the experimental evaluation of our approach, we use several well-known datasets for few-shot learning classification. Cub-200-2011 (Wah et al., 2011) is a dataset of images of 200 bird species, where the images of one part of the species form the train subset and the images of the remaining species form the test subset. Notably, many species presented in the dataset have a high degree of visual similarity, which makes their classification a challenging task even for humans.

miniImageNet (Vinyals et al., 2016) is a subset of images from the ILSVRC 2015 (Russakovsky et al., 2015) dataset with 100 image categories, each containing 600 images of size 84×84, split into disjoint train and test subsets of categories. CIFAR-FS (Bertinetto et al., 2018) is a subset of the CIFAR-100 (Krizhevsky et al., 2009) dataset which was generated in the same way as miniImageNet and likewise contains disjoint train and test subsets of categories. The experimental setup for all the datasets is presented in the next section.

5.2 Experimental settings and computation cost

Following (Cohen et al., 2019), we compute the approximate certified test set accuracy to estimate the performance of the smoothed model, with predictions computed by Algorithm 1 and embedding risks computed by Algorithm 2. The baseline model used in our experiments is a prototypical network introduced in (Snell et al., 2017) with a ConvNet-4 backbone. Compared to the original architecture, an additional fully-connected layer was added at the tail of the network to map embeddings to a 512-dimensional vector space. The model was trained to solve 1-shot and 5-shot classification tasks on each dataset, with 5-way classification on each iteration.

Parameters of experiments. For data augmentation, we applied Gaussian noise with zero mean and unit variance with a fixed probability of augmentation. Each dataset was certified on a subsample of 500 images with the default parameters for Algorithm 1: number of samples n, confidence level α and variance σ², unless stated otherwise. For our settings, it may be shown from simple geometry that the quantities a_i and b_i from Equation 11 are bounded by a constant, which is the bound we use in the Hoeffding inequality. The threshold N_max on the number of computed approximations of the smoothed function in Algorithm 1 is fixed in all experiments.

Computation cost. In the table below, we report the computation time of the certification procedure per image on a Tesla V100 GPU for the Cub-200-2011 dataset. The standard deviation in seconds is significant because the number of main-loop iterations required to separate the two leftmost confidence intervals varies from image to image in the test set.

n        1000          3000           5000
t, sec   73.1 ± 68.0   221.9 ± 194.7  300.6 ± 296.1
Table 1: Computation time per image (mean ± standard deviation) of the implementation of Algorithm 2, Cub-200-2011.

5.3 Results of experiments

In this section, we report the results of our experiments. In our evaluation protocol, we compute the approximate certified test set accuracy. Given a sample x, the smoothed classifier from Theorem 3.1 with the assigned classification rule

(22)   C(x) = argmin_k ‖ g(x) − c_k ‖₂,

a threshold value T for the norm of the additive perturbation, and the robustness guarantee R(x) from Theorem 3.4, we compute the certified accuracy on the test set D as follows:

(23)   CA(T) = (1/|D|) Σ_{(x, y) ∈ D} 1[ C(x) = y and R(x) > T ].

In other words, we treat the model as certified at a point x under perturbations of norm T if x is correctly classified by the smoothed classifier (which means that the classification procedure described in Algorithm 1 does not abstain on x) and the certified radius at x is larger than T.
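A sketch of the evaluation metric from Equation 23, assuming per-sample certified radii, predictions (with abstentions marked as None) and ground-truth labels have already been computed:

  import numpy as np

  def certified_accuracy(radii, predictions, labels, threshold):
      # Fraction of test samples that are correctly classified (no abstention)
      # and whose certified radius exceeds the attack radius `threshold` (Eq. 23).
      ok = [(p is not None) and (p == y) and (r > threshold)
            for r, p, y in zip(radii, predictions, labels)]
      return float(np.mean(ok))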

Visualization of results. Figures 6 and 10 below show the dependency of certified accuracy on the norm of the additive perturbation (attack radius) for different learning settings (1-shot and 5-shot learning). The value of the attack radius corresponds to the threshold T from Equation 23. For the Cub-200-2011 dataset, we also provide the dependency of certified accuracy on the number of samples n for Algorithm 1 (Figure 11).

6 Limitations

In this section, we describe the limitations of our approach. Namely, we provide the failure probabilities of Algorithms 1 and 2 and discuss abstentions from classification in Algorithm 1.

6.1 Estimation of errors of algorithms

Note that the value of α from Equation 16 is the probability that the corresponding distance does not belong to the confidence interval of the form from Equation 21.

Given a sample x, the procedure in Algorithm 1 returns the two prototypes closest to the smoothed embedding. Note that if the two leftmost confidence intervals are determined, each of the two closest prototypes is established with probability at least 1 − α; thus, by the independence of the computations of these two intervals, the error probability for Algorithm 1 is 1 − (1 − α)². This value corresponds to returning a pair of class prototypes at least one of which is not actually among the two closest ones, given that Algorithm 1 does not abstain.

Similarly, the procedure in Algorithm 2 outputs the lower bound for the adversarial risk with coverage at least 1 − α and depends on the output of Algorithm 1; thus, it has error probability 1 − (1 − α)³, which corresponds to returning an overestimated lower bound for the adversarial risk from Theorem 3.3.

Figure 11: Dependency of certified accuracy on attack radius for different numbers of samples n in Algorithm 1, Cub-200-2011 dataset, 1-shot case. It is notable that a relatively small number of samples may be used to achieve a satisfactory level of certified accuracy. Indeed, the gain in certified accuracy with the growth of the number of samples is small compared to the increase in computation (see Table 1).

6.2 Abstentions from classification

It is crucial to note that the procedure in Algorithm 1 may require a large number of observations of the approximated smoothed embedding to distinguish the two leftmost confidence intervals, and sometimes does not terminate before reaching the threshold on the number of iterations. Hence, there may be samples for which the inference protocol of Algorithm 1 does not finish in a reasonable number of iterations and, thus, the associated smoothed classifier can be neither evaluated nor certified at these points. In this subsection, we report the number of objects for which Algorithm 1 abstains from determining the closest prototype. The fractions of abstained samples for different values of the confidence level for both 1-shot and 5-shot scenarios are reported in Tables 2 and 3 below.

Cub-200-2011 23.8% 28.0% 32.0%
CIFAR-FS 26.0% 30.8% 34.2%
miniImageNet 26.6% 32.0% 36.2%
Table 2: Percentage of non-certified objects in test subset, 1-shot case.
Cub-200-2011 26.2% 31.2% 35.8%
CIFAR-FS 25.8% 31.0% 34.0%
miniImageNet 26.0% 31.4% 34.6%
Table 3: Percentage of non-certified objects in test subset, 5-shot case.

7 Related work

Breaking neural networks with adversarial attacks and empirically defending against them have a long history of a cat-and-mouse game: for a particular proposed defense against existing adversarial perturbations, a new, more aggressive attack is soon found. This motivated researchers to look for defenses that are mathematically provable and certifiably robust to different kinds of input manipulations. Several works proposed exactly verified neural networks based on Satisfiability Modulo Theories solvers (Katz et al., 2017; Ehlers, 2017) or mixed-integer linear programming (Lomuscio & Maganti, 2017; Fischetti & Jo, 2017). These methods are computationally inefficient, although they are guaranteed to find adversarial examples if they exist. Another line of work uses more relaxed certification (Wong & Kolter, 2018; Gowal et al., 2018; Raghunathan et al., 2018). Although these methods aim to guarantee that an adversary does not exist in a certain region around a given input, they scale poorly to big networks and large datasets. The only provable defense against adversarial perturbations that scales to large datasets is randomized smoothing. Initially, it was proposed as an empirical defense to mitigate adversarial effects in neural networks (Liu et al., 2018; Xie et al., 2017a). Later, several works provided mathematical proofs of its certified robustness (Lecuyer et al., 2019; Li et al., 2018; Cohen et al., 2019; Salman et al., 2019). Lecuyer et al. (Lecuyer et al., 2019) first provided proofs of certificates against adversarial examples using differential privacy. Then Li et al. (Li et al., 2018) proposed tighter guarantees using Renyi divergence. Later, Cohen et al. (Cohen et al., 2019) provided the tightest bound using the Neyman-Pearson lemma. Interestingly, an alternative proof using Lipschitz continuity was found in (Salman et al., 2019). The scalability and simplicity of randomized smoothing attracted significant attention, and it was extended beyond ℓ₂ perturbations (Lee et al., 2019; Teng et al., 2019; Li et al., 2021b; Levine & Feizi, 2020b, a; Kumar & Goldstein, 2021; Mohapatra et al., 2020; Yang et al., 2020). Perhaps (Kumar & Goldstein, 2021) is the closest work to ours: its authors extend randomized smoothing to vector-valued metric spaces with the IoU/Jaccard distance for object localization, perceptual distance for generative models, and total variation. However, their work does not consider descriptive embeddings for few-shot learning.

8 Conclusion and future work

In this work, we extended randomized smoothing as a defense against additive norm-bounded adversarial attacks to the case of classification in the embedding space, which is used in few-shot learning scenarios. We performed an analysis of the Lipschitz continuity of smoothed normalized embeddings and derived a robustness certificate against ℓ₂-bounded attacks. Our theoretical findings are supported experimentally on several datasets. There are several directions for future work: our approach can possibly be extended to other types of attacks, such as semantic transformations; also, it is important to reduce the computational complexity of the certification procedure.

References

Appendix A Proofs.

In this section, we provide proofs of the main results stated in our work.

Theorem A.1.

Suppose that f : ℝᵈ → ℝᵐ is a deterministic function with the corresponding continuously differentiable smoothed function g(x) = E_{ε ∼ N(0, σ²I)} [ f(x + ε) ]. Then, if ‖f(x)‖₂ ≤ 1 for all x, g is Lipschitz in the ℓ₂ norm with L = √(2/(πσ²)).

Proof.

It is known that an everywhere differentiable function g with Jacobian matrix J_g(x) is Lipschitz in the ℓ₂ norm with L = sup_x ‖J_g(x)‖₂, where ‖·‖₂ here denotes the spectral norm of J_g(x).

Taking into account the fact that

(24)   g(x) = (2πσ²)^{−d/2} ∫ f(t) exp( −‖t − x‖₂² / (2σ²) ) dt,

we may derive its Jacobian matrix:

(25)   J_g(x) = (2πσ²)^{−d/2} ∫ f(t) ( (t − x)ᵀ / σ² ) exp( −‖t − x‖₂² / (2σ²) ) dt
(26)          = E_{ε ∼ N(0, σ²I)} [ f(x + ε) εᵀ ] / σ²,

where ε ∼ N(0, σ²I). In order to estimate the spectral norm of J_g(x), one can estimate the norm of its product with a normalized vector u, ‖u‖₂ = 1:

(27)   J_g(x) u = E_{ε ∼ N(0, σ²I)} [ f(x + ε) (εᵀ u) ] / σ².

Here, we apply a trick: it is possible to rotate the vectors in the dot product in such a way that one of the resulting vectors has only one nonzero component after the rotation (without loss of generality, assume that this is the first component, i.e., Ru = e₁). Namely, given a rotation matrix R that is unitary (RᵀR = I), the expression from Eq. 27 becomes

(28)   J_g(x) u = E_{ε ∼ N(0, σ²I)} [ f(x + ε) (Rε)ᵀ (Ru) ] / σ² = E_{ε ∼ N(0, σ²I)} [ f(x + ε) (Rε)₁ ] / σ².

Now, since the rotation does not affect the norm, ‖Rε‖₂ = ‖ε‖₂ and thus Rε ∼ N(0, σ²I). Moreover, under the change of variables ε' = Rε the following holds:

  • ‖ε'‖₂ = ‖ε‖₂, since rotation is a norm-preserving operation;

  • ε = Rᵀε';

  • dε' = |det R| dε = dε for the differentials, leading to E_{ε ∼ N(0, σ²I)}[·] = E_{ε' ∼ N(0, σ²I)}[·].

Thus, the expression from Eq. 28 becomes

(29)   J_g(x) u = E_{ε' ∼ N(0, σ²I)} [ f(x + Rᵀε') ε'₁ ] / σ².

Now, we bound the norm from Eq. 29 using the Cauchy–Schwarz inequality:

(30)   ‖J_g(x) u‖₂ ≤ E_{ε'} [ ‖f(x + Rᵀε')‖₂ · |ε'₁| ] / σ²
(31)               ≤ E_{ε'} [ |ε'₁| ] / σ²,

since ‖f(x + Rᵀε')‖₂ ≤ 1 by the assumption of the theorem. Here ε'₁ is the first component of ε'.

The remaining expectation is known to be equal to E_{ε'} |ε'₁| = σ √(2/π) and, thus,

(32)   ‖J_g(x) u‖₂ ≤ √(2/π) / σ.

Taking a supremum over all unit vectors u, we immediately get L = sup_x ‖J_g(x)‖₂ ≤ √(2/(πσ²)), which finalizes the proof.
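The remaining numerical ingredient, E|ε₁| = σ√(2/π) for ε₁ ∼ N(0, σ²), is the standard mean absolute value of a centered Gaussian; it can be sanity-checked with a short simulation (a verification sketch only, not part of the proof):

  import numpy as np

  rng = np.random.default_rng(0)
  sigma = 0.5
  eps = rng.normal(scale=sigma, size=1_000_000)
  print(np.abs(eps).mean())            # empirical E|eps_1|
  print(sigma * np.sqrt(2 / np.pi))    # closed form sigma * sqrt(2/pi) ~ 0.3989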

Theorem A.2.

(Adversarial embedding risk) Given an input image x with smoothed embedding g(x), the closest point to g(x) on the decision boundary in the embedding space (see Figure 2) is located at the distance (defined as the adversarial embedding risk)

(33)   r(x) = (‖g(x) − c₂‖₂² − ‖g(x) − c₁‖₂²) / (2 ‖c₁ − c₂‖₂),

where c₁ and c₂ are the two closest prototypes. The value of r(x) is the distance between the classified embedding g(x) and the decision boundary between the classes represented by c₁ and c₂. Note that this is the minimum distortion in the embedding space required to change the prediction for x.

Proof.

For convenience, we redraw Figure 2 as Figure 12 with labeled points.

Figure 12: The direction of the adversarial embedding risk in the space of embeddings is always parallel to the vector connecting the two closest prototypes; the same holds when there are more than two prototypes.

In Figure 12, O is the origin and the remaining labeled points correspond to the embedding g(x), the two closest prototypes c₁ and c₂, and the closest point of the decision boundary.

We need to solve the following problem:

(34)
(35)

It is obvious that to satisfy the minimality requirement we should consider a displacement perpendicular to the decision boundary. Therefore, to minimize the displacement, we need to find the distance from the embedding to the decision boundary, which is attained along the perpendicular.

(36)
(37)
(38)
(39)

Solving this equation implies

(40)
(41)
(42)
(43)
(44)

Using the relations above, we find that

(45)
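The closed-form distance of Theorem A.2 can be checked numerically against a direct projection onto the hyperplane of points equidistant from the two prototypes; the following sketch is such a consistency check (the random dimensions and vectors are illustrative):

  import numpy as np

  rng = np.random.default_rng(1)
  g, c1, c2 = rng.normal(size=(3, 8))           # random embedding and two prototypes

  # Closed form: distance from g to the set of points equidistant from c1 and c2.
  d1, d2 = np.linalg.norm(g - c1), np.linalg.norm(g - c2)
  r_closed = (d2**2 - d1**2) / (2 * np.linalg.norm(c1 - c2))

  # Direct form: signed distance to the bisector hyperplane {z : ||z-c1|| = ||z-c2||},
  # whose normal is (c2 - c1) and which passes through the midpoint (c1 + c2) / 2.
  normal = (c2 - c1) / np.linalg.norm(c2 - c1)
  r_direct = np.dot((c1 + c2) / 2 - g, normal)

  print(r_closed, r_direct)                     # the two values coincide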

Appendix B Additional experiments.

In this section, we provide results of additional experiments.

B.1 Dependency of certified accuracy on the confidence level α.

It is desirable to understand how the certified accuracy from Equation 23 depends on the confidence level α and, hence, on the probabilities of failure of Algorithms 1 and 2 discussed in Section 6.1. In Figures 16 and 20, we report the difference in certified accuracy for several values of α. In Table 4, the correspondence between these values of α and the error probabilities of Algorithms 1 and 2 is reported.

Figure 16: Dependency of certified accuracy on attack radius for different values of the confidence level α, 1-shot case. Panels: (a) Cub-200-2011, (b) CIFAR-FS, (c) miniImageNet.
Figure 20: Dependency of certified accuracy on attack radius for different values of the confidence level α, 5-shot case. Panels: (a) Cub-200-2011, (b) CIFAR-FS, (c) miniImageNet.
Confidence level α                   0.01       0.001      0.0001
Error of Algorithm 1, 1 − (1 − α)²   0.0199     0.001999   0.00019999
Error of Algorithm 2, 1 − (1 − α)³   0.029701   0.002997   0.00029997
Table 4: Dependency of the error probabilities of Algorithms 1 and 2 from Section 6.1 on the confidence level α.

It is notable that the drop in certified accuracy remains small even when the probability of failure of both Algorithms 1 and 2 is decreased by two orders of magnitude.