The widespread deployment of machine learning models, coupled with the discovery of their fragility against carefully crafted manipulation of training and/or test samples [Biggio and Roli2017, Grosse et al.2017a, Szegedy et al.2013], calls for safe approaches to AI that enable their use in safety-critical applications, as argued, e.g., in [Seshia, Sadigh, and Sastry2016, Dreossi, Donzé, and Seshia2017]. Bayesian techniques, in particular, provide a principled way of incorporating a-priori information into the training process, so as to obtain an a-posteriori distribution on test data which also takes into account the uncertainty in the learning process. Recent advances in Bayesian learning include adversarial attacks on Bayesian inference [Grosse et al.2017b] and approximate Bayesian inference for deep learning [Gal and Ghahramani2016]. However, much of the work on formal guarantees for machine learning models has focused on non-Bayesian models, such as deep neural networks (NNs) [Huang et al.2017, Hein and Andriushchenko2017], and, to the best of our knowledge, there is no work directed at providing formal guarantees on the absence of adversarial local input perturbations in Bayesian prediction settings.
Gaussian processes (GPs) are a class of stochastic processes that, due to their many favourable properties, are widely employed for Bayesian learning [Rasmussen2004], with applications spanning robotics, control systems and biological processes [Sadigh and Kapoor2015, Laurenti et al.2017, Bortolussi et al.2018]. Further, driven by pioneering work that first recognized the convergence of fully-connected NNs to GPs in the limit of infinitely many neurons [Neal2012], GPs have recently been used as a model to characterize the behaviour of NNs in terms of convergence analysis [Matthews et al.2018], approximate Bayesian inference [Lee et al.2017] and training algorithms [Chouza, Roberts, and Zohren2018].
In this paper we compute formal local robustness guarantees for Bayesian inference with GP priors. The resulting guarantees are probabilistic, as they take into account the uncertainty intrinsic in the Bayesian learning process and explicitly work with the a-posteriori output distribution of the GP. More specifically, given a GP model trained on a data set, a test input point and a neighbourhood around the latter, we are interested in computing the probability that there exists a point in the neighbourhood on which the prediction of the GP differs from the prediction on the test point by at least a given threshold. This implicitly gives guarantees on the absence of adversarial examples, that is, input samples that trick a machine learning model into performing wrong predictions.
Unfortunately, computing such a probability is far from trivial. In fact, given a compact set $T$ containing the test point $x^*$, the above measure reduces to the probability that there exists a function $f$ sampled from the GP and a point $x \in T$ for which $\|f(x) - f(x^*)\| > \delta$, where $\delta > 0$ is a threshold and $\|\cdot\|$ is a metric norm. Since the set $T$ is composed of an infinite number of points, computing such a measure for general stochastic processes is extremely challenging. However, for GPs we can obtain tight upper bounds on the above probability by making use of inequalities developed in the theory of GPs, such as the Borell-TIS inequality [Adler and Taylor2009] and Dudley's entropy integral [Dudley1967]. To do this, we need to obtain lower and upper bounds on the extrema of the a-posteriori GP mean and variance functions on neighbourhoods of a given test point. We obtain these bounds by constructing lower and upper approximations of the GP kernel as a function of the test point, which are then propagated through the GP inference formulas. Safe approximations of these values are then obtained by posing a series of optimization problems that can be solved either analytically or by standard convex optimization techniques. We illustrate the above framework with explicit algorithmic techniques for GPs built with squared-exponential and ReLU kernels.
Finally, we apply the methods presented here to characterize the robustness of GPs with ReLU kernel trained on a subset of images from the MNIST dataset. Relying on the weak convergence between fully-connected NNs with ReLU activation functions and the corresponding GPs with ReLU kernel, we analyze the behaviour of such networks on adversarial images in a Bayesian setting. We use SIFT [Lowe2004] to focus on important patches of the image, and perform feature-level safety analyses of test points included in the dataset. We apply the proposed methods to evaluate the resilience of features against (generic) adversarial perturbations bounded in $\ell_\infty$-norm, and discuss how this is affected by stronger perturbations and different misclassification thresholds. We perform a parametric optimization analysis of the maximum prediction variance around specific test points, in an effort to characterize active defences against adversarial examples that rely on variance thresholding. In the examples we studied, we have consistently observed that, while an increased number of training samples may significantly help detect adversarial examples by means of prediction uncertainty, this may be undermined by more complex architectures.
In summary, the paper makes the following main contributions:
We provide tight upper bounds on the probability that the prediction of a Gaussian process over a neighbourhood remains close to its prediction at a given test point, which can be used to quantify local robustness against adversarial examples.
We develop algorithmic methods for the computation of extrema of GP mean and variance over a compact set.
Relying on convergence between fully-connected NNs and GPs, we apply the developed methods to provide feature-level analysis of the behaviour of the former on the MNIST dataset.
Why probabilistic local robustness guarantees?
Our results provide formal local robustness guarantees in the sense that the resulting bounds are sound with respect to a neighbourhood of an input and do not rely on numerical approximation. This enables the certification of Bayesian methods that is necessary in safety-critical applications, and is in contrast with many existing pointwise approaches to detecting adversarial examples in Bayesian models, which are generally based on heuristics, such as rejecting test points with high uncertainty [Li and Gal2017, Feinman et al.2017]. We illustrate the intuition with the following simple example.
Let $X = (X(t),\, t \in T)$ be a zero-mean stochastic process with values in $\mathbb{R}$, where $T = \{t_1, \ldots, t_n\}$ is a finite set of points. Consider the following widely used definition of safety for the set $T$:
$$P_{safe}(T, \delta) = P\big(\forall t \in T,\; |X(t)| \le \delta\big), \qquad (1)$$
where $\delta > 0$ is a given threshold.
Assume that, for all $t_i \neq t_j$, $X(t_i)$ and $X(t_j)$ are independently and identically distributed random variables such that, for each $t \in T$, we have $P(|X(t)| \le \delta) = p$ for some $p$ close to, but smaller than, $1$. Then, if we compute the above property we obtain
$$P_{safe}(T, \delta) = p^{\,n},$$
which vanishes as $n$ grows.
Thus, even though at each point $X$ has relatively high probability of being safe, $P_{safe}(T, \delta)$ is still small. This is because safety, as defined in Eqn (1), depends on a set of points, and this must be accounted for to give robustness guarantees for a given stochastic model. Note that, to simplify, we used a discrete set $T$, but the same reasoning remains valid even if $T$ is an uncountable compact set, as in this paper.
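The collapse of joint safety can be checked numerically. A minimal sketch follows; the per-point safety probability and the number of points are illustrative values of ours, not taken from the example:

```python
import numpy as np

# Illustrative numbers (ours): at each of n points the process is safe
# with probability p, independently across points.
p, n = 0.99, 100

# Probability that ALL points are simultaneously safe: p^n.
p_safe_set = p ** n
print(round(p_safe_set, 3))  # 0.366

# Monte Carlo check: treat per-point safety as independent Bernoulli(p).
rng = np.random.default_rng(0)
safe = rng.random((100_000, n)) < p
print(round(safe.all(axis=1).mean(), 3))
```

Even with 99% pointwise safety, the set-level safety probability drops below 37%.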
Existing formal approaches for machine learning models mostly focus on computing non-probabilistic local safety guarantees [Raghunathan, Steinhardt, and Liang2018, Huang et al.2017, Ruan, Huang, and Kwiatkowska2018] and generally neglect the uncertainty intrinsic in the learning process, which a Bayesian model explicitly captures. Recently, empirical methods to detect adversarial examples for Bayesian NNs that utilise pointwise uncertainty have been introduced [Li and Gal2017, Feinman et al.2017]. However, these approaches can be fooled by attacks that generate adversarial examples with small uncertainty, as shown in [Carlini and Wagner2017]. Unfortunately, obtaining formal guarantees for Bayesian NNs is challenging, since their posterior distribution, which can be obtained in closed form for GPs, is generally analytically intractable [Gal and Ghahramani2016]. In [Grosse et al.2017b], attacks on Bayesian inference with Gaussian processes based on local perturbations of the mean have been presented.
Notions of safety for Gaussian processes have been recently studied in the context of system design for stochastic models (see, e.g. [Wachi et al.2018, Bartocci et al.2015, Sadigh and Kapoor2015, Sui et al.2015]). In [Sadigh and Kapoor2015], the authors synthesize safe controllers against Probabilistic Signal Temporal Logic (PrSTL) specifications, which suffer from the issue illustrated in Example 1. Another related approach is that in [Sui et al.2015], where the authors build on [Srinivas et al.2012]
and introduce SAFEOPT, a Bayesian optimization algorithm that additionally guarantees that, for the optimized parameters, with high probability the resulting objective function (sampled from a GP) is greater than a threshold. However, they do not give guarantees against perturbation of the synthesized parameters, i.e., will the resulting behaviour still be safe and close to the optimal if parameters close to the optimal, but for example corrupted by noise, are applied? Our approach allows one to quantify such a probability. We should also stress that, while it is often the case that the guarantees provided by existing algorithms are statistical (i.e., given in terms of confidence intervals), the bounds presented in this paper are probabilistic.
We consider a Gaussian process $f = (f(x),\, x \in \mathbb{R}^m)$ with values in $\mathbb{R}^n$ and with a Gaussian probability measure such that, for any finite collection of points $x_1, \ldots, x_M \in \mathbb{R}^m$, the joint distribution of $(f(x_1), \ldots, f(x_M))$ is a multivariate normal distribution. (In this paper we assume $f$ is a separable stochastic process. This is a standard and common assumption [Adler and Taylor2009]; the separability of $f$ guarantees that Problems 1 and 2 are measurable.) We consider Bayesian inference for $f$. That is, as illustrated in detail in the next section, given a dataset $\mathcal{D}$ we consider the process $f|\mathcal{D}$, which represents the conditional distribution of $f$ given the set of observations in $\mathcal{D}$. The first problem we examine is Problem 1, where we want to compute the probability that local perturbations of a given test point result in predictions that remain close to the original.
(Probabilistic Safety). Consider the training dataset $\mathcal{D}$. Let $x^* \in T \subseteq \mathbb{R}^m$, with $T$ compact, and fix $\delta > 0$. For $i \in \{1, \ldots, n\}$ call
$$\phi_i(T, \delta \mid \mathcal{D}) = P\big(\exists\, x \in T \,:\, f^{(i)}(x^*) - f^{(i)}(x) > \delta \mid \mathcal{D}\big),$$
where $f^{(i)}$ is the $i$-th component of $f$. Then, given $\epsilon > 0$, we say that component $i$ of $f$ is safe with probability $1 - \epsilon$ for $x^*$ with respect to set $T$ and perturbation $\delta$ iff $\phi_i(T, \delta \mid \mathcal{D}) \le \epsilon$.
Intuitively, we consider a test point $x^*$ and a compact set $T$ containing $x^*$, and compute the probability that the predictions of $f|\mathcal{D}$ remain close for each $x \in T$. We consider the components of the GP individually and with sign, enabling one-sided analysis. Note that $T$ is composed of an uncountable number of points, making the probability computation challenging. Problem 1 can be generalized to local invariance of $f|\mathcal{D}$ with respect to a given metric (Problem 2 below). In the next section, for the corresponding solution, we will work with the $L_2$ norm, but all the results can be easily extended to any norm, including $L_\infty$.
(Probabilistic Invariance). Consider the training dataset $\mathcal{D}$. Let $x^* \in T \subseteq \mathbb{R}^m$, with $T$ compact, and assume $\delta > 0$. For a metric $\|\cdot\|$ on $\mathbb{R}^n$ call
$$\psi(T, \delta \mid \mathcal{D}) = P\big(\exists\, x \in T \,:\, \|f(x) - f(x^*)\| > \delta \mid \mathcal{D}\big).$$
Then, given $\epsilon > 0$, we say that $f$ is invariant with respect to metric $\|\cdot\|$ for $x^*$ in $T$ and perturbation $\delta$ with probability $1 - \epsilon$ iff $\psi(T, \delta \mid \mathcal{D}) \le \epsilon$.
Probabilistic invariance, as defined in Problem 2, bounds the probability that each function sampled from $f|\mathcal{D}$ remains within a distance of at most $\delta$ from its value at the initial point. Note that both Problem 1 and 2 quantify how the output of a learning process changes its value in a set around a given test input point, which implicitly gives probabilistic guarantees for local robustness against adversarial examples. In the next section, in Theorem 1 and 2, we give analytic upper bounds for Problem 1 and 2. In fact, analytic distributions of the supremum of a GP, which would allow one to solve the above problems exactly, are known only for a very limited class of GPs (and almost always for GPs evolving over time) [Adler and Taylor2009], making exact computation infeasible. First, however, we illustrate the intuition behind the problems studied here on a GP regression problem.
We consider a regression problem taken from [Bach2009], where we generate 128 samples from a random two-dimensional covariance matrix, and define labels as a (noisy) quadratic polynomial of the two input variables. We train a GP with squared-exponential kernel on this dataset, using a maximum likelihood estimation of the kernel hyper-parameters [Rasmussen2004]. The mean and variance of the GP obtained after training are plotted in Figure 3, along with the samples used for training.
Consider the origin point $x_o = (0, 0)$, let $\gamma > 0$ and define the neighbourhood $T_o = [-\gamma, \gamma]^2$. As $x_o$ is a saddle point for the mean function, variations of the mean around it are relatively small. Analogously, the variance function exhibits a flat behaviour around $x_o$, meaning greater confidence of the GP in performing predictions around $x_o$. As such, we expect realizations of the GP to be consistently stable in a neighbourhood of $x_o$, which in turn translates to low values for $\phi(T_o, \delta)$ and $\psi(T_o, \delta)$, where, to simplify the notation, we omit the dataset used for training. On the other hand, around a second test point $x_d$, the a-posteriori mean changes quickly and the variance is high, reflecting higher uncertainty. Hence, letting $T_d$ be the analogous neighbourhood of $x_d$, we expect the values of $\phi(T_d, \delta)$ and $\psi(T_d, \delta)$ to be greater than those computed for $T_o$. In the next section we show how $\phi$ and $\psi$ can be bounded to quantify the uncertainty and variability of the predictions around $x_o$ and $x_d$.
Since $f$ is a Gaussian process, its distribution is completely defined by its mean function $\mu(x)$ and its covariance (or kernel) function $\Sigma(x_1, x_2)$. Consider a set of training data $\mathcal{D} = \{(x_i, y_i),\, i = 1, \ldots, N\}$ and call $\mathbf{x} = (x_1, \ldots, x_N)$ and $\mathbf{y} = (y_1, \ldots, y_N)$. Training in a Bayesian framework is equivalent to computing the distribution of $f$ given the dataset $\mathcal{D}$, that is, the distribution of the process $f|\mathcal{D}$.
Given a test point $x^*$ and the training inputs in $\mathcal{D}$, the joint distribution of $(f(x^*), \mathbf{y})$ is still Gaussian, with mean $(\mu(x^*), \mu(\mathbf{x}))$ and covariance matrix built from the blocks $\Sigma_{x^*, x^*}$, $\Sigma_{x^*, \mathbf{x}}$ and $\Sigma_{\mathbf{x}, \mathbf{x}}$, where $\Sigma_{\mathbf{x}, \mathbf{x}}$ is the covariance matrix relative to vector $\mathbf{x}$. Then, it follows that $f(x^*)|\mathcal{D}$ is still Gaussian, with mean and covariance matrix defined as follows:
$$\bar\mu(x^*) = \mu(x^*) + \Sigma_{x^*, \mathbf{x}}\, \Sigma_{\mathbf{x}, \mathbf{x}}^{-1}\, (\mathbf{y} - \mu(\mathbf{x}))$$
$$\bar\Sigma(x^*) = \Sigma_{x^*, x^*} - \Sigma_{x^*, \mathbf{x}}\, \Sigma_{\mathbf{x}, \mathbf{x}}^{-1}\, \Sigma_{\mathbf{x}, x^*},$$
where $\Sigma_{\mathbf{x}, x^*} = \Sigma_{x^*, \mathbf{x}}^T$. Hence, for GPs the distribution of $f|\mathcal{D}$ can be computed exactly.
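As a concrete illustration, the inference equations above take only a few lines of numpy. The following is our own minimal sketch, with a squared-exponential kernel, zero prior mean and a small noise term added for numerical stability:

```python
import numpy as np

def sq_exp_kernel(X1, X2, sigma2=1.0, length=1.0):
    """Squared-exponential kernel matrix between two sets of points."""
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return sigma2 * np.exp(-d2 / (2 * length ** 2))

def gp_posterior(X_train, y_train, X_test, noise=1e-4):
    """A-posteriori mean and covariance for a zero-prior-mean GP."""
    K = sq_exp_kernel(X_train, X_train) + noise * np.eye(len(X_train))
    K_star = sq_exp_kernel(X_test, X_train)
    K_ss = sq_exp_kernel(X_test, X_test)
    # mean: Sigma_{*,x} Sigma_{x,x}^{-1} y
    mean = K_star @ np.linalg.solve(K, y_train)
    # cov: Sigma_{*,*} - Sigma_{*,x} Sigma_{x,x}^{-1} Sigma_{x,*}
    cov = K_ss - K_star @ np.linalg.solve(K, K_star.T)
    return mean, cov
```

At the training inputs the posterior mean interpolates the observations (up to the noise term) and the posterior variance nearly collapses, as expected from the formulas above.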
Given two test points $x_1$ and $x_2$, the above calculations can still be applied to compute the joint distribution of $(f(x_1), f(x_2))|\mathcal{D}$, and hence the distribution of the difference of $f|\mathcal{D}$ at two distinct test points after training. It is straightforward to show that, given $\lambda = [I, -I]$, where $I$ is the identity matrix of dimension $n$, $\lambda \cdot (f(x_1), f(x_2))^T|\mathcal{D} = (f(x_1) - f(x_2))|\mathcal{D}$ is Gaussian with mean $\bar\mu(x_1) - \bar\mu(x_2)$ and covariance $\bar\Sigma(x_1, x_1) - \bar\Sigma(x_1, x_2) - \bar\Sigma(x_2, x_1) + \bar\Sigma(x_2, x_2)$. This is the distribution of how $f$, after training, changes between two different test points. However, to solve Problem 1 and 2, we need to take into account all the test points in $T$ and compute the probability that in at least one of them $f|\mathcal{D}$ exits from a given set of the output space. This is done in Theorem 1 by making use of the Borell-TIS inequality and of Dudley's entropy integral [Adler and Taylor2009, Dudley1967]. These inequalities allow one to study Gaussian processes by appropriately defining a metric on the variance of the GP. In order to define such a metric, we call $\hat f$ the GP with the same covariance matrix as $f|\mathcal{D}$ but with zero mean, and $\hat f^{(i)}$ its $i$-th component. For $i \in \{1, \ldots, n\}$, a test point $x^* \in T$ and $x_1, x_2 \in T$, we define the (pseudo-)metric $d_i$ by
$$d_i(x_1, x_2) = \sqrt{E\big[(\hat f^{(i)}(x_1) - \hat f^{(i)}(x_2))^2\big]},$$
where $\hat f^{(i)}$ is the $i$-th component of the zero-mean version of $f|\mathcal{D}$. Note that $d_i$ does not depend on the mean of $f|\mathcal{D}$. Additionally, we assume there exists a constant $K_i > 0$ such that, for a compact $T$ and $x_1, x_2 \in T$, $d_i(x_1, x_2) \le K_i \|x_1 - x_2\|_2$. (Note that here we work with the $L_2$ norm, but any other metric would work.)
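Concretely, for a scalar output the difference distribution and the induced pseudo-metric can be computed as follows. This is a self-contained numpy sketch of ours, with a squared-exponential kernel and zero prior mean, so that $\lambda = [1, -1]$:

```python
import numpy as np

def rbf(X1, X2, length=1.0):
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * length ** 2))

def diff_distribution(X_train, y_train, x1, x2, noise=1e-4):
    """Mean and variance of f(x1) - f(x2) | D for a zero-prior-mean GP."""
    Xs = np.stack([x1, x2])
    K = rbf(X_train, X_train) + noise * np.eye(len(X_train))
    Ks = rbf(Xs, X_train)
    # Joint posterior over (f(x1), f(x2)).
    mean = Ks @ np.linalg.solve(K, y_train)
    cov = rbf(Xs, Xs) - Ks @ np.linalg.solve(K, Ks.T)
    lam = np.array([1.0, -1.0])      # lambda = [I, -I] for scalar output
    return lam @ mean, lam @ cov @ lam

def d_metric(X_train, x1, x2, noise=1e-4):
    """Pseudo-metric: std of the difference for the zero-mean posterior."""
    _, var = diff_distribution(X_train, np.zeros(len(X_train)), x1, x2, noise)
    return np.sqrt(max(var, 0.0))
```

Note that `d_metric` feeds zero observations into the posterior, so only the covariance (and hence the metric) is computed, independently of the mean, as stated above.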
Now, we are finally ready to state the following theorem.
Assume is a hyper-cube with layers of length . For and let
Assume . Then, it holds that
where is the supremum of the component of the covariance matrix .
In Theorem 1 we derive upper bounds for $\phi_i(T, \delta \mid \mathcal{D})$. Considering that $f|\mathcal{D}$ is a Gaussian process, it is interesting to note that the resulting bounds still have an exponential form. From Theorem 1 we have the following result.
Assume is a hyper-cube with layers of length . For let
For each assume . Then, it holds that
Note that in Theorem 1 and 2 we assume that $T$ is a hyper-cube. However, the proofs of both theorems (reported in the Supplementary Materials) can easily be extended to more general compact sets, at the cost of more complex analytic expressions or less tight bounds. Both theorems require the computation of a set of constants, which depend on the particular kernel: upper bounds on the a-posteriori variance and mean and, for a test point $x^*$, a local Lipschitz constant and an upper bound for the metric $d_i$ in $T$. In the next section, we show how these constants can be computed.
We illustrate the upper bounds for $\phi$ and $\psi$, as given by Theorem 1 and 2, on the GP introduced in Example 2. Figure 4 shows the values obtained for $\phi$ and $\psi$ on $T_o$ and $T_d$ for a range of values of $\delta$. We observe that the values computed for $T_d$ are consistently greater than those computed for $T_o$, which captures and probabilistically quantifies the increased uncertainty of the GP around $x_d$, as well as the increased rate of mean variation around it (see Figure 3). Notice also that the values for $\phi$ are always smaller than the corresponding $\psi$ values. This is a direct consequence of the fact that probabilistic invariance is a stronger requirement than probabilistic safety, as defined in Problem 1, since the latter is not affected by variations that tend to increase the value of the GP output (translating to increased confidence in classification settings).
We introduce a general framework for the computation of the constants involved in the bounds presented in the previous section, with an approach based on a generalisation of that of [Jones, Schonlau, and Welch1998] for squared-exponential kernels in the setting of Kriging regression. Namely, we assume the existence of a suitable decomposition of the kernel function as $\Sigma(x_1, x_2) = \varphi(h(x_1, x_2))$ for all $x_1$ and $x_2$, such that:
1. $\varphi$ is a continuous function;
2. $\varphi$ is differentiable, with $\varphi'$ continuous;
3. $\inf_{x \in T} h(x, \bar x)$ and $\sup_{x \in T} h(x, \bar x)$ can be exactly computed for each point $\bar x$ and compact set $T$.
While assumptions 1 and 2 usually follow from the smoothness of the kernel used, assumption 3 depends on the particular $h$ defined. Intuitively, $h$ should represent the smallest building block of the kernel which captures the dependence on the two input points. For example, for the squared-exponential kernel $h$ has the form of a separable quadratic polynomial, so that assumption 3 is verified. Similarly, for the ReLU kernel $h$ can be defined as the dot product between the two input points.
Assumptions 1 and 2 guarantee the existence, for every compact $T$, of constants $a^L$, $b^L$, $a^U$ and $b^U$ such that
$$a^L + b^L\, h(x_1, x_2) \;\le\; \Sigma(x_1, x_2) \;\le\; a^U + b^U\, h(x_1, x_2).$$
In fact, it follows from those assumptions that $\varphi$ has a finite number of inflection points. Hence, we can iteratively find lower and upper linear approximations on its convex and concave parts, and merge them together, as detailed in the Supplementary Material. The key point is that, thanks to linearity, these upper and lower bounds on the kernel can be propagated through the inference equations for Gaussian processes, so as to obtain lower and upper linear bounds on the a-posteriori mean and variance with respect to $h$. Thanks to assumption 3, these bounds can be optimized exactly, thus providing formal lower and upper values for the optimization of the a-posteriori mean and variance of the Gaussian process over $T$. The described approach can be used to compute all the constants required by Theorem 1 and 2. In the following subsection we give details for the computation of the infimum of the a-posteriori mean; we refer to the Supplementary Materials for the details for the other constants and for the squared-exponential and ReLU kernels.
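As an illustration of the linear bounding step, for the squared-exponential kernel one may take $\varphi(z) = \exp(-z/(2\ell^2))$ with $z = \|x_1 - x_2\|^2$. This $\varphi$ is convex, so a chord gives an upper linear bound and a tangent a lower one on any interval. A sketch of ours (the decomposition and names are illustrative choices):

```python
import numpy as np

def linear_bounds_exp(z_lo, z_hi, length=1.0):
    """Linear lower/upper bounds on phi(z) = exp(-z / (2 l^2)) over [z_lo, z_hi].
    phi is convex, so the chord lies above it and any tangent below it."""
    c = 1.0 / (2 * length ** 2)
    f = lambda z: np.exp(-c * z)
    # Upper bound: chord through the interval endpoints.
    bU = (f(z_hi) - f(z_lo)) / (z_hi - z_lo)
    aU = f(z_lo) - bU * z_lo
    # Lower bound: tangent at the midpoint.
    zm = 0.5 * (z_lo + z_hi)
    bL = -c * f(zm)
    aL = f(zm) - bL * zm
    return (aL, bL), (aU, bU)

# Sanity check on a grid: aL + bL*z <= phi(z) <= aU + bU*z on the interval.
(aL, bL), (aU, bU) = linear_bounds_exp(0.0, 4.0)
zs = np.linspace(0.0, 4.0, 101)
phi = np.exp(-zs / 2.0)
assert (aL + bL * zs <= phi + 1e-12).all()
assert (aU + bU * zs >= phi - 1e-12).all()
```

Because the bounds are linear in $z$, they can be pushed through the (linear) inference weights to bound the a-posteriori mean, as described above.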
As $x^*$ is fixed, $\bar\mu(x^*)$ is known exactly; hence we just need to compute $\inf_{x \in T} \bar\mu(x)$. Using Eqn (7) we can compute a lower and an upper bound on this infimum, which can be refined using standard branch and bound techniques. Let $x \in T$; then, by the inference formula for Gaussian processes and Eqn (7), we obtain a linear lower bound on $\bar\mu(x)$ as a function of $h$, where the coefficients of Eqn (7) are chosen according to the signs of the inference weights. Let $\bar x$ be a minimum point of the right-hand side of this bound (we refer to the Supplementary Materials for how this can be computed for squared-exponential and ReLU kernels); then, by the definition of infimum, the value of the bound at $\bar x$ is a lower bound for $\inf_{x \in T} \bar\mu(x)$, while $\bar\mu(\bar x)$ is an upper bound for it. These bounds can be used within a branch and bound algorithm for further refinement.
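The refinement loop can be sketched generically: given oracles that return a lower and an upper bound on the infimum of a function over a box, repeatedly split the box with the smallest lower bound until the two bounds meet. This is a generic sketch of ours, not the paper's exact algorithm:

```python
def branch_and_bound(lower_bound, upper_bound, box, tol=1e-3, max_iter=1000):
    """Bracket inf_{x in box} g(x), given per-box lower/upper bound oracles.
    `box` is a list of (lo, hi) intervals, one per input dimension."""
    queue = [(lower_bound(box), box)]
    best_upper = upper_bound(box)
    for _ in range(max_iter):
        queue.sort(key=lambda t: t[0])
        lb, b = queue.pop(0)               # region with smallest lower bound
        if best_upper - lb < tol:
            return lb, best_upper
        # Split the widest dimension in half.
        i = max(range(len(b)), key=lambda j: b[j][1] - b[j][0])
        lo, hi = b[i]
        mid = 0.5 * (lo + hi)
        for half in ([*b[:i], (lo, mid), *b[i + 1:]],
                     [*b[:i], (mid, hi), *b[i + 1:]]):
            queue.append((lower_bound(half), half))
            best_upper = min(best_upper, upper_bound(half))
    return min(lb for lb, _ in queue), best_upper

# Toy usage: bracket the minimum of g(x) = (x - 0.3)^2 over [0, 1].
g = lambda x: (x - 0.3) ** 2
lb_oracle = lambda b: (0.0 if b[0][0] <= 0.3 <= b[0][1]
                       else min(g(b[0][0]), g(b[0][1])))
ub_oracle = lambda b: g(0.5 * (b[0][0] + b[0][1]))  # evaluate at midpoint
lo_b, up_b = branch_and_bound(lb_oracle, ub_oracle, [(0.0, 1.0)], tol=1e-4)
```

In the setting of this section, the lower-bound oracle is the linear kernel bound propagated through inference, and the upper-bound oracle is a direct evaluation of $\bar\mu$.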
Performing inference with GPs has a cost that is $O(N^3)$, where $N$ is the size of the dataset. Once inference has been performed, the cost of computing upper and lower bounds on the a-posteriori mean is linear in $N$, up to a factor that depends on the particular kernel: for the squared-exponential kernel this factor is a fixed constant, while for the ReLU kernel (Eqn (8)) it grows with the number of layers of the corresponding neural network. The computation of the bounds for the a-posteriori variance requires solving a convex problem, while the remaining constants can be bounded in constant time. Refining the bounds with a branch and bound approach has a worst-case cost that is exponential in the dimension of the input space. Hence, sparse approximations, which mitigate the cost of performing inference with GPs [Seeger, Williams, and Lawrence2003], are appealing.
Experimental Evaluation: Robustness Analysis of Deep Neural Networks
In this section we apply the methods presented above to GPs defined with deep kernels, in an effort to provide a probabilistic analysis of adversarial examples. This analysis is exact for GPs, but only approximate for fully-connected NNs, by virtue of the weak convergence of the induced distributions between deep-kernel GPs and deep fully-connected NNs.
We focus on GPs with ReLU kernel, which directly correspond to fully-connected NNs with ReLU activation functions. Given the number of layers $L$, and the regularization parameters $\sigma_w^2$ (prior variance on the weights) and $\sigma_b^2$ (prior variance on the bias), the ReLU covariance between two input points $x_1, x_2 \in \mathbb{R}^m$ is iteratively defined by the set of equations [Lee et al.2017]:
$$\Sigma^0(x_1, x_2) = \sigma_b^2 + \sigma_w^2\, \frac{x_1 \cdot x_2}{m},$$
$$\Sigma^l(x_1, x_2) = \sigma_b^2 + \frac{\sigma_w^2}{2\pi} \sqrt{\Sigma^{l-1}(x_1, x_1)\, \Sigma^{l-1}(x_2, x_2)}\; \big(\sin\theta^{l-1} + (\pi - \theta^{l-1}) \cos\theta^{l-1}\big)$$
for $l = 1, \ldots, L$, where $\theta^{l-1} = \arccos\big(\Sigma^{l-1}(x_1, x_2) \,/\, \sqrt{\Sigma^{l-1}(x_1, x_1)\, \Sigma^{l-1}(x_2, x_2)}\big)$.
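The recursion can be written directly in numpy. The following sketch is our own code (hyper-parameter defaults are placeholders, not the tuned values used in the experiments):

```python
import numpy as np

def relu_kernel(x1, x2, depth=2, sigma_w2=1.0, sigma_b2=1.0):
    """Deep ReLU (arc-cosine) kernel, recursion as in Lee et al. 2017."""
    d = len(x1)
    # Base case: kernel of the input layer.
    k11 = sigma_b2 + sigma_w2 * np.dot(x1, x1) / d
    k22 = sigma_b2 + sigma_w2 * np.dot(x2, x2) / d
    k12 = sigma_b2 + sigma_w2 * np.dot(x1, x2) / d
    for _ in range(depth):
        theta = np.arccos(np.clip(k12 / np.sqrt(k11 * k22), -1.0, 1.0))
        k12 = sigma_b2 + sigma_w2 / (2 * np.pi) * np.sqrt(k11 * k22) * (
            np.sin(theta) + (np.pi - theta) * np.cos(theta))
        # Diagonal entries: theta = 0, so the update simplifies.
        k11 = sigma_b2 + sigma_w2 / 2 * k11
        k22 = sigma_b2 + sigma_w2 / 2 * k22
    return k12
```

The clipping guards the `arccos` against rounding error; on the diagonal ($x_1 = x_2$) the angle is zero and the update reduces to $\sigma_b^2 + \sigma_w^2 \Sigma^{l-1}/2$.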
We follow the experimental setting of [Lee et al.2017]; that is, we train a selection of ReLU GPs on a subset of the MNIST dataset using least-squares classification (i.e., posing a regression problem to solve the classification task), and rely on the optimal hyper-parameter values estimated in the latter work. Note that the methods we presented are not constrained to specific kernels or classification models, and can be generalized by suitably modifying the constant computation. Classification accuracy obtained on the full MNIST test set increased with the number of training samples used, from training on only 100 samples up to training on 2000 samples. Unless otherwise stated, we perform the analysis on the best model obtained, a two-hidden-layer architecture.
For scalability purposes we adopt the idea from [Wicker, Huang, and Kwiatkowska2018, Ruan, Huang, and Kwiatkowska2018] of performing a feature-level analysis. Namely, we pre-process each image using SIFT [Lowe2004]. From its output, we keep the salient points and their relative magnitude, which we use to extract relevant patches from each image, referred to in the following as features. We apply the analysis to the features thus extracted. Unless otherwise stated, feature numbering follows the descending order of magnitude.
In the first row of Figure 5 we consider three images from the MNIST test data, and for each we highlight the first five features extracted by SIFT (or fewer, if SIFT detected fewer than five features). For each image $\bar x$, feature $F$ and $\gamma > 0$, we consider the set $T$ of images that differ from $\bar x$ only in the pixels included in $F$, and by no more than $\gamma$ in each pixel.
We plot the values obtained for $\phi(T, \delta)$ as a function of $\gamma$ for two values of $\delta$, respectively, on the second and third row of Figure 5. Recall that $\phi(T, \delta)$ represents the probability of finding $x \in T$ such that the classification confidence for the correct class in $x$ drops by more than $\delta$ compared to that of $\bar x$. Since a greater $\gamma$ implies a larger neighbourhood $T$, intuitively $\phi(T, \delta)$ will monotonically increase with the value of $\gamma$. Interestingly, the rate of increase differs significantly across features. In fact, while most of the 14 features analyzed in Figure 5 have similar values for the smaller threshold, the values computed for some of the features using the larger threshold are almost double (e.g. feature 4 for the third image), while remaining fairly similar for others (e.g. feature 3 for the first image). Also, the relative ordering of the features by robustness is not consistent for different values of $\delta$ (e.g. features 2 and 5 from the first image). This highlights the need for parametric analyses of adversarial attacks, which take into account different strengths and misclassification thresholds, as suggested in [Biggio and Roli2017]. Finally, notice that, though only 14 features are explored here, the experiment shows no clear relationship between feature magnitude as estimated by SIFT and feature robustness, which calls for caution in adversarial attacks and defences that rely on feature importance.
Most active defences are based upon rejecting input samples characterized by high uncertainty. After uncertainty is estimated, defences of this type usually proceed by setting up a meta-learning problem whose goal is to distinguish between low- and high-variance input points, so as to flag potential adversarial examples [Grosse et al.2017b, Feinman et al.2017]. However, mixed results have been obtained with this approach [Carlini and Wagner2017].
In this subsection we analyze how the variance around test samples changes under different training settings for the three test points previously discussed. We use the method developed for variance optimisation to compute the normalised variance
$$\frac{\max_{x \in T} \bar\Sigma(x)}{\bar\Sigma(\bar x)},$$
that is, we look for the highest-variance point in the neighbourhood $T$ of $\bar x$, and normalise its value with respect to the variance at $\bar x$. We perform the analysis only on feature 1 of each image.
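For intuition, the normalised-variance computation can be approximated by brute force. The following is a coarse grid-search sketch of ours on toy data with a squared-exponential stand-in kernel (the paper's actual method is the bounding procedure described earlier):

```python
import numpy as np

def post_var(x, X_train, noise=1e-4, length=1.0):
    """A-posteriori variance at x for a zero-mean GP with RBF kernel."""
    k = lambda A, B: np.exp(
        -((A[:, None, :] - B[None, :, :]) ** 2).sum(-1) / (2 * length ** 2))
    K = k(X_train, X_train) + noise * np.eye(len(X_train))
    ks = k(x[None, :], X_train)
    return (1.0 - ks @ np.linalg.solve(K, ks.T)).item()

def normalised_variance(x_bar, feat_idx, gamma, X_train, n_grid=11):
    """Max posterior variance over a gamma-ball restricted to the feature's
    coordinates, normalised by the variance at x_bar (coarse grid search)."""
    base = post_var(x_bar, X_train)
    best = base
    for i in feat_idx:                       # one coordinate at a time
        for g in np.linspace(-gamma, gamma, n_grid):
            x = x_bar.copy()
            x[i] += g
            best = max(best, post_var(x, X_train))
    return best / base
```

By construction the ratio is at least 1; values well above 1 indicate that the neighbourhood contains points on which the model is much less confident than on the test point itself.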
Figure 6 plots the values of the normalised variance as a function of the number of layers (from 1 to 10) and the number of samples (from 100 to 2000) included in the training set. Firstly, notice how the maximum values of the normalised variance are well aligned with the results of Figure 5: less robust features are associated with higher values (e.g. feature 1 for image 1). This highlights the relationship between the existence of adversarial examples in the neighbourhood of a point and model uncertainty. We observe that the normalised variance consistently increases with the number of training samples used. This suggests that, as more training samples are fed to the model, the latter becomes more confident in predicting "natural" test samples compared to "artificial" ones. Unfortunately, as the number of layers increases, the normalised variance decreases rapidly to a plateau. This seems to indicate that defence methods based on a-posteriori variance thresholding become less effective for more complex neural network architectures, which could be a justification for the mixed results obtained so far with active defences.
In this paper we presented a formal approach for the safety analysis of Bayesian inference with Gaussian process priors with respect to adversarial examples and invariance properties. As the properties considered in this paper cannot be computed exactly for general GPs, we compute safe over-approximations of them. Our bounds are based on the Borell-TIS inequality and the Dudley entropy integral, which are known to give tight bounds in the study of suprema of Gaussian processes [Adler and Taylor2009]. On examples of regression tasks for GPs and deep neural networks, we showed how our results allow one to quantify the uncertainty associated with a given prediction, while also taking into account local perturbations of the input space. Hence, we believe our results represent a step towards the application of Bayesian models in safety-critical applications.
- [Adler and Taylor2009] Adler, R. J., and Taylor, J. E. 2009. Random fields and geometry. Springer Science & Business Media.
- [Bach2009] Bach, F. R. 2009. Exploring large feature spaces with hierarchical multiple kernel learning. In Advances in neural information processing systems, 105–112.
- [Bartocci et al.2015] Bartocci, E.; Bortolussi, L.; Nenzi, L.; and Sanguinetti, G. 2015. System design of stochastic models using robustness of temporal properties. Theoretical Computer Science 587:3–25.
- [Biggio and Roli2017] Biggio, B., and Roli, F. 2017. Wild patterns: Ten years after the rise of adversarial machine learning. arXiv preprint arXiv:1712.03141.
- [Bortolussi et al.2018] Bortolussi, L.; Cardelli, L.; Kwiatkowska, M.; and Laurenti, L. 2018. Central limit model checking. arXiv preprint arXiv:1804.08744.
- [Carlini and Wagner2017] Carlini, N., and Wagner, D. 2017. Adversarial examples are not easily detected: Bypassing ten detection methods. In Proceedings of the 10th ACM Workshop on Artificial Intelligence and Security, 3–14. ACM.
- [Chouza, Roberts, and Zohren2018] Chouza, M.; Roberts, S.; and Zohren, S. 2018. Gradient descent in gaussian random fields as a toy model for high-dimensional optimisation in deep learning. arXiv preprint arXiv:1803.09119.
- [Dreossi, Donzé, and Seshia2017] Dreossi, T.; Donzé, A.; and Seshia, S. A. 2017. Compositional falsification of cyber-physical systems with machine learning components. In NASA Formal Methods Symposium, 357–372. Springer.
- [Dudley1967] Dudley, R. M. 1967. The sizes of compact subsets of hilbert space and continuity of gaussian processes. Journal of Functional Analysis 1(3):290–330.
- [Feinman et al.2017] Feinman, R.; Curtin, R. R.; Shintre, S.; and Gardner, A. B. 2017. Detecting adversarial samples from artifacts. arXiv preprint arXiv:1703.00410.
- [Gal and Ghahramani2016] Gal, Y., and Ghahramani, Z. 2016. Dropout as a bayesian approximation: Representing model uncertainty in deep learning. In international conference on machine learning, 1050–1059.
- [Grosse et al.2017a] Grosse, K.; Manoharan, P.; Papernot, N.; Backes, M.; and McDaniel, P. 2017a. On the (statistical) detection of adversarial examples. arXiv preprint arXiv:1702.06280.
- [Grosse et al.2017b] Grosse, K.; Pfaff, D.; Smith, M. T.; and Backes, M. 2017b. How wrong am i?-studying adversarial examples and their impact on uncertainty in gaussian process machine learning models. arXiv preprint arXiv:1711.06598.
- [Hein and Andriushchenko2017] Hein, M., and Andriushchenko, M. 2017. Formal guarantees on the robustness of a classifier against adversarial manipulation. In Advances in Neural Information Processing Systems, 2263–2273.
- [Huang et al.2017] Huang, X.; Kwiatkowska, M.; Wang, S.; and Wu, M. 2017. Safety verification of deep neural networks. In International Conference on Computer Aided Verification, 3–29. Springer.
- [Jones, Schonlau, and Welch1998] Jones, D. R.; Schonlau, M.; and Welch, W. J. 1998. Efficient global optimization of expensive black-box functions. Journal of Global optimization 13(4):455–492.
- [Laurenti et al.2017] Laurenti, L.; Abate, A.; Bortolussi, L.; Cardelli, L.; Ceska, M.; and Kwiatkowska, M. 2017. Reachability computation for switching diffusions: Finite abstractions with certifiable and tuneable precision. In Proceedings of the 20th International Conference on Hybrid Systems: Computation and Control, 55–64. ACM.
- [Lee et al.2017] Lee, J.; Bahri, Y.; Novak, R.; Schoenholz, S. S.; Pennington, J.; and Sohl-Dickstein, J. 2017. Deep neural networks as gaussian processes. arXiv preprint arXiv:1711.00165.
- [Li and Gal2017] Li, Y., and Gal, Y. 2017. Dropout inference in bayesian neural networks with alpha-divergences. arXiv preprint arXiv:1703.02914.
- [Lowe2004] Lowe, D. G. 2004. Distinctive image features from scale-invariant keypoints. International Journal of Computer Vision 60(2):91–110.
- [Matthews et al.2018] Matthews, A. G. d. G.; Rowland, M.; Hron, J.; Turner, R. E.; and Ghahramani, Z. 2018. Gaussian process behaviour in wide deep neural networks. arXiv preprint arXiv:1804.11271.
- [Neal2012] Neal, R. M. 2012. Bayesian learning for neural networks, volume 118. Springer Science & Business Media.
- [Raghunathan, Steinhardt, and Liang2018] Raghunathan, A.; Steinhardt, J.; and Liang, P. 2018. Certified defenses against adversarial examples. arXiv preprint arXiv:1801.09344.
- [Rasmussen2004] Rasmussen, C. E. 2004. Gaussian processes in machine learning. In Advanced lectures on machine learning. Springer. 63–71.
- [Ruan, Huang, and Kwiatkowska2018] Ruan, W.; Huang, X.; and Kwiatkowska, M. 2018. Reachability analysis of deep neural networks with provable guarantees. arXiv preprint arXiv:1805.02242.
- [Sadigh and Kapoor2015] Sadigh, D., and Kapoor, A. 2015. Safe control under uncertainty. arXiv preprint arXiv:1510.07313.
- [Seeger, Williams, and Lawrence2003] Seeger, M.; Williams, C.; and Lawrence, N. 2003. Fast forward selection to speed up sparse Gaussian process regression. In Artificial Intelligence and Statistics 9.
- [Seshia, Sadigh, and Sastry2016] Seshia, S. A.; Sadigh, D.; and Sastry, S. S. 2016. Towards verified artificial intelligence. arXiv preprint arXiv:1606.08514.
- [Srinivas et al.2012] Srinivas, N.; Krause, A.; Kakade, S. M.; and Seeger, M. W. 2012. Information-theoretic regret bounds for Gaussian process optimization in the bandit setting. IEEE Transactions on Information Theory 58(5):3250–3265.
- [Sui et al.2015] Sui, Y.; Gotovos, A.; Burdick, J.; and Krause, A. 2015. Safe exploration for optimization with Gaussian processes. In International Conference on Machine Learning, 997–1005.
- [Szegedy et al.2013] Szegedy, C.; Zaremba, W.; Sutskever, I.; Bruna, J.; Erhan, D.; Goodfellow, I.; and Fergus, R. 2013. Intriguing properties of neural networks. arXiv preprint arXiv:1312.6199.
- [Wachi et al.2018] Wachi, A.; Sui, Y.; Yue, Y.; and Ono, M. 2018. Safe exploration and optimization of constrained MDPs using Gaussian processes. In AAAI Conference on Artificial Intelligence.
- [Wicker, Huang, and Kwiatkowska2018] Wicker, M.; Huang, X.; and Kwiatkowska, M. 2018. Feature-guided black-box safety testing of deep neural networks. In International Conference on Tools and Algorithms for the Construction and Analysis of Systems, 408–426. Springer.
Appendix A Supplementary Materials
In what follows we report the supplementary material of the paper. We first give the proofs of the main results, and then provide further details of the algorithmic framework we develop to compute the constants required in Theorems 1 and 2. Finally, we give details for the case of the squared-exponential kernel and the ReLU kernel.
Proof of Theorem 1
where $\bar{f}$ is the zero-mean Gaussian process with the same variance as $f$. The last term can be bounded from above using the following result, known as the Borell-TIS inequality [Adler and Taylor2009].
(Borell-TIS inequality) Let $\bar{f}$ be a zero-mean unidimensional Gaussian process with covariance function $\Sigma$ over a compact set $T$. Assume $\mathbb{E}\left[\sup_{x \in T} \bar{f}(x)\right] < \infty$. Then, for any $\epsilon > 0$ it holds that
$$P\left( \sup_{x \in T} \bar{f}(x) - \mathbb{E}\left[\sup_{x \in T} \bar{f}(x)\right] > \epsilon \right) \leq e^{-\frac{\epsilon^2}{2 \sigma_T^2}}, \qquad \sigma_T^2 = \sup_{x \in T} \Sigma(x, x).$$
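As a sanity check (not part of the proof), the Borell-TIS tail bound can be probed numerically on a GP restricted to a finite grid; the kernel choice, grid, and all constants below are illustrative assumptions:

```python
import numpy as np

# Monte-Carlo probe of the Borell-TIS inequality on a discretized GP:
# the empirical tail P(sup f - E[sup f] > eps) should stay below
# exp(-eps^2 / (2 * sigma_T^2)), with sigma_T^2 the largest pointwise variance.
rng = np.random.default_rng(0)

# Squared-exponential covariance on a 1-D grid (illustrative choice).
grid = np.linspace(0.0, 1.0, 50)
lengthscale = 0.2
Sigma = np.exp(-0.5 * (grid[:, None] - grid[None, :]) ** 2 / lengthscale**2)
Sigma += 1e-10 * np.eye(len(grid))  # jitter for numerical stability

samples = rng.multivariate_normal(np.zeros(len(grid)), Sigma, size=20000)
sups = samples.max(axis=1)
e_sup = sups.mean()                  # Monte-Carlo estimate of E[sup f]
sigma_T2 = np.diag(Sigma).max()      # sup of the pointwise variance

for eps in (0.5, 1.0, 1.5):
    empirical = np.mean(sups - e_sup > eps)
    bound = np.exp(-eps**2 / (2.0 * sigma_T2))
    assert empirical <= bound        # the tail bound holds empirically
```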
In order to use the Borell-TIS inequality we need to bound from above $\mathbb{E}\left[\sup_{x \in T} \bar{f}(x)\right]$, the expectation of the supremum of $\bar{f}$. For Gaussian processes we can use Dudley's entropy integral [Adler and Taylor2009], which guarantees that
$$\mathbb{E}\left[\sup_{x \in T} \bar{f}(x)\right] \leq 12 \int_0^{\infty} \sqrt{\ln N(T, d, \epsilon)}\, d\epsilon,$$
where $N(T, d, \epsilon)$ is the smallest number of balls of radius $\epsilon$, according to the canonical pseudo-metric $d(x, x') = \sqrt{\mathbb{E}\left[ \left( \bar{f}(x) - \bar{f}(x') \right)^2 \right]}$, that completely cover $T$ (see [Adler and Taylor2009] for further details). For a hyper-cube $T$ of dimension $n$ and side length $D$, in order to compute $N(T, d, \epsilon)$ we first need to compute $N(T, \|\cdot\|_2, \epsilon)$, the number of covering balls of diameter $\epsilon$ of $T$ under the $L_2$ norm. As the largest hyper-cube contained inside a sphere of diameter $\epsilon$ has a side of length $\epsilon / \sqrt{n}$, we obtain
$$N\left(T, \|\cdot\|_2, \epsilon\right) \leq \left\lceil \frac{D \sqrt{n}}{\epsilon} \right\rceil^n.$$
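The covering-number bound for the hyper-cube can be sketched and checked empirically as follows; the function name and the small 2-D test case are assumptions of this illustration:

```python
import itertools
import math

# A hyper-cube of side D in R^n is covered by ceil(D * sqrt(n) / eps)^n
# Euclidean balls of diameter eps: a sub-cube of side eps / sqrt(n) has
# diagonal eps and therefore fits inside such a ball.
def covering_bound(D: float, n: int, eps: float) -> int:
    """Upper bound on the number of eps-diameter balls needed to cover [0, D]^n."""
    return math.ceil(D * math.sqrt(n) / eps) ** n

# Empirical check in n = 2: every point of a fine grid on the cube is within
# eps / 2 of the centre of some sub-cube.
D, n, eps = 1.0, 2, 0.3
k = math.ceil(D * math.sqrt(n) / eps)          # sub-cubes per side
side = D / k
centres = [tuple((i + 0.5) * side for i in idx)
           for idx in itertools.product(range(k), repeat=n)]
for x in itertools.product([i * D / 20 for i in range(21)], repeat=n):
    assert any(math.dist(x, c) <= eps / 2 + 1e-12 for c in centres)
```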
Now we know that for $x, x' \in T$
$$d(x, x')^2 = \Sigma(x, x) - 2\,\Sigma(x, x') + \Sigma(x', x') \leq 2 L_{\Sigma} \|x - x'\|_2,$$
where $L_{\Sigma}$ is a Lipschitz constant of the kernel on $T$. This implies that all the points inside a ball of radius $\epsilon^2 / (2 L_{\Sigma})$ in the $L_2$ norm will have a distance in the $d$ metric smaller than or equal to $\epsilon$. Thus, the number of covering balls of radius $\epsilon$ for $T$, according to the pseudo-metric $d$, is upper-bounded by $N\left(T, \|\cdot\|_2, \epsilon^2 / (2 L_{\Sigma})\right)$.
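The control of the canonical pseudo-metric by the Euclidean distance can be illustrated for a concrete kernel; the squared-exponential choice and the specific constant below are assumptions of this sketch, not the general constant of the proof:

```python
import math

# For the squared-exponential kernel, d(x, y)^2 = Sigma(x,x) - 2 Sigma(x,y)
# + Sigma(y,y) = 2 (1 - exp(-0.5 (x-y)^2 / l^2)), and 1 - exp(-t) <= t gives
# d(x, y) <= |x - y| / l, so Euclidean balls translate into d-balls.
lengthscale = 0.5

def se_kernel(x: float, y: float) -> float:
    return math.exp(-0.5 * (x - y) ** 2 / lengthscale**2)

def d2(x: float, y: float) -> float:
    """Squared canonical pseudo-metric induced by the kernel."""
    return se_kernel(x, x) - 2.0 * se_kernel(x, y) + se_kernel(y, y)

for i in range(200):
    x, y = 0.0, i * 0.01
    assert d2(x, y) <= (abs(x - y) / lengthscale) ** 2 + 1e-12
```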
Proof of Theorem 2
The last term can be bounded by using the Borell-TIS inequality and Dudley's entropy integral, as shown in the proof of Theorem 1.
Let $S$ be a symmetric and positive definite matrix and let $x, y \in \mathbb{R}^n$ with $x \neq y$. Then, it holds that
$$x^T S x - 2\, x^T S y + y^T S y > 0.$$
Let $\Lambda$ be the diagonal matrix with diagonal entries the eigenvalues $\lambda_1, \dots, \lambda_n$ of $S$. Then, there exists an orthonormal matrix $U$ such that $S = U \Lambda U^T$. Consider the vectors $\bar{x} = U^T x$ and $\bar{y} = U^T y$. Then, we obtain
$$x^T S x - 2\, x^T S y + y^T S y = (x - y)^T S (x - y) = \sum_{i=1}^n \lambda_i \left( \bar{x}_i - \bar{y}_i \right)^2,$$
which is greater than zero, as all the eigenvalues of $S$ are positive by the definition of a positive definite matrix. ∎
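The lemma and its eigendecomposition argument can be verified numerically; the matrix construction and dimensions below are illustrative assumptions:

```python
import numpy as np

# For a symmetric positive definite S and x != y, the quadratic form
# x^T S x - 2 x^T S y + y^T S y equals (x - y)^T S (x - y), which expands
# through S = U diag(lam) U^T into sum_i lam_i * (u_i - v_i)^2 > 0.
rng = np.random.default_rng(1)
A = rng.standard_normal((4, 4))
S = A @ A.T + 4 * np.eye(4)      # symmetric positive definite by construction

x = rng.standard_normal(4)
y = rng.standard_normal(4)

quad = x @ S @ x - 2 * x @ S @ y + y @ S @ y

lam, U = np.linalg.eigh(S)       # eigendecomposition: S = U diag(lam) U^T
u, v = U.T @ x, U.T @ y
spectral = np.sum(lam * (u - v) ** 2)

assert np.all(lam > 0)           # all eigenvalues positive: S is SPD
assert np.isclose(quad, spectral)
assert quad > 0                  # the claim of the lemma
```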
Appendix B Constants Computation
Lower and Upper Bounds on the Kernel Function
In this subsection we describe a method for computing a linear lower approximation to the kernel function. Namely, given a kernel $\Sigma$ and a test point $x^*$, we show how to compute coefficients $a, b \in \mathbb{R}$ such that:
$$a + b\, r \leq g(r) \quad \text{for every } r \in [r_m, r_M],$$
with $g$, $r_m$ and $r_M$ defined below. Notice that the same techniques can be used to find the coefficients of an upper bound, simply by considering $-g$. Let $r_M$ and $r_m$ be the maximum and minimum values of $r = \|x^* - x\|_2^2$ for $x$ in the region of interest, and consider the univariate and unidimensional function $g$ such that $\Sigma(x^*, x) = g(r)$. We can then compute $a$ and $b$ by using the methods described below.
If $g$ happens to be a concave function, then, by the definition of concavity, a lower bound is given by the line that links the points $(r_m, g(r_m))$ and $(r_M, g(r_M))$.
If, on the other hand, $g$ happens to be a convex function, then, by the definition of convexity, a valid lower bound is given by the tangent line at the middle point of the interval.
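The two base cases above can be sketched for a concrete convex $g$; the reduction of the squared-exponential kernel to $g(r) = e^{-r/(2\ell^2)}$ with $r$ the squared distance, and all names below, are assumptions of this illustration:

```python
import math

# For the SE kernel, g(r) = exp(-r / (2 l^2)) is convex in r, so the tangent
# at the interval midpoint is a valid linear lower bound (Case 2); for a
# concave g one would instead use the chord through the endpoints (Case 1).
lengthscale = 1.0

def g(r: float) -> float:
    return math.exp(-r / (2.0 * lengthscale**2))

def tangent_lower_bound(r_m: float, r_M: float):
    """Coefficients (a, b) with a + b*r <= g(r) on [r_m, r_M], for convex g."""
    r0 = 0.5 * (r_m + r_M)
    slope = -g(r0) / (2.0 * lengthscale**2)   # g'(r0) for the SE kernel
    return g(r0) - slope * r0, slope

r_m, r_M = 0.1, 2.0
a, b = tangent_lower_bound(r_m, r_M)
for i in range(101):
    r = r_m + (r_M - r_m) * i / 100
    assert a + b * r <= g(r) + 1e-12          # tangent stays below convex g
```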
Assume now that $g$ is concave in $[r_m, \bar{r}]$ and convex in $[\bar{r}, r_M]$, for a certain $\bar{r}$ (the same line of argument can be used by reversing convexity and concavity). Let $a_1, b_1$ be the coefficients of the linear lower approximation in $[r_m, \bar{r}]$ and $a_2, b_2$ the analogous coefficients in $[\bar{r}, r_M]$ (respectively computed as in Cases 1 and 2), and call $g_1$ and $g_2$ the corresponding linear functions. Define $g_3$ to be the linear function of coefficients $a_3$ and $b_3$ that goes through the two points $(r_m, \min(g_1(r_m), g_2(r_m)))$ and $(r_M, \min(g_1(r_M), g_2(r_M)))$. We then have that $g_3$ is a valid linear lower bound in $[r_m, r_M]$; in fact:
if $r \in [r_m, \bar{r}]$: in this case we have that $g_3(r_m) \leq g_1(r_m)$, and $g_3(r_M) \leq g_1(r_M)$. Hence, as $g_3$ and $g_1$ are both linear, $g_3(r) \leq g_1(r) \leq g(r)$ for every $r \in [r_m, \bar{r}]$.
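The mixed concave/convex case can be sketched end to end; the profile $g(r) = \tanh(2 - r)$, the inflection point, and the construction of $g_3$ as the chord of $\min(g_1, g_2)$ are assumptions of this illustration:

```python
import math

def g(r: float) -> float:
    # Concave for r < 2 and convex for r > 2 on the interval considered.
    return math.tanh(2.0 - r)

r_m, rbar, r_M = 1.0, 2.0, 4.0

def line_through(p, q):
    """Coefficients (a, b) of the line a + b*r through points p and q."""
    b = (q[1] - p[1]) / (q[0] - p[0])
    return p[1] - b * p[0], b

# Case 1 on the concave part: chord through the endpoints.
a1, b1 = line_through((r_m, g(r_m)), (rbar, g(rbar)))
# Case 2 on the convex part: tangent at the midpoint; g'(r) = -sech^2(2 - r).
r0 = 0.5 * (rbar + r_M)
slope = -(1.0 / math.cosh(2.0 - r0)) ** 2
a2, b2 = g(r0) - slope * r0, slope

g1 = lambda r: a1 + b1 * r
g2 = lambda r: a2 + b2 * r

# g3: line through the endpoint minima of g1 and g2. Since min(g1, g2) is
# concave, its chord g3 lies below it, hence below g on the whole interval.
a3, b3 = line_through((r_m, min(g1(r_m), g2(r_m))),
                      (r_M, min(g1(r_M), g2(r_M))))

for i in range(301):
    r = r_m + (r_M - r_m) * i / 300
    assert a3 + b3 * r <= g(r) + 1e-9   # g3 is a valid lower bound everywhere
```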