On Lipschitz Bounds of General Convolutional Neural Networks

Many convolutional neural networks (CNNs) have a feed-forward structure. In this paper, a linear program that estimates the Lipschitz bound of such CNNs is proposed. Several CNNs, including the scattering networks, the AlexNet and the GoogleNet, are studied numerically and compared to the theoretical bounds. Next, concentration inequalities for the output distribution with respect to a stationary random input signal, expressed in terms of the Lipschitz bound, are established. The Lipschitz bound is further used to establish a nonlinear discriminant analysis designed to measure the separation between features of different classes.






I Introduction

Convolutional neural networks (CNN’s) have proved to be an effective tool in various image processing tasks. The convolutional layers at different levels are capable of extracting different details from images. As a feature extractor, a CNN is stable to small variations in the input and therefore performs well in a variety of classification, detection and segmentation problems.

The scattering transform [1, 2] is a special type of CNN that can be represented with a multilayer structure (thus also called a scattering network). Although the filters are designed wavelets rather than learned, the scattering transform proves to be an effective feature extractor. In the mathematical analysis of scattering networks, it is proved [1, Theorem 2.10] that the scattering transform is invariant to translation. However, this is true only if we take the full representation where the limiting scale tends to infinity. In practice, we take a finite scale and therefore only have stability with respect to translation. The mathematical analysis of the stability properties of scattering networks is not limited to wavelets: for instance, it is generalized by using semi-discrete frames as filters in [3, 4], and time-frequency atoms as filters in [5]. In all these cases, the scattering transforms are Lipschitz continuous with Lipschitz constant 1, which is an important factor for the provable stability properties.

A scattering network extracts features from every convolutional layer. This is not the case for a general CNN. In [6] a CNN is defined as a neural network which has at least one convolution unit. Many widely-adopted CNN models have either a sequential structure (e.g. the AlexNet [7]) or a more complex feed-forward structure (e.g. the GoogleNet [8]). For those models, stability is still an important issue. Intuitively, while keeping the same energy in the features, we should train the network so that the features are as stable as possible to small perturbations before using dense layers to do the classification. In [9], the authors use the large Lipschitz bound of each single layer to illustrate that the AlexNet could be very unstable with respect to small perturbations of the input image. In fact, changing a small number of pixels can “fool” the network so that it produces wrong classification results. In general, a small Lipschitz bound of the entire transform implies the robustness of a CNN to small perturbations.

“Fooling” networks is naturally connected to adversarial networks. Indeed, Lipschitz bounds are already used in training adversarial networks, beyond just quantitatively showing robustness. In [10], the authors propose an objective function for training generative adversarial networks in which a Lipschitz constraint on the critic appears as a penalty term. However, there is no direct way to impose it. Later in [11], the authors use a gradient penalty inspired by the fact that a function is 1-Lipschitz if its gradient is bounded by 1.
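As a toy illustration of that gradient-penalty idea (this is not the training objective of [10] or [11] themselves; the function `gradient_penalty` and the finite-difference gradient are our own illustrative sketch), one can penalize the deviation of the gradient norm from 1:

```python
import numpy as np

def gradient_penalty(f, x, eps=1e-6):
    """Central-difference estimate of the penalty (||grad f(x)||_2 - 1)^2
    for a scalar function f at the point x."""
    grad = np.array([(f(x + eps * e) - f(x - eps * e)) / (2 * eps)
                     for e in np.eye(len(x))])
    return (np.linalg.norm(grad) - 1.0) ** 2

# Toy critic f(x) = 2 x_0: its gradient norm is 2, so the penalty is (2 - 1)^2 = 1.
f = lambda x: 2.0 * x[0]
print(gradient_penalty(f, np.zeros(3)))  # ~ 1.0
```

In actual GAN training the gradient is computed by automatic differentiation and the penalty is added to the critic loss; the sketch only shows the quantity being penalized.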

Although it plays an important role in deep learning, the study of Lipschitz bounds is not completely addressed by the existing literature. The frameworks in [1]–[5] analyze 1-Lipschitz transformations but are limited to the scattering transforms and do not generalize automatically to general CNN’s. [9] provides a Lipschitz bound using the product of Bessel bounds of each layer, but in general it lacks tightness for non-sequential models such as the scattering network. Our paper fills in the gap between these approaches by providing a unified stability analysis that applies both to the scattering networks (as in [1]–[5]) and to more general convolutional networks. Our framework is flexible and compatible with architectures that may or may not generate outputs from hidden layers. The results presented in this paper are optimal for scattering networks and in general tighter than taking the product of Bessel bounds in each layer. Our focus is on the estimation of these Lipschitz bounds, and on how they relate to stochastic processes. We discuss how the Lipschitz bounds can be used for classification, but we do not focus on extending these results to generative adversarial networks. Instead we study numerically a few examples, including the AlexNet and the GoogleNet. Surprisingly, when applied to the AlexNet (and GoogleNet), we discovered that while the estimated bounds are about three orders of magnitude more conservative than the numerically estimated Lipschitz bounds, the empirical bounds are three orders of magnitude smaller still. Specifically, the largest local Lipschitz bound is obtained numerically to be of order 1, whereas in an extensive study using ImageNet [12] images, the ratio between the energy of the output variation and the energy of the input variation is of the order of $10^{-3}$.

We first overview the CNN architecture considered in this paper (the details are given in the main text). The framework is applicable to the scattering network [1, 2], the AlexNet [7] and the GoogleNet [8]. It can also be used to analyze models such as Long Short-Term Memory networks. We state the theory for continuous signals, but explain how to adapt it for the discrete case (which is the case for the AlexNet and the GoogleNet). We focus on the feature extraction part of the network and do not discuss the fully connected layers that are usually put on top of the structure, though the fully connected layers can be regarded as a special case of convolutional layers. The CNN that we consider has a feed-forward structure and consists of $M$ different layers (it is possible to use infinitely many layers to represent a feedback structure). We define the layers according to the convolutions. Specifically, each layer consists of input nodes, convolutional filters, detection / merge operations, pooling filters, feature output nodes and hidden output nodes.

  • The input nodes are signals passed to the current layer. These could come from the hidden output nodes of the previous layer, or be the input signal to the network.

  • The convolutional filters are the filters that perform convolution with the signal from the input nodes. If $f$ is the signal at an input node and $g$ is the convolutional filter, the output is the convolution $f * g$.
  • The pooling filters are low-pass convolutional filters that lower the complexity before the feature is extracted as output. Note that these are still linear translation-invariant operations, which are commonly used in scattering networks. The nonlinear operations such as max pooling and average pooling are contained in the detection operations.

  • The (feature) output nodes are outputs of the convolutional neural network. As we specified earlier, these nodes form a subset of the representation. Once the representation is extracted, the specific machine learning tasks, such as classification and prediction, are performed on the representation.

  • The dilation operations are “changes of scale” of the space variables. A dilation operation on a signal $f$ can be represented using an invertible matrix $A$; the dilated signal is $x \mapsto f(Ax)$.

  • The detection operations are nonlinear operations that apply pointwise to the output of the convolutional filters. The nonlinearities have Lipschitz constant 1 (e.g. ReLU functions). In addition to applying the nonlinearity, the outputs can be aggregated by merge operations to produce a single output for dimensionality reduction. Max pooling and average pooling are modeled in this manner.

  • The (hidden) output nodes are signals that propagate to the next layer. The signals at the output nodes are identical to those at the input nodes of the next layer.

Fig. 1: The structure of a network layer. The network we consider consists of a number of layers, which makes the structure “deep”.

In this paper, unless otherwise specified, we use $f$ to denote the input and output signals of a CNN, $y$ to denote the hidden features, and $g$ to denote filters. The input signal $f$ on the $d$-dimensional Euclidean space has finite energy, that is, $f \in L^2(\mathbb{R}^d)$. The Fourier transform of $f \in L^1(\mathbb{R}^d) \cap L^2(\mathbb{R}^d)$, denoted by $\hat{f}$, is defined formally to be
$$\hat{f}(\omega) = \int_{\mathbb{R}^d} f(x)\, e^{-2\pi i \langle x, \omega \rangle}\, dx,$$
and we refer the readers to [14] for rigorous definitions when $f$ is merely in $L^2(\mathbb{R}^d)$ or when $f$ is a generalized function. The filters of the CNN are taken from the Banach algebra $\mathcal{B}$ of tempered distributions with an essentially bounded Fourier transform, that is,
$$\mathcal{B} = \{ g \in \mathcal{S}'(\mathbb{R}^d) : \hat{g} \in L^\infty(\mathbb{R}^d) \}.$$
We have a detailed discussion of this algebra in Appendix C. We use $\|\cdot\|_p$ to denote the $L^p$-norm corresponding to the Lebesgue integral. For a matrix $A$, $A^T$ denotes its transpose and $A^*$ denotes its conjugate transpose. We use $\|A\|$ to denote the operator norm of $A$, $\|A\|_1$ to denote its nuclear norm, and $\|A\|_F$ to denote its Frobenius norm.
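For concreteness, the three matrix norms can be checked numerically (a small sketch; `numpy.linalg.norm` supports all three conventions):

```python
import numpy as np

A = np.diag([3.0, 4.0])

op_norm  = np.linalg.norm(A, 2)      # operator norm: largest singular value
nuc_norm = np.linalg.norm(A, 'nuc')  # nuclear norm: sum of singular values
fro_norm = np.linalg.norm(A, 'fro')  # Frobenius norm: sqrt of sum of squared entries

print(op_norm, nuc_norm, fro_norm)   # 4.0 7.0 5.0
```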

The paper is organized as follows. Section II sets up the mathematical problem by defining the layers of a CNN. Section III states the results on estimating the Lipschitz bounds. Section IV illustrates examples from the scattering network to the AlexNet and the GoogleNet. Section V discusses how the Lipschitz bounds relate to concentration results for stationary processes on CNN’s. Section VI discusses using the Lipschitz bounds to construct a nonlinear discriminant.

II Defining a CNN

The overall structure of an $M$-layer CNN is illustrated in Figure 2. The picture shows how an input propagates through the layers while generating outputs at each layer. The details of the layers are described in the following two subsections. If no merge operation is present at a certain layer, the convolutional layer is modeled as a linear operation followed by a nonlinearity; if there are merge operations, the different types of merge operations are modeled separately.

Fig. 2: The detail of an $M$-layer CNN. The signals at the output nodes are identical to those at the input nodes of the next layer. There may or may not be output generation in each layer.

II-A A layer without merge operations

If a certain layer does not contain any merging, we can model the filters as a linear transform acting on the signals at all the input nodes. In the $m$-th layer, we distinguish the set of input nodes, the set of hidden output nodes, and the set of output generating nodes. With this notation, the signals on the input nodes are passed through a linear operator given by an array of filters in $\mathcal{B}$; the resulting signal is received, downsampled by a $d$-by-$d$ invertible matrix, and sent into a nonlinearity to produce the output.

Fig. 3: The detail of the $m$-th layer with no merge operations, showing the input nodes, the hidden output nodes, the feature output nodes, the pooling filters, the dilation factors, and the 1-Lipschitz nonlinearities. The notations in blue represent the signals at each node: the input signals of the layer, the hidden output signals that are passed to the next layer, the signals received after passing through the linear operator, and the signals at the feature output nodes.

For the $m$-th layer, we define three types of Bessel bounds as follows. For each frequency $\omega$, denote by $G(\omega)$ the matrix that contains the Fourier transforms of the convolutional filters evaluated at $\omega$, and by $P(\omega)$ the diagonal matrix whose entries are the Fourier transforms of the pooling filters evaluated at $\omega$. The 1st type Bessel bound $B_m^{(1)}$ for the $m$-th layer is defined through the matrix combining $G(\omega)$ and $P(\omega)$, the 2nd type Bessel bound $B_m^{(2)}$ through $G(\omega)$ alone, and the 3rd type Bessel bound $B_m^{(3)}$ through $P(\omega)$ alone, in each case as the essential supremum over $\omega$ of the squared operator norm.

In general, the Bessel bound quantifies how the energy is magnified by convolution. The bound is finite if the filters form semi-discrete frames (see [4, Appendix A]). Our definition acts in the spectral domain and naturally yields estimates of the Lipschitz bounds: see Appendix A, (31). The need for three types of Bessel bounds is related to different types of energy mixing: input-to-combined hidden and feature output nodes, input-to-hidden output nodes, and input-to-feature output nodes. Intuitively, $B_m^{(1)}$ is the Bessel bound for the frame composed of both the convolutional and the pooling filters, $B_m^{(2)}$ is for the frame of the convolutional filters, and $B_m^{(3)}$ is for the frame of the pooling filters only. For a layer with merge operations, the Bessel bounds share the same intuition, but their estimates have different mathematical representations. We describe that in the next section.
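In the discrete, stride-1, single-input-node case, the spectral-domain definition reduces to the maximum over frequencies of the summed squared magnitudes of the filter responses. The sketch below (the function name `bessel_bound` and the finite frequency grid are our own choices) computes this quantity with the FFT; the averaging/differencing pair of filters forms a tight semi-discrete frame, so the bound equals 1:

```python
import numpy as np

def bessel_bound(filters, n=256):
    """Bessel bound of a single-input filter bank {g_k}: the maximum over an
    n-point frequency grid of sum_k |ghat_k(w)|^2 (a discrete, stride-1 sketch
    of the spectral-domain definition)."""
    spec = np.zeros(n)
    for g in filters:
        spec += np.abs(np.fft.fft(g, n)) ** 2
    return spec.max()

# Averaging / differencing pair: |ghat_low|^2 + |ghat_high|^2 = 1 at every
# frequency, so the Bessel bound is 1 (a tight frame).
g_low  = np.array([0.5,  0.5])
g_high = np.array([0.5, -0.5])
print(bessel_bound([g_low, g_high]))  # ~ 1.0
```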

II-B A layer with merge operations

There are three types of merging. Type I takes inputs from several channels, applies a nonlinearity to each, and then sums them up. Type II takes inputs from several channels, applies a nonlinearity to each signal, and then aggregates them by a pointwise $\ell^p$-norm. Type III takes inputs from several channels, applies a nonlinearity to each signal, and then performs a pointwise multiplication; in this case the nonlinearities are additionally required to satisfy a boundedness condition on each channel.

Fig. 4: The three types of merge. Left: Type I - taking the sum of the inputs; middle: Type II - taking the $\ell^p$-norm aggregation of the inputs; right: Type III - taking the pointwise product of the inputs.

We point out that the standard pooling operations in most discrete CNN’s can be modeled in the continuous case by these merge operations. Specifically, max pooling is the operation of taking the maximal element within each sub-region. We can use translations and dilations to separate the elements of a sub-region into distinct channels, as illustrated in Figure 4(a). Then the $\ell^\infty$-aggregation selects the largest element and thereby performs the max pooling. Average pooling replaces “taking the max” by “taking the average”; similarly to max pooling, it can be done by taking the sum, as illustrated in Figure 4(b). A concrete example illustrates max pooling as implemented by this framework; a similar implementation can realize average pooling. Consider the finite signal in Figure 6, to which we want to apply max pooling with size = 2 and stride = 2. The max pooled signal takes the larger value within each pair. Consider now the (circular) translation of the signal by 1 pixel, together with the original signal (the middle two signals in the figure). Apply the dilation operator that discards the second pixel in each consecutive pair of pixels. Now a Type II aggregation with $p = \infty$ selects the larger of the two pixels at the same position, and therefore results in the same signal as the max pooling operation applied to the original signal.

(a) max pooling
(b) average pooling
Fig. 5: In the continuous case, the max pooling is modeled as Type II aggregation for , and the average pooling is modeled as Type I aggregation.
Fig. 6: A concrete example for the max pooling.
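The walk-through above can be reproduced in a few lines. The sketch below (discrete, one-dimensional; the function name is ours) realizes max pooling with size = 2 and stride = 2 as a circular translation, a dilation that keeps every other sample, and a pointwise maximum (the Type II merge with $p = \infty$):

```python
import numpy as np

def max_pool_via_merge(x):
    """Max pooling (size 2, stride 2) as translation + dilation + pointwise max."""
    shifted = np.roll(x, -1)   # circular translation by 1 pixel
    a = x[::2]                 # dilation: discard the second pixel of each pair
    b = shifted[::2]
    return np.maximum(a, b)    # Type II merge with p = infinity

x = np.array([1, 5, 2, 2, 4, 3, 0, 7])
print(max_pool_via_merge(x))   # [5 2 4 7], the pairwise maxima of x
```

Replacing `np.maximum(a, b)` with `0.5 * (a + b)` gives the Type I realization of average pooling.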

Suppose there are $N$ input nodes in the $m$-th layer (this works for $m < M$; the case $m = M$ is similar except that there is no hidden output node). Within the layer, each input node is connected to several filters. A filter can be either a pooling filter or a convolutional filter: associated with each input node there is a pooling filter and a number of convolutional filters, and the set of filters in the $m$-th layer is the union of these.


Each filter is naturally classified into one of three categories according to the three types of merging: if a filter is merged using a Type I operation, then it is classified as a Type I filter; in the same manner we define Type II and Type III filters. If a filter is not merged with other filters, we classify it as Type I (with a single channel, as in the first picture in Figure 4). We denote the sets of all Type I, II and III filters accordingly.

Note that each filter is associated with one and only one output node, and there is a one-to-one correspondence between the output nodes and the merge operations of the $m$-th layer. The output nodes automatically divide the filters into disjoint subsets, each subset consisting of the filters merged into the corresponding output node. Further, a subset of the output nodes generates features. The detail of one layer is illustrated in Figure 7.

Fig. 7: The detail of one layer with merging, showing the input nodes, the output nodes, the output generating nodes, the pooling and convolutional filters, the dilation factors, and the 1-Lipschitz nonlinearities (for illustration we put the nonlinearities outside the merge box, but they belong to the merge operations as defined for the three types of merge). The notations in blue represent the signals at the nodes: the input signals of the layer, the output signals that are passed to the next layers, and the signals at the feature output nodes.

For each filter, we define an associated multiplier according to the type of the merge operation it participates in and the cardinality of the set of filters merged into the same output node.


Using these multipliers, we define the 1st, 2nd and 3rd type Bessel bounds for each output node in terms of the Fourier transforms of the associated filters, in analogy with the definitions of the previous subsection. We then define the 1st type Bessel bound $B_m^{(1)}$, the 2nd type Bessel bound $B_m^{(2)}$ and the 3rd type Bessel bound $B_m^{(3)}$ for the $m$-th layer by combining the corresponding bounds over all the nodes of the layer.
III Calculating the Lipschitz bound

Suppose we are given a CNN within the framework of Section II. For any input signals $f$ and $\tilde{f}$, consider the outputs they generate at each output generating node, and let $\mathcal{G}$ be the collection of all output generating nodes. We say $L$ is a Lipschitz bound for the CNN if the total output variation, measured by the square root of the sum over $\mathcal{G}$ of the squared $L^2$-norms of the output differences, is bounded by $L \, \| f - \tilde{f} \|_2$.

The map $\Phi$ induced by the CNN sends an input signal to the collection of signals at all output generating nodes. With the norm on the output space defined as the square root of the sum of the squared $L^2$-norms of the components, $\Phi$ is well defined, and $L$ is a Lipschitz constant in the sense that $\| \Phi(f) - \Phi(\tilde{f}) \| \leq L \, \| f - \tilde{f} \|_2$ for all $f, \tilde{f} \in L^2(\mathbb{R}^d)$.
We have the following theorem for calculating the Lipschitz bound.

Theorem III.1.

Consider a CNN in the framework of Section II, with $M$ layers, where the $m$-th layer has 1st type Bessel bound $B_m^{(1)}$, 2nd type Bessel bound $B_m^{(2)}$ and 3rd type Bessel bound $B_m^{(3)}$. Then the CNN induces a nonlinear map that is Lipschitz continuous, and its Lipschitz bound is given by the optimal value of a linear program, referred to as (20) below, whose constraints are expressed in terms of these Bessel bounds.


The proof of Theorem III.1 is given in Appendix A. We remark here that the linear program (20) is feasible, since an obvious feasible point can be written down directly. Moreover, the solution is bounded, since all the variables are bounded in terms of the Bessel bounds according to the third and fourth inequalities in (20). In practice, either the simplex method or an interior-point method (see, for instance, [15, Chapters 13-14]) can be used to solve this linear program, and they run in polynomial time with respect to the number of layers. In the discrete case, say for pixel images, we also need to compute the Bessel bounds, which relies on Fast Fourier Transforms whose cost grows as $O(n \log n)$ with the dimensionality $n$ of the filters. Although the complexity is not high, a Lipschitz bound computed via a linear program is still not intuitive. We give more explicit estimates of the Lipschitz bound in the following corollaries.
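We do not reproduce the linear program (20) here, but its general shape can be illustrated. The sketch below, using `scipy.optimize.linprog`, solves a hypothetical layer-coupled program in which a hidden-energy variable $y_m$ and an output-energy variable $z_m$ per layer are constrained by the three Bessel bounds; the variable names, the constraint set, and the function `lipschitz_lp` are our own illustrative assumptions, not the paper's exact (20):

```python
import numpy as np
from scipy.optimize import linprog

def lipschitz_lp(B1, B2, B3):
    """Hypothetical LP in the spirit of (20).  Maximize sum_m z_m subject to
    y_0 <= 1,  y_m + z_m <= B1[m] y_{m-1},  y_m <= B2[m] y_{m-1},
    z_m <= B3[m] y_{m-1},  and all variables nonnegative."""
    M = len(B1)
    n = (M + 1) + M                       # y_0..y_M, then z_1..z_M
    c = np.zeros(n)
    c[M + 1:] = -1.0                      # linprog minimizes, so negate
    A, b = [], []
    row = np.zeros(n); row[0] = 1.0
    A.append(row); b.append(1.0)          # y_0 <= 1
    for m in range(1, M + 1):
        r = np.zeros(n); r[m] = 1; r[M + m] = 1; r[m - 1] = -B1[m - 1]
        A.append(r); b.append(0.0)        # y_m + z_m <= B1 y_{m-1}
        r = np.zeros(n); r[m] = 1; r[m - 1] = -B2[m - 1]
        A.append(r); b.append(0.0)        # y_m <= B2 y_{m-1}
        r = np.zeros(n); r[M + m] = 1; r[m - 1] = -B3[m - 1]
        A.append(r); b.append(0.0)        # z_m <= B3 y_{m-1}
    res = linprog(c, A_ub=np.array(A), b_ub=np.array(b), bounds=(0, None))
    return -res.fun

# Scattering-like case: every Bessel bound equals 1 -> optimal value 1.
print(lipschitz_lp([1, 1, 1], [1, 1, 1], [1, 1, 1]))  # ~ 1.0
```

With all bounds equal to 1 this toy program returns 1, consistent with the tightness for scattering networks discussed in Section IV.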

Corollary III.2.

Consider a CNN in the framework of Section II, with $M$ layers, where the $m$-th layer has 1st type Bessel bound $B_m^{(1)}$. Then the CNN induces a nonlinear map that is Lipschitz continuous, with a Lipschitz bound expressed in terms of the products of the bounds $B_m^{(1)}$ across the layers.

Corollary III.3.

Consider a CNN in the framework of Section II, with $M$ layers, where the $m$-th layer has 2nd type Bessel bound $B_m^{(2)}$ and generating bound $B_m^{(3)}$. Then the CNN induces a nonlinear map that is Lipschitz continuous, with a Lipschitz bound expressed in terms of these bounds.


The proof of Corollary III.3 is an immediate consequence of Theorem III.1, specifically of the third and fourth inequalities of (20). The proof of Corollary III.2 is given in Appendix B. We remark here that both corollaries give a more conservative bound than the linear program (20), because both results restrict the variables to a subset of the feasible region. The idea of using Bessel bounds is also addressed in [9], where the authors compute the Bessel bounds of each layer of the AlexNet, and in [4], where the authors normalize the Bessel bounds to make the CNN a 1-Lipschitz map. We return to the AlexNet in the following section.

Subject to the knowledge of the three types of Bessel bounds in each layer, the estimate given by the linear program (20) is tight. However, three issues may prevent its tightness in practice. First, except for the scattering network when defined for continuous inputs, most CNN’s consider discrete-time inputs only. Second, even subject to the same Bessel bounds, different filters may produce much smaller Lipschitz bounds; sub-optimality occurs in cases where the signal that achieves the Bessel bound for a given layer is not in the range of the previous layer. Third, in some practical applications where signals are modeled as samples drawn from certain distributions, the emphasis is on local stability around the operating distributions, and the global Lipschitz bound may be irrelevant.

We address these issues by looking at three examples: the scattering network, a toy network that includes all three types of merge operations we consider in this paper, and the well-known AlexNet and GoogleNet.

IV Examples

IV-A Scattering network

The scattering network in [1, 2] is a 1-Lipschitz map. In each layer the filters are designed to form wavelet orthonormal bases using multi-resolution analysis. Such a design leads to 1st type Bessel bounds equal to 1 for all layers. Then Corollary III.2 simply yields a Lipschitz bound of 1, which is tight. We refer the readers to [16, Section 4.1] for a detailed discussion.

IV-B A toy example that contains merge operations

The scattering network enjoys 1st type Bessel bounds equal to 1 in every layer since it is tightly related to wavelet decompositions. In many CNN’s we don’t have feature output from hidden layers, and therefore the corresponding 3rd type Bessel bounds vanish, whence the results in Corollary III.2 coincide with the optimal value of the linear program (20). However, Corollary III.2 can be suboptimal. To see this, we take a toy example of a CNN that contains merge operations. The same network structure appears also in [16] with different filter weights; the filters used here are specified below.

Figure 8 is an illustration of the CNN. According to Appendix C, we can translate it into a CNN within our framework, as illustrated in Figure 9.

Fig. 8: The toy example that also appears in [16]. Note that we have different choices of filters in the numerical experiment.
Fig. 9: Equivalent representation of the CNN. We illustrate the four layers of the network.

Define a smooth “gate” function on the Fourier domain, supported on a finite frequency interval. With this, we define the Fourier transforms of the filters to be gate functions.

Fig. 10: Illustration of the filters in the frequency domain. Note that they are all smooth functions.

Table I lists the Bessel bounds for all the layers. The optimal value of the linear program (20), the Lipschitz bound as derived in Corollary III.2, and the estimate given by Corollary III.3 were computed for this network; the output of the linear program (20) is tighter than the product bounds given by Corollaries III.2 and III.3.

1 2 3 4
2 1
1 1 2 0
1 1 1 1
TABLE I: The Bessel bounds of the example in Figure 8.

IV-C AlexNet and GoogleNet

In this subsection we analyze the Lipschitz properties of AlexNet and GoogleNet. First, we apply the analytical results derived in earlier sections to these networks and compare their results to empirical estimates. To accomplish this, we need to extend the theory hitherto developed to processing of discrete signals. Second, we construct a local Lipschitz analysis theory and explain the gap between the analytical and empirical estimates. In this process, we obtain additional information on local stability and robustness of the network, which we exploit in the third part of this subsection where we apply these results to adversarial perturbations.

IV-C1 Extending to Discrete Signal Processing

The AlexNet and the GoogleNet have filters trained on specific datasets, with no closed-form parametric description of their weights such as the wavelets have. Therefore, instead of applying approximations of the continuous-signal theory to these networks, we extend the theory to discrete signal processing. We do this by computing the Bessel bounds using the Discrete Fourier Transform along the lines of [9, Section 4.3], following the computation of the operator norms of the discrete linear operators. Given the Bessel bounds, the Lipschitz bound is computed using the same estimates derived earlier for the case of continuous signals.
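For a stride-1 discrete convolutional layer, the computation can be sketched as follows (a simplified version of our own; striding, padding and the exact grid used in [9, Section 4.3] are ignored). At each 2-D frequency we form the matrix of filter DFT values across output and input channels and take its largest singular value; the Bessel bound is the maximum of its square over the frequency grid:

```python
import numpy as np

def conv_layer_bessel(W, n=32):
    """Bessel bound of a stride-1 2-D convolutional layer with kernel tensor W
    of shape (C_out, C_in, kh, kw): the max over an n x n frequency grid of the
    squared largest singular value of the C_out x C_in transfer matrix."""
    H = np.fft.fft2(W, s=(n, n))                  # shape (C_out, C_in, n, n)
    bound = 0.0
    for u in range(n):
        for v in range(n):
            s = np.linalg.svd(H[:, :, u, v], compute_uv=False)[0]
            bound = max(bound, s ** 2)
    return bound

# Identity layer (one 1x1 filter of weight 1) has Bessel bound 1.
print(conv_layer_bessel(np.ones((1, 1, 1, 1))))   # ~ 1.0
```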

First we compute the Bessel bounds. Note that neither network generates feature outputs in hidden layers. Therefore, the 3rd type Bessel bound vanishes in each hidden layer. Now the fourth line of (20) forces the corresponding output variables to vanish in all but the last layer, which makes the second and third lines equivalent. Note that Corollary III.2 is derived by looking at the third and fourth lines of (20). Consequently, the linear program (20) and Corollary III.2 provide the same Lipschitz bounds.

There are several versions of trained networks for AlexNet and GoogleNet. We consider the MatConvNet [17] pretrained networks that are trained using ImageNet (ILSVRC2012) dataset [12] (the trained networks for both AlexNet and GoogleNet are retrievable at http://www.vlfeat.org/matconvnet/pretrained/). For both pretrained models, there are no cross-channel response normalizations (which appears in the original model [7]). The features are extracted after the last convolution layer in each network.

We present the Bessel bounds for each layer of the AlexNet in Table II and of the GoogleNet in Table III. Since we are in the discrete case (the previous sections discuss continuous signals and continuous convolutions), we need to adjust the way the Bessel bounds in (3) and (4) are computed. The adjusted computation for the AlexNet follows [9, Section 4.3], which uses the Discrete Fourier Transform and takes striding into account. For the GoogleNet, we treat the inception modules (see [8, Figure 2(b)]) as two layers: the first layer is the scattering with the dimension reductions (denoted as “icpxreduce” in Table III), and the second layer is the merging after taking convolutions (denoted as “icpxconv” in Table III).

Layer Lip const
conv1 0.2628
conv2 6.7761
conv3 6.5435
conv4 13.3898
conv5 16.0937
TABLE II: The Bessel constants (= square root of Bessel bounds) for each layer of AlexNet.
Layer Lip const
conv1 5.8608
reduce2 3.4147
conv2 3.0309
icp1reduce 3.6571
icp1conv 5.2917
icp2reduce 3.7994
icp2conv 7.6367
Layer Lip const
icp3reduce 2.6642
icp3conv 6.0129
icp4reduce 2.6403
icp4conv 5.1029
icp5reduce 2.9825
icp5conv 5.5389
icp6reduce 3.1758
Layer Lip const
icp6conv 6.7737
icp7reduce 2.2093
icp7conv 6.5312
icp8reduce 2.2947
icp8conv 5.5561
icp9reduce 2.8567
icp9conv 7.0353
TABLE III: The Bessel constants (= square root of Bessel bounds) for each layer of GoogleNet.

Using the computed Bessel constants and Corollary III.2, we obtain estimated Lipschitz constants for the AlexNet and for the GoogleNet. Subject to the Bessel bounds computed above (which are tight), the CNN Lipschitz bound estimates cannot be improved analytically. Instead we perform an empirical study. Specifically, we randomly take two images $f_1$ and $f_2$ from ImageNet and compute the ratio $\| \Phi(f_1) - \Phi(f_2) \| / \| f_1 - f_2 \|$, where $\Phi$ is the Lipschitz map induced by the network. The empirical Lipschitz constant is the largest ratio among all the sampled pairs, recorded for the AlexNet and for the GoogleNet respectively.
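The experiment can be summarized by the following sketch (the function `empirical_lipschitz` and the toy map stand in for the actual networks and the ImageNet samples):

```python
import numpy as np

def empirical_lipschitz(phi, samples, n_pairs=1000, seed=0):
    """Largest ratio ||phi(x1) - phi(x2)|| / ||x1 - x2|| over randomly
    sampled pairs: an empirical lower bound on the Lipschitz constant."""
    rng = np.random.default_rng(seed)
    best = 0.0
    for _ in range(n_pairs):
        i, j = rng.choice(len(samples), size=2, replace=False)
        num = np.linalg.norm(phi(samples[i]) - phi(samples[j]))
        den = np.linalg.norm(samples[i] - samples[j])
        if den > 0:
            best = max(best, num / den)
    return best

# Toy map with Lipschitz constant 3: every ratio equals 3 up to rounding.
phi = lambda x: 3.0 * x
samples = [np.random.default_rng(k).normal(size=8) for k in range(50)]
print(empirical_lipschitz(phi, samples))  # ~ 3.0
```

Because it only samples pairs, the estimate is a lower bound; the analytical bounds above are upper bounds, and the true constant lies in between.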

The empirical constants are of significantly smaller order than the analytical constants. In general, two factors explain the gap between the analytical and empirical Lipschitz constant estimates: first, the principal singular vector that optimizes the operator norm in a given layer is not in the range of signals reachable by the previous layer; second, whenever we have the ReLU nonlinearity and max pooling, the distance between two vectors tends to shrink.

The first factor can be partially addressed by considering the norm of the tensorial product of all layers instead of the product of the tensor norms of each layer individually (similar to computing the operator norm of a product of matrices directly instead of upper bounding it by the product of the operator norms of each matrix). Both this and the second factor can be addressed by a framework that locally linearizes the network for analysis. We demonstrate how to do this in the following subsection.

IV-C2 Local Lipschitz Analysis

To begin with, we estimate the Lipschitz constants without the ReLU functions to illustrate the impact of the nonlinearity. We construct the AlexNet and GoogleNet without the ReLU units by replacing them with identity functions, and repeat the experiments of taking ratios from pairwise random samples. Empirically, the ratios estimated in this way, for both the AlexNet and the GoogleNet, are larger than the empirical Lipschitz constants for the networks with the ReLU units.

The nonlinearities also have a non-negligible impact on the Lipschitz constant. To handle them in the analysis, we linearize them locally and compute local Lipschitz constants. The local Lipschitz constant of $\Phi$ at $f$ for an $\epsilon$-neighborhood is defined by
$$\mathrm{Lip}(f, \epsilon) = \sup_{0 < \| f' - f \|_2 \leq \epsilon} \frac{\| \Phi(f') - \Phi(f) \|}{\| f' - f \|_2}.$$
Note that in the case of the AlexNet and the GoogleNet (and similarly for all other discrete networks), the input signal comes from a compact domain $I^n$, where $I$ is the interval of admissible pixel values and $n$ is the dimensionality (the number of pixels). Since $I^n$ is convex, the Lipschitz constant of $\Phi$ is the maximum of the local Lipschitz constants on $I^n$. The rigorous proof of this claim is given in Appendix D.

Using the linearization formulas, we estimate the local Lipschitz constants numerically. The procedure is as follows. We vectorize the input image and the output feature vector (a matrix of column vectors is vectorized by stacking its columns), and we use Toeplitz matrices $T_m$ to represent the filters in each layer. For any input sample $f$, the CNN generates the output feature vector by propagating $f$ through the operators $T_m$ and the nonlinearities, which activate only a subset of the pixels of the hidden layer outputs. For the $m$-th layer, we delete (remove) the rows of $T_m$ that correspond to the pixels not activated by the ReLU units and max pooling (if they exist), and the corresponding columns of $T_{m+1}$. In this way we obtain reduced matrices whose product represents the locally linearized operator of the CNN acting at $f$. For a small $\epsilon$, the local Lipschitz constant at $f$ is estimated by the largest singular value of this product. The Lipschitz constant of $\Phi$ is thus estimated by taking the maximum of the local constants, with equality if the maximum is taken over the entire compact convex set (see Appendix D). However, for numerical reasons, we replace the compact set with a finite number of samples, thus obtaining an approximate (lower) bound.
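For a small fully-connected ReLU network (Toeplitz structure and max pooling omitted; the matrices and the function `local_lipschitz` are illustrative assumptions), the row-deletion procedure reads:

```python
import numpy as np

def local_lipschitz(Ts, x):
    """Local Lipschitz constant of x -> T_L relu(... relu(T_1 x)) at input x:
    zero out the rows de-activated by each hidden ReLU and return the largest
    singular value of the resulting product of reduced matrices."""
    J = np.eye(len(x))
    h = np.asarray(x, dtype=float)
    for m, T in enumerate(Ts):
        h = T @ h
        J = T @ J
        if m < len(Ts) - 1:              # ReLU on hidden layers only
            active = (h > 0).astype(float)
            h = h * active
            J = J * active[:, None]      # delete (zero) the inactive rows
    return np.linalg.svd(J, compute_uv=False)[0]

T1 = np.array([[1.0, 0.0], [0.0, -2.0]])
T2 = np.array([[3.0, 1.0]])
x  = np.array([1.0, 1.0])                # second hidden unit is de-activated
print(local_lipschitz([T1, T2], x))      # 3.0: only the first unit's path survives
```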

We follow the procedure described above to estimate the Lipschitz constant for the AlexNet, using 500 random samples drawn from the ImageNet (ILSVRC2012) dataset. Figure 11 illustrates the histogram of the results. We see that the local Lipschitz constants in our case are between 0.2 and 1.6, hence of order 1. Table IV summarizes the analytical, empirical and numerical local Lipschitz constant estimates for the AlexNet.

One would naturally ask whether the 500 random samples we chose for this analysis are sufficient to infer an accurate estimate of the Lipschitz constant. To address this question we performed two sets of experiments. First, we test whether the local Lipschitz constant is narrowly distributed over samples in each class, and whether the distribution changes for random input signals (i.e., artificial noise input images). Figure 12 depicts the histogram of local Lipschitz constants for images from the class “tench” (left plot), and compares it with the histogram of local Lipschitz constants for i.i.d. Gaussian noise images (right plot). We note that the local Lipschitz constants for Gaussian noise are much more concentrated, around a significantly smaller mean, than those for the class “tench”. This implies that the AlexNet behaves differently for different ImageNet samples from the same class. On the other hand, the distribution of local Lipschitz constants for images from the same class covers the same range of values as the distribution over all 500 images considered in Figure 11. This experiment gives us confidence that the estimated Lipschitz constant over the 500 ImageNet images is nearly tight.

Fig. 11: The histogram of the local Lipschitz constants for the AlexNet for 500 sample images taken from the ImageNet dataset.
Method Lip const
Analytical estimate: compute Bessel bounds and follow Corollary III.2
Empirical bound: take quotient from pairs of samples
Numerical approximation: compute local Lipschitz constants and take the maximum
TABLE IV: The Lipschitz constant estimation using three methods for the AlexNet.
Fig. 12: Two histograms of local Lipschitz constants for the AlexNet: the left plot contains the results of 50 samples from the class “tench”; the right plot contains the results from 50 samples of i.i.d. Gaussian noise of the same size.
On the other hand, as observed from Table IV, the Lipschitz constant computed by taking the maximum of the local Lipschitz constant is about 3 orders of magnitude larger than the empirically computed constant. This surprising observation implies that the direction of maximum variation (the principal singular vector) varies significantly from one ImageNet sample to another. Furthermore, the local Lipschitz constant is large only in a small neighborhood around each sample. In order to estimate the largest perturbation that achieves the local Lipschitz bound we performed the following experiment. For input signal , let denote the principal singular vector of norm that corresponds to the largest singular value . By definition, we have

Figure 13 shows how the quotient changes with the step size. Note that the convergence as the step size approaches zero is very slow. In particular, this experiment confirms that the local Lipschitz constant is achievable, hence the numerical estimates in Table IV are not just numerical artifacts, but actual achievable ratios. On the other hand, Figure 13 shows that the largest relative variation of the output is achieved by small perturbations only. In general, given a pair of different image samples from ImageNet, their distance is much larger than the step sizes considered here, so they cannot reflect the local oscillation of the map.
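The behavior of the quotient along the principal singular direction can be mimicked with a toy piecewise-linear map (an illustrative stand-in, not the AlexNet): as the step tends to zero the quotient approaches the local constant, while for large steps it is bounded only by the global constant.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((20, 20))
f = lambda x: np.maximum(W @ x, 0.0)   # toy piecewise-linear stand-in for the feature map

x = rng.standard_normal(20)
J = W[(W @ x) > 0, :]                  # local linearization at x
s_max = np.linalg.svd(J, compute_uv=False)[0]
v = np.linalg.svd(J)[2][0]             # principal right singular vector (max-variation direction)

ratios = {t: np.linalg.norm(f(x + t * v) - f(x)) / t for t in (1e-6, 1e-2, 1.0, 100.0)}
for t, r in ratios.items():
    print(f"t = {t:8.0e}  quotient = {r:.4f}  (local constant = {s_max:.4f})")

# As t -> 0 the quotient approaches the local constant; for any t it stays
# below the global Lipschitz constant ||W||_2.
assert abs(ratios[1e-6] - s_max) < 1e-3 * s_max
assert ratios[100.0] <= np.linalg.svd(W, compute_uv=False)[0] * (1 + 1e-9)
```

The same qualitative picture — a quotient that is tight only in a small neighborhood — is what Figure 13 reports for the AlexNet.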

Fig. 13: The ratio of output variation to input perturbation for different step sizes.

IV-C3 Adversarial Perturbation Induced by the Local Lipschitz Constants

CNN’s such as the AlexNet and the GoogleNet have been shown to be vulnerable to small perturbations [9, 18, 19]. This kind of instability not only leads to difficulties in cross-model generalization, but also causes serious security problems in practice [20, 21]. An adversarial perturbation is a small perturbation of the input signal that changes the classification decision of the CNN. The perturbation can be constructed by solving an optimization problem in which the wrong classification enters the objective function as a loss, as described in [9]. Various optimization settings can be found in [18, 19], where specific restrictions on the perturbation are imposed.

The local Lipschitz analysis carried out in the previous section characterizes the impact of the direction of a signal perturbation on the output of the CNN. It can be seen that for the same amount of input perturbation, different directions achieve different adversarial impacts on the network performance. We use this observation to create adversarial perturbations below. We show that a relative change of the order of can lead the network to misclassify the input image.

Since a local Lipschitz constant is associated with a singular vector, which is the direction that varies the most at the sample, we expect this direction to give a perturbation that “fools” the CNN more than other directions. The task is to find the smallest step size for which the original and the perturbed images are labeled differently by the CNN. We use the AlexNet and empirically search for this step size: for each sample, we find the smallest step that fools the AlexNet. One such example is given in Figure 14. We take 50 samples and find that the optimal step sizes have order of magnitude , which is relatively small compared to the input (we have input with pixel values in , so the relative change is of the order ). Note that this order is also observed in [18], where the 2-norm of the perturbation is chosen to be 2000. Further, for each sample, we take 1000 random directions and compare the labels given by the AlexNet for the original and the perturbed inputs over a set of different step sizes. We plot the percentage of directions that fool the AlexNet, averaged over these samples, in Figure 15. Surprisingly, the direction informed by the local Lipschitz constant performs better than most directions, although at the fooling step size the quotient is much smaller than the local Lipschitz constant. Empirically, this implies that the local Lipschitz constant is still important, although it decreases fast outside a small region.
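The search for the smallest fooling step can be sketched on a toy two-class linear model; all names and sizes below are illustrative assumptions, and the real experiment uses the AlexNet's locally linearized operator instead of the matrix `W`:

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((2, 30))           # toy two-class "network": scores = W @ x
label = lambda x: int(np.argmax(W @ x))

x = rng.standard_normal(30)
v = np.linalg.svd(W)[2][0]                 # principal right singular vector: max-variation direction

def smallest_flip_t(x, d, ts):
    """Smallest step t on the grid with label(x + t*d) != label(x), else inf."""
    y = label(x)
    for t in ts:
        if label(x + t * d) != y:
            return t
    return np.inf

ts = np.linspace(0.01, 50.0, 5000)
t_star = min(smallest_flip_t(x, v, ts), smallest_flip_t(x, -v, ts))

# Fraction of random unit directions that also fool the model at step t_star,
# mirroring the comparison in Figure 15.
def unit(n):
    d = rng.standard_normal(n)
    return d / np.linalg.norm(d)

frac = np.mean([label(x + t_star * unit(30)) != label(x) for _ in range(500)])
print(f"smallest fooling step along v: {t_star:.2f}; random directions fooling at that step: {frac:.1%}")
```

Typically only a minority of random directions flips the label at the step that suffices along the singular direction, which is the qualitative phenomenon reported in Figure 15.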

(a) The original image
(b) The perturbation (scaled for visibility)
(c) The perturbed image
Fig. 14: An example of the perturbation along the direction of the singular vector. The left is the original image, the middle is the perturbation which is amplified 1000 times for clear illustration, and the right is the perturbed image. The AlexNet recognizes the original image as “king snake” but the perturbed one as “loggerhead turtle”.
Fig. 15: Average percentage of successful perturbations in 1000 random directions. is the smallest stepsize where the perturbation along the direction informed by the local Lipschitz constants successfully fools the AlexNet.

V Stationary processes

Signals (audio or image) are often modeled as random processes. In our case, there are two ways to model the input signal of a CNN: one is to consider it as a random process (field) over some underlying probability space, with finite second-order moments (see [1, Chapter 4]); the other is to regard it as a random variable. We first present the former model for our framework in Section II. In the following, we use one notation to emphasize the time (space) variable and another to emphasize the random outcome. We are interested in studying stationary signals. Fix a realization of the process for some outcome. The process is said to be strict-sense stationary (SSS) (see, for instance, [22], Chapter 16) if all of its finite-order moments are time-invariant (its cumulative distribution does not change with time). The output of a CNN is SSS provided that the input is SSS. This is stated as the following lemma.

Lemma V.1.

Consider a CNN in the framework of Section II in which there is no dilation operation. Let the CNN induce the Lipschitz continuous map defined in (18). If the input is an SSS process, then so is the output.

Remark 1.

In general, if we apply dilations to random processes, the signals are no longer stationary after the merge operations. To see a concrete example, consider a process that is i.i.d. uniform over time and is thus SSS. Merging the process with its dilation yields different distributions at different times, so the result is not SSS. Therefore, throughout this section, we assume that there is no dilation operation in our CNN.
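The remark can be verified numerically. The snippet below uses an i.i.d. uniform process (hence SSS) and merges it with its dilation, $Z_t = X_t + X_{2t}$; this specific construction is an illustrative stand-in for the example in the text.

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials, T = 200_000, 8
# An i.i.d. process over time is strict-sense stationary.
X = rng.uniform(-1.0, 1.0, size=(n_trials, 2 * T))

# Merge the process with its dilation: Z_t = X_t + X_{2t}.
Z = X[:, :T] + X[:, ::2]

# Z_0 = 2*X_0 has variance 4/3, while Z_1 = X_1 + X_2 has variance 2/3,
# so Z is not stationary even though X is.
var0, var1 = Z[:, 0].var(), Z[:, 1].var()
assert abs(var0 - 4 / 3) < 0.02 and abs(var1 - 2 / 3) < 0.02
```

The second-order moments already differ across time, which is enough to break strict-sense stationarity.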

Now we state the result that connects the Lipschitz bound derived in Section III with stationary processes.

Theorem V.2.

Consider a CNN in the framework of Section II in which there is no dilation operation. Let and be SSS processes with finite second-order moments. Then


In particular, .

The proof parallels that of Section III, and we present it in Appendix D.

As mentioned above, we can also follow the second way and model the signal as a random variable. In this case, we have a random variable with values in a Banach space (see a detailed discussion of such random variables in [23, 24]). In particular, let the CNN induce a map with the Lipschitz constant as before, and let the output random variable be defined accordingly. Then by Proposition 1.2 in [24], we obtain the corresponding concentration function. Suppose the input is Gaussian (see [23], Chapter 2 for the definition in this case). Then, similar to the concentration inequality in Lemma 3.1 in [24], there exists a median for which we have both


and we have


In signal classification tasks, if we view the signals in each class as realizations from a common distribution, then we have the same median for all signals in the class. If the feature generated by the CNN is concentrated around the median, and the medians are separated for different classes, then features from different classes will naturally form clusters. Although we do not have exact concentration, Inequality (27) demonstrates that the output concentrates in a “thin” shell around the median provided that we have a small Lipschitz bound. We further promote making the Lipschitz bound small in designing CNN’s in the next section.
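A quick Monte Carlo check of the shell picture: for a Gaussian input pushed through an $L$-Lipschitz toy map (here, by assumption of this sketch, a ReLU applied after an orthogonal matrix and scaled by $L$), the shell radius around the median scales linearly with $L$.

```python
import numpy as np

rng = np.random.default_rng(0)
n, dim = 20_000, 128
Q, _ = np.linalg.qr(rng.standard_normal((dim, dim)))  # orthogonal, so x -> ReLU(Q x) is 1-Lipschitz

def shell_radius(L):
    """95% quantile of the distance of ||f(X)|| to its median, for f = L * ReLU(Q .)."""
    X = rng.standard_normal((n, dim))
    Y = np.linalg.norm(L * np.maximum(X @ Q.T, 0.0), axis=1)
    return np.quantile(np.abs(Y - np.median(Y)), 0.95)

r1, r01 = shell_radius(1.0), shell_radius(0.1)
assert abs(r01 - 0.1 * r1) < 0.05 * r1   # shell radius shrinks proportionally to L
```

A smaller Lipschitz bound thus produces tighter shells, which is the mechanism behind the clustering argument above.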

VI Lipschitz bound in classification

In linear discriminant analysis (LDA) (see, e.g., [25, 26]), it is desired to maximize the “separation”, or the “discriminant”, which is the variance between classes divided by the variance within each class (see [26], Eq. (1) and the discussion that follows). We use a similar notion in our (nonlinear discriminant) analysis, despite its nonlinear nature. We define the discriminant of two classes to be

in which the map is the nonlinear map induced by the CNN, as defined in (18), $\|\cdot\|_*$ denotes the nuclear norm, and Cov denotes the covariance matrix.

To see how the Lipschitz bound is associated with the separation, we look at the nature of the variance of the output feature. Suppose we have a Gaussian noise $\xi \sim N(0, \sigma^2 I)$ and apply a linear transform $A$; then $A\xi$ is also Gaussian, with covariance $\sigma^2 A A^T$. The nuclear norm of this covariance is given by
$$\|\sigma^2 A A^T\|_* = \sigma^2 \|A\|_F^2,$$
where $\|\cdot\|_F$ denotes the Frobenius norm. Since the map is linear, its Lipschitz constant is given by the operator norm $\|A\|_2$, and its Lipschitz bound is given in terms of $\|A\|_F$. Note that the Frobenius norm and the operator norm are equivalent norms, since $\|A\|_2 \le \|A\|_F \le \sqrt{\operatorname{rank}(A)}\,\|A\|_2$.
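The identity used in the linear case, $\|\mathrm{Cov}(A\xi)\|_* = \sigma^2\|A\|_F^2$ for $\xi \sim N(0, \sigma^2 I)$, together with the norm equivalence just mentioned, can be checked directly (the matrix `A` and `sigma` below are generic placeholders):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 7))
sigma = 0.5

# Covariance of A @ xi for xi ~ N(0, sigma^2 I) is sigma^2 * A A^T.
cov = sigma**2 * A @ A.T
nuclear = np.linalg.svd(cov, compute_uv=False).sum()  # nuclear norm = sum of singular values
assert np.isclose(nuclear, sigma**2 * np.linalg.norm(A, 'fro')**2)

# Norm equivalence: ||A||_2 <= ||A||_F <= sqrt(rank A) * ||A||_2.
s = np.linalg.svd(A, compute_uv=False)
assert s[0] <= np.linalg.norm(A, 'fro') <= np.sqrt(len(s)) * s[0]
```

The identity holds because the covariance matrix is positive semidefinite, so its singular values coincide with its eigenvalues and the nuclear norm equals the trace.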

Motivated by the linear case, we look into replacing the variance term in (28) with the Lipschitz bound for general CNN’s. We consider a CNN with a Gaussian white noise input. We assume two classes of signals, where each class contains samples from a colored Gaussian noise. We use a per-class constant to denote the Lipschitz bound for the whole system, as illustrated in Figure 16.

Fig. 16: Illustration of the Lipschitz bounds. Suppose the input image is obtained by filtering white Gaussian noise (with a class-specific filter and bias). Then the Lipschitz bound for the class accounts for both the coloring process and the CNN. This bound is not the same for different classes, since it depends not only on the CNN but also on the class filter.

We define the Lipschitz discriminant to be


where and are the Lipschitz bounds for Class 1 and Class 2, respectively.

In Figures 17 through 20, we report experiments on the discriminative behavior of randomly generated CNN’s. We take two classes (the digits “3” and “8”) of test images from the well-known MNIST database [27], randomly build CNN’s with three or four convolutional layers, and record their discriminants according to (28) (plotted on the left-hand side in each figure) and (29) (plotted on the right-hand side in each figure). We then train a linear SVM for each network and plot the classification error rate against the discriminants. The purpose of this experiment is to show that larger discriminants lead to better classification results. We use SVM’s in order to examine the quality of the CNN (the feature extractor) for different discriminants, and we therefore train linear SVM’s (which suffice for two classes) with the same regularization parameter. The numerical implementation is done in MATLAB 2016b: we use MatConvNet [17] for constructing the CNN’s, and the Machine Learning Toolbox in MATLAB for training the SVM’s.

Fig. 17:

Plots of error rate versus discriminant for a three-layer CNN with randomly (normal distributed) generated weights.

Fig. 18:

Plots of error rate versus discriminant for a three-layer CNN with randomly (uniformly distributed) generated weights.

Fig. 19: Plots of error rate versus discriminant for a four-layer CNN with randomly (normal distributed) generated weights.
Fig. 20: Plots of error rate versus discriminant for a four-layer CNN with randomly (uniformly distributed) generated weights.

As seen from the results, the error rate tends to decrease as the discriminant (28) and the Lipschitz discriminant (29) increase. The trend is clearer when we have more layers. Therefore, either the discriminant or the Lipschitz discriminant is a reasonable term to include in the training objective function of the CNN. Our analysis in the previous sections can be effectively used to estimate the Lipschitz discriminant in such optimization problems. However, it remains open how to design a training algorithm using these discriminants, since the weights appear in both the numerators and the denominators of (28) and (29).

VII Conclusion

In this paper we proposed a general framework for CNN’s. We showed that the Lipschitz bound can be calculated by solving a linear program involving the Bessel bounds of each layer. We also demonstrated that the Lipschitz bounds play a significant role in the second-order statistical description of CNN’s. Further, we illustrated that the Lipschitz bounds can be used to form a discriminant that works effectively in classification systems. The numerical experiments in Section IV revealed interesting results that deserve future work, including a mathematical study of the distribution of the local Lipschitz constants in a large-deviation sense, and of how the empirical Lipschitz constants depend on the sample distribution. For those questions, the study will not be in a worst-case sense, and thus requires a framework that accommodates more randomness.


Acknowledgments

DZ was partially supported by NSF Grant DMS-1413249. RB was partially supported by NSF Grant DMS-1413249, ARO Grant W911NF-16-1-0008, and LTS Grant H9823031D00560049. The authors thank the anonymous reviewers for their careful reading of the manuscript and constructive suggestions.

Appendix A Proof of Theorem III.1

We are going to show that the optimal value for the linear program (20) is a Lipschitz bound. In particular, we study as .

For the -th layer, we mark the signals at the input nodes to be and the signals at the output nodes to be . We estimate the Lipschitz bound by comparing the output nodes and input nodes for each layer, and then derive a relation between the outputs and the input at the very first layer. Note that with our notation here, and .

We first look at the case of no merging. Before we study the input-output relation, note that for the dilation operation illustrated in Figure 21, for two outputs from inputs respectively, we have

Fig. 21: The dilation operation. is the input and is the output given as .

Now we look at the illustration in Figure 3. Since the nonlinearity is 1-Lipschitz, and also according to (30), we have