With the increasing interest in deep neural networks, much recent research has focused on their sensitivity to input perturbations [3, 12, 8, 11]. Most of these investigations have highlighted their weakness against adversarial attacks and addressed means to increase their robustness. Adversarial attacks are real threats that could slow down, or eventually stop, the development and application of these tools wherever robustness guarantees are needed. Even when input uncertainty is not adversarial but simply generated by the context and environment of the specific application, deep networks do not behave better in terms of robustness: if no immunization mechanism is used, their generalization performance can suffer greatly. Most of the techniques used to handle adversarial attacks can also be applied in this context. These techniques can mostly be classified into two categories: robust optimization techniques that consider an adversarial loss [12, 8], and regularization techniques that penalize noise expansion throughout the network [5, 9, 18, 4, 19, 15].
In this article, we address the second type of technique, where robustness is achieved by ensuring that the Lipschitz constant of the network remains small. The network can be seen as a mapping between inputs and outputs, and its robustness to input uncertainties can be controlled by how much this mapping expands distances between inputs. For networks with Lipschitz-continuous activations, this amounts to controlling the Lipschitz constant of the whole network mapping.
Expressiveness is another important property of a neural network: it defines the ability of the network to represent highly complex functions, and it is achieved through depth [2, 10]. Of course, this ability must be balanced against the generalization power of the network in order to avoid overfitting during training. On one hand, if the weights of the network are free to grow too large, generalization will be poor. On the other hand, if the weights are overly restricted, the expressiveness of the network will be low. Usually, this tradeoff is controlled by constructing a loss that accounts for both training error and generalization error, the so-called regularized empirical risk functional. A parallel has been established between robustness and regularization. Indeed, in the case of support vector machines, it has been shown that the two are equivalent. The work on regularized deep networks mentioned above suggests that this also holds for neural networks. The idea we develop here is a contribution along this line.
We argue that Lipschitz regularization does not only restrict the magnitude of the weights: it also implements a coupling mechanism across layers, allowing some weights to grow large while others are allowed to vanish. Because this coupling acts across layers, the Lipschitz constant can remain small even when some weight values in one layer are large, as long as weights in other layers counterbalance this growth by remaining small. On the contrary, if all weights are restricted uniformly across the network, the expressiveness of the network may be inhibited even when the network is deep. Through a numerical experiment on a popular dataset, we show evidence of this phenomenon and draw some conclusions and recommendations about software implementations of Lipschitz regularization.
2 Neural Network Robustness as a Lipschitz constant regularization problem
2.1 Propagating additive noise through the network
Consider feed-forward fully connected neural networks that we represent as a successive composition of linear weighted combinations and activation functions, such that

$x^{k+1} = \varphi_k(W_k x^k + b_k)$ for $k = 0, \dots, K-1$,

where $x^k$ is the input of the $k$-th layer, $\varphi_k$ is the $\alpha_k$-Lipschitz continuous activation function at layer $k$, and $W_k$ and $b_k$ are the weight matrix and bias vector between layers $k$ and $k+1$ that define the model parameters $\theta = (W_k, b_k)_{k=0,\dots,K-1}$ that we want to estimate during training. The network can be seen as the mapping $f_\theta : x^0 \mapsto x^K$. The training phase of the network can be written as the minimization of the empirical loss

$\min_\theta L(\theta) = \frac{1}{N} \sum_{i=1}^{N} \ell(f_\theta(x_i), y_i),$

where $\ell$ is a measure of discrepancy between the network output $f_\theta(x_i)$ and the desired output $y_i$.
Assume now that the input sample $x^0$ is corrupted by some bounded additive noise $\varepsilon^0$ such that $\|\varepsilon^0\| \le \delta$ for some positive constant $\delta$. We define $\tilde{x}^k$ as the noisy observation of $x^k$ that we obtain after propagating the noisy input through layer $k-1$, and write $\varepsilon^k = \tilde{x}^k - x^k$. Since $\varphi_k$ is $\alpha_k$-Lipschitz continuous, we can upper bound $\|\varepsilon^{k+1}\|$ by $\alpha_k \|W_k\| \|\varepsilon^k\|$. Therefore, the network mapping is $\prod_{k=0}^{K-1} \alpha_k \|W_k\|$-Lipschitz continuous, and the propagation of the input noise throughout the whole network leads to an output noise that satisfies the following:

$\|\varepsilon^K\| \le \left(\prod_{k=0}^{K-1} \alpha_k \|W_k\|\right) \delta,$
where $\|\cdot\|$ denotes the operator norm: $\|W\| = \sup_{x \neq 0} \|Wx\| / \|x\|$,
and the quantity $\prod_{k=0}^{K-1} \alpha_k \|W_k\|$ is actually an upper bound of the Lipschitz constant of the network mapping $f_\theta$.
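This bound is easy to check numerically. The sketch below (the layer sizes, weight scale, and noise magnitude are arbitrary choices for illustration) propagates a clean and a perturbed input through a small random ReLU network and verifies that the output gap never exceeds the product of the spectral norms times the input noise:

```python
import numpy as np

rng = np.random.default_rng(0)

# A small random ReLU network: x^{k+1} = relu(W_k x^k + b_k).
sizes = [4, 8, 8, 3]
Ws = [rng.normal(size=(m, n)) for n, m in zip(sizes[:-1], sizes[1:])]
bs = [rng.normal(size=m) for m in sizes[1:]]

def forward(x):
    for W, b in zip(Ws, bs):
        x = np.maximum(W @ x + b, 0.0)  # ReLU is 1-Lipschitz
    return x

x = rng.normal(size=sizes[0])
eps = rng.normal(size=sizes[0])
eps *= 1e-3 / np.linalg.norm(eps)       # bounded input noise, ||eps|| = 1e-3

# Upper bound: the product of the layers' spectral (operator) norms.
lip_bound = np.prod([np.linalg.norm(W, 2) for W in Ws])

out_gap = np.linalg.norm(forward(x + eps) - forward(x))
assert out_gap <= lip_bound * np.linalg.norm(eps) + 1e-12
```

The bound is generally loose: the measured `out_gap` is typically far below `lip_bound * ||eps||`, since the product of norms ignores the alignment of successive layers.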
2.2 Network noise contraction by controlling its Lipschitz constant
In this section, for simplicity but without loss of generality, we will consider $1$-Lipschitz activation functions, meaning that $\alpha_k = 1$ for all $k$ (this is for example the case of ReLU functions).
We have just seen that the quantity $\prod_{k} \|W_k\|$ says how much the input noise may be expanded after propagation through the network. Therefore, if during the training process we ensure that this quantity remains small, we also ensure that input uncertainties will not be expanded by the successive layers. There are two ways to control this quantity during training:
Constrained optimization: The idea is to solve the following empirical risk minimization problem:

$\min_\theta L(\theta) \quad \text{s.t.} \quad \prod_{k=0}^{K-1} \|W_k\| \le \rho,$
where $\rho$ is a positive parameter. The difficulty with this approach is the nonlinearity of the constraint. One would like to use a projected stochastic gradient method to solve the training problem; however, projecting onto this constraint is a difficult problem. To address this, in [5, 18], the authors have proposed to compute and restrict the norm layer by layer instead of restricting the whole product. Restricting the norm of the weights layer by layer is actually very different from restricting the product of their norms: the layer-by-layer process isolates the tuning of the weights, whereas if the whole product is considered, some layers may be privileged over others. We will see in the next section how this can affect, for some datasets, the expressiveness of the network.
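The difference between the two restrictions can be made concrete with a small NumPy sketch (this is an illustration via singular-value clipping, not the implementation used in [5, 18]): clipping every layer to the same budget enforces the product bound, but the product bound alone also admits unbalanced configurations that the per-layer restriction excludes.

```python
import numpy as np

def clip_spectral_norm(W, c):
    """Project W onto {||W||_2 <= c} by clipping its singular values."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    return U @ np.diag(np.minimum(s, c)) @ Vt

rng = np.random.default_rng(0)
Ws = [rng.normal(size=(8, 8)) for _ in range(3)]

rho = 8.0  # target bound on the product of the spectral norms

# Layer-by-layer restriction: force ||W_k|| <= rho**(1/3) for every layer.
per_layer = [clip_spectral_norm(W, rho ** (1 / 3)) for W in Ws]
prod_per_layer = np.prod([np.linalg.norm(W, 2) for W in per_layer])
assert prod_per_layer <= rho + 1e-9

# The product constraint alone also admits unbalanced configurations,
# e.g. one large layer compensated by smaller ones -- excluded by the
# uniform per-layer restriction above.
unbalanced = [clip_spectral_norm(Ws[0], 4.0),
              clip_spectral_norm(Ws[1], 2.0),
              clip_spectral_norm(Ws[2], 1.0)]
prod_unbalanced = np.prod([np.linalg.norm(W, 2) for W in unbalanced])
assert prod_unbalanced <= rho + 1e-9
```

Both configurations satisfy the same product bound, yet only the second lets one layer carry most of the weight mass, which is exactly the coupling the per-layer approach forgoes.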
Lipschitz regularization: The alternative is to introduce a regularization term in the loss as follows:

$\min_\theta L(\theta) + \lambda \prod_{k=0}^{K-1} \|W_k\|,$
where $\lambda$ is a positive parameter. There is no projection involved; the regularization acts through the addition of a correction term in the gradient of the loss, so as to ensure a low value of the Lipschitz constant. The gradient of the regularized loss can be written as:

$\nabla_\theta L(\theta) + \lambda \nabla_\theta \prod_{k=0}^{K-1} \|W_k\|,$
and, as mentioned above, the regularization term depends on the complete cross-layer structure via the product $\prod_k \|W_k\|$. To further emphasize this coupling effect, using the fact that $\|W_k\| = \sqrt{\lambda_{\max}(W_k^\top W_k)}$, we can rewrite the upper bound as

$\prod_{k=0}^{K-1} \|W_k\| = \prod_{k=0}^{K-1} \sqrt{\lambda_{\max}(W_k^\top W_k)},$

where $\lambda_{\max}(M)$ denotes the largest eigenvalue of a matrix $M$. We see in this last expression that, if we could rotate the matrix at each layer so that the principal axes are aligned with its eigenvectors, the upper bound would only depend on the layer-by-layer product of the largest weight length along these axes. The upper bound can then be seen as a layer-by-layer product of weight "size". This also means that if the weights at one layer are small, there is room for growth at another layer, as long as the whole product remains small. In this sense, we say that the weights have more degrees of freedom than when they are restricted at each layer independently.
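The cross-layer coupling is visible directly in the gradient of the product term: the gradient with respect to one layer's weights is scaled by the spectral norms of all the other layers. The sketch below (the value of the regularization weight $\lambda$ and the layer sizes are illustrative) verifies both the eigenvalue identity and this coupled gradient against a finite difference:

```python
import numpy as np

rng = np.random.default_rng(0)
Ws = [rng.normal(size=(5, 5)) for _ in range(3)]
lam = 0.1

# ||W||_2 coincides with the square root of the largest eigenvalue of W^T W.
for W in Ws:
    assert abs(np.linalg.norm(W, 2)
               - np.sqrt(np.linalg.eigvalsh(W.T @ W).max())) < 1e-10

def penalty(Ws):
    # Lipschitz penalty: lambda * prod_k ||W_k||_2
    return lam * np.prod([np.linalg.norm(W, 2) for W in Ws])

def penalty_grad_layer(Ws, j):
    # d||W||_2/dW = u1 v1^T (outer product of the top singular vectors), so
    # the gradient w.r.t. layer j carries the norms of ALL the other layers:
    U, s, Vt = np.linalg.svd(Ws[j])
    others = np.prod([np.linalg.norm(W, 2)
                      for i, W in enumerate(Ws) if i != j])
    return lam * others * np.outer(U[:, 0], Vt[0, :])

# Finite-difference check of one entry of the coupled gradient (layer 0).
G = penalty_grad_layer(Ws, 0)
E = np.zeros_like(Ws[0])
E[1, 2] = 1e-6
num = (penalty([Ws[0] + E] + Ws[1:]) - penalty(Ws)) / 1e-6
assert abs(num - G[1, 2]) < 1e-3
```

Because of the `others` factor, shrinking the norms of the remaining layers directly damps the pressure on layer $j$, which is the coupling mechanism discussed above.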
This is also illustrated on a very simple example in Figure 1 (right). We take the very simple case of a one-hidden-layer neural network with one input, one hidden and one output neuron, meaning that the parameter $(w_1, w_2)$ belongs to $\mathbb{R}^2$. In this case, the bound is equal to $|w_1||w_2|$. The right figure shows the boundary $|w_1||w_2| = 1$, and the center square defines the set obtained by restricting each layer weight to 1 independently. It clearly shows that this layer-by-layer restriction is more conservative than the Lipschitz upper bound, which allows $w_1$ to be large when $w_2$ is small, or the other way around. This is what we refer to as the coupling mechanism across layers. Figure 1 only shows a very low-dimensional case: if the dimension is large, it is easy to understand that, for a given level of regularization, the feasible set of the weights under Lipschitz regularization will be much larger than the feasible set of weights restricted at each layer. We argue, and show in the next section, that for some specific datasets this extra feasible volume enables better expressive power of the network. Note that, computationally, there are several difficulties with this coupling approach:
First, the Lipschitz regularization, or more precisely its upper bound, is a nonconvex function, as shown for example on the 2D example of Figure 1 (left). Minimizing such a function may be difficult, and available training algorithms such as stochastic gradient techniques and their variants may get trapped in a local minimum. This is not specific to our case: nonconvex regularization techniques have been proven effective in other contexts while facing the same difficulty, and in practice the benefit is often confirmed.
The computation of the gradient of the regularization term may also be difficult. In practice, it requires numerical differentiation, since there is no simple explicit expression of the gradient. Alternatively, some authors have used a power iteration method to approximate the operator norm. Observe, however, that since the network parameter values must already be stored at any time during the training process, this computation does not increase storage requirements.
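A power iteration estimate of the operator norm can be sketched as follows (a minimal NumPy version; the iteration count and tolerance below are arbitrary choices, and practical implementations often reuse the previous iterate across training steps to get away with very few iterations):

```python
import numpy as np

def spectral_norm_power_iteration(W, n_iter=200, seed=0):
    """Approximate ||W||_2 by alternating multiplications with W and W^T,
    i.e. power iteration on W^T W, without computing a full SVD."""
    rng = np.random.default_rng(seed)
    v = rng.normal(size=W.shape[1])
    v /= np.linalg.norm(v)
    for _ in range(n_iter):
        u = W @ v
        u /= np.linalg.norm(u)
        v = W.T @ u
        v /= np.linalg.norm(v)
    # Rayleigh-like estimate of the top singular value.
    return float(u @ W @ v)

rng = np.random.default_rng(1)
W = rng.normal(size=(30, 20))
approx = spectral_norm_power_iteration(W)
exact = np.linalg.norm(W, 2)
assert abs(approx - exact) < 1e-6 * exact
```

Each iteration costs only two matrix-vector products, so the estimate scales to large layers where an exact SVD would be prohibitive.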
With respect to these numerical difficulties, please also note that we only emphasize the role of the coupling effect of the Lipschitz regularization; we do not claim to provide efficient techniques to handle it, especially on large problems. However, we want to point out that the future development of efficient robust neural network algorithms should preserve the cross-layer structure of the regularization. Methods that isolate layers are of course computationally interesting, but they will lose some of the properties of Lipschitz regularization and may turn out to be over-conservative in terms of robustness, at least on some datasets, as we will see in the next section.
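The extra feasible volume granted by the coupled bound in the 2D example of Figure 1 can be estimated numerically. The sketch below (the sampling box $[-3, 3]^2$ is an arbitrary choice for the Monte Carlo estimate) checks that the unit square of the layer-by-layer restriction is strictly contained in the coupled region $|w_1 w_2| \le 1$, which covers a much larger area:

```python
import numpy as np

rng = np.random.default_rng(0)
pts = rng.uniform(-3.0, 3.0, size=(200_000, 2))

# Coupled (Lipschitz-bound) region of Figure 1: |w1 * w2| <= 1.
in_coupled = np.abs(pts[:, 0] * pts[:, 1]) <= 1.0
# Layer-by-layer restriction: |w1| <= 1 and |w2| <= 1 (the unit square).
in_square = np.max(np.abs(pts), axis=1) <= 1.0

# Every point of the square satisfies the coupled bound ...
assert np.all(in_coupled[in_square])
# ... but the coupled region covers a much larger fraction of the box.
assert in_coupled.mean() > 2 * in_square.mean()
```

In higher dimensions the volume gap between the two feasible sets grows further, which is the geometric intuition behind the expressiveness argument of the next section.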
To illustrate the coupling effect discussed above, we consider a deep neural network regression task on the Boston dataset. For this purpose, we use a feed-forward fully connected network with 3 hidden layers of 20 neurons each, using ReLU activation functions. For training and testing, we use the Keras library under the Python environment. We compare four training loss formulations and track, during training, their validation mean absolute error, their loss values, as well as the spectral norm of the weight matrix of each layer. The tested formulations all use the mean squared error loss but include various regularization or weight-limiting mechanisms, as follows:
(No reg) no regularization
(Layer reg) spectral norm regularization at each layer (no coupling)
(Lipschitz reg) Lipschitz regularization across layers (with coupling)
(MaxNorm) MaxNorm constraint on the weights (max value = 10 on the weights at each layer)
The ADAM optimization algorithm is used for training. To implement "Layer reg" and "Lipschitz reg", a custom regularizer and a custom loss, respectively, were created in Keras. The training procedure was run for a fixed number of epochs with a fixed batch size, and the regularization parameter was selected by grid search. The training is carried out on a fraction of the entire dataset without perturbing the input data. However, we validate the various formulations under several levels of test uncertainty, in order to evaluate the robustness of each formulation on the remaining fraction of the data. The noisy test inputs are generated as follows:
$\tilde{x} = x + \varepsilon, \quad x \in X_{test},$

where $X_{test}$ is a nominal input test set, set aside before training from the Boston dataset, and $\varepsilon$ is an additive uncertainty such that $\varepsilon_i \sim \mathcal{U}(-\tau (x_{\max,i} - x_{\min,i}),\ \tau (x_{\max,i} - x_{\min,i}))$ (the uniform distribution on the interval), where $\tau$ is the noise level and $x_{\min}$ and $x_{\max}$ are the vectors of minimum and maximum values of each input feature. Figure 2 provides the mean absolute error and loss profiles during training, while Figure 3 gives the mean absolute validation error for the various noise levels $\tau$.
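This perturbation scheme can be sketched as follows (a reconstruction with synthetic stand-in data: the number of samples and the noise level $\tau = 0.05$ are illustrative, not the experiment's values; the 13 features match the Boston dataset):

```python
import numpy as np

rng = np.random.default_rng(0)
X_test = rng.normal(size=(50, 13))          # stand-in for the Boston test split
x_min, x_max = X_test.min(axis=0), X_test.max(axis=0)

def perturb(X, tau, rng):
    """Add feature-wise uniform noise scaled by the feature range and tau."""
    span = tau * (x_max - x_min)
    return X + rng.uniform(-span, span, size=X.shape)

X_noisy = perturb(X_test, tau=0.05, rng=rng)
# Each feature's perturbation stays within tau times that feature's range.
assert np.all(np.abs(X_noisy - X_test) <= 0.05 * (x_max - x_min) + 1e-12)
```

Scaling the noise by each feature's range keeps the perturbation comparable across features with very different units.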
| Noise level | no reg | Layer reg | Lipschitz reg | MaxNorm |
During training, Figure 2 shows that the Lipschitz regularization achieves better mean absolute error and loss values than the other techniques, which all achieve similar results. One could suspect overfitting of the data during training with Lipschitz regularization, but the validation phase on unseen data, shown in Figure 3, does not confirm this: the mean absolute validation error achieved by the Lipschitz regularization is better than that of the other methods. This holds for all levels of uncertainty, meaning that Lipschitz regularization is, for the Boston dataset, the technique that provides the highest level of robustness. More specifically, it is worth noticing that the layer-by-layer regularization, as opposed to the Lipschitz regularization across layers, does not perform well here. At the highest noise level, the Lipschitz regularization achieves a low mean absolute validation error twice as fast as the layer-by-layer approach. The MaxNorm approach achieves better results than the non-regularized model but lags behind the layer-by-layer approach. These results are consistent with the fact that Lipschitz regularization provides a good level of accuracy. Looking at Table 1, we can observe significant differences in the spectral norms of the layer weights for the various network instances. All formulations tend to emphasize the first layer, except the layer-by-layer approach, which allocates more weight mass to the last layer. The Lipschitz regularization across layers also tends to reach higher weight values than the others, which is natural considering the shape of the regularization shown in Figure 1. The values of the network Lipschitz constant are similar, except in the MaxNorm case, which restricts the weights more at all layers. This example confirms that Lipschitz regularization (across layers) provides more freedom to the weights than the other techniques for the same value of the network Lipschitz constant.
This confirms our idea that, for some datasets, letting the regularization play a coupling role across layers helps in finding the best compromise between robustness and expressiveness of the network.
In this article, we have discussed some properties of Lipschitz regularization used as a robustification method for deep neural networks. Specifically, we have shown that this regularization does not only control the magnitude of the weights but also their relative impact across layers. It acts as a coupling mechanism across layers that allows some weights to grow while others counterbalance this growth, in order to control the expansion of noise throughout the network. Most implementations of Lipschitz regularization we are aware of actually isolate the layers and do not benefit from this coupling mechanism. We believe that this insight should be useful in the future and help design robust neural network training procedures that also achieve good expressiveness properties.
-  (2015) Keras. Note: https://keras.io Cited by: §3.
-  (2016) The power of depth for feedforward neural networks. In COLT, Cited by: §1.
-  (2018) Analysis of classifiers’ robustness to adversarial perturbations. Machine Learning 107 (3), pp. 481–508. Cited by: §1.
-  (2018) Improved robustness to adversarial examples using lipschitz regularization of the loss. arXiv:1810.00953v3. Cited by: §1.
-  (2018) Regularisation of neural networks by enforcing lipschitz continuity. arXiv:1804.04368v2. Cited by: §1, §2.2.
-  (1978) Hedonic prices and the demand for clean air. J. Environ. Economics and Management 5, pp. 81–102. Cited by: §3.
-  (2015) Adam: a method for stochastic optimization. CoRR abs/1412.6980. Cited by: §3.
-  (2018) Provable defenses against adversarial examples via the convex outer adversarial polytope. In ICML, Cited by: §1.
-  (2018) Lipschitz regularized deep neural networks converge and generalize. arXiv:1808.09540v3. Cited by: §1.
-  (2017) On the expressive power of deep neural networks. In Proceedings of the 34th International Conference on Machine Learning, Vol. 70, pp. 2847–2854. Cited by: §1.
-  (2018) Certified defenses against adversarial examples. CoRR abs/1801.09344. Cited by: §1.
-  (2018) Understanding adversarial training: increasing local stability of supervised models through robust optimization. Neurocomputing 307, pp. 195 – 204. Cited by: §1.
-  (2014) Dropout: a simple way to prevent neural networks from overfitting. Journal of Machine Learning Research 15, pp. 1929–1958. Cited by: item -.
-  (2015) Python: a dynamic, open source programming language, python software foundation. Note: URL https://www.python.org/ Cited by: §3.
-  (2018) Lipschitz regularity of deep neural networks: analysis and efficient estimation. Advances in Neural Information Processing Systems. Cited by: §1.
-  (2018) Nonconvex regularization based sparse and low-rank recovery in signal processing, statistics, and machine learning. CoRR abs/1808.05403. Cited by: §2.2.
-  (2009) Robustness and regularization of support vector machines. J. Mach. Learn. Res. 10, pp. 1485–1510. Cited by: §1.
-  Spectral norm regularization for improving the generalizability of deep learning. arXiv:1705.10941v1. Cited by: §1, §2.2.
-  (2018) Lipschitz-margin training: scalable certification of perturbation invariance for deep neural networks. arXiv:1802.04034v3. Cited by: §1.