1 Introduction
As the field of privacy-aware machine learning (ML) has developed, it has become widely acknowledged that applying ML to sensitive data introduces a number of privacy risks which must be taken into account when building responsible ML systems. Multiple works
[shokri2017membership, carlini2018secret, melis2019exploiting, yeom2018privacy] have shown that some information about the training data can be recovered from the network's parameters, especially in the context of membership inference. To be able to use sensitive data such as medical records for training machine learning models, the privacy of the data with respect to the trained model must be ensured. One approach to this is differential privacy (DP) [dwork2014algorithmic], which limits the influence that any single data point can have on the model parameters. In the case of ML applied to medical records, the utility of applying DP is to provide a mathematical assurance on the level of potential additional harm that could come to a patient if their records are included in the training cohort rather than excluded.

A necessary step in the application of DP to the training of ML models via stochastic gradient descent, as shown in
[abadi2016deep], is the application of gradient clipping. Notably, model gradients must be norm-bounded individually for each sample, not according to the aggregate across a given minibatch. That is, one should consider the gradient of the loss $\mathcal{L}_i$ induced by example $x_i$, rather than the gradient of the global loss $\mathcal{L} = \sum_i \mathcal{L}_i$. This per-example gradient clipping has also been used in the context of reinforcement learning [wang2016dueling]. In [abadi2016deep], gradient clipping is defined for a given maximum gradient norm bound $C$ by

$$\bar{g}(x_i) = g(x_i) \Big/ \max\!\left(1, \frac{\lVert g(x_i)\rVert_2}{C}\right), \qquad (1)$$
where $g(x_i)$ is the gradient of the loss with respect to the model parameters, evaluated at data input $x_i$. This gradient clipping has the effect of bounding the maximal amount of influence a single sample can have on the direction of a model update. When applied over the course of training, in conjunction with the addition of noise, the sensitivity of the final model to any one sample is therefore bounded, thanks to the composition properties of DP [abadi2016deep].
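To make the clipping rule (1) concrete, here is a minimal PyTorch sketch; the helper name `clip_per_example` and the stacked-gradient layout are our own illustration, not part of any DP library:

```python
import torch

def clip_per_example(grads, C):
    """Clip each per-example gradient to norm at most C, following (1):
    g_i <- g_i / max(1, ||g_i||_2 / C).

    grads: tensor of shape (B, ...) holding one gradient per example.
    """
    norms = grads.flatten(1).norm(dim=1)        # ||g_i||_2, shape (B,)
    scale = torch.clamp(norms / C, min=1.0)     # max(1, ||g_i||_2 / C)
    return grads / scale.view(-1, *([1] * (grads.dim() - 1)))
```

Each per-example gradient whose norm is already below $C$ is left untouched; larger ones are rescaled onto the radius-$C$ ball.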
Unfortunately, auto-differentiation libraries, like those included in PyTorch and TensorFlow, do not naturally offer the option to compute per-example gradients, as conventional ML training workflows only require a single gradient produced by averaging across the samples of the minibatch. Since a successful implementation of differentially-private model training requires per-example gradient clipping, efficiently producing per-example gradients is a critical step. One approach would be to start from the bottom and make the necessary changes to existing low-level auto-differentiation tools to enable this computation. Another tactic, which we investigate here, is to start from the high-level framework API (in this case, PyTorch) and see what options are available to effect this per-example gradient estimation.
Multiple methods in this vein have been suggested over the past several years. One such method was introduced by Goodfellow in a technical report [goodfellow2015efficient]. While very efficient, this method only applies to linear fully-connected (dense) network layers. In this report, we explain how to extend the approach of [goodfellow2015efficient] to convolutional networks, and proceed to compare its efficiency to that of other existing approaches.
2 Existing methods for computing per-example gradients
In this section, we review some of the existing techniques for computing per-example gradients at the level of the ML framework API. Algorithmic optimization can accelerate the computation, but the main speedups come from fully exploiting the parallelism of operations on the GPU. The three main approaches to per-example gradient computation are as follows:
Naive approach.
The simplest strategy consists in changing the batch size to 1: one iterates over the samples of the originally desired batch, computing the gradient corresponding to each sample separately. Since there is no parallelization, this method is very slow, but it requires a minimal amount of code. We denote it by naive in our experiments.
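A sketch of this loop for an arbitrary PyTorch model (the helper name `naive_per_example_grads` is ours; with a sum-reduced loss, the per-example gradients add up to the full-batch gradient):

```python
import torch

def naive_per_example_grads(model, loss_fn, x, y):
    """Per-example gradients via minibatches of size 1 (slow: no parallelism)."""
    per_example = []
    for i in range(x.shape[0]):
        model.zero_grad()
        loss_fn(model(x[i:i + 1]), y[i:i + 1]).backward()
        # one gradient per parameter, for sample i only
        per_example.append([p.grad.clone() for p in model.parameters()])
    return per_example
```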
Changing one step of the backpropagation.
In [goodfellow2015efficient], it was shown that one can use the auto-differentiation's intermediate results to compute the per-example gradients by hand. However, this formulation does not handle convolutions efficiently. In practice, when attempting to apply DP to the training of deep convolutional neural network (DCNN) models, multiple papers have used a transfer learning approach, training only the last linear task layers while freezing a set of pretrained CNN feature extraction layers [abadi2016deep, alain2015variance, bhowmick2018protection]. We explain in the next section how to extend this computation to CNNs using functions available in PyTorch. We denote this method by crb in our experiments, for chain-rule-based.

Using multiple copies of the model.
This strategy relies on the same idea as the first one: using minibatches of size 1. In this approach, however, one copies the model as many times as there are samples in the batch. Each copy is applied to a single example, thus parallelizing the iteration of the first approach. This approach is very fast and requires only a small amount of code. In addition, the model duplicates can share their parameters with the original model in order to minimize the training memory footprint. This method was proposed by Goodfellow in 2017, in a GitHub thread (see discussion at https://github.com/tensorflow/tensorflow/issues/4897#issuecomment290997283). We denote it by multi in our experiments.
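One way to realize the shared-parameter version of this idea in PyTorch, sketched here for a single linear layer (real implementations replicate whole modules): `expand` creates the B "copies" as views of the same storage, and `retain_grad` exposes one gradient slice per example after a single backward pass.

```python
import torch

torch.manual_seed(0)
B, n_in, n_out = 8, 4, 3
weight = torch.randn(n_out, n_in, requires_grad=True)
x = torch.randn(B, n_in)

# B "copies" of the weight that all share the original storage (no memory copy)
W_rep = weight.unsqueeze(0).expand(B, n_out, n_in)
W_rep.retain_grad()                              # keep the per-copy gradient

y = torch.bmm(W_rep, x.unsqueeze(2)).squeeze(2)  # each copy sees one example
loss = y.sum()                                   # sum of per-example losses
loss.backward()

per_example_grads = W_rep.grad                   # shape (B, n_out, n_in)
```

Because `expand` is a view, the memory footprint matches the single original parameter tensor, as claimed for multi.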
3 Per-example backpropagation for convolutional networks
In this section, we describe a way to compute per-example gradients from partial derivatives explicitly computed by auto-differentiation tools. We only describe the computation of these gradients for a single linear layer, but the extension to multiple non-linear layers is straightforward; doing so for each layer in the network results in full per-example gradients. This method is denoted crb in our experiments, which stands for chain-rule-based. We provide an implementation of this method at https://github.com/owkin/gradcnns.
3.1 Fully-connected linear layers
In the case of dense linear layers, one can refer directly to the technique introduced in [goodfellow2015efficient], which we reproduce here. Let $y = Wx$ be a linear layer of a neural network, with input $x$ and output $y$. When training a model, we wish to update the model parameters $W$ so as to minimize some output-dependent loss function $\mathcal{L}(y)$, i.e. to follow the negative gradient of $\mathcal{L}$ w.r.t. $W$. This gradient is obtained from the model output via backpropagation,

$$\frac{\partial\mathcal{L}}{\partial W_{ij}} = \sum_k \frac{\partial\mathcal{L}}{\partial y_k}\,\frac{\partial y_k}{\partial W_{ij}} = \frac{\partial\mathcal{L}}{\partial y_i}\, x_j. \qquad (2)$$
The trick of calculating per-example gradients post auto-differentiation comes from the observation of the final equality above: the partial w.r.t. a single coefficient of $W$ is the simple product of a single layer input $x_j$ with the partial of the loss with respect to a single output $y_i$. As noted by [goodfellow2015efficient], one can store the layer's input $x_b$ for each sample $b$ in the batch, and auto-differentiation tools give access to $\partial\mathcal{L}_b/\partial y$. We can then compute the per-example gradients with a simple batched outer product, which can be done efficiently on GPU: $\nabla_W \mathcal{L}_b = (\partial\mathcal{L}_b/\partial y)\, x_b^{\top}$.
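In PyTorch this batched outer product is a one-liner; a minimal sketch (the helper name is ours), where `x` is the stored layer input and `dLdy` the per-example backpropagated signal:

```python
import torch

def linear_per_example_grads(x, dLdy):
    """Per-example weight gradients of a linear layer y = W x.

    x:    layer input,           shape (B, n_in)
    dLdy: dL_b/dy from autodiff, shape (B, n_out)
    Returns one outer product per example, shape (B, n_out, n_in).
    """
    return torch.einsum('bi,bj->bij', dLdy, x)
```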
3.2 Application to convolutional layers
The computation described above uses a linear (matrix) representation which, if applied to the case of convolution, would be extremely inefficient. We describe in this section how to use convolution operations to obtain the same result when backpropagating through convolutional layers. We will do so using the PyTorch format for tensor dimensions, i.e. (batch, channels, spatial dimensions).

3.2.1 Notation
Spatial dimensions.
In order not to overcomplicate the derivation, we will work with 1D convolutions. We will refer to this spatial dimension with the variable $t$, or, in the case of the convolutional kernel, with the letter $k$. Note that the same derivations hold for higher-dimensional convolutions.
Channels.
Channels will be an essential part of our derivation, especially when dealing with the groups argument. We will refer to the input and output channels respectively with the variables $c$ and $c'$.
Batch.
We will refer to the batch dimension with the variable $b$.
3.2.2 Standard convolution
We start with the standard convolution as implemented in most ML frameworks: a set of filters, or kernels, $w$ of shape $(c'_{\max}, c_{\max}, k_{\max})$ is convolved with the input tensor $x$ of shape $(b_{\max}, c_{\max}, t_{\max})$ to form an output tensor $y$ of shape $(b_{\max}, c'_{\max}, t'_{\max})$. This discrete convolution is given by the formula (note that our definition of convolution uses an offset $t + k$ rather than $t - k$; this is both for mathematical simplicity and for consistency with PyTorch, which uses this convention as well)

$$y[b, c', t] = \sum_{c}\sum_{k} w[c', c, k]\, x[b, c, t + k], \qquad (3)$$
where we note the aggregation step performed over the input channels $c$ to produce the final convolved output of the layer. To calculate per-example gradients with respect to $w$, we are interested in differentiating sample $b$'s contribution to the loss, $\mathcal{L}_b$, where $\mathcal{L} = \sum_b \mathcal{L}_b$. By applying the same technique as in the fully-connected case, but carrying through the convolution operation, we see that the gradient with respect to the convolution kernel is

$$\frac{\partial \mathcal{L}_b}{\partial w[c', c, k]} = \sum_{t} \frac{\partial \mathcal{L}_b}{\partial y[b, c', t]}\, x[b, c, t + k].$$
Since the spatial dimensions of $x$ and $\partial\mathcal{L}_b/\partial y$ are $t_{\max}$ and $t'_{\max}$ respectively, their convolution has the same spatial dimension $k_{\max}$ as the convolution kernel. This convolution can be seen as a per-example convolution, which we denote by $\star$:

$$g[b, c', c, k] \,\equiv\, \Big(x \star \frac{\partial\mathcal{L}}{\partial y}\Big)[b, c', c, k] \,=\, \sum_{t} x[b, c, t + k]\, \frac{\partial \mathcal{L}_b}{\partial y[b, c', t]}. \qquad (4)$$
This convolution operation, although similar to the one in (3), is not available in auto-differentiation libraries. However, the groups argument in PyTorch's regular convolution allows one to evaluate such an operation. Recall that groups splits the input tensor into groups of the same size, performs independent convolutions on each group, and finally concatenates the resulting outputs. More precisely, with $G$ groups one can reshape the input to size $(b_{\max}, G, c_{\max}, t_{\max})$, and the kernel to $(G, c'_{\max}, c_{\max}, k_{\max})$, and subsequently define the group convolution as

$$y[b, g, c', t] = \sum_{c}\sum_{k} w[g, c', c, k]\, x[b, g, c, t + k]. \qquad (5)$$
In Algorithm 1, we explain how to use the group convolution (5) in order to evaluate (4). We first replace the kernel $w$ by $\partial\mathcal{L}/\partial y$ in the formula above, and swap the roles of the batch and channel axes of $x$, making sure that groups is set to $b_{\max}$. Axes are then swapped and/or combined in such a way that (4) is recovered. Namely, $x$ is reshaped to $(1, b_{\max}, c_{\max}, t_{\max})$, so that the effective batch size is 1 and the number of input channels is $b_{\max}$; and $\partial\mathcal{L}/\partial y$ is reshaped to $(b_{\max} c'_{\max}, 1, 1, t'_{\max})$, so that the number of output channels is $b_{\max} c'_{\max}$. The key is in noting that, since each sample is treated as a different input channel, grouping allows us to treat each of them in parallel. Finally, we note that after reshaping the inputs to the convolution, it is necessary, even though we operate on a 1D spatial dimension, to utilize a 2D convolution: the input-channel axis becomes an extra spatial dimension. Analogously, when adapting this procedure to the estimation of per-example gradients for 2D convolutional layers, it is necessary to use a 3D convolution.
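The following sketch (our own helper, assuming stride 1, no padding, no dilation, and a single group in the original layer; not Algorithm 1 verbatim) carries out these reshapes and evaluates (4) with a single grouped 2D convolution:

```python
import torch
import torch.nn.functional as F

def per_example_conv1d_grad(x, dLdy):
    """Per-example gradients w.r.t. a 1D convolution kernel, via one grouped
    2D convolution (stride 1, no padding, no dilation, groups=1 assumed).

    x:    layer input,           shape (B, C, T)
    dLdy: dL_b/dy from autodiff, shape (B, C_out, T_out)
    Returns g with g[b] = dL_b/dw, shape (B, C_out, C, K).
    """
    B, C, T = x.shape
    _, C_out, T_out = dLdy.shape
    K = T - T_out + 1                     # kernel size of the original layer
    # Batch becomes the (grouped) channel axis; the input-channel axis becomes
    # an extra spatial axis -- hence the 2D convolution for a 1D layer.
    inp = x.reshape(1, B, C, T)           # effective batch size 1
    ker = dLdy.reshape(B * C_out, 1, 1, T_out)
    g = F.conv2d(inp, ker, groups=B)      # (1, B*C_out, C, K)
    return g.reshape(B, C_out, C, K)
```

With `groups=B`, group $b$ convolves only sample $b$'s input with sample $b$'s backpropagated signal, so all examples are processed in one call.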
3.2.3 More options for convolutional layers
In the previous section we described how to compute per-example gradients for the case of a simple convolution. Often, however, one wants to control the stride and dilation of a convolutional layer, as well as its padding and grouping. We now describe how to integrate into the algorithm all convolution arguments available in PyTorch. The resulting procedure is described in Algorithm 2.
Stride and Dilation.
First, let us recall the definition of convolutions with stride or dilation arguments.

- Convolution with stride $s$: the output has spatial dimension $t'_{\max} = \lfloor (t_{\max} - k_{\max})/s \rfloor + 1$, with
  $y[b, c', t] = \sum_{c}\sum_{k} w[c', c, k]\, x[b, c, st + k].$

- Convolution with dilation $d$: the output has spatial dimension $t'_{\max} = t_{\max} - d(k_{\max} - 1)$, with
  $y[b, c', t] = \sum_{c}\sum_{k} w[c', c, k]\, x[b, c, t + dk].$
Note that the only difference between the two operations is which time index the scaling multiplier is applied to. Because the roles of $w$ and $x$ are essentially switched in (4), it suffices to switch those arguments in the convolution. That is, if the convolutional layer uses stride $s$ and dilation $d$, the convolution in (4) should have stride $d$ and dilation $s$.
Finally, because of the floor operation involved in the output size of a strided convolution, the output of (4) may have a larger spatial dimension than the convolution kernel itself. If this is the case, the extra values should be discarded, e.g. using Python slicing notation.
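Extending the earlier sketch with these swapped arguments and the final slice (again a hypothetical helper, assuming no padding and groups = 1; the kernel size must now be passed explicitly, since it cannot be inferred from strided output sizes):

```python
import torch
import torch.nn.functional as F

def per_example_conv1d_grad_sd(x, dLdy, kernel_size, stride=1, dilation=1):
    """Per-example 1D-convolution kernel gradients with stride and dilation.

    x:    layer input,           shape (B, C, T)
    dLdy: dL_b/dy from autodiff, shape (B, C_out, T_out)
    Returns g[b] = dL_b/dw,      shape (B, C_out, C, kernel_size).
    """
    B, C, T = x.shape
    _, C_out, T_out = dLdy.shape
    inp = x.reshape(1, B, C, T)
    ker = dLdy.reshape(B * C_out, 1, 1, T_out)
    # stride and dilation swap roles relative to the forward convolution
    g = F.conv2d(inp, ker, groups=B,
                 stride=(1, dilation), dilation=(1, stride))
    # strided layers may yield extra trailing values; slice them off
    g = g[..., :kernel_size]
    return g.reshape(B, C_out, C, kernel_size)
```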
Padding and Groups.
The padding and groups arguments from the convolutional layer can be reused in (4). For PyTorch, this means using the options

padding = layer.padding,
groups = batch_size * layer.groups,

when defining the convolution. Here, the layer variable refers to the layer whose per-example gradients we are currently computing. Note that the value of groups needs to be multiplied by the input batch size, after the reshaping of $x$.
4 Benchmarks
Figure 1: Runtime in seconds for processing 20 batches of 8 examples, in CNNs of increasing size. The channel rate is defined as the ratio between the number of channels in a layer and that of the previous one, with the first layer having 25 channels. Convolutions are performed with a kernel of size 3; ReLU activations are added after each convolution, and a max-pooling layer is added after every 2 convolutional layers. Inputs are randomly generated. Left, center and right show results for 2, 3 and 4 layers respectively. Each point is the average over 10 runs.

As explained above, the multi approach consists in vectorizing the naive approach by creating multiple copies of the same model and backpropagating through all batch samples in parallel. One could criticize this approach for using a very large amount of memory, as many copies need to be created. However, it turns out this can be done without a single copy of the original model, using only pointers to the original parameters, which uses as much memory as the other methods.
That being said, it is not clear which strategy will perform better in terms of runtime. Due to how computations are optimized on the GPU, analyzing the computational complexity of each approach is not necessarily informative. We thus run a series of experiments in order to perform this comparison empirically.
All the experiments in this section were performed on an n1-standard-8 instance on GCP, with an Nvidia P100 GPU. The image used is based on Ubuntu 18.04, with Python 3.5.3 and PyTorch 1.1 installed.
4.1 Toy networks
In a first experiment, we create convolutional architectures with 2, 3 and 4 sequential convolutional layers, such that the number of channels increases from one layer to the next according to a given ratio. In Fig. 1, we show that, for shallower networks, crb runs faster than multi as we increase this ratio. Increasing the number of layers, however, seems to favor multi. Finally, in a network with 4 layers, the two methods are competitive.
Depth and number of channels are not the only quantities that affect runtime. In another experiment, we study how the runtime changes with the batch size. For larger batches, crb seems to be the method of choice. As shown in Fig. 2, both naive and multi lead to a runtime that is linear in the batch size; crb, however, appears to be piecewise linear: as the batch size increases, the slope decreases. This behaviour is presumably due to the way crb exploits the GPU, transforming the original computation into a series of new convolutions of differing complexity.
Other factors can also come into play, and it is not always obvious which ones matter or why. For instance, increasing the kernel size of the convolutions seems to favor crb; see Fig. 3.
4.2 Realistic networks
At this point, one might wonder which of these approaches is best suited for calculating per-example gradients in practical DCNNs. Such DCNNs typically contain many more than 4 layers, as well as widely varying channel rates. To answer this question, we ran the same experiment on two popular DCNN architectures, AlexNet and VGG16. Runtime results are presented in Table 1.
Model   | Batch Size | No DP (sec) | Naive (sec) | crb (sec) | multi (sec)
--------|------------|-------------|-------------|-----------|------------
AlexNet | 16         |             |             |           |
VGG16   | 8          |             |             |           |
For relatively small networks such as AlexNet, crb performs up to fifteen times faster than naive on an Nvidia P100 GPU. It is also slightly faster than multi. However, on the larger VGG16, crb becomes slightly slower than multi. One could thus hypothesize that multi is the best option for larger networks; as noted before, however, it is not obvious that width and depth are the only relevant quantities in play. Batch size, as well as the kernel size of the convolutions, seem to be relevant as well.
Notice that we have not used batch normalization layers in any of the networks, as they mix different examples within the batch and thus make per-example gradient computations impossible. For this same reason we have not tested CNNs which natively include batch normalization layers, such as ResNet. An alternative is to use instance normalization in cases where per-example gradient clipping is necessary.
Our experiments show that both the multi and crb methods have configurations in which they are the most efficient. Both methods have their merits, while using similar amounts of GPU memory.
5 Conclusion
Both the existing multi approach and our extended crb are capable of fully utilizing GPU capabilities to efficiently compute per-example gradients. We have shown empirically that each is faster in a particular region of the parameter space of DCNN architectures; in general, it is not clear a priori which method will be more efficient for a given architecture.
Our approach is more complicated to put into practice: it requires one to adapt backpropagation hooks, as opposed to multi, for which multiple copies of the model can be created at a higher level. Notice also that our approach relies on PyTorch peculiarities (namely the groups argument of the convolutional layer) and would need to be adapted for other deep learning frameworks. Our hope is that the findings we present in this work will be useful in furthering the development and improvement of the ML components necessary for privacy-aware machine learning.
We have implemented a PyTorch version extending torch.nn, which is available at https://github.com/owkin/gradcnns.