Differentially Private Model Publishing for Deep Learning

04/03/2019 · Lei Yu, et al. · Georgia Institute of Technology

Deep learning techniques based on neural networks have shown significant success in a wide range of AI tasks. Large-scale training datasets are one of the critical factors for their success. However, when the training datasets are crowdsourced from individuals and contain sensitive information, the model parameters may encode private information and bear the risks of privacy leakage. The recent growing trend of the sharing and publishing of pre-trained models further aggravates such privacy risks. To tackle this problem, we propose a differentially private approach for training neural networks. Our approach includes several new techniques for optimizing both privacy loss and model accuracy. We employ a generalization of differential privacy called concentrated differential privacy (CDP), with both a formal and refined privacy loss analysis on two different data batching methods. We implement a dynamic privacy budget allocator over the course of training to improve model accuracy. Extensive experiments demonstrate that our approach effectively improves privacy loss accounting, training efficiency and model quality under a given privacy budget.


I Introduction

In recent years, deep learning techniques based on artificial neural networks have dramatically advanced the state of the art in a wide range of AI tasks such as speech recognition, image classification, natural language processing and game playing. Their success relies on three sources of advancement: high-performance computing, large-scale datasets, and the increasing number of open source deep learning frameworks, such as TensorFlow, Caffe, and Torch.

Privacy Concerns in Deep Learning.  However, recent studies on membership attacks and model inversion attacks have exposed potential privacy risks from a number of dimensions. First, large-scale datasets are collected from individuals via crowdsourcing platforms, containing private information such as location, images, medical, and financial data of the users. The users usually do not have any control over how their data is being used or shared once collected. Second, deep neural networks have a large number of hidden layers, leading to a large effective capacity that could be sufficient for encoding the details of some individual's data into model parameters or even memorizing the entire dataset [41]. It has been shown that individual information can be effectively extracted from neural networks [19, 34]. Therefore, there are severe privacy concerns accompanying the broad deployment of deep learning applications and deep learning as a service.

On the other hand, the publishing and sharing of trained deep learning models has been gaining growing interest. Google's cloud machine learning services provide several pre-trained models usable out-of-the-box through a set of APIs. The model owners can also publish their trained models to the cloud and allow other users to perform predictions through APIs. In mobile applications, entire models are stored on-device to enable power-efficient and low-latency inference. Transfer learning [40], a key technique of deep learning, can leverage and adapt already existing models to new classes of data, saving the effort of training the entire neural network from scratch. People who only have small datasets can use the model trained on a large dataset as a fixed feature extractor in their neural networks or adapt the model to their own domain. Transfer learning is believed to be the next driver of machine learning success in industry and will significantly stimulate the sharing of pre-trained models. A large number of pre-trained models have been publicly available in model zoo repositories [6]. In these cases, the model parameters are entirely exposed, making it easier for adversaries to launch inference attacks, such as membership attacks [34] or model inversion attacks [19], to infer sensitive data records of individuals in the training datasets. Even by providing only the query APIs to access remote trained models, the model parameters may still be extracted from prediction queries and in turn used to infer the sensitive training data [38]. Therefore, it is imperative to develop principled privacy-preserving deep learning techniques to protect private training data against adversaries with full knowledge of model parameters.

Deep learning with Differential Privacy.  Although privacy-preserving machine learning has attracted much attention over the last decade, privacy-preserving deep learning was first proposed in 2015 [33]. The proposed approach argues for privacy-preserving model training in a collaborative federated learning system and involves multiple participants jointly training a model by sharing sanitized parameters while keeping their training data private and local. The first proposal for deep learning with differential privacy was presented in 2016 [7]. Differential privacy (DP), a de facto standard for privacy that offers provable privacy guarantees, has been applied for privacy-preserving machine learning [10, 42, 32, 11, 8]. DP characterizes the difference in output between two input datasets differing by at most one element. This characterization is challenging with deep learning because the internal representations of deep neural networks are notoriously difficult to understand. Prior works [7, 28, 37] suggest using gradient norm clipping in the stochastic gradient descent (SGD) algorithm to bound the influence of any single example on the gradients and then applying differentially private mechanisms to perturb the gradients accordingly. By ensuring that each gradient descent step is differentially private, the final output model satisfies a certain level of differential privacy given the composition property. It is known that the SGD training process of a deep neural network tends to involve a large number of iterations. Given a target differential privacy guarantee, the differentially private training algorithm needs a tight estimation of the privacy loss for the composition of DP. This is necessary for the algorithm to effectively track cumulative privacy loss during the training process and, if necessary, terminate before the loss exceeds the privacy budget. Unfortunately, Abadi and his co-authors [7] have shown that the existing strong composition theorem [18] for differential privacy does not yield a tight analysis. To address this problem, the moments accountant method was proposed [7], which tracks the log moments of the privacy loss variable and provides a much tighter estimate of the privacy loss for the composition of Gaussian mechanisms under random sampling.

In this paper, however, we analyze several issues with using the differentially private SGD (DP-SGD) algorithm and privacy accounting method proposed in [7]. The first problem is related to the underestimation of privacy loss caused by data batching methods. For computational efficiency, the SGD algorithm usually takes small batches from the training dataset to iteratively compute gradients and update model parameters. The DP-SGD approach in [7] exploits the privacy amplification of random sampling to produce a tighter estimation of privacy loss. This is based on the assumption that the data batches for mini-batch SGD input are generated through random sampling with replacement on the training dataset. In practice, for better efficiency, the data batching method is implemented through random reshuffling, which randomly shuffles the training dataset and then partitions it into batches of similar size [5, 22]. We note that random sampling and random reshuffling are two different implementation methods for data batching, and our analysis and experiments show that they cause distinct privacy loss. Therefore, the composition of differentially private mechanisms depends on how data is accessed by each mechanism. Simply treating reshuffling and random sampling as the same data access will lead to the underestimation of privacy loss.

The second issue is the need for a tight analysis of cumulative privacy loss for DP-SGD, which tends to have a large number of iterations. To address this problem, we propose the use of concentrated differential privacy (CDP), a generalization of differential privacy recently introduced by Dwork and Rothblum [17]. CDP focuses on the cumulative privacy loss for a large number of computations and provides a sharper analysis tool. Based on CDP, we analyze the privacy loss under different data batching methods and develop privacy accounting methods for each respectively. In particular, our analysis for random reshuffling provides a tighter estimation of privacy loss than the strong composition theorem even when exploiting the privacy amplification effect of random sampling. For the random sampling based batching method, we show that CDP is not able to capture the privacy amplification effect of random sampling. We address this problem by using a relaxation and a conversion to traditional (ε, δ)-differential privacy. Compared with the moments accountant method, which requires numerical computation of log moments for a range of moment orders, our method can provide a slightly loose but quick estimation of the privacy loss under random sampling with a simpler calculation.

The third novelty of our approach to differentially private deep learning is our development of a dynamic privacy budget allocation to improve model accuracy under differentially private training. The perturbed gradients during the training process inevitably degrade model accuracy. We aim to provide differential privacy guarantees on the final output models. This means that we are much less concerned with the privacy loss of a single iteration. This provides opportunities to optimize the model accuracy via adjusting privacy budget allocation for every training iteration. In this paper, we propose a set of dynamic privacy budget allocation methods and our extensive experiments demonstrate benefits for improving the model accuracy. It is worth noting that the techniques proposed in this paper not only apply to neural networks but also may apply to any other iterative learning algorithms.

The remainder of the paper is as follows. We review necessary background in Section II, provide an overview of our approach in Section III, and then give a detailed technical development in Section IV. Section V describes experimental results. Some discussions are given in Section VI. Section VII presents related work and Section VIII concludes the paper. Deferred proofs are provided in the Appendix.

II Background

II-A Deep Learning

Deep learning uses neural networks that are defined as a hierarchical composition of parameterized functions to model the input data. For supervised learning, the training data are labeled with correct classes, and a multi-layer neural network is deployed to model the correlation between data instances and their labels. A typical neural network consists of multiple layers of neurons. Each layer of neurons is parameterized by a weight matrix W and a bias vector b. Each layer applies an affine transformation to the previous layer's output and then computes an activation function f over that. Typical examples of the activation function f are sigmoid, rectified linear unit (ReLU) and tanh.

The training of a neural network aims to learn the parameters θ that minimize a loss function L(θ) defined to represent the penalty for misclassifying the training data. It is usually a non-convex optimization problem and solved by gradient descent. The gradient descent method iteratively computes the gradient of the loss function and updates the parameters every step until the loss converges to a local optimum. In practice, the training of neural networks uses the mini-batch stochastic gradient descent (SGD) algorithm, which is much more efficient for large datasets. At each step a batch B of examples is sampled from the training dataset and the gradient of the average loss is computed, i.e., $\frac{1}{|B|}\sum_{x \in B}\nabla_\theta L(\theta, x)$. The SGD algorithm then applies the following update rule for the parameters θ:

(1) $\theta \leftarrow \theta - \eta \cdot \frac{1}{|B|}\sum_{x \in B}\nabla_\theta L(\theta, x)$

where η is the learning rate. The running time of the mini-batch SGD algorithm is usually expressed as the number of epochs. Each epoch consists of all of the batches of the training dataset, i.e., in an epoch every example has been seen once. Within an epoch, the pass of one batch of examples for updating the model parameters is called one iteration.
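For concreteness, the following is a minimal NumPy sketch of the mini-batch SGD update in Eq. (1); `loss_grad` is a hypothetical per-example gradient function, and the batches are generated by random reshuffling.

```python
import numpy as np

def minibatch_sgd(theta, data, labels, loss_grad, lr=0.05, batch_size=600, epochs=10):
    """Plain mini-batch SGD: one epoch is a full pass over the reshuffled dataset."""
    n = len(data)
    for _ in range(epochs):
        perm = np.random.permutation(n)            # random reshuffling
        for start in range(0, n, batch_size):
            idx = perm[start:start + batch_size]   # one batch = one iteration
            # average gradient of the loss over the batch
            grad = np.mean([loss_grad(theta, data[i], labels[i]) for i in idx], axis=0)
            theta = theta - lr * grad              # update rule in Eq. (1)
    return theta
```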

II-B Differential Privacy

Differential privacy is a rigorous mathematical framework that formally defines the privacy properties of data analysis algorithms. Informally it requires that any changes to a single data point in the training dataset can only cause statistically insignificant changes to the algorithm’s output.

Definition 1 (Differential Privacy [14]).

A randomized mechanism M provides (ε, δ)-differential privacy if for any two neighboring databases D and D′ that differ in only a single entry, and for every set S of possible outputs,

(2) $\Pr[\mathcal{M}(D) \in S] \le e^{\epsilon}\,\Pr[\mathcal{M}(D') \in S] + \delta$

If δ = 0, M is said to satisfy ε-differential privacy. In the rest of this paper, we write (ε, δ)-DP for short.

The standard approach to achieving differential privacy is the sensitivity method [15, 14] that adds to the output some noise that is proportional to the sensitivity of the query function. The sensitivity measures the maximum change of the output due to the change of a single database entry.

Definition 2 (Sensitivity [15]).

The sensitivity of a query function f is

(3) $\Delta = \max_{D, D'} \lVert f(D) - f(D') \rVert$

where D, D′ are any two neighboring datasets that differ in at most one element, and ∥·∥ denotes the L1 or L2 norm.

In this paper, we choose the Gaussian mechanism that uses the L2 norm sensitivity. It adds zero-mean Gaussian noise with variance Δ₂²σ² in each coordinate of the output f(D), as

(4) $\mathcal{M}(D) = f(D) + \mathcal{N}(0,\, \Delta_2^2\sigma^2 I)$

It satisfies (ε, δ)-DP if $\sigma \ge \sqrt{2\log(1.25/\delta)}/\epsilon$ and ε ∈ (0, 1) [16].
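As an illustration, here is a minimal NumPy sketch of the Gaussian mechanism of Eq. (4) together with the classical calibration cited above; the function names are ours.

```python
import numpy as np

def gaussian_mechanism(f_output, l2_sensitivity, sigma):
    """Perturb a query output with zero-mean Gaussian noise whose standard deviation
    is sigma * l2_sensitivity in every coordinate, as in Eq. (4)."""
    noise = np.random.normal(0.0, sigma * l2_sensitivity, size=np.shape(f_output))
    return np.asarray(f_output) + noise

def sigma_for_eps_delta(eps, delta):
    """Classical calibration [16]: sigma >= sqrt(2 ln(1.25/delta)) / eps gives
    (eps, delta)-DP for eps in (0, 1)."""
    return np.sqrt(2.0 * np.log(1.25 / delta)) / eps
```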

II-C Concentrated Differential Privacy

Concentrated differential privacy (CDP) is a generalization of differential privacy recently introduced by Dwork and Rothblum [17]. It aims to make privacy-preserving algorithms more practical for large numbers of computations than traditional DP while still providing strong privacy guarantees. It allows the computation to have much less concern about single-query loss but high-probability bounds for the cumulative loss, and provides a sharper and more accurate analysis of the cumulative loss for multiple computations compared to the popular (ε, δ)-DP.

CDP considers the privacy loss on an outcome o as a random variable when the randomized mechanism M operates on two adjacent databases D and D′:

(5) $L^{(o)}_{(\mathcal{M}(D)\,\|\,\mathcal{M}(D'))} = \log \frac{\Pr[\mathcal{M}(D)=o]}{\Pr[\mathcal{M}(D')=o]}$

The (ε, δ)-DP guarantee ensures that the privacy loss variable is bounded by ε but exceeds that with probability no more than δ. As a relaxation to that, (μ, τ)-concentrated differential privacy, (μ, τ)-CDP for short [17], ensures that the mean (i.e., expectation) of the privacy loss is no more than μ and that the probability of the loss exceeding its mean by an amount of t·τ is bounded by e^(−t²/2). An alternative formulation of CDP to Dwork and Rothblum's (μ, τ)-CDP is proposed by Bun and Steinke [9], called "zero-concentrated differential privacy" (zCDP for short). Instead of being mean-concentrated as in (μ, τ)-CDP, zCDP makes the privacy loss concentrated around zero (hence the name), still following a sub-Gaussian distribution such that larger deviations from zero become increasingly unlikely.

Definition 3 (Zero-Concentrated Differential Privacy (zCDP) [9]).

A randomized mechanism M is ρ-zero concentrated differentially private (i.e., ρ-zCDP) if for any two neighboring databases D and D′ that differ in only a single entry and all α ∈ (1, ∞),

(6) $D_\alpha(\mathcal{M}(D)\,\|\,\mathcal{M}(D')) \le \rho\alpha$

where $D_\alpha(\mathcal{M}(D)\,\|\,\mathcal{M}(D'))$ is the α-Rényi divergence between the distributions of M(D) and M(D′).

The (ε, δ)-DP guarantee bounds the privacy loss by ensuring Pr[L^(o) > ε] ≤ δ. In contrast, zCDP entails a bound on the moment generating function of the privacy loss L^(o), indicated by an equivalent form of (6):

(7) $\mathbb{E}\!\left[e^{(\alpha-1)L^{(o)}}\right] \le e^{(\alpha-1)\alpha\rho}$

This implies that for zCDP, the privacy loss is assumed to be a sub-Gaussian random variable with a strong tail decay property, namely, Pr[L^(o) > t + ρ] ≤ e^(−t²/(4ρ)) for all t > 0 [9]. The following propositions are restatements of some zCDP results given in [9] that will be used in our paper.

Proposition 1.

If M provides ρ-zCDP, then M provides (ρ + 2√(ρ log(1/δ)), δ)-DP for any δ > 0.

Proposition 2.

The Gaussian mechanism with noise N(0, Δ₂²σ²I) satisfies (1/(2σ²))-zCDP.

We use zCDP instead of the original (μ, τ)-CDP because zCDP is comparable to (ε, δ)-DP, as indicated by Proposition 1, and is immune to post-processing, while (μ, τ)-CDP is not closed under post-processing [9].
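For reference, Propositions 1 and 2 translate directly into two small helpers; this is a sketch with names of our own choosing.

```python
import math

def zcdp_to_dp(rho, delta):
    """Proposition 1: a rho-zCDP mechanism satisfies
    (rho + 2*sqrt(rho*log(1/delta)), delta)-DP for any delta > 0."""
    return rho + 2.0 * math.sqrt(rho * math.log(1.0 / delta))

def gaussian_zcdp(sigma):
    """Proposition 2: the Gaussian mechanism with noise N(0, (sigma*Delta_2)^2 I)
    satisfies (1/(2*sigma^2))-zCDP."""
    return 1.0 / (2.0 * sigma ** 2)

# example: convert the zCDP cost of one Gaussian step at sigma = 8 into (eps, 1e-5)-DP
eps = zcdp_to_dp(gaussian_zcdp(8.0), 1e-5)
```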

II-D Composition

Differential privacy offers elegant composition properties that enable more complex algorithms and data analysis tasks via the composition of multiple differentially private building blocks. Under composition, the privacy guarantee should degrade gracefully even when the multiple outputs of the building blocks are subjected to joint analysis.

For a sequential composition of k mechanisms M₁, …, M_k satisfying (εᵢ, δᵢ)-DP for i = 1, …, k respectively, the basic composition result [16] shows that the privacy composes linearly, i.e., the sequential composition satisfies (Σᵢ εᵢ, Σᵢ δᵢ)-DP. When εᵢ = ε and δᵢ = δ, the strong composition bound from [18] states that the composition satisfies (ε√(2k log(1/δ′)) + kε(e^ε − 1), kδ + δ′)-DP. For zCDP, there is a simple linear composition property [9]:

Theorem 1.

If two randomized mechanisms M₁ and M₂ satisfy ρ₁-zCDP and ρ₂-zCDP respectively, their sequential composition satisfies (ρ₁ + ρ₂)-zCDP.

Compared with (ε, δ)-DP, CDP provides a tighter bound on the cumulative privacy loss under composition, which makes it more suitable for algorithms running a large number of iterations. In other words, while providing the same privacy guarantee, CDP allows a lower noise scale and thus better accuracy. Consider the k-fold iterative composition of a Gaussian mechanism with noise N(0, Δ₂²σ²I). To guarantee a final (ε, δ)-DP bound by assigning (ε′, δ′)-DP to every iteration under the strong composition theorem, the permitted loss ε′ of each iteration is on the order of ε/√(k log(1/δ)), which will be very low when k is large, and the required noise scale σ grows accordingly. In contrast, when using ρ-zCDP for every iteration, the composed kρ-zCDP satisfies (kρ + 2√(kρ log(1/δ)), δ)-DP, and it is easy to show that the resulting noise scale is multiple times smaller than the noise scale derived under per-iteration (ε′, δ′)-DP. On the other hand, the single parameter ρ of zCDP and its linear composition naturally fit the concept of a privacy budget. Thus, zCDP is an appropriate choice for privacy accounting.
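The contrast can be made concrete with a back-of-the-envelope sketch that compares the noise scale needed for k iterations under the two accounting routes. It assumes the classical Gaussian calibration of Eq. (4) and Propositions 1 and 2; the budget-splitting choices are illustrative, not the paper's exact derivation.

```python
import math

def sigma_via_strong_composition(eps, delta, k):
    """Rough per-iteration calibration: give each of k iterations an (eps_i, delta_i)-DP
    budget so that the strong composition theorem [18] yields roughly (eps, delta)-DP,
    ignoring the lower-order k*eps_i*(e^eps_i - 1) term (hence the factor-2 slack)."""
    delta_i = delta / (2 * k)
    eps_i = eps / (2 * math.sqrt(2 * k * math.log(2 / delta)))
    return math.sqrt(2 * math.log(1.25 / delta_i)) / eps_i

def sigma_via_zcdp(eps, delta, k):
    """Find the total rho with rho + 2*sqrt(rho*log(1/delta)) = eps (Proposition 1),
    split it evenly over k iterations, and invert Proposition 2 for sigma."""
    L = math.log(1.0 / delta)
    rho_total = (math.sqrt(L + eps) - math.sqrt(L)) ** 2
    rho_per_step = rho_total / k
    return math.sqrt(1.0 / (2.0 * rho_per_step))

if __name__ == "__main__":
    for k in (1000, 10000):
        print(k, sigma_via_strong_composition(1.0, 1e-5, k), sigma_via_zcdp(1.0, 1e-5, k))
```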

III Overview

Because it is difficult to characterize the maximum difference of the model parameters over any two neighboring datasets for neural networks, differentially private deep learning [7, 28, 37] relies on differentially private stochastic gradient descent (DP-SGD) to control the influence of training data on the model. This approach explicitly bounds per-example gradients in every iteration by clipping the L2 norm of the gradient vectors. Given a clipping threshold C, this is done by replacing a gradient vector g with g/max(1, ∥g∥₂/C), which scales g down to norm C if ∥g∥₂ > C. A Gaussian mechanism with L2 norm sensitivity C is then applied to perturb the gradients before the gradient descent step in Eq. (1) updates the model parameters. Because each SGD step is differentially private, by the composition property of differential privacy, the final model parameters are also differentially private. The problem with DP-SGD is that the training of a deep neural network (DNN) tends to have a large number of iterations, which causes a large cumulative privacy loss at the end. Therefore, a tight estimation of the privacy loss under composition is critical for allowing a lower noise scale or more training iterations (for desired accuracy) when we have a fixed privacy budget.

To analyze the cumulative privacy loss of DP-SGD, we employ concentrated differential privacy (CDP), which was developed to accommodate a larger number of computations and provides a sharper and tighter analysis of privacy loss than the strong composition theorem of (ε, δ)-DP. One way to track the privacy loss of DP-SGD is the Moments Accountant (MA) method proposed by Abadi et al. [7]. It assumes that the data batches for mini-batch SGD are generated by randomly sampling examples from the training dataset with replacement; MA takes advantage of the privacy amplification effect of random sampling to achieve a much tighter estimate on privacy loss than the strong composition theorem. It has been shown in [27] that running an (ε, δ)-differentially private mechanism over a set of examples each of which is independently sampled with probability q (0 < q < 1) achieves (log(1 + q(e^ε − 1)), qδ)-DP. However, in practice, random batches are generated by randomly shuffling examples and then partitioning them into batches for computational efficiency, which is distinct from random sampling with replacement. By analyzing the privacy loss under these two data batching methods, random sampling with replacement and random reshuffling respectively, we show that 1) random sampling with replacement and random reshuffling result in different privacy loss; and 2) privacy accounting using the MA method underestimates the actual privacy loss of neural network training when it simply regards random reshuffling as random sampling with replacement. To address these problems, we develop different privacy accounting methods for each of the batching methods, and our algorithm makes proper choices depending on which method is used for data batching. For privacy accounting under random-sampling-based batching, we show that CDP is unable to capture the privacy amplification effect of random sampling. To address that, we propose a relaxation of zCDP and convert it to (ε, δ)-DP. Compared with MA, our method uses explicit expressions to compute the privacy loss; it produces slightly looser estimates but is easier to compute when the noise scale of the Gaussian mechanism changes dynamically during training.

In our approach, dynamic privacy budget allocation is applied to DP-SGD to improve the model accuracy. For model publishing, the privacy loss of each learning step is not our primary concern. This allows us to allocate different privacy budgets to different training epochs as long as we maintain the same overall privacy guarantee. Our dynamic budget allocation approach is in contrast to the previous work [7], which employs a uniform privacy budget allocation, and uses the same noise scale in each step of the whole training process. Our dynamic privacy budget allocation approach leverages several different ways to adjust the noise scale. Our experimental results demonstrate that this approach achieves better model accuracy while retaining the same privacy guarantee.

Algorithm 1 presents our DP-SGD algorithm. In each iteration, a batch of examples is sampled from the training dataset and the algorithm computes the gradient of the loss on the examples in the batch and uses the average in the gradient descent step. The gradient clipping bounds per-example gradients by L2 norm clipping with a threshold C. The Gaussian mechanism adds random noise N(0, σ_t²C²I) to the sum of the clipped gradients to perturb them in every iteration. We have a total privacy budget ρ_total and a cumulative privacy cost c. The way to update c depends on which batching method is used. If the privacy cost exceeds the total budget, the training is terminated. In the pseudo code, the schedule function is used to obtain the noise scale σ_t for the current training step t, according to a schedule that decides how the noise scale is adjusted during the training time.

Input: Training examples {x₁, …, x_N}, learning rate η_t, group size B, gradient norm bound C, total privacy budget ρ_total
1  Initialize θ₀ randomly;
2  Initialize cumulative privacy loss c ← 0;
3  for t = 1, 2, … do
4      Dynamic privacy budget allocation:
5          σ_t ← Schedule(t);
6          update c according to the data batching method, σ_t and B;
7          if c > ρ_total, break;
8      Data batching:
9          Take a batch B_t of data samples from the training dataset;
10     Compute gradient:
11         For each x_i ∈ B_t, g_t(x_i) ← ∇_θ L(θ_t, x_i);
12     Clip gradient:
13         ḡ_t(x_i) ← g_t(x_i) / max(1, ∥g_t(x_i)∥₂ / C);
14     Add noise:
15         g̃_t ← (1/|B_t|) ( Σ_i ḡ_t(x_i) + N(0, σ_t²C²I) );
16     Descent:
17         θ_{t+1} ← θ_t − η_t · g̃_t;
18 Output θ_T;
Algorithm 1 Differentially Private SGD Algorithm
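To make the loop concrete, here is a minimal NumPy sketch of Algorithm 1 under random reshuffling. `loss_grad` and `schedule` are hypothetical callables (a per-example gradient function and a noise-scale schedule), and the privacy accounting uses the epoch-level zCDP rule developed in Section IV-B1 (one epoch at noise scale σ costs 1/(2σ²)); this is a sketch, not the exact implementation used in our experiments.

```python
import numpy as np

def dp_sgd(theta, data, labels, loss_grad, schedule, rho_total,
           lr=0.05, batch_size=600, clip_C=4.0, max_epochs=100):
    """Sketch of Algorithm 1 with random reshuffling and epoch-level zCDP accounting."""
    n = len(data)
    rho_spent = 0.0
    for epoch in range(max_epochs):
        sigma = schedule(epoch)
        rho_spent += 1.0 / (2.0 * sigma ** 2)   # Proposition 2 + Theorem 2 (per epoch)
        if rho_spent > rho_total:               # budget exhausted: stop training
            break
        perm = np.random.permutation(n)         # random reshuffling into disjoint batches
        for start in range(0, n, batch_size):
            idx = perm[start:start + batch_size]
            grads = []
            for i in idx:                       # per-example gradients
                g = loss_grad(theta, data[i], labels[i])
                g = g / max(1.0, np.linalg.norm(g) / clip_C)   # L2 clipping
                grads.append(g)
            noise = np.random.normal(0.0, sigma * clip_C, size=theta.shape)
            g_tilde = (np.sum(grads, axis=0) + noise) / len(idx)  # noisy average gradient
            theta = theta - lr * g_tilde        # descent step
    return theta
```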

IV Details of Our Approach

In this section, we present the details of our approach for differentially private deep learning. We first propose our dynamic privacy budget allocation techniques and then develop privacy accounting methods based on zCDP for different data batching methods.

IV-A Dynamic Privacy Budget Allocation

In Algorithm 1, the privacy budget allocated to an epoch decides the noise scale of the Gaussian mechanism used by each iteration within that epoch. For a given total privacy budget ρ_total, the final model accuracy depends on how the privacy budget is distributed over the training epochs. Our approach aims to optimize the budget allocation over the training process to obtain a differentially private DNN model with better accuracy.

Concretely, our dynamic privacy budget allocation follows the idea that, as the model accuracy converges, it is expected to have less noise on the gradients, which allows the learning process to get closer to the local optimal spot and achieve better accuracy. A similar strategy has been applied to the learning rate of DNNs in common practice. It is often recommended to reduce the learning rate as the training progresses, instead of using a constant learning rate throughout all epochs, to achieve better accuracy [3, 12]. Therefore, we propose a set of methods for privacy budget allocation, which effectively improve the model accuracy by dynamically reducing the noise scale over the training time (as demonstrated by our experiments).

IV-A1 Adaptive schedule based on public validation dataset

One approach for adjusting the noise scale is to monitor the validation error during training and reduce the noise scale whenever the validation error stops improving. We propose an adaptive privacy budget allocation that dynamically reduces the noise scale according to the validation accuracy: every time the validation accuracy improves by less than a given threshold, the noise scale is reduced by a decay factor k, until the total privacy budget runs out. However, when the validation dataset is sampled from the private training dataset, the schedule has some dependency on the private dataset, which adds to the privacy cost. In this case, if we can leverage a small publicly available dataset from the same distribution and use it as our validation dataset, then it will not incur any additional privacy loss.

In our approach, with a public validation dataset, the validation accuracy is checked periodically during the training process to determine if the noise scale needs to be reduced for subsequent epochs. The epochs over which the validation is performed are referred to as validation epochs. Let σ_e be the noise scale for the DP-SGD training in validation epoch e, and v_e be the corresponding validation accuracy. The noise scale for the subsequent epochs is adjusted based on the accuracy difference between the current validation epoch e and the previous validation epoch e−1. Initially, σ₁ = σ₀, and

(8) $\sigma_{e+1} = k\,\sigma_e, \quad \text{if } v_e - v_{e-1} \le \delta$
(9) $\sigma_{e+1} = \sigma_e, \quad \text{otherwise}$

The updated noise scale is then applied to the training until the next validation epoch. The above equations amount to saying that if the improvement of the validation accuracy is less than the threshold δ, it triggers the decay of the noise scale, where k (0 < k < 1) is the decay rate, a hyperparameter for the schedule. We note that the validation accuracy may not increase monotonically as the training progresses, and its fluctuations may cause unnecessary reduction of the noise scale and thus waste of the privacy budget. This motivates us to use the moving average of the validation accuracy to improve the effectiveness of validation-based noise scale adjustment: at validation epoch e, we define an averaged validation accuracy over the previous m validation epochs, up to and including e, as follows:

(10) $\bar{v}_e = \frac{1}{m}\sum_{i=e-m+1}^{e} v_i$

The schedule checks the averaged validation accuracy every given number of validation epochs (the check period) and compares the current result with that of the last check to decide if the noise scale needs to be reduced according to Eq. (8).
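A generator-style sketch of this validation-based schedule is given below; `validate()` is a hypothetical callback returning accuracy on the public validation set, and the parameter names (window, period, threshold) are illustrative rather than the paper's notation.

```python
def validation_based_schedule(sigma0, k, threshold, window, period, validate):
    """Every `period` validation epochs, compare the moving average of the last
    `window` validation accuracies (Eq. (10)) with the average at the previous check,
    and multiply the noise scale by k (0 < k < 1) if the improvement is below
    `threshold`."""
    sigma = sigma0
    history = []            # validation accuracies, one per validation epoch
    last_avg = None
    epoch = 0
    while True:
        yield sigma         # noise scale used for the next epoch of DP-SGD
        epoch += 1
        history.append(validate())
        if epoch % period == 0 and len(history) >= window:
            avg = sum(history[-window:]) / window
            if last_avg is not None and avg - last_avg < threshold:
                sigma *= k  # decay the noise scale
            last_avg = avg
```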

IV-A2 Pre-defined schedules

When the public validation dataset is not available, we propose to use an alternative approach that pre-defines how the noise scale decreases over time without accessing any datasets or checking the model accuracy. Concretely, in our approach, the noise scale is reduced over time according to some decay functions. The decay functions update the noise scale by epoch, while the noise scale stays the same for every iteration within an epoch. Leveraging the Gaussian mechanism that adds noise from N(0, σ_t²C²I), we provide four instances of decay functions, all of which take the epoch number t as an argument.

a) Time-Based Decay: It is defined with the mathematical form

$\sigma_t = \frac{\sigma_0}{1 + kt}$

where σ₀ is the initial noise scale, t is the epoch number and k (k > 0) is the decay rate. When k = 1/T for a constant T, it is known as "search-then-converge" [13], which decreases the noise scale slowly during the SGD search phase when t is less than the "search time" T, and decreases the noise roughly as 1/t when t is greater than T.

b) Exponential Decay: It has the mathematical form

$\sigma_t = \sigma_0\, e^{-kt}$

where k (k > 0) is the decay rate.

c) Step Decay: Step decay reduces the noise scale by some factor every few epochs. The mathematical form is

$\sigma_t = \sigma_0 \cdot k^{\lfloor t/\text{period} \rfloor}$

where k (0 < k < 1) is the decay factor and period decides how often to reduce the noise in terms of the number of epochs.

d) Polynomial Decay: It has the mathematical form

$\sigma_t = (\sigma_0 - \sigma_{end})\left(1 - \frac{t}{\text{period}}\right)^{p} + \sigma_{end}$

where p is the decay power and p ≥ 1. A polynomial decay function is applied to the initial σ₀ within the given number of epochs defined by period to reach σ_end (σ_end < σ₀). When p = 1, this is a linear decay function.

The schedules for noise decay are not limited to the above four instances. In this paper we choose to use these four because they are simple, representative, and also used for learning rate decay in the performance tuning of DNNs. Users can apply them in various ways. For example, a user can use them in the middle of the training phase, but keep a constant noise scale before and after. Note that in the time-based decay and exponential decay, t can be replaced by ⌊t/period⌋ as done in the step decay, such that the decay is applied every period number of epochs. The polynomial decay requires a specific end noise scale σ_end after period epochs, and we make the noise scale constant after the training time exceeds period epochs.
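A small sketch implementing the four decay functions as described above; σ₀ is the initial noise scale and t the epoch number.

```python
import math

def time_decay(sigma0, k, t):
    """Time-based decay: sigma_t = sigma0 / (1 + k*t)."""
    return sigma0 / (1.0 + k * t)

def exp_decay(sigma0, k, t):
    """Exponential decay: sigma_t = sigma0 * exp(-k*t)."""
    return sigma0 * math.exp(-k * t)

def step_decay(sigma0, k, period, t):
    """Step decay: multiply the noise scale by k every `period` epochs."""
    return sigma0 * (k ** (t // period))

def poly_decay(sigma0, sigma_end, power, period, t):
    """Polynomial decay from sigma0 to sigma_end over `period` epochs, constant afterwards."""
    t = min(t, period)
    return (sigma0 - sigma_end) * (1.0 - t / period) ** power + sigma_end
```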

IV-A3 Privacy Preserving Parameter Selection

The proposed schedules require a set of pre-defined hyperparameters, such as the decay rate and period. Their values decide the training time and affect the final model accuracy. It is desirable to find the optimal hyperparameters for the schedules to produce the most accurate model. A straightforward approach is to test a list of k candidates by training k neural networks respectively and trivially choose the one that achieves the highest accuracy, though this adds the privacy cost up to k times that of a single training run. A better approach is to apply differentially private parameter tuning, such as the mechanism proposed by Gupta et al. [11]. The idea is to partition the dataset into k+1 equal portions, train k models using the k candidate schedules on different data portions respectively, and evaluate the number of incorrect predictions of each model, denoted by z_i (1 ≤ i ≤ k), on the remaining data portion. Then, the Exponential Mechanism [29] is applied, which selects and outputs a candidate i with probability proportional to exp(−εz_i/2). This parameter tuning procedure satisfies ε-DP, and accordingly satisfies (ε²/2)-zCDP [9].
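A sketch of this selection step, assuming the error counts z_i have sensitivity 1 as in the description above; the function name is ours.

```python
import numpy as np

def exponential_mechanism_select(errors, eps):
    """Pick one of k candidate schedules given their error counts z_i on the held-out
    portion, with probability proportional to exp(-eps * z_i / 2). With sensitivity 1,
    this selection is eps-DP and hence (eps^2/2)-zCDP."""
    errors = np.asarray(errors, dtype=float)
    scores = -eps * errors / 2.0
    scores -= scores.max()              # numerical stability before exponentiation
    probs = np.exp(scores)
    probs /= probs.sum()
    return np.random.choice(len(errors), p=probs)
```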

IV-B Refined Privacy Accountant

The composition property of zCDP allows us to easily compute the cumulative privacy loss for the iterative SGD training algorithm. Suppose that each iteration satisfies ρ-zCDP and the training runs T iterations; then the whole training process satisfies (Tρ)-zCDP. In this section we show that 1) the composition can be further refined by considering the properties of the mini-batch SGD algorithm, and 2) more importantly, different batching methods lead to different privacy loss. In particular, we analyze the privacy loss composition under two common batching methods: random sampling with replacement and random reshuffling. With random reshuffling, the training dataset is randomly shuffled and then partitioned into batches of similar size, and SGD sequentially processes one batch at a time; it is a random sampling process without replacement. With random sampling with replacement, each example in a batch is independently sampled from the training dataset with replacement. Because these two data batching methods have different privacy guarantees, for tracking privacy loss correctly, it is important for users to choose the right accounting method based on the batching method they use.

IV-B1 Under random reshuffling

With random reshuffling, SGD takes disjoint data batches as input within an epoch. We note that the existing results [17, 9, 15] on the composition of a sequence of differentially private mechanisms assume that each mechanism runs on the same dataset as input. It is expected that their composition has less cumulative privacy loss if each of the differentially private mechanisms runs on a disjoint dataset. The formal composition result in this scenario is detailed in Theorem 2. All the proofs of lemmas and theorems in this paper can be found in the Appendix.

Theorem 2.

Suppose that a mechanism M consists of a sequence of k adaptive mechanisms M₁, …, M_k, where each Mᵢ satisfies ρᵢ-zCDP (1 ≤ i ≤ k). Let D₁, …, D_k be the result of a randomized partitioning of the input domain. The mechanism M(D) = (M₁(D ∩ D₁), …, M_k(D ∩ D_k)) satisfies

(11) $\max_i\, \rho_i\text{-zCDP}$

Theorem 2 provides a tighter characterization of the privacy loss for the composition of mechanisms having disjoint input data. Assume ρᵢ = ρ for all i. Given Theorem 2, it is trivial to show that the mechanism M satisfies ρ-zCDP. Compared with a guarantee of (kρ)-zCDP obtained from the linear composition property of Theorem 1, the total privacy loss of sequential computations on disjoint datasets is just ρ, equivalent to one computation step in the sequence. For (ε, δ)-DP, a similar result, the parallel composition theorem in [30], says that when each εᵢ-differentially private mechanism queries a disjoint subset of the data in parallel and works independently, their composition provides (maxᵢ εᵢ)-DP instead of the (Σᵢ εᵢ)-DP derived from naive composition.

In the differentially private mini-batch SGD algorithm shown in Algorithm 1, each iteration step satisfies (1/(2σ²))-zCDP according to Proposition 2. Suppose that the iterations from the same epoch use the same noise scale σ and each uses a disjoint data batch. Then, by Theorem 2, the computation of this epoch still satisfies (1/(2σ²))-zCDP. Because the training dataset is repeatedly used every epoch, the composition of the epoch-level computations follows the normal composition of Theorem 1. Thus, when the training runs a total of E epochs and each epoch satisfies ρ-zCDP, the whole training procedure satisfies (Eρ)-zCDP.
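The resulting epoch-level accounting is a one-liner; a sketch:

```python
def zcdp_accountant_reshuffle(sigmas_per_epoch):
    """Cumulative zCDP under random reshuffling: within an epoch the disjoint batches
    cost only the per-iteration 1/(2*sigma_e^2) (Theorem 2), and epochs then compose
    linearly (Theorem 1)."""
    return sum(1.0 / (2.0 * s ** 2) for s in sigmas_per_epoch)

# example: 100 epochs at sigma = 8 spend 100 / (2 * 64) = 0.78125 in zCDP
rho = zcdp_accountant_reshuffle([8.0] * 100)
```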

IV-B2 Under random sampling with replacement

We have shown a privacy amplification effect resulting from the disjoint data access of every iteration within one epoch under random reshuffling. In contrast, the MA method [7] exploits the privacy amplification effect of random sampling with replacement. In this section, we examine how random sampling with replacement affects the privacy loss in terms of zCDP. Intuitively, random sampling with replacement introduces more uncertainty than the random reshuffling process, which samples data batches without replacement. However, our analysis shows that CDP cannot characterize the privacy amplification effect of random sampling. This is because of the restrictive notion of sub-Gaussianity in CDP, which requires the moment constraints to hold at all orders α ∈ (1, ∞) in Eq. (7). To address this problem, we propose a relaxation of zCDP and convert it to (ε, δ)-DP. This then allows us to capture the privacy amplification of random sampling.

Suppose a new mechanism M′ runs a ρ-zCDP mechanism M on a random subsample of dataset D, where each example is independently sampled with probability q. Without loss of generality, we fix D and consider a neighboring dataset D′ that contains one additional example x_n; let T be any subsample that does not include x_n. Because x_n is sampled with probability q, M′(D′) is distributed identically to M′(D) with probability 1 − q, and follows the output distribution of M on a subsample that includes x_n with probability q. Therefore, the following holds:

(12)

From (12), it is easy to prove that M′ still satisfies ρ-zCDP. The proof details can be found in Appendix -D.

No privacy amplification for M′ in terms of CDP. Consider the α-Rényi divergence between M′(D) and M′(D′):

(13)

When α → ∞, we have

(14)

This shows that the sampling does not produce any reduction of the bound ρα on the α-Rényi divergence at large orders. Therefore, by definition, zCDP is not able to capture the privacy amplification effect of random sampling.

The reason there is no privacy amplification with random sampling on zCDP is that concentrated DP requires a sub-Gaussian distribution for the privacy loss, and thus the moments must be bounded by (α − 1)αρ at all orders α ∈ (1, ∞). This is a fairly strong condition. Equivalently, it requires that a mechanism has α-Rényi divergence bounded by ρα for all α ∈ (1, ∞). To demonstrate this, we consider a Gaussian mechanism with noise scale σ and sampling ratio q, and numerically compute the α-Rényi divergence with and without sampling, using σ = 4 and q = 0.01. Results are shown in Figure 1. Without sampling, the divergence is linear in α because D_α(M(D) ∥ M(D′)) = α/(2σ²). However, we can see that the divergence under sampling has a changing point: before this point it is close to zero, because q is very small and the two distributions M′(D) and M′(D′) are close to each other, while at higher orders the sampling effect vanishes and the divergence increases at the rate of the unsampled mechanism. This indicates that the privacy amplification effect from random sampling does not hold at all orders α, and we cannot improve the privacy metric in terms of zCDP under random sampling. On the other hand, Figure 1 suggests that we need to analyze the α-Rényi divergence within a limited range of α to capture the privacy amplification effect. We will show that having a bound on the α-Rényi divergence within a limited range of α makes it possible to capture the privacy amplification effect of random sampling. However, such a constraint does not fit the definition of CDP, because with the constraint the privacy loss variable follows a sub-exponential distribution, a relaxation of the sub-Gaussianity in the definition of CDP. Therefore, in this paper we address it by converting the α-Rényi divergence under this relaxation to traditional (ε, δ)-DP.

Lemma 1.

Consider a mechanism M′ that runs a Gaussian mechanism adding noise N(0, σ²I) over a random subsample of the dataset, where each example is independently sampled with probability q. Then, for two neighboring datasets D and D′, the α-Rényi divergence between M′(D) and M′(D′) is bounded, for α within a limited range, as given below:

(15)

Eq. (15) defines a sub-exponential privacy loss variable, a relaxation of sub-Gaussianity, by putting constraints on α. Such a relaxation is able to capture the privacy amplification effect of random sampling, but it does not fit the definition of CDP. Thus, we convert it to (ε, δ)-DP with the following theorem:

Fig. 1: α-Rényi divergence under sampling (q=0.01, σ=4)
Theorem 3.

Let the α-Rényi divergence bound be as in (15). If the mechanism M′ satisfies

(16)

for α in the limited range of Lemma 1, then it satisfies (ε, δ)-DP with ε and δ given by

(17)
(18)

Composition.  Theorem 3 captures the privacy amplification of random sampling with replacement by converting the bound on the α-Rényi divergence within a limited range of α to (ε, δ)-DP. Following this, we consider the composition of a sequence of Gaussian mechanisms with random sampling. Suppose k mechanisms, denoted by M₁, …, M_k, where each Mᵢ uses sampling ratio qᵢ and noise scale σᵢ. Because the constraint on α in Eq. (15) depends on the sampling ratio and noise scale, we examine their composition in two cases:

1.) Each mechanism uses the same q and σ. By the composition property of the α-Rényi divergence [9], the per-mechanism bounds simply add up over the k mechanisms within the shared range of α. The conversion to (ε, δ)-DP can then be done by applying (17) and (18) in Theorem 3 to the summed bound.

2.) The sampling ratio qᵢ and noise scale σᵢ are different for each mechanism Mᵢ. Then each Mᵢ has its own valid range of α in Eq. (15). To apply Lemma 1 and allow the composition of the α-Rényi divergence of mechanisms with different qᵢ and σᵢ, we constrain α to the intersection of these ranges, within which every per-mechanism bound holds. Summing the bounds over this common range and applying (17) and (18), we can still obtain the corresponding (ε, δ)-DP.

When random sampling is used for batching, Algorithm 1 follows the above method to estimate the privacy loss in terms of (ε, δ)-DP. In particular, the algorithm specifies a fixed δ and a total privacy budget ε_total, and at every iteration step t it updates the corresponding cumulative privacy loss ε. If ε exceeds ε_total, the training is terminated and the final model satisfies (ε_total, δ)-DP.

In summary, we have shown that CDP is not able to capture the privacy amplification effect of random sampling. We address this issue by bounding the α-Rényi divergence over a constrained range of α instead of α ∈ (1, ∞) and converting it to (ε, δ)-DP. On one hand, our approach is similar to the MA approach [7] and produces close but slightly looser estimates of the privacy loss compared to MA with regard to the (ε, δ)-DP guarantee, as shown in our experiment in Section V-A. On the other hand, our approach starts from a different viewpoint and its computation via Eq. (17) and (18) is easier than MA, which needs to compute the log moments of the privacy loss variable for a range of moment orders. At the same time, our privacy accounting approach allows each iteration to use different sampling ratios and noise scales and is more efficient to compute than MA when applied with dynamic privacy budget allocation.

More importantly, we have provided formal analysis to show that the compositions of differential privacy under the two batching methods are distinct. As demonstrated by our experimental results, this causes different privacy loss. Therefore, we argue that the privacy accounting method has to be chosen according to which data batching method is used. In our implementation, we focus on random reshuffling, because it is a common practice in neural network implementations [5, 22]. In fact, several existing deep learning frameworks such as TensorFlow provide convenient random reshuffling APIs for generating batches. It has also been numerically observed that random reshuffling outperforms random sampling with replacement [23].

IV-C DP Composition Under Dynamic Schedules

For pre-defined schedules, once the hyperparameters are specified, they follow the decay functions to update the noise scale without accessing the data and the model, and thus do not incur any additional privacy cost. Since the noise scale is updated by epoch, each iteration step within an epoch uses the same noise scale. Suppose epoch e uses the noise scale σ_e. Each iteration of epoch e is then (1/(2σ_e²))-zCDP by Proposition 2, and the total privacy cost of epoch e can be calculated by Theorem 2 or 3 depending on which batching method is used. Over the course of training, the cumulative privacy loss is updated at each epoch, and once the cumulative privacy loss exceeds the fixed total privacy budget, the training is terminated. To achieve a target training time under a given total privacy budget, we can determine the exact values of the hyperparameters for these schedules in advance, before training begins.

For the validation-based schedule, the access to the public validation dataset does not incur additional privacy cost. With this schedule, the composition of differential privacy involves adaptive choices of the privacy parameter at every epoch, corresponding to the noise scale of the Gaussian mechanism. This means that the choice of privacy parameters itself is a function of the realized outcomes of the previous rounds. It has been shown by Rogers et al. [31] that the strong composition theorem for (ε, δ)-DP fails to hold in this adaptive privacy parameter setting, since the theorem requires the privacy parameters to be pre-defined ahead of time. To address this problem, they define the privacy loss as a random variable, as done in Eq. (5) for CDP, and develop the composition for (ε, δ)-DP using privacy filters. Privacy filters provide a way to halt the computation, with high probability, before the realized privacy loss exceeds the budget. Our approach relies on zCDP, which defines the privacy loss as in Eq. (5) by nature, and therefore its composition, accumulating privacy cost with regard to the Rényi divergence, holds for adaptive parameter settings.

V Experimental Results

In this section, we evaluate the proposed privacy accounting methods, and demonstrate the effectiveness of dynamic privacy budget allocation on different learning tasks. Our implementation is based on the TensorFlow implementation [1] of DP-SGD in the paper [7].

V-A Comparing Privacy Accounting Approaches

In Section IV-B we derived different privacy accounting methods for two data batching methods: random reshuffling (RF) and random sampling with replacement (RS). We refer to them as zCDP(RF) and zCDP(RS) respectively. To numerically compare them with other privacy accounting methods, including strong composition [18] and the moments accountant (MA) method [7], we unify them into (ε, δ)-DP. Following [7], we assume that the batches are generated with RS for both the strong composition and MA. We use the implementation of [7] in TensorFlow to compute MA. For strong composition, we apply the strong composition theorem in [18] to the composition of (ε, δ)-DP mechanisms that are the privacy-amplified versions of the original mechanisms running with random sampling ratio q. We compute (ε, δ)-DP for zCDP(RF) with Proposition 1 and for zCDP(RS) with the methods in Section IV-B2.

In our experimental setting, we assume a batch size B for random reshuffling. For random sampling we assume a sampling ratio q = B/N given a total of N samples. In the following, when we vary q, it is equivalent to changing the batch size for random reshuffling. The number of iterations in one epoch is N/B. For simplicity, we use the same noise scale σ for the Gaussian mechanism in every iteration and fix σ and δ to default values. Given σ, the Gaussian mechanism satisfies both (ε, δ)-DP and (1/(2σ²))-zCDP. We track the cumulative privacy loss by epoch with the different privacy accounting methods and convert the results to ε in terms of (ε, δ)-DP with fixed δ.

Figure 3 shows the growth of the privacy loss ε during the training process. It shows that zCDP(RF) gives a lower estimate of the privacy loss than the strong composition during the training. The final ε spent at epoch 400 by zCDP(RF) and strong composition are 21.5 and 34.3 respectively. Although random sampling introduces higher uncertainty and thus less privacy loss than random shuffling, zCDP(RF) still achieves a lower and thus tighter privacy loss estimate even than the strong composition with random sampling. This demonstrates the benefit of CDP for the composition of a large number of computations. The results for zCDP(RS) and MA are very close to each other because they both exploit the moment bounds of the privacy loss to achieve a tighter tail bound and take advantage of the privacy amplification of random sampling. The final ε spent is 2.37 and 1.67 for zCDP(RS) and MA respectively. The reason for zCDP(RS) to have a slightly higher estimate is that its conversion to (ε, δ)-DP explicitly uses an upper bound for the log moments instead of computing them numerically. The benefit of zCDP(RS) is that it is easy to compute, especially under dynamic privacy budget allocation.

Figure 3 also shows that zCDP(RF) has higher privacy loss compared to MA and zCDP(RS), because more uncertainty is introduced with RS. However, it is worth noting that the common practice in deep learning is to use RF, including the implementation of [7]. Thus, zCDP(RF) is the proper choice in that case and is also straightforward to apply due to the composition property of ρ-zCDP, which simply adds up the ρ values. The results show that MA can underestimate the real privacy loss when treating random reshuffling as random sampling with replacement.

(a) ε vs. sampling ratio q; (b) ε vs. noise scale σ.
Fig. 2: Privacy parameter vs. epoch
Fig. 3: Privacy loss vs. sampling ratio & noise scale

We further examine how zCDP(RF) and zCDP(RS) change with the sampling ratio q and the noise scale σ. Using the default σ, Figure 2(a) shows the privacy loss at the end of 200 training epochs for varying q values. For zCDP(RF), the cumulative privacy loss does not change with q. This is because the composition of the iterations within one epoch still satisfies (1/(2σ²))-zCDP by Theorem 2, and across epochs the linear composition of Theorem 1 is applied, which makes the final privacy loss depend exclusively on the number of training epochs; we have fixed 200 epochs, so the final privacy loss does not change. In contrast, the privacy loss given by zCDP(RS) increases with the sampling ratio q, which can be seen in Eq. (17), where ε increases with the divergence bound in (15), which grows with q. Similarly, Figure 2(b) shows the privacy loss after 200 epochs for varying noise scales with the same q. We observe that increasing σ from 5 to 14 significantly reduces ε for zCDP(RF) but has noticeably less impact on zCDP(RS). This suggests that under random sampling, a small sampling ratio contributes much more to privacy than the noise scale σ, which indicates that we may reduce the noise scale to improve the model accuracy without degrading privacy by much. However, for random reshuffling, the privacy loss does not depend on the sampling ratio (i.e., the batch size) but is decided by σ, so it is more critical to achieve a good trade-off between privacy and model accuracy in this case. Our privacy budget allocation techniques optimize this trade-off by dynamically adjusting σ during the training to improve model accuracy while retaining the same privacy guarantee.

V-B Evaluating Dynamic Privacy Budget Allocation

In this section we evaluate the effectiveness of dynamic privacy budget allocation compared to the uniform privacy budget allocation adopted by Abadi et al. [7]. Since the TensorFlow implementation uses random reshuffling to generate batches, the privacy accounting in Section IV-B1 should be used to avoid the underestimation of privacy loss. We therefore use ρ as the metric to represent the privacy budget and loss. Because the techniques for adjusting noise scales are independent of the batching method, the benefit of dynamic privacy budget allocation on model accuracy demonstrated under random reshuffling also applies to random sampling.

V-B1 Datasets and Models

Our experiments use three datasets and different default neural networks for each dataset.

MNIST. This is a dataset of handwritten digits consisting of 60,000 training examples and 10,000 testing examples [26], formatted as 28×28 gray-level images. In our experiment, the neural network model for MNIST follows the settings in previous work [7] for comparison: a 60-dimensional PCA projection layer followed by a simple feed-forward neural network comprising a single hidden layer of 1000 ReLU units. The output layer is a softmax over 10 classes corresponding to the 10 digits. The loss function computes the cross-entropy loss. A batch size of 600 is used. The non-private training of this model can achieve 0.98 accuracy in 100 epochs.

Cancer Dataset. This dataset [2] consists of 699 patient examples. Each example has 11 attributes including an id number, a class label that corresponds to the type of breast cancer (benign or malignant), and 9 features describing breast fine-needle aspirates. After excluding 16 examples with missing values, we use 560 examples for training and 123 examples for testing. A neural network classifier with 3 hidden layers, containing 10, 20, and 10 ReLU units, is trained to predict whether a breast tumor is malignant or benign. Each iteration takes the whole training dataset as a batch, and thus each iteration is one epoch. The non-private training of this model achieves testing and training accuracy of 0.96 with 800 epochs.

CIFAR-10. The CIFAR-10 dataset consists of 32×32 color images with three channels (RGB) in 10 classes including ships, planes, dogs and cats. Each class has 6000 images. There are 40,000 examples for training, 10,000 for testing and 10,000 for validation. For experiments on CIFAR, we use a pre-trained VGG16 neural network model [35]. Following the previous work [7], we assume the non-private convolutional layers are trained over a public dataset (ImageNet [4] for VGG16) and only retrain a hidden layer with 1000 units and a softmax layer with differential privacy. We use 200 training epochs and a batch size of 200. The corresponding non-private training achieves 0.64 training accuracy and 0.58 testing accuracy.

| Schedule | Uniform [7] (σ=8) | Time (k=0.05) | Step (k=0.6, period=10) | Exp (k=0.01) | Poly (p=3, σ_end=2, period=100) | Validation (k=0.7, m=5, threshold=0.01, period=10) |
|---|---|---|---|---|---|---|
| epochs | 100 | 38 | 31 | 71 | 44 | 64 |
| training accuracy | 0.918 | 0.934 | 0.928 | 0.934 | 0.930 | 0.930 |
| testing accuracy | 0.919 | 0.931 | 0.929 | 0.929 | 0.932 | 0.930 |
| non-private SGD (train/test) | 0.978/0.970 | 0.959/0.957 | 0.955/0.954 | 0.971/0.965 | 0.963/0.959 | 0.97/0.964 |
| uniform with same #epochs (train/test) | n/a | 0.922/0.925 | 0.921/0.925 | 0.922/0.925 | 0.925/0.929 | 0.924/0.926 |

TABLE I: Budget allocation schedules under a fixed budget ρ_total, with the same initial noise scale σ₀ for the dynamic schedules

V-B2 Results on MNIST

In differentially private model training, we keep the batch size at 600, clip the gradient norm of every layer at 4, and use a fixed noise scale for the differentially private PCA. Note that, since the PCA part has a constant privacy cost in terms of ρ-zCDP, we exclude it from the total privacy budget in our experiment, i.e., ρ_total is only for the DP-SGD in Algorithm 1. A constant learning rate of 0.05 is used by default. We evaluate the model accuracy during training under different privacy budget allocation schedules. For the validation-based schedule, we divide the training dataset into 55,000 examples for training and 5,000 examples for validation, and perform validation every epoch.

The results in Table I demonstrate the benefit of dynamic privacy budget allocations and the effect of earlier training termination on the model accuracy. For comparison, we consider the uniform privacy budget allocation in [7] with a constant noise scale σ=8 for every epoch as our baseline. We choose a fixed total privacy budget ρ_total=0.78125. This results in 100 training epochs in the baseline case. We test all dynamic schedules with the hyperparameters given in the table and present their testing and training accuracy in numbers rounded to two decimals. The training is terminated when the privacy budget runs out, and the hyperparameters are chosen from a set of candidates to demonstrate varied training times in epochs, which are reported in the table. We also ran a non-private version of SGD with the same training time as these schedules to see the impact of DP-SGD on accuracy. We can see from Table I that the baseline with constant σ=8 achieves 0.918 training accuracy and 0.919 testing accuracy. By comparison, all non-uniform privacy budget allocation schedules improve the testing/training accuracy by 1% to 1.6% while running fewer epochs. Because DP-SGD is a randomized procedure and the numbers in the table vary among trials, we repeat all the experiments 10 times and report in Figure 6 the mean accuracy along with the min-max bar for every schedule. These results show that dynamic schedules consistently achieve higher accuracy than the baseline. Therefore, given a fixed privacy budget, dynamic budget allocation can achieve better accuracy than uniform budget allocation.

The accuracy improvement shown with dynamic schedules in the above example comes from two sources: less training time and non-uniformity of budget allocation (i.e., decaying of the noise scale). With uniform allocation under a fixed privacy budget, reducing the training time increases the privacy budget allocated to every epoch and thus decreases the noise scale used by the Gaussian mechanism. Therefore, when the model training benefits more from the reduced perturbation than from a longer training time, using less training time can improve the model accuracy. As verification, we apply uniform budget allocation with the noise scale chosen to achieve the same training time as the corresponding dynamic schedule. The training accuracy and testing accuracy are presented in the final row of Table I. All cases outperform the baseline case with σ=8 while using fewer than 100 training epochs, indicating the benefit of trading training time for more privacy budget per epoch. However, it is worth noting that in certain cases, increasing the noise scale to prolong the training time may help improve the accuracy, exemplified by the result of the validation-based schedule on the Cancer dataset. Overall, when compared with the uniform allocation under the same training time, dynamic schedules demonstrate higher accuracy, illustrating the benefit of non-uniformity and dynamic budget allocation.

Fig. 4: The change of noise scale during training

Figure 4 shows how the noise scale changes with the epochs under the different schedules in Table I. The curves terminate at the end of training due to the depleted privacy budget. For the validation-based decay, the duration for which the noise scale stays unchanged decreases over the training time: the noise scale stays at 10 for 29 epochs, at 7 for 20 epochs and at 4.9 for 10 epochs. This is because, as the model converges, the increment rate of the validation accuracy declines, and it becomes more frequent that the accuracy increment does not exceed the given threshold.

We additionally vary each hyperparameter individually while keeping the rest constant to demonstrate its effect on training/testing accuracy and training time. By default, all accuracy numbers are the average of five trials.

The effect of decay functions.  In our previous experiments, we evaluate four types of decay functions for the pre-defined schedules. Here we compare their effects on model accuracy under a constant training time. Given the total privacy budget and the initial noise scale, we use the composition theorem of ρ-zCDP to search for a decay parameter value for each function such that the schedule achieves a target training time. Table II in Appendix F provides the parameter values for the different decay functions to achieve training times of 60, 70, 80, 90, and 100 epochs, respectively. These values are derived via search with a fixed step size, using the same initial noise scale and other parameters as stated in Table I.
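The parameter search described here can be simulated directly once a per-epoch privacy cost is assumed. The sketch below (ours, with hypothetical names and an assumed initial noise scale of 10) uses exponential decay sigma_t = sigma_0 * exp(-k*t) as the example decay function and searches for the decay parameter k that yields a target number of epochs under the total ρ-zCDP budget, with the same per-epoch cost assumption as in the earlier sketch.

import math

# Sketch: search a decay parameter k so that an exponential-decay schedule
# exhausts a given rho-zCDP budget after a target number of epochs.
# Assumes (as before) an epoch with noise scale sigma costs 1 / (2 * sigma**2).

def simulate_epochs(k, sigma0=10.0, total_rho=0.78125):
    """Epochs affordable by the schedule sigma_t = sigma0 * exp(-k * t)."""
    spent, t = 0.0, 0
    while True:
        cost = 1.0 / (2.0 * (sigma0 * math.exp(-k * t)) ** 2)
        if spent + cost > total_rho:
            return t
        spent += cost
        t += 1

def search_k(target_epochs, step=0.001, k_max=1.0):
    """Return the largest k whose schedule still runs at least target_epochs."""
    best, k = None, step
    while k <= k_max:
        if simulate_epochs(k) >= target_epochs:
            best = k
        k += step
    return best

print(search_k(60))  # candidate decay rate for a 60-epoch schedule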

Fig. 5: The accuracy comparison of different schedules
Fig. 6: The accuracy under fixed training time

Figure 6 shows the training and testing accuracy of the pre-defined schedules under different training times, along with the accuracy achieved by uniform budget allocation [7] using the same privacy budget. We observe that, for a fixed training time, all training instances using pre-defined schedules achieve higher accuracy than uniform budget allocation. However, there is no clear winner among the different decay functions. Their accuracy increases as the training time grows from 30 to 50 or 60 epochs and then decreases as it grows further to 100 epochs. At 100 epochs, all pre-defined schedules have accuracy close to that of the uniform budget allocation schedule running 100 epochs. Because the different decay functions behave similarly, we simply use exponential decay in subsequent experiments unless otherwise stated.

Decay rate.  The decay rate decides how fast the noise scale decays. Keeping the other parameters the same as those reported in Table I, we vary the decay rate from 0.005 to 0.5 for exponential decay and from 0.3 to 0.9 for validation-based decay. Figures 7 and 8 show the accuracy and training time under different decay rates. We make three observations. First, in both cases there exists an optimal decay rate that achieves maximum accuracy: for exponential decay, the best accuracy occurs at a decay rate of 0.2; for validation-based decay, it occurs at 0.7. Second, for exponential decay, as the decay rate increases, the noise scale decreases at a higher rate; the privacy budget is therefore spent faster and the training time strictly decreases. Validation-based decay instead reduces the noise scale to a fraction of its current value; the parameter we vary is that fraction, so the effective decay rate is one minus the fraction, and the training time should increase with the fraction. Figure 8 shows that the training time overall increases with this parameter, but with some random fluctuations, because validation-based decay adjusts the noise scale according to the validation accuracy, which changes during training in a non-deterministic way.

Third, the results at the two ends of the x-axis in both figures reveal two interesting facts. At one end, although the lowest decay rate leads to the longest training time, it also produces the worst accuracy, because the noise scale decays so slowly that more epochs suffer relatively high noise; this degrades the efficiency of the learning process and lowers accuracy. At the other end, the highest decay rate causes training to stop much earlier, and the resulting insufficient training time also degrades accuracy.

Fig. 7: exponential decay
Fig. 8: validation-based
Fig. 9: learning rate
Fig. 10: hidden units

Learning rate.  Next, we fix the decay rate of the exponential decay schedule. Given a privacy budget of 0.78125, the training lasts 60 epochs. Starting from an initial learning rate of 0.1, we linearly decrease the learning rate to a final value over the first 10 epochs and then keep it fixed at that value thereafter. We vary this final learning rate from 0.01 to 0.07. Figure 9 shows that the accuracy decreases significantly when the learning rate is too small or too large.
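As a concrete reading of this schedule, the sketch below (ours, with hypothetical names) linearly interpolates the learning rate from 0.1 down to a final value over the first 10 epochs and holds it constant afterwards; the final value is the quantity varied between 0.01 and 0.07.

# Sketch of the linear learning-rate warm-down described above (our illustration).
def learning_rate(epoch, initial_lr=0.1, final_lr=0.03, decay_epochs=10):
    """Linearly decrease from initial_lr to final_lr over decay_epochs, then hold."""
    if epoch >= decay_epochs:
        return final_lr
    return initial_lr + (epoch / decay_epochs) * (final_lr - initial_lr)

print([round(learning_rate(e), 3) for e in (0, 5, 10, 30)])  # [0.1, 0.065, 0.03, 0.03]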

Number of hidden units/layers.  Next, we vary the number of hidden units in the model from 200 to 1600. The results are shown in Figure 10. Although more hidden units increase the sensitivity of the gradient, leading to more noise added at each iteration, we observe that increasing the number of hidden units does not decrease model accuracy under the exponential decay schedule. This is consistent with the observation in [7] under uniform budget allocation, and it shows that the effectiveness of dynamic budget allocation schedules scales to neural networks of different sizes. We also vary the number of hidden layers from 1 to 3, each with 1000 hidden units. The accuracy results, given in Appendix E, are consistent with [7] under uniform allocation, where the authors observe that, for MNIST, one hidden layer combined with PCA works better than networks with more layers.

Initial noise scale.  In the previous experiments, the initial noise scale is set to its default value. To examine its effect, we vary the initial noise scale from 7 to 20 and measure model accuracy under the fixed privacy budget of 0.78125 in two cases: 1) for each initial noise scale, we choose the exponential decay rate so that the training time is fixed at 60 epochs; 2) the exponential decay rate is fixed at 0.015, so the training time varies. Figures 11(a) and 11(b) show that, overall, increasing the initial noise scale reduces accuracy. Comparing the two figures, we observe that when the training time is fixed, the choice of initial noise scale has less impact on accuracy, because a larger initial noise scale then requires a higher decay rate, which benefits accuracy. With a fixed decay rate, by contrast, a higher initial noise scale leads to more training epochs but no accuracy improvement. This indicates that model accuracy is more sensitive to the noise scale than to the training time.

Accuracy and privacy in training.  Figures 12 and 13 illustrate how model accuracy and privacy loss change during training for schedules with the same parameters as in Table I, except for the parameters explicitly noted in the figures. Figure 13 shows that the uniform privacy budget allocation in [7] incurs linear growth of the privacy loss in terms of ρ-zCDP, while our dynamic budget allocation schedules have a faster growth rate due to the reduction of the noise scale over time. All instances stop when the given total privacy budget of 0.78125 is reached. Combined with Figure 12, we can see that the exponential decay schedule consistently achieves better accuracy than uniform allocation before training ends, thanks to its faster noise scale reduction, while the validation-based schedule behaves more conservatively and has a relatively longer training time.

An important implication of Figure 12 is that the gap between uniform budget allocation and non-private SGD indicates the maximum potential accuracy improvement that dynamic budget allocation can offer over uniform allocation. The proposed dynamic budget allocation schedules give users a way to push the accuracy of DP-SGD closer to that of non-private SGD. Dynamic budget allocation cannot completely close this gap, because gradient perturbation inevitably hurts model accuracy. Therefore, we argue that the effectiveness of dynamic privacy budget allocation should be evaluated by how much it reduces the gap between non-private SGD and DP-SGD with uniform allocation. In our experiment, the accuracy difference between non-private SGD and DP-SGD in the uniform case is 0.05 at the end of training, and the exemplified schedules reduce this difference by 20% to 30%, i.e., an absolute accuracy gain of roughly 0.01 to 0.015. One of our ongoing research directions is to investigate ways to effectively find the best hyperparameters for these schedules.

(a) fixed training time
(b) fixed decay rate
Fig. 11: Initial noise scale
Fig. 12: Accuracy in training
Fig. 13: Privacy in training
Fig. 14: Accuracy (Cancer)
Fig. 15: Accuracy (Cifar-10)

V-B3 Results on other datasets

We repeat the experiments on the Cancer and CIFAR-10 datasets. Applying exponential decay and validation-based decay to each learning task, we compare the resulting model accuracy with the uniform allocation method [7]. In this set of experiments, we first determine the constant noise scale with which a uniform schedule achieves a desired training time under the given privacy budget. We then choose a value near this noise scale as the initial noise scale for the decay schedules, evaluate a set of candidate decay rates by training a model with each, and compare the achieved model accuracy. The parameters for the schedules we used are given in Table IV in Appendix G.

Results for testing and training accuracy are shown in Figures 14 and 15. For the Cancer dataset, exponential decay produces model accuracy closest to that of non-private SGD, about 3% higher than the uniform allocation case, and reduces the gap between non-private SGD and DP-SGD with uniform allocation by 70%. The validation-based schedule produces about 1.8% higher accuracy than the uniform case, taking advantage of a longer training time, as shown in Figure 14. For CIFAR-10, exponential decay achieves 2% higher accuracy than the uniform case and reduces the gap by about 12%, while the validation-based schedule improves model accuracy by 4% over the uniform case and reduces the gap by about 19%.

VI Discussion

We discuss a number of nuances/caveats as take-away remarks for deploying differentially private deep learning in practice for model publishing.

Understanding privacy parameters.  Although differential privacy (DP) as a theory has evolved through different forms, it is still not clear what realistic privacy benefit is achieved as a function of the privacy parameters in the DP definitions, such as the ε and δ parameters in traditional DP and the ρ in zCDP. These privacy parameters lack interpretations that are understandable to end users. For ρ-zCDP, results like Proposition 1 would help if ε and δ had straightforward privacy-related interpretations. Advances in the interpretability and usability of DP parameters for end users and domain scientists could have a profound impact on the practical deployment of differential privacy.
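For reference, if Proposition 1 follows the standard zCDP-to-DP conversion from the literature, the relationship between the parameters reads as follows (restated from that well-known result rather than quoted from this paper):

% Standard conversion (Bun and Steinke): a mechanism satisfying rho-zCDP
% is (epsilon, delta)-differentially private for every delta > 0 with
\[
  \epsilon \;=\; \rho + 2\sqrt{\rho \,\ln(1/\delta)} .
\]

In this form, ρ jointly governs the achievable ε and δ, which is part of why a direct privacy interpretation of ρ is non-obvious for end users.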

Data Dependency.  The characteristics of the input data, for example, dependency among training instances or dependency in the presence of training instances, can render a differentially private mechanism ineffective for protecting the privacy of individuals [25, 24, 28]. The baseline definition of differential privacy focuses on the privacy of a single instance, so when multiple instances of the same user are present, a DP mechanism needs to be extended to group-level differential privacy to provide sufficient protection. One direction of our future work is to explore ways of extending our DP-SGD techniques to provide a group-level privacy guarantee.

Resilience to Privacy Risks and Attacks.  Differentially private deep learning aims to compute model parameters in a differentially private manner so as to limit the privacy risk associated with publishing them. There are a number of known attacks on deep learning models, such as model inversion attacks and membership inference attacks. Model inversion attacks exploit the prediction output along with model access to infer an input instance, while membership inference attacks exploit black-box access to the prediction API to infer the membership of individual training instances. However, there is no formal study of whether a differentially private deep learning model is resilient to such attacks, or of which privacy risks known in practice can be mitigated with high certainty by a differentially private DNN model. In fact, DP only absolves the differentially private release as a (quantifiably) strong cause of an inference; it does not prevent the inference itself. The work in [21] provides an upper bound on the inferential privacy guarantee for differentially private mechanisms. This remains another grand challenge in differential privacy and data privacy in general.

VII Related Work

Privacy threats in machine learning.  Existing works [20, 19, 34, 36] have shown that machine learning models and their usage may leak information about individuals in the training dataset and input data. Fredrikson et al. [20] proposed a model inversion attack, which uses the output/prediction produced by a model to infer the unknown features of the input data, and applied this attack against decision trees and neural networks in a pharmacogenetics scenario [19]. Shokri et al. [34] developed a membership inference attack that aims to determine whether an individual record was used as part of the model's training dataset, using only black-box access to the target model. Song et al. [36] proposed training-phase attacks that make minor modifications to training algorithms so that the resulting models encode a significant amount of information about the training dataset while still achieving high-quality metrics such as accuracy and generalizability. In addition, the model extraction attacks proposed in [38] aim to duplicate the functionality of a model through black-box access; such attacks can in turn be leveraged to infer information about the model's training dataset.

Privacy-preserving deep learning.  To enable deep learning over data from multiple parties while preserving the privacy of each party's training dataset, Shokri et al. [33] proposed a distributed deep learning framework that lets participants train their models independently on their own datasets and only selectively share subsets of their models' parameters during training. Abadi et al. [7] proposed a differentially private SGD algorithm for deep learning that offers provable privacy guarantees on the output model. Differential privacy [14], as a de facto standard for privacy, has been applied to various machine learning algorithms, such as logistic regression [10, 42], support vector machines [32], and risk minimization [11, 8], with the aim of limiting the privacy risk associated with the output model parameters on the training dataset. Our work in this paper is primarily related to [7], and we improve their approach in a number of ways. For example, instead of traditional (ε, δ)-differential privacy, we apply concentrated differential privacy [17, 9] to provide a tighter estimation of the cumulative privacy loss over a large number of computations. Furthermore, we characterize the effect of data batching methods on the composition of differential privacy and propose a dynamic privacy budget allocation framework for improving model accuracy.

VIII Conclusion

We have presented our approach to differentially private deep learning for model publishing with three original contributions. First, since the training of neural networks involves a large number of iterations, we apply CDP for privacy accounting to achieve a tight estimation of the privacy loss. Second, we distinguish two different data batching methods and propose privacy accounting methods for each to enable accurate privacy loss estimation. Third, we have implemented several dynamic privacy budget allocation techniques for improving model accuracy over existing uniform budget allocation schemes. Our experiments on multiple datasets demonstrate the effectiveness of dynamic privacy budget allocation.

Acknowledgment

The authors would like to thank our anonymous reviewers for their valuable comments and suggestions. This research was partially sponsored by NSF under grants SaTC 156409, CISE’s SAVI/RCN (1402266, 1550379), CNS (1421561), CRISP (1541074), SaTC (1564097) programs, an REU supplement (1545173), an RCN BD Fellowship, provided by the Research Coordination Network (RCN) on Big Data and Smart Cities, an IBM Faculty Award, and gifts, grants, or contracts from Fujitsu, HP, Intel, and Georgia Tech Foundation through the John P. Imlay, Jr. Chair endowment. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the funding agencies and companies mentioned above.

References