I Introduction
What does it mean for data to be anonymized? In 1997, Samarati and Sweeney discovered that removing explicit identifiers from dataset records was not enough to prevent information from being re-identified [1, 2], and they proposed the first definition of anonymization. This notion, called k-anonymity, is a property of a dataset: each combination of re-identifying fields must be present at least k times. In the following decade, further research showed that sensitive information about individuals could still be leaked when releasing k-anonymous datasets, and many variants and definitions were proposed [3, 4, 5].
One common shortcoming of these approaches is that they defined anonymity as a property of the dataset: without knowledge of how the dataset is generated, arbitrary information can be leaked. This approach was changed in 2005 when Dwork et al. [6, 7] introduced differential privacy (DP): rather than being a property of the sanitized dataset, anonymity was instead considered to be a property of the process. It is a formalization of Dalenius’ privacy goal that “Anything about an individual that can be learned from the dataset can also be learned without access to the dataset” [8], a goal similar to one already used in probabilistic encryption [9].
DP quickly became the gold standard of privacy definitions. Many data mining algorithms and processing tasks were adapted to satisfy it, and were adopted by organizations like Google [10], Apple [11] and Microsoft [12].
However, since the original introduction of DP, many variants and extensions have been proposed to adapt it to different contexts or assumptions. These new definitions enable practitioners to get privacy guarantees, even in cases that the original DP definition does not cover well. This happens in a variety of scenarios. The noise mandated by DP can be too large, and force the data custodian to consider a weaker alternative. The attacker model might require the data owner to consider correlations in the data explicitly, or to make stronger statements on what information the privacy mechanism reveals.
Figure 1 shows the prevalence of this phenomenon: more than 100 different notions, inspired by DP, were defined in the last 14 years. These privacy definitions can be extensions or variants of DP. An extension encompasses the original DP notion as a special case, while a variant changes some aspect, typically to weaken or strengthen the original definition. The number of papers and of corresponding novel privacy definitions seems to grow over time, as Figure 1 shows.
With so many definitions, it is difficult for new practitioners to get an overview of this research area. Many definitions have very similar goals, so it is also challenging to understand when it is appropriate to use which variant of DP, and which one to choose for a given use case. These difficulties also affect experts: several variants listed in this work have been defined independently multiple times, often with different names and without comparing them to related notions.
This work is an attempt to solve these problems. By providing a comprehensive taxonomy of variants and extensions of DP, we hope to make it easier for new practitioners to understand whether their use case needs an alternative definition, and if so, which are the most appropriate and what their basic properties are. By categorizing these definitions, we hope to simplify our understanding of existing variants and relations between them.
Contributions and organization
We systematize the scientific literature on variants and extensions of DP, and propose a unified and comprehensive taxonomy of these definitions. We define seven dimensions: these are ways in which the original definition of DP can be modified. Moreover, we highlight the most important definitions from each dimension, and for the main definitions we indicate whether they satisfy Kifer et al.'s privacy axioms [13] (post-processing and convexity), and whether they are composable. Our survey is organized as follows:

In Section II, we recall the original definition of DP and introduce our dimensions along which DP can be modified.

In Section III, we review the methodology and scope of this survey work.

In Section XI, we present some properties of DP, and we highlight whether they hold for the main DP relaxations in a summary table. Furthermore, we also show how those definitions relate to each other.
II DP and its Dimensions
Table I summarizes the notations used throughout the paper.
Notation  Description

𝒯         Set of possible records
t         A possible record
𝒟         Set of possible datasets (sequences of records)
D         Dataset (we also use D1, D2, …)
D_i       i-th record of the dataset D
D_{-i}    Dataset D, with its i-th record removed
θ         Probability distribution on 𝒟
Θ         Family of probability distributions on 𝒟
π         Probability distribution on 𝒯
𝒪         Set of possible outputs of the mechanism
S         Subset of possible outputs (S ⊆ 𝒪)
O         Output of the privacy mechanism
M         Privacy mechanism (probabilistic)
M(D)      The distribution of the outputs of M given input D (or an instance of this distribution)
The first DP mechanism, randomized response, was designed in the 1960s [14], and privacy definitions that are a property of a mechanism and not of the output dataset were already proposed in the early 2000s [15]. However, DP and the related notion of indistinguishability were first formally defined in an academic paper in 2006 [7, 16], shortly after being proposed in a patent [6].
Definition 1 (ε-indistinguishability [16]).
Two random variables A and B are ε-indistinguishable, denoted A ≈_ε B, if for all measurable sets S of possible events:

  P[A ∈ S] ≤ e^ε · P[B ∈ S]   and   P[B ∈ S] ≤ e^ε · P[A ∈ S]

Informally, A and B are ε-indistinguishable if their distributions are "close". This notion is then used to define DP. (A similar notion is defined in [17], where a different quantity is used in place of e^ε.)
Definition 2 (differential privacy [7]).
A privacy mechanism M is ε-differentially private (or ε-DP) if for all datasets D1 and D2 that differ only in one record, M(D1) ≈_ε M(D2).
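To make the definition concrete, here is a minimal sketch (ours, not from the paper) verifying that binary randomized response, which reports the true bit with probability e^ε/(1+e^ε), satisfies ε-DP but not (ε/2)-DP:

```python
import math

def randomized_response_dist(bit: int, eps: float) -> dict:
    """Output distribution of binary randomized response on a single bit:
    report the true bit with probability e^eps / (1 + e^eps)."""
    p_true = math.exp(eps) / (1 + math.exp(eps))
    return {bit: p_true, 1 - bit: 1 - p_true}

def is_indistinguishable(dist_a: dict, dist_b: dict, eps: float) -> bool:
    """Check P[A = o] <= e^eps * P[B = o] (and symmetrically) for every output."""
    for o in set(dist_a) | set(dist_b):
        pa, pb = dist_a.get(o, 0.0), dist_b.get(o, 0.0)
        if pa > math.exp(eps) * pb + 1e-12 or pb > math.exp(eps) * pa + 1e-12:
            return False
    return True

eps = 1.0
# Two "datasets" differing in one record: here, a single bit.
d1 = randomized_response_dist(0, eps)
d2 = randomized_response_dist(1, eps)
assert is_indistinguishable(d1, d2, eps)          # eps-DP holds
assert not is_indistinguishable(d1, d2, eps / 2)  # but not for a smaller budget
```

The probability ratio between the two output distributions is exactly e^ε, so the bound is tight for this mechanism.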
A few factors contributed to the success of DP. It provides a quantifiable guarantee on the maximum knowledge that an attacker can get about any information about an individual record. This guarantee can be formulated using Bayesian inference (see Section
VIII-A), assuming a powerful Bayesian attacker. In particular, DP is resistant to auxiliary knowledge: the attacker can know all other records of the dataset. DP is also composable: releasing the output of two DP mechanisms is itself a DP mechanism. When DP was first introduced, existing privacy definitions did not satisfy any of these properties.

II-A Dimensions
To establish a comprehensive taxonomy of variants and extensions of DP, one natural approach is to classify them into
categories, depending on which aspect of the definition they change. Unfortunately, this approach falls short for privacy definitions, many of which modify several aspects at once: it seems impossible to have a categorization such that every definition falls neatly into only one category.

The approach we take is to define dimensions along which the original definition can be modified. Each variant or extension of DP can be seen as a point in a multidimensional space, where each coordinate corresponds to one possible way of changing the definition along a particular dimension. To make this representation possible, our dimensions need to satisfy two properties:

Mutual compatibility: Two privacy definitions which vary along different dimensions can be combined to form a new, meaningful privacy definition.

Inner exclusivity: Two definitions from the same dimension cannot be combined to form a new, meaningful privacy definition; however, they might be pairwise comparable.
In addition, each dimension should be well-motivated: there needs to be an intuitive explanation of what it means to modify the original definition along each dimension. Further, ideally, each possible choice within a dimension should be similarly understandable, to allow new practitioners to determine quickly which kind of definition they should use or study, depending on their use case.
To introduce our dimensions, we formulate explicitly the guarantee offered by DP in Definition 2, and we highlight every aspect that has been modified by some variant.
An attacker with perfect background knowledge (B) and
unbounded computation power (C) is unable (R)
to distinguish (D) anything about an individual (N),
uniformly (V) across users, even in the
worst-case scenario (Q).
This informal definition of DP with the seven highlighted aspects gives us seven distinct dimensions. We denote each one by a letter and summarize them in Table II. Each of them is introduced in its corresponding section.
   Dimension                         Description                                                   Motivations

Q  Quantification of Privacy Loss    How is the privacy loss quantified across outputs?            Averaging risk, having better composition properties
N  Neighborhood Definition           Which properties are protected from the attacker?             Protecting specific values or multiple individuals
V  Variation of Privacy Loss         Can the privacy loss vary across inputs?                      Modeling users with different privacy requirements
B  Background Knowledge              How much prior knowledge does the attacker have?              Using less noise in the mechanism
D  Definition of Privacy Loss        Which formalism is used to describe the attacker's success?   Exploring other intuitive notions of privacy
R  Relativization of Knowledge Gain  What is the knowledge gain relative to?                       Guaranteeing privacy for correlated data
C  Computational Power               How much computational power can the attacker use?            Using DP in a multi-party context
III Scope and methodology
In this work, we consider variants and extensions of DP. Whether a data privacy definition fits this description is not always obvious, so we use the following criterion: the attacker’s capabilities must be clearly defined, and the definition must prevent this attacker from learning about a protected property. Consequently, we do not consider definitions which are a property of the output data and not of the mechanism, variants of technical notions that are not privacy properties (like different types of sensitivity), nor definitions whose only difference with DP is in the context and not in the privacy property itself (like the distinction between local and global models).
In Section XIIB, we list notions that we found during our survey, and considered to be out of scope for our work.
To find a comprehensive list of variants and extensions of DP, we used two academic search engines: BASE (https://www.base-search.net/) and Google Scholar (https://scholar.google.com/). The exact queries were run on October 31, 2018, and the corresponding result counts are summarized in Table III.
Query (BASE)  Hits 

“differential privacy” relax year:[2000 TO 2018]  99 
“differential privacy” variant relax year:[2000 TO 2018]  87 
Query (Google Scholar)  Hits 
“differential privacy” “new notion”  161 
“differential privacy” “new definition” “new notion”  129 
First, we manually reviewed each abstract to filter out papers that were completely unrelated to our work, until we had only papers which either contained a new definition or were applying DP in a new setting. All papers which defined a variant or extension of DP are cited in this work.
IV Quantification of privacy loss (Q)
DP and its associated risk model is a worst-case property: it quantifies not only over all possible neighboring datasets but also over all possible outputs. However, in a typical real-life risk assessment, events with vanishingly small probability are ignored, or their risk is weighted according to their probability. It is natural to consider analogous relaxations, especially since these relaxations often have better composition properties, and enable natural mechanisms like the Gaussian mechanism to be considered private [18].
Most of the definitions within this section can be expressed using the privacy loss random variable (first defined in [19] as the adversary's confidence gain), so we first introduce this important concept. Roughly speaking, it measures how much information is revealed by the output of a mechanism.
Definition 3 (Privacy loss random variable [19]).
Let M be a mechanism, and D1 and D2 two datasets. The privacy loss random variable between D1 and D2 is defined as:

  L_{D1/D2}(O) = ln( P[M(D1) = O] / P[M(D2) = O] )

if neither probability is 0; if only P[M(D2) = O] is zero then L_{D1/D2}(O) = +∞, otherwise L_{D1/D2}(O) = −∞.
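A minimal sketch of ours of Definition 3, computing the privacy loss of randomized response, whose loss takes only the two values ±ε:

```python
import math

def privacy_loss(p_d1: float, p_d2: float) -> float:
    """Privacy loss at one output O: ln(P[M(D1)=O] / P[M(D2)=O]),
    with the +/- infinity conventions of Definition 3."""
    if p_d1 == 0 and p_d2 == 0:
        raise ValueError("output has probability 0 under both datasets")
    if p_d2 == 0:
        return math.inf
    if p_d1 == 0:
        return -math.inf
    return math.log(p_d1 / p_d2)

# Randomized response with eps = 1: the loss is +eps or -eps, never in between.
eps = 1.0
p = math.exp(eps) / (1 + math.exp(eps))
m_d1 = {0: p, 1: 1 - p}   # output distribution when the true bit is 0
m_d2 = {0: 1 - p, 1: p}   # output distribution when the true bit is 1
losses = {o: privacy_loss(m_d1[o], m_d2[o]) for o in (0, 1)}
assert abs(losses[0] - eps) < 1e-9 and abs(losses[1] + eps) < 1e-9
```

For mechanisms with continuous outputs (like the Gaussian mechanism), the same quantity is defined with densities, and the loss takes a whole range of values; bounding or averaging this range is exactly what the definitions of this section differ on.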
IV-A Allowing a small probability of error
The first option, whose introduction is commonly attributed to [20], relaxes the definition of indistinguishability by allowing an additional small density of probability δ on which the upper bound does not hold: for all neighboring datasets D1 and D2 and all sets of outputs S, P[M(D1) ∈ S] ≤ e^ε · P[M(D2) ∈ S] + δ. This small density can be used to compensate for outputs for which the privacy loss is larger than ε. This led to the definition of approximated DP [20], also called (ε,δ)-DP. It is probably the relaxation most commonly used in the literature.
The δ in (ε,δ)-DP is sometimes explained as the probability that the privacy loss of the output is larger than ε (or, equivalently, that the indistinguishability formula is not satisfied). This intuition, however, corresponds to a different definition, called probabilistic DP [21, 22, 23].
These two definitions can be combined to form relaxed DP [24], which requires (ε,δ)-approximated DP to hold with a given probability.
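As a numeric illustration of the interplay between ε and δ (our sketch; the closed form below is the tight expression from the analytic-Gaussian-mechanism literature, not a result of this survey): adding N(0, σ²) noise to a query with bounded sensitivity satisfies (ε,δ)-DP, with δ shrinking as σ grows.

```python
import math

def phi(x: float) -> float:
    """Standard normal CDF."""
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def gaussian_delta(eps: float, sigma: float, sensitivity: float = 1.0) -> float:
    """Tight delta for which adding N(0, sigma^2) noise to a query with the
    given L2 sensitivity is (eps, delta)-DP (analytic Gaussian mechanism)."""
    a = sensitivity / (2 * sigma)
    b = eps * sigma / sensitivity
    return phi(a - b) - math.exp(eps) * phi(-a - b)

# More noise means a smaller delta at the same eps; delta vanishes as sigma grows.
assert gaussian_delta(1.0, sigma=1.0) > gaussian_delta(1.0, sigma=2.0) > 0
assert gaussian_delta(1.0, sigma=50.0) < 1e-6
```

This is exactly the phenomenon alluded to in [18]: the Gaussian mechanism is never ε-DP for any finite ε, but it satisfies (ε,δ)-DP for every δ > 0 with a suitable σ.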
IV-B Averaging the privacy loss
As DP corresponds to a worst-case risk model, it is natural to consider relaxations that allow for a larger privacy loss in the worst cases. It is also natural to consider average-case risk models: allowing larger privacy loss values only if lower values compensate for them in other cases. One such relaxation is called Kullback-Leibler privacy [25, 26]: it considers the arithmetic mean of the privacy loss random variable, which measures how much information is revealed on average when the output of a private algorithm is observed.
Rényi DP [27] extends this idea by adding a parameter α which allows controlling the choice of averaging function.

Definition 4 (Rényi differential privacy [27]).
Given α > 1, a privacy mechanism M is (α,ε)-Rényi DP if for all pairs of neighboring datasets D1 and D2:

  E_{O ∼ M(D1)} [ e^{(α−1)·L_{D1/D2}(O)} ] ≤ e^{(α−1)·ε}
The property required by (α,ε)-Rényi DP can be reformulated as D_α( M(D1) ‖ M(D2) ) ≤ ε, where D_α is the Rényi divergence of order α. It is possible to use other divergence functions to obtain other relaxations, such as binary and ternary DP [28], total variation privacy [25], or quantum DP [29].
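For instance (a sketch of ours, not an example from [27]), for two Gaussians with the same variance the Rényi divergence has the closed form D_α(N(μ1,σ²) ‖ N(μ2,σ²)) = α(μ1−μ2)²/(2σ²), which a direct numerical integration of the divergence confirms:

```python
import math

def renyi_gauss_closed_form(alpha, mu1, mu2, sigma):
    """D_alpha( N(mu1, s^2) || N(mu2, s^2) ) = alpha * (mu1 - mu2)^2 / (2 s^2)."""
    return alpha * (mu1 - mu2) ** 2 / (2 * sigma ** 2)

def renyi_gauss_numeric(alpha, mu1, mu2, sigma, lo=-30.0, hi=30.0, n=100_000):
    """Midpoint-rule approximation of (1/(alpha-1)) * ln( integral p^alpha q^(1-alpha) )."""
    dx = (hi - lo) / n
    total = 0.0
    for i in range(n):
        x = lo + (i + 0.5) * dx
        p = math.exp(-(x - mu1) ** 2 / (2 * sigma ** 2))
        q = math.exp(-(x - mu2) ** 2 / (2 * sigma ** 2))
        # the shared normalization constant cancels inside p^alpha * q^(1-alpha)
        total += (p ** alpha) * (q ** (1 - alpha)) * dx
    total /= math.sqrt(2 * math.pi) * sigma
    return math.log(total) / (alpha - 1)

for alpha in (1.5, 2.0, 5.0):
    exact = renyi_gauss_closed_form(alpha, 0.0, 1.0, 2.0)
    approx = renyi_gauss_numeric(alpha, 0.0, 1.0, 2.0)
    assert abs(exact - approx) < 1e-6
```

The linear growth of the divergence in α is precisely the behavior that zero-concentrated DP (Definition 5 below) turns into a definition.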
Another possibility to average the privacy loss is to use mutual information to formalize the intuition that any individual record should not "give out too much information" about the output of the mechanism (or vice versa). This is captured by mutual-information DP [26], which guarantees that the mutual information between a record D_i and the output M(D), conditioned on the remaining records D_{-i}, is under a certain threshold, where D is randomly picked from any distribution over datasets.
IV-C Controlling the tail distribution of the privacy loss
Some definitions go further than simply considering a worst-case bound on the privacy loss, or averaging it across the distribution. They try to obtain the benefits of approximated DP, with a small bound that holds in most cases, while controlling the behavior of the bad cases better than approximated DP, which allows for catastrophic privacy loss in rare cases.
The first attempt to formalize this idea was proposed in [30], where the authors introduce concentrated DP. In this definition, one parameter bounds the expected privacy loss globally, while another parameter allows some outputs to have a greater privacy loss, still requiring that the excess loss follows a sub-Gaussian distribution. In [31], the authors rename this definition to mean-concentrated DP, and show that it does not verify the post-processing axiom (see Section XI). To fix this, they propose another formalization of this idea, called zero-concentrated DP, which requires that the privacy loss random variable is concentrated around zero.

Definition 5 (zero-concentrated differential privacy [31]).
A mechanism M is (ξ,ρ)-zero-concentrated DP if for all pairs of neighboring datasets D1 and D2 and all α > 1:

  D_α( M(D1) ‖ M(D2) ) ≤ ξ + ρ·α
Four more variants of concentrated DP exist: approximate zero-concentrated DP [31], Collinson-concentrated DP (originally called truncated concentrated DP; we rename it here to avoid a name collision) [29], bounded zero-concentrated DP [31], and truncated concentrated DP [32]. The first takes the Rényi divergence only on events with high enough probability instead of on the full distributions, the second requires all the Rényi divergences to be smaller than a threshold, while the last two require this only for some Rényi divergences.
IV-D Extensions
Most definitions of this section can be seen as bounding the divergence between M(D1) and M(D2), for different possible divergence functions. In [25], the authors use this fact to generalize them and define divergence DP, which takes an arbitrary divergence as a parameter.
Further, approximated and Rényi DP can be extended to use a family of parameters rather than a single pair of parameters. As shown in [33] (Theorem 2), finding the tightest possible family of parameters (for either definition) for a given mechanism is equivalent to specifying the behavior of its privacy loss random variable entirely.
IV-E Multidimensional definitions
Allowing a small probability of error using the same concept as in approximate DP is very common in the extensions and variants of DP proposed in the literature. Unless it creates a particularly notable effect, we do not mention it explicitly.
V Neighborhood definition (N)
The original DP definition considers datasets differing in one record. Thus, the datasets can differ in two possible ways: either they have the same size and differ only on one record, or one is a copy of the other with one extra record. These two options do not protect the same thing: the former protects the value of the records while the latter also protects their presence in the data: together, they protect any property about a single individual.
In many scenarios, it makes sense to protect a different property of the data, e.g., the value of a specific sensitive field, or entire groups of individuals. It is straightforward to adapt DP to protect different sensitive properties: all one has to do is change the definition of neighborhood in the original definition.
V-A Changing the sensitive property
The original definition states that DP should hold for "any datasets D1 and D2 that differ only in one record". Modifying which pairs of datasets are considered to be neighbors is equivalent to changing the protected sensitive property.
In DP, the difference between D1 and D2 is sometimes interpreted as "one record value is different", and sometimes as "one record has been added or removed". In [34], the authors formalize these two options as bounded DP and unbounded DP. They also introduce attribute DP and bit DP, for smaller changes within the differing record.
More restrictive definitions are also possible. Group privacy, implicitly defined in [35], considers datasets that differ not in one record but possibly in several; hence, it protects a fixed number of individuals. The strongest possible variant is considered in [34], where the authors define free lunch privacy, in which the attacker must be unable to distinguish between any two datasets, even if they are completely different. This guarantee is a reformulation of Dalenius' privacy goal [8]; as such, all mechanisms that satisfy free lunch privacy have a near-total lack of utility.
It is also possible to consider correlations between records. In many real-world datasets, the information about one individual is not only contained in their record, but can be indirectly present in other records. In [36], the authors model this via an extra parameter that describes the maximum number of records that the change of one individual can influence. This idea was further developed in dependent DP [37] via dependence relationships, which describe how much the variation in one record can influence the other records. Equivalents to this definition also appear in [38, 39] as correlated DP, and in [40] as Bayesian DP.
Another way to modify the neighborhood definition in DP is to consider that only certain types of information are sensitive. For example, if the attacker learns that their target has cancer, this is more problematic than if they learn that their target does not have cancer. This idea is captured in one-sided DP [41]: the neighbors of a dataset are obtained by replacing a single sensitive record with any other record (sensitive or not). The idea of sensitivity is formalized by a policy, which specifies which records are sensitive.
Note that a similar idea was captured in [42] and in [43]. In [42], the authors adapt DP for graphs via protected DP, which provides privacy for the protected nodes while leaving the targeted nodes unprotected. In [43], the authors define anomaly-restricted DP, which provides DP only for non-anomalous points.
V-B Limiting the scope of the definition
Redefining the neighborhood property can also be used to reduce the scope of the definition. In [44], the authors note that DP requires indistinguishability of results between any pair of neighboring datasets, but in practice, the data custodian has only one dataset they want to protect. Thus, they only require indistinguishability between this dataset and all its neighbors, calling the resulting definition individual DP.
Definition 6 (individual differential privacy [45]).
Given a dataset D, a privacy mechanism M satisfies ε-individual DP if for all datasets D′ that differ from D in at most one record, M(D) ≈_ε M(D′).
This was further restricted in per-instance DP [45], where besides fixing a dataset D, a record is also fixed.
V-C Applying the definition to other types of input
Many adaptations of DP simply change the neighborhood definition to protect different types of input data than datasets.
DP was adapted to graph-structured data in [46, 47, 48, 49, 50, 51], to streaming data in [52, 53, 54, 55], to symbolic control systems in [56], to text vectors in [57], to set operations in [58], to images in [59], to genomic data in [60], to recommendation systems in [61], to location data in [62], to outsourced database systems in [63], to RAMs in [64, 65, 66], and to Private Information Retrieval in [67]. We list the corresponding definitions in the full version of this work.

V-D Extensions
It is natural to generalize the variants of this section to arbitrary neighboring relationships. One example is mentioned in [34], under the name generic DP, where the neighboring relation is entirely captured by an arbitrary relation between datasets.
Other definitions use different formalizations to generalize the concept of changing the neighborhood relationship. Some use pairs of predicates that D1 and D2 must respectively satisfy to be neighbors [68]. Others use private functions, denoted priv, and define neighbors in terms of the value of priv on the datasets [69]. Others, like blowfish privacy [70, 71], use a policy graph specifying which pairs of tuple values must be protected. Others use a distance function between datasets, and define neighbors as datasets at a distance lower than a given threshold; this is the case for DP under a neighborhood, introduced in [72]. This distance can also be defined as the sensitivity of the mechanism, like in sensitivity-induced DP [73], or implicitly defined by a set of constraints, like in induced DP [34].
V-E Multidimensional definitions
Modifying the protected property is orthogonal to modifying the risk model implied by the quantification of privacy loss: it is straightforward to combine these two dimensions. Many definitions mentioned in this section were introduced with a parameter allowing for a small probability of error, or arbitrary bounds on the privacy loss. One particularly general example is adjacency relation divergence DP [74], which combines an arbitrary neighborhood definition (like in generic DP) with an arbitrary divergence function (like in divergence DP).
VI Variation of privacy loss (V)
In DP, the privacy parameter ε is uniform: the level of protection is the same for all protected users or attributes or, equivalently, only the level of risk for the most at-risk user is considered. In practice, some users might require a higher level of protection than others, or a data custodian might want to consider the level of risk across all users rather than only the worst case. Some definitions take this into account by allowing the privacy loss to vary across inputs, either explicitly (by associating each user with an acceptable level of risk) or implicitly (by allowing some users to be at risk, or by averaging the risk across users).
VI-A Varying the privacy level across inputs
In Section V, we saw how changing the definition of the neighborhood allows us to adapt the definition of privacy and protect different aspects of the input data. However, the privacy protection in those variants is binary: either a given property is protected, or it is not. A possible option to generalize this idea further is to allow the privacy level to vary across inputs. This can be seen as adapting DP to the local model, as each client can choose their desired level of privacy.
One natural example is to consider that some users might have higher privacy requirements than others, and to make the privacy parameter ε vary according to which user differs between the two datasets. This is done in personalized DP [75, 76, 77, 78, 79] and heterogeneous DP [80].
Definition 7 (personalized differential privacy [76]).
A privacy mechanism M provides Ψ-personalized DP if for every pair of neighboring datasets D1 and D2 differing in their i-th record, and for all sets of outputs S:

  P[ M(D1) ∈ S ] ≤ e^{Ψ(i)} · P[ M(D2) ∈ S ]

where Ψ is a privacy specification: Ψ maps records to personal privacy preferences, and Ψ(i) denotes the privacy preference of the i-th record.
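As an illustration (our own sketch, not a mechanism from the cited papers), personalized DP can be satisfied in the local model by running randomized response with a per-record budget; the names and the privacy specification below are hypothetical:

```python
import math
import random

def personalized_rr(bits, privacy_spec):
    """Report each user's bit with that user's own randomized-response budget:
    bit i is kept with probability e^{eps_i} / (1 + e^{eps_i})."""
    out = []
    for bit, eps in zip(bits, privacy_spec):
        p_true = math.exp(eps) / (1 + math.exp(eps))
        out.append(bit if random.random() < p_true else 1 - bit)
    return out

def loss_for_record(eps):
    """Worst-case privacy loss when only the record with budget eps changes:
    ln(p_true / (1 - p_true)) = eps, matching that record's personal bound."""
    p = math.exp(eps) / (1 + math.exp(eps))
    return math.log(p / (1 - p))

spec = [0.1, 1.0, 5.0]  # hypothetical per-record privacy preferences (the role of Psi)
for eps in spec:
    assert abs(loss_for_record(eps) - eps) < 1e-9
```

Each record's worst-case loss equals its own budget, so records with a small preference (here 0.1) are heavily randomized while records with a large one (here 5.0) are reported almost truthfully.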
This definition can be seen as a refinement of the intuition behind one-sided DP, which separated records into sensitive and non-sensitive ones. In [81], the authors define tailored DP, which generalizes this further: the privacy level depends on the entire database, not only on the differing record.
This concept can be applied to strengthen or weaken the privacy requirement for a record depending on whether it is an outlier in the database. In [81], the authors formalize this idea and introduce outlier privacy, which tailors an individual's privacy guarantee to their "outlierness". Further refinements, such as simple outlier privacy, simple outlier DP, and staircase outlier privacy, are also introduced; all are instances of tailored DP.

Finally, varying the privacy level across inputs also makes sense in continuous scenarios, where the neighborhood relationship between two datasets is not binary but quantified, like in geo-indistinguishability [82].
VI-B Randomizing the variation of privacy levels
Varying the privacy level across inputs can also be done in a randomized way, by guaranteeing that some random fraction of users have a certain privacy level. One example is proposed in [83] as random DP: the authors note that rather than requiring DP to hold for all possible datasets, it is natural to only consider realistic datasets, and allow "edge-case" datasets not to be protected. This is captured by generating the data randomly, and allowing a small proportion of cases not to satisfy the indistinguishability property.
Definition 8 (random differential privacy [83]).
Let π be a probability distribution on 𝒯, D1 a dataset generated by drawing n i.i.d. records from π, and D2 the same dataset as D1, except one record was changed to a new record drawn from π. A mechanism M is (ε,γ)-random DP if M(D1) ≈_ε M(D2), with probability at least 1−γ on the choice of D1 and D2.
Exactly how this randomness is defined changes in [84] and [85], where the authors introduce predictive DP and model-specific DP, respectively.
This relaxation is similar to approximated DP: there is a small probability that the risk is unbounded. However, this probability is computed across users or datasets, and not across mechanism outputs. Other variants could be defined to average the level of risk across users or datasets. Further, note that data-generating distributions are usually used for other purposes, and that records are not always independent and identically distributed. We come back to these considerations in Section VII.
VI-C Multidimensional definitions
Varying the privacy level across users or randomly limiting the scope of the considered datasets are two possible directions that cannot be captured via the previously mentioned dimensions. It is thus possible to combine them.
VI-C1 Combination with neighborhood definition
In the extensions of the previous section (e.g., generic DP or blowfish privacy), the privacy constraint is the same for all neighboring datasets. Thus, they cannot capture definitions that vary the privacy level across inputs. d-privacy is introduced in [86] to capture both ideas: varying the neighborhood definition and varying the privacy levels across inputs.
Definition 9 (d-privacy [86]).
Let d be a distance function between datasets. A privacy mechanism M satisfies d-privacy if for all pairs of datasets D1 and D2, and all sets of outputs S:

  P[ M(D1) ∈ S ] ≤ e^{d(D1,D2)} · P[ M(D2) ∈ S ]
Equivalent definitions also appeared in [62], and in [74] as extended DP. Several other definitions, such as weighted DP [87], smooth DP [25], and earth mover's privacy [88], can be seen as instantiations of d-privacy for specific distance functions d.
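As a sketch of how such a definition can be satisfied (our example, not taken from the cited papers): adding Laplace noise with scale 1/ε to a real-valued input satisfies d-privacy for the scaled metric d(x1, x2) = ε·|x1 − x2|, which is essentially one-dimensional geo-indistinguishability. The check below verifies the density-ratio bound numerically on a grid of outputs.

```python
import math

def laplace_density(o: float, x: float, eps: float) -> float:
    """Density at output o of the mechanism M(x) = x + Lap(1/eps)."""
    return (eps / 2) * math.exp(-eps * abs(o - x))

# For any two inputs x1, x2 and any output o, the density ratio is bounded by
# e^{eps * |x1 - x2|}: this follows from the triangle inequality on |o - x|.
eps = 0.5
for x1, x2 in [(0.0, 1.0), (2.0, -3.0), (1.5, 1.6)]:
    bound = math.exp(eps * abs(x1 - x2))
    for k in range(81):
        o = -10 + 0.25 * k
        r = laplace_density(o, x1, eps) / laplace_density(o, x2, eps)
        assert r <= bound + 1e-12 and 1 / r <= bound + 1e-12
```

Note how the guarantee degrades gracefully with the distance between inputs: nearby inputs (1.5 and 1.6) are almost perfectly indistinguishable, while distant ones (2.0 and -3.0) are only weakly protected.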
VI-C2 Combination with quantification of privacy loss
The idea of varying the privacy parameters depending on the input is also compatible with risk models other than worst-case quantification. For example, in [93], the author proposes endogenous DP, which is a combination of approximated DP and personalized DP. Similarly, extended divergence DP, defined in [74], combines extended DP with divergence DP.
Randomly limiting the scope of the definition can also be combined with ideas from the previous sections. For example, in [94], the authors introduce typical privacy, which combines random DP with approximated DP. In [95], the authors introduce on-average KL privacy, which uses the KL divergence as its quantification metric, but only requires the property to hold for an "average dataset", like random DP.
In [96], the authors introduce general DP (originally called generic DP; we rename it here to avoid a name collision), which goes further and generalizes the intuition behind generic DP by abstracting the indistinguishability condition entirely: the privacy relation generalizes the neighborhood relation, and the privacy predicate generalizes indistinguishability to arbitrary functions.
This definition was further extended via abstract DP; however, that definition does not satisfy the privacy axioms (see Section XI).
VII Background knowledge (B)
In DP, the attacker is implicitly assumed to have full knowledge of the dataset: their only uncertainty is whether the target belongs to the dataset or not. This implicit assumption is also present in the definitions of the previous dimensions: indeed, the attacker has to distinguish between two fixed datasets D1 and D2. The only source of randomness in the indistinguishability formula comes from the mechanism itself. In many cases, this assumption is unrealistic, and it is natural to consider weaker adversaries who do not have full background knowledge. One of the main motivations to do so is to use significantly less noise in the mechanism [97].
The typical way to represent this uncertainty formally is to assume that the input data comes from a certain probability distribution (named “data evolution scenario” in [68]): the randomness of this distribution models the attacker’s uncertainty. Using a probability distribution to generate the input data means that the indistinguishability property cannot be expressed between two fixed datasets. Instead, one natural way to express it is to condition this distribution on some sensitive property such as in noiseless privacy [97, 98].
Definition 10 (noiseless privacy [97, 98]).
Given a family Θ of probability distributions on 𝒟, a mechanism M is (Θ,ε)-noiseless private if for all θ ∈ Θ, all indices i, and all possible records t and t′:

  ( M(D) | D ∼ θ, D_i = t )  ≈_ε  ( M(D) | D ∼ θ, D_i = t′ )
In [69], the authors argue that in the presence of correlations in the data, noiseless privacy can be too strong, and prevent the attacker from learning global properties of the data. To fix this problem, they proposed distributional DP, an alternative definition that only considers the influence of one user once the database has already been randomly picked from the datagenerating distribution. In this definition, one record is changed after the dataset has been generated, so it does not affect other records through dependence relationships.
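To illustrate the kind of analysis behind noiseless privacy (our own sketch, not an example from the cited papers): consider releasing the exact sum of n uniformly random bits. For most outputs, the privacy loss about any single record is tiny, which is why no noise may be needed; but extreme outputs leak up to ln(n−1), showing why such definitions are often combined with a small probability of error.

```python
import math

def sum_query_loss(n: int, s: int) -> float:
    """Privacy loss at output s of the noise-free sum of n uniform random bits,
    comparing the output distributions conditioned on D_i = 1 vs D_i = 0:
      P[S = s | D_i = 1] = C(n-1, s-1) / 2^(n-1)
      P[S = s | D_i = 0] = C(n-1, s)   / 2^(n-1)"""
    return math.log(math.comb(n - 1, s - 1) / math.comb(n - 1, s))

n = 100
# Typical outputs reveal almost nothing about record i...
assert abs(sum_query_loss(n, n // 2)) < 0.05
# ...but the worst-case output (everyone answered 1) leaks ln(n-1).
worst = max(abs(sum_query_loss(n, s)) for s in range(1, n))
assert abs(worst - math.log(n - 1)) < 1e-9
```

The loss simplifies to ln(s / (n − s)): it is zero for balanced outputs and grows logarithmically in n at the extremes, so the attacker's uncertainty about the other records replaces noise only for typical outputs.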
VII-A Multidimensional definitions
Limiting the background knowledge of an attacker is orthogonal to the dimensions introduced previously: one can modify the risk model, introduce different neighborhood definitions, or even vary the privacy parameters across the protected properties, and change the attacker's background knowledge as well.
VII-A1 Combinations with quantification of privacy loss
When modeling the attacker’s background knowledge, two options are possible: either consider the background knowledge as additional information given to the attacker, or let the attacker influence the background knowledge. This distinction, outlined in [99], corresponds to the distinction between an active and a passive attacker. The authors show that this distinction does not matter if only the worst-case scenario is considered, as in noiseless privacy. However, under different risk models, such as allowing a small probability of error, the two options lead to different definitions.
The first definition, active partial knowledge DP, quantifies over all possible values of the background knowledge. It was introduced in [98, 69] and reformulated in [99] to clarify that it implicitly assumes an active attacker.
The second definition, passive partial knowledge DP [99], is strictly weaker: it models a passive attacker, who cannot choose their background knowledge, and thus cannot influence the data.
VII-A2 Combinations with neighborhood definition
In both noiseless privacy and distributional DP, the two possibilities between which the adversary must distinguish are similar to bounded DP. Of course, other variants are possible: limiting background knowledge is orthogonal to choosing which properties to hide from the attacker. This is done in pufferfish privacy [68], which extends the concept of neighboring datasets to neighboring distributions of datasets.
Definition 11 (pufferfish privacy [68]).
Given a family Θ of probability distributions on the set of possible datasets, and a family Φ of pairs of predicates on datasets, a mechanism M verifies ε-pufferfish privacy if for all distributions θ ∈ Θ, all pairs of predicates (φ₁, φ₂) ∈ Φ, and all outputs O:

P[M(D) = O | φ₁(D), D ∼ θ] ≤ e^ε · P[M(D) = O | φ₂(D), D ∼ θ]
Pufferfish privacy starts with a set of data-generating distributions, then conditions them on sensitive attributes. This notion extends noiseless privacy, as well as other definitions like Bayesian DP [100], in which neighboring records only have a fraction of elements in common, and some are generated randomly.
It is possible to generalize this further by comparing pairs of distributions directly: in [101, 74], the authors define distribution privacy for that purpose. Further relaxations from [74] are probabilistic distribution privacy (combination of distribution privacy and probabilistic DP), extended distribution privacy (combination of distribution privacy and extended DP), divergent distribution privacy (combination of distribution privacy and divergent DP) and extended divergent distribution privacy (combination of the latter two options).
VIII Definition of privacy loss (D)
ε-indistinguishability compares the distributions of outputs given two neighboring inputs. This is not the only way to capture the idea that a Bayesian attacker should not be able to gain too much information on the dataset, and other formalisms have been proposed. These formalisms model the attacker explicitly, by formalizing their prior belief as a probability distribution over all possible datasets.
This change in formalism can be done in two distinct ways. Some variants consider a specific prior (or family of possible priors) of the attacker, implicitly assuming a limited background knowledge, as in Section VII. We show that these variants can be interpreted as changing the prior–posterior bounds of the attacker. Another possibility is to compare two posteriors, quantifying over all possible priors. In practice, these definitions are mostly useful in that comparing them to DP leads to a better understanding of the guarantees that DP provides.
VIII-A Changing the shape of the prior–posterior bounds
DP can be interpreted as giving a bound on the posterior of a Bayesian attacker as a function of their prior. This is exactly the case in indistinguishable privacy, an equivalent reformulation of DP defined in [102]: suppose that the attacker is trying to distinguish between two options D₁ and D₂, where D₁ corresponds to the option “the true dataset is D₁” and D₂ to “the true dataset is D₂”. Initially, they associate a certain prior probability P[D = D₁] to the first option. When they observe the output O of the algorithm, this becomes the posterior probability P[D = D₁ | M(D) = O]. From Definition 2, we have:

P[D = D₁ | M(D) = O] / P[D = D₂ | M(D) = O] ≤ e^ε · (P[D = D₁] / P[D = D₂])

and thus,

P[D = D₁ | M(D) = O] ≤ e^ε · P[D = D₁] / (1 + (e^ε − 1) · P[D = D₁]).
A similar, symmetric lower bound can be obtained. Hence, DP can be interpreted as bounding the posterior level of certainty of a Bayesian attacker as a function of its prior. We visualize these bounds in the top left side of Figure 2.
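These bounds are straightforward to compute. The sketch below (our own illustration of the corollary above, not code from [102]) evaluates the maximal and minimal posterior of a Bayesian attacker facing an ε-DP mechanism, as a function of their prior:

```python
from math import exp, log

def max_posterior(prior, eps):
    """Upper bound on the attacker's posterior P[D = D1 | M(D) = O]
    under eps-DP, from the odds bound: posterior odds <= e^eps * prior odds."""
    return exp(eps) * prior / (1 + (exp(eps) - 1) * prior)

def min_posterior(prior, eps):
    """Symmetric lower bound on the posterior."""
    return exp(-eps) * prior / (1 + (exp(-eps) - 1) * prior)

print(max_posterior(0.5, log(3)), min_posterior(0.5, log(3)))
```

For ε = ln 3, a uniform prior of 0.5 can move to at most 0.75 and at least 0.25; as ε → 0, both bounds collapse to the prior, meaning the attacker learns nothing.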
Some variants of DP use this idea in their formalism, rather than obtaining this as a corollary to the classical DP definition. For example, positive membership privacy [103] requires that the posterior does not increase too much compared to the prior. Like noiseless privacy [98], it assumes an attacker with limited background knowledge.
Definition 12 (positive membership privacy [103]).
A privacy mechanism M provides ε-positive membership privacy under a family of distributions Θ if for any distribution θ ∈ Θ, any record t, and any possible output O:

P[t ∈ D | M(D) = O] ≤ e^ε · P[t ∈ D]   and   P[t ∉ D | M(D) = O] ≥ e^{−ε} · P[t ∉ D]

where the probabilities are taken over D ∼ θ and the randomness of M.
Note that this definition is asymmetric: the posterior is bounded from above, but not from below. It is visualized in the top right part of Figure 2. In the same paper, the authors also define negative membership privacy, which provides the symmetric lower bound, and membership privacy, which is the conjunction of the two. They show that this definition can represent DP as well as other definitions like differential identifiability [104] and sampling DP [105, 106], which we mention in Section XII.
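The shape of this asymmetric bound is easy to trace numerically. In the sketch below (our own illustration), the two defining relations each give an upper bound on the posterior, and the effective bound is their minimum, producing the piecewise curve visualized in Figure 2:

```python
from math import exp

def pmp_posterior_bound(prior, eps):
    """Upper bound on P[t in D | output] under eps-positive membership
    privacy: the tighter of the two defining relations."""
    direct = exp(eps) * prior                 # from P[t in D | O] <= e^eps * P[t in D]
    complement = 1 - exp(-eps) * (1 - prior)  # from the bound on P[t not in D | O]
    return min(direct, complement)

for prior in [0.1, 0.5, 0.9]:
    print(prior, pmp_posterior_bound(prior, 0.7))
```

For small priors the first relation is the binding one; for large priors the second one takes over, so the posterior never reaches 1 as long as ε is finite.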
A previous attempt at formalizing the same idea was presented in [107] as adversarial privacy. This definition is similar to positive membership privacy, except that only the first relation is used, and there is a small additive term δ, as in approximated DP. We visualize the corresponding bounds on the bottom left of Figure 2.
VIII-B Comparing two posteriors
In [108], the authors propose an approach that captures an intuitive idea proposed by Dwork in [7]: “any conclusions drawn from the output of a private algorithm must be similar whether or not an individual’s data is present in the input”. They define semantic privacy: instead of comparing the posterior with the prior belief as in DP, this definition bounds the difference between two posterior belief distributions, depending on which database was secretly chosen. The distance chosen to capture the idea that these two posterior belief distributions are close is the statistical distance. One important difference with the definitions of the previous subsection is that semantic privacy quantifies over all possible priors: as in DP, the attacker is assumed to have arbitrary background knowledge.
Definition 13 (semantic privacy [109, 108]).
A mechanism M is ε-semantically private if for any distribution θ over datasets, any index i, any record value t, and any set of datasets S:

| P[D ∈ S | M(D) = O, D ∼ θ] − P[D ∈ S | M(D_{i→t}) = O, D ∼ θ] | ≤ ε

where D_{i→t} denotes D with its i-th record replaced by t, and O is chosen randomly from M(D).
Another definition with seemingly the same approach is proposed in [110] under the name a posteriori DP; however, this definition does not make the prior explicit.
VIII-C Multidimensional definitions
Definitions that limit the background knowledge of the adversary explicitly formulate it as a probability distribution. As such, they are natural candidates for Bayesian reformulations. In [111], authors introduce identity DP, which is an equivalent Bayesian reformulation of noiseless privacy.
Another example is inference-based distributional DP [69], which relates to distributional DP the same way as noiseless privacy and its a posteriori version: they are the same if δ = 0, but the equivalence breaks when a small additive error δ is introduced into the definitions, in which case the inference-based and a posteriori versions become weaker [69].
Further, it is possible to modify the neighborhood definition. In [112], the authors introduce information privacy, which can be seen as a posteriori noiseless privacy combined with free lunch privacy: rather than considering the knowledge gain of the adversary about one particular user, it considers their knowledge gain about any possible value of the database.
IX Relativization of the knowledge gain (R)
In classical DP, the attacker cannot increase their knowledge about an individual by more than a certain amount. In the context of highly correlated datasets, this might not be enough: data about someone’s friends might reveal sensitive information about this person. Changing the definition of the neighborhood is one possibility (see Section V-A), but a more robust option is to impose that the information released does not contain more information than the result of some predefined algorithms run on the data without the individual in question. The method for formalizing this intuition borrows ideas from zero-knowledge proofs [113].
In a privacy context, instead of imposing that the result of the mechanism is roughly the same on neighboring datasets D₁ and D₂, it is possible to impose that the result of the mechanism on D₁ can be simulated using only some information about D₂. The corresponding definition, called zero-knowledge privacy and introduced in [114], captures the idea that the mechanism does not leak more information on a given target than a certain class of aggregate metrics (called the model of aggregate information).
Definition 14 (zero-knowledge privacy [114]).
Let Agg be a family of (possibly randomized) algorithms. A privacy mechanism M is ε-zero-knowledge private with respect to Agg if there exists an algorithm agg ∈ Agg and a simulator Sim such that for all pairs of neighboring datasets D₁ and D₂, M(D₁) and Sim(agg(D₂)) are ε-indistinguishable.
IX-A Multidimensional definitions
Using a simulator allows making statements of the type “this mechanism does not leak more information on a given target than a certain class of aggregate metrics”. Similarly to pufferfish privacy, we can vary the neighborhood definition (to protect other types of information than the presence and characteristics of individuals), and explicitly limit the attacker’s background knowledge using a probability distribution. This is done in [69] as coupled-worlds privacy, a generalization of distributional DP, where a family of functions priv represents the protected attribute.
Definition 15 (coupled-worlds privacy [69]).
Let Δ be a family of pairs of functions (priv, pub). A mechanism M satisfies Δ-coupled-worlds privacy if there is a simulator Sim such that for all distributions θ ∈ Θ, all pairs (priv, pub) ∈ Δ, and all possible values t of priv(D):

P[M(D) = O | priv(D) = t, D ∼ θ] ≤ e^ε · P[Sim(pub(D)) = O | priv(D) = t, D ∼ θ]

for all outputs O.
This definition is a good example of the possibility of combining variants from different dimensions: it includes variants from N, B and R, and it can be further combined with Q by using (ε, δ)-indistinguishability, and with D by a Bayesian reformulation. This is done explicitly in inference-based coupled-worlds privacy [69].
X Computational Power (C)
The indistinguishability property in DP is information-theoretic: the attacker is implicitly assumed to have infinite computing power. This is unrealistic in practice, so it is natural to consider definitions where the attacker only has polynomial computing power. Changing this assumption leads to weaker privacy definitions. Two approaches have been proposed to formalize this idea: either model the distinguisher explicitly as a polynomial Turing machine, or allow a mechanism not to be DP, as long as one cannot distinguish it from a truly DP one. In [115], the authors introduced both options. The definition modeling the attacker explicitly as a Turing machine is indistinguishability-based computational DP (IndCDP). One instantiation of this is output-constrained DP, presented in [116]: the definition is adapted to a two-party computation setting, where each party has their own set of privacy parameters.
Definition 16 (IndCDP [115]).
A family of privacy mechanisms (M_κ), indexed by a security parameter κ, provides ε_κ-IndCDP if there exists a negligible function neg such that for all non-uniform probabilistic polynomial-time Turing machines A (the distinguisher), all polynomials p, all sufficiently large κ, and all pairs of datasets D₁ and D₂ of size at most p(κ) that differ in only one record, we have:

P[A(M_κ(D₁)) = 1] ≤ e^{ε_κ} · P[A(M_κ(D₂)) = 1] + neg(κ)
where neg is a function that converges to zero asymptotically faster than the reciprocal of any polynomial.
The definition requiring the mechanism to “look like” a DP mechanism to a computationally bounded distinguisher is simulation-based computational DP (SimCDP).
X-A Multidimensional definitions
Some DP variants that explicitly model an adversary with a simulator can easily be adapted to model a computationally bounded adversary, simply by imposing that the simulator must be polynomial. This is done explicitly in [114], where the authors define computational zero-knowledge privacy; the same adaptation could also be applied to, e.g., the two coupled-worlds privacy definitions.
Modeling a computationally bounded adversary is orthogonal to changing the type of input data, as well as to considering an adversary with limited background knowledge: in [117], the authors define differential indistinguishability, which prevents a polynomial adversary from distinguishing between two Turing machines with random input.
XI Summary
In Sections IV, V, VI, VII, VIII, IX and X, we categorized and listed most DP variants and extensions proposed over the past 14 years. In this section, we present properties of privacy definitions that are typically considered desirable. Then, for each definition, we note whether it satisfies said properties, and compare it with other notions. For this purpose, throughout this section, we treat privacy parameters as tuples, so that a single symbol can encode multiple parameters.
Two important properties of a privacy notion are called privacy axioms. They were proposed in [96, 13]. These axioms are consistency checks: properties that, if not satisfied by a privacy definition, indicate a flaw in the definition.
Definition 17 (Privacy axioms [96, 13]).
The two privacy axioms are as follows.

Post-processing (or Transformation Invariance): If a privacy definition Def satisfies the post-processing axiom, then if a mechanism M satisfies Def, the mechanism f ∘ M also satisfies Def for any (possibly randomized) function f.

Convexity (or Privacy Axiom of Choice): If a privacy definition Def satisfies the convexity axiom, then if two mechanisms M₁ and M₂ satisfy Def, the mechanism M defined by M(D) = M₁(D) with some fixed probability p and M(D) = M₂(D) with probability 1 − p also satisfies Def.
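As a small sanity check of the convexity axiom for ε-DP (our own illustrative sketch, not from [96, 13]), consider randomized response mechanisms that flip an input bit with probability q: such a mechanism satisfies ε-DP whenever (1 − q)/q ≤ e^ε, and mixing two of them yields another randomized response mechanism whose flip probability is the corresponding convex combination:

```python
from math import log

def rr_epsilon(flip_prob):
    """Effective epsilon of randomized response flipping the input bit
    with probability flip_prob (0 < flip_prob <= 0.5)."""
    return log((1 - flip_prob) / flip_prob)

q1, q2 = 0.25, 0.40            # both mechanisms satisfy eps = ln(3)
p = 0.7                        # the combined mechanism runs M1 with probability p
q_mix = p * q1 + (1 - p) * q2  # flip probability of the mixture

print(rr_epsilon(q1), rr_epsilon(q2), rr_epsilon(q_mix))
```

Here rr_epsilon(q_mix) ≈ 0.87 ≤ ln 3 ≈ 1.10, so the mixture also satisfies ε-DP; this checks a single instance, and is of course not a proof of the axiom.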
The third property often studied for new DP notions is composability. It guarantees that the combined output of two mechanisms satisfying a privacy definition stays private, typically with a change in parameters. There are several types of composition: parallel composition (where the mechanisms are applied to disjoint subsets of a larger dataset), sequential composition (where the mechanisms are applied to the entire dataset), and adaptive composition (where each mechanism has access to the entire dataset and the output of the previous mechanisms). In the following, we only consider sequential composition.
Definition 18 (Composability).
If a privacy definition Def with parameter α is composable, then if two mechanisms M₁ and M₂ satisfy α₁-Def and α₂-Def respectively, the mechanism M defined by M(D) = (M₁(D), M₂(D)) satisfies α-Def for some (non-trivial) α.
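For the canonical case of ε-DP with the Laplace mechanism, sequential composition can be checked numerically. The sketch below (our own illustration) bounds the worst-case joint output density ratio of two Laplace mechanisms, run on a counting query over neighboring datasets, by e^(ε₁+ε₂):

```python
from math import exp

def laplace_pdf(x, mu, eps):
    """Density at x of a Laplace mechanism with sensitivity 1:
    scale 1/eps, centered on the true answer mu."""
    return (eps / 2) * exp(-eps * abs(x - mu))

eps1, eps2 = 0.5, 0.3
c, c_neighbor = 10, 11  # true counts on two neighboring datasets

# Worst-case ratio of joint output densities over a grid of output pairs:
worst = max(
    (laplace_pdf(x, c, eps1) * laplace_pdf(y, c, eps2)) /
    (laplace_pdf(x, c_neighbor, eps1) * laplace_pdf(y, c_neighbor, eps2))
    for x in range(0, 22) for y in range(0, 22)
)
print(worst <= exp(eps1 + eps2) + 1e-9)
```

The maximum equals e^0.8 exactly (attained when both outputs fall on the far side of the true count), so the combined release (M₁(D), M₂(D)) is (ε₁ + ε₂)-DP, as guaranteed by sequential composition.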
XI-A Relations
When learning about a new privacy notion, it is often useful to know how it relates to other definitions. However, definitions have parameters that often have different meanings, and whose values are not directly comparable. To claim that a definition is stronger than another, we adopt the concept of ordering established in [26].
Definition 19 (Relative strength of privacy definitions [26]).
Let α-Def₁ and β-Def₂ be two privacy definitions with respective parameters α and β. We say that Def₁ is stronger than Def₂ (or that Def₂ is weaker than Def₁) if:

for all α, there is a β such that α-Def₁ implies β-Def₂;

for all β, there is an α such that α-Def₁ implies β-Def₂.

If Def₁ is both stronger than and weaker than Def₂, we say that the two definitions are equivalent.
Relative strength induces a partial order on the space of possible definitions. It is useful to classify variants, but does not capture extensions well. Thus, we introduce a second notion to represent when a definition can be seen as a special case of another.
Definition 20 (Extensions).
Let α-Def₁ and β-Def₂ be privacy definitions with respective parameters α and β. We say that Def₁ is extended by Def₂ if for all α, there is a value β of the parameter such that α-Def₁ is the same as β-Def₂ (by “the same”, we mean that a mechanism satisfies α-Def₁ iff it satisfies β-Def₂).
Note that both relations are transitive: if Def₁ is stronger than Def₂ and Def₂ is stronger than Def₃, then Def₁ is stronger than Def₃; similarly, if Def₂ extends Def₁ and Def₃ extends Def₂, then Def₃ extends Def₁.
For brevity, we combine the two previous concepts into a single notion: if Def₂ extends Def₁ and is stronger (resp. weaker) than Def₁, we say that Def₂ is a stronger (resp. weaker) extension of Def₁.
A summary of the main DP variants and extensions is presented in Table IV. Each definition is associated with the dimensions it belongs to. We also specify whether it satisfies the privacy axioms and whether it is composable (yes: ✓, no: ✗, unknown: ?), providing a reference or a novel proof for each property. Finally, we indicate known relations with other definitions; these are always either explained in the corresponding section or proven in the original reference of the definition.
XII Related work
In this section, we mention existing surveys in the field of data privacy, as well as variants and extensions which we did not include in our work.
In [118], the authors detail the possible interpretations of DP and establish two views: associative and causal. In [119], these views are further developed, and the relationship between privacy and non-discrimination is studied.
XII-A Surveys
Some of the earliest surveys focusing on DP were written by Dwork [35, 120], summarizing algorithms achieving DP and their applications. The more detailed privacy book [18] presents an in-depth discussion of the meaning of DP, fundamental techniques for achieving it, and applications of these techniques to query-release mechanisms and other models such as distributed datasets and computations on data streams.
In [121], the authors classify different privacy enhancing technologies (PETs) into 7 complementary dimensions. Indistinguishability falls into the Aim dimension; however, within this category only anonymity and oblivious transfer are considered.
In [122], the authors survey privacy concerns, measurements and techniques used in the field of online social networks and recommender systems. They classify privacy into 5 categories; DP falls into Privacypreserving models.
In [123], the authors classify 80+ privacy metrics into 8 categories based on the output of the privacy mechanism. One of their classes is Indistinguishability, which contains DP as well as several variants. Some variants are classified into other categories; for example, Rényi DP is classified into Uncertainty, and mutual-information DP into Information gain/loss. The authors list 8 different DP variants; our taxonomy can be seen as an extension of the contents of their work (and in particular of the Indistinguishability category).
XII-B Out of scope definitions
We considered certain DPrelated privacy definitions to be out of scope for our work.
XII-B1 Varying the context in which to apply DP
Within this paper we focus on DP variants/extensions typically used in the global setting, in which a central entity has access to the entire dataset. It is also possible to use DP in other contexts, without formally changing the definition. Several options are listed below.
Local DP [124] protects each user’s data from a central aggregator. This idea was proposed in [125] as distributed DP, where authors additionally assume that only a portion of participants is honest.
Joint DP [126] models a game in which each player cannot learn the data of any other player. In multiparty DP [127], the view of each subgroup of players is differentially private with respect to the other players’ inputs.
DP in the shuffled model [128] falls inbetween the global and the local model.
Some variants introduced in this work were also considered in the local setting: localized information privacy [129] (local version of information privacy), restricted local DP [130] (local version of one-sided DP), personalized local DP [131] (local version of personalized DP), and local DP [132] (local version of privacy).
XII-B2 Syntactic definitions
Besides the syntactic definitions mentioned in the introduction, some definitions do not provide a clear privacy guarantee or are only used as a tool in order to prove links between existing definitions. As such, we did not include them in our survey.
Examples include privacy [133] (the first attempt at formalizing an adversary with restricted background knowledge, whose formulation did not have the same interpretation as noiseless privacy), differential identifiability [104] (which bounds the probability that a given individual’s information is included in the input dataset, but does not measure the change in probabilities between the two alternatives), crowd-blending privacy [134] (which combines DP with k-anonymity), sampling DP [105, 106] (which requires that the mechanism satisfies DP after an initial random sampling of the database) and anonymity [135] (which performs anonymization on a subset of the quasi-identifiers and then applies DP on the remaining quasi-identifiers, with different settings for each equivalence class).
XII-B3 Variants of sensitivity
A crucial technical tool used when designing DP mechanisms is the sensitivity of the function computed by the mechanism. There are many variants of the initial concept of global sensitivity [16], including local sensitivity [136], smooth sensitivity [136], restricted sensitivity [137], empirical sensitivity [138], recommendation-aware sensitivity [139], record and correlated sensitivity [140], dependence sensitivity [37], per-instance sensitivity [45], individual sensitivity [141], elastic sensitivity [142] and derivative sensitivity [143]. We did not consider these notions, as they do not modify the privacy definition itself. We list the corresponding definitions in the full version of this work.
XIII Conclusion
We classified differential privacy variants and extensions into 7 categories using the concept of dimensions. When possible, we compared definitions from the same dimension, and we showed that definitions from different dimensions can be combined to form new, meaningful definitions. In theory, even if there were only three possible choices for each dimension (original, weaker, stronger), this would result in 3⁷ = 2187 possible definitions. Hence, our survey, with its 100+ different definitions, only scratches the surface of the space of possible notions.
Besides introducing these dimensions, we unified and simplified the different notions proposed in the literature. We highlighted their properties, such as composability and whether they satisfy the privacy axioms, by either collecting existing results or creating new proofs. Additionally, we showed their relations to one another.
The novel proofs referenced in Table IV are given in Propositions 1–15, covering the post-processing, composition, and convexity properties. Abbreviations used for dimensions:

Q: Quantification of privacy loss

N: Neighborhood definition

V: Variation of privacy loss

B: Background knowledge

D: Definition of privacy loss

R: Relativization of knowledge gain

C: Computational power
A proof of a less generic result (for a certain family of functions Agg) appears in [114]. This claim appears in [27], although the proof is said to appear in the full version, not yet published.
Name & references | Dimensions | P.P. | Cv. | Cp. | Relations
approximated DP [20] | Q | ✓ | ✓ | ✓ | DP DP
Probabilistic DP [21, 22, 23] | Q | ✗ | ✗ | ✓ | DP Pro DP DP
Kullback–Leibler Pr [25, 26] | Q | ✓ | ✓ | ✓ | DP KL Pr DP
Rényi DP [27] | Q | ✓ | ✓ | ✓ | KL Pr Rényi DP DP
mutual-information DP [26] | Q | ✓ | ✓ | ✓ | KL Pr MI DP DP
mean concentrated DP [30] | Q | ✗ | ? | ✓ | DP mCo DP DP
zero concentrated DP [31] | Q | ✓ | ✓ | ✓ | zCo DP mCo DP
approximate CoDP [31] | Q | ✗ | ? | ✓ | DP ACo DP zCo DP
bounded CoDP [31] | Q | ✓ | ✓ | ✓ | bCo DP zCo DP
truncated CoDP [32] | Q | ✓ | ✓ | ✓ | tCo DP bCo DP
divergence DP [25] | Q | ✓ | ✓ | ? | Div DP DP
group DP [35] | N | ✓ | ✓ | ✓ | Gr DP DP
free lunch Pr [34] | N | ✓ | ✓ | ✓ | Gr DP FL Pr
unbounded DP [34] | N | ✓ | ✓ | ✓ | Gr DP uBo DP DP
bounded/attribute/bit DP [34] | N | ✓ | ✓ | ✓ | DP Bo DP Att DP Bit DP
DP under correlation [36] | N | ✓ | ✓ | ✓ | DPuC DP
dependent DP [37] | N | ✓ | ✓ | ✓ | Dep DP DPuC
one-sided DP [41] | N | ✓ | ✓ | ✓ | OnS DP DP
individual DP [44] | N | ✓ | ✓ | ✓ | Ind DP DP
per-instance DP [45] | N | ✓ | ✓ | ✓ | PI DP Ind DP
generic DP [34] | N | ✓ | ✓ | ✓ | Gc DP DP
blowfish Pr [70, 71] | N | ✓ | ✓ | ✓ | BF Pr DP
personalized DP [76, 77, 78, 79] | V | ✓ | ✓ | ✓ | Per DP DP
tailored DP [81] | V | ✓ | ✓ | ✓ | Tai DP Per DP
outlier DP [81] | V | ✓ | ✓ | ✓ | Out DP Tai DP
random DP [83] | V | ✓ | ✗ | ✓ | Ran DP DP
Pr [86] | N,V | ✓ | ✓ | ✓ | Pr DP
distributional Pr [92] | N,V | ? | ? | ? | Dist Pr DP
endogenous DP [93] | Q,V | ✓ | ✓ | ✓ | DP End DP Per DP
typical Pr [94] | Q,V | ✓ | ✗ | ✓ | DP Typ Pr Ran DP
on average KL Pr [95] | Q,V | ? | ? | ? | KL Pr avgKL Pr Ran DP
extended divergent DP [74] | Q,N,V | ✓ | ✓ | ? | Pr EDiv DP Div DP
general DP [96] | Q,N,V | ✓ | ✓ | ? | Gl DP DP
noiseless Pr [97, 98] | B | ✓ | ✓ | ✗ | N Pr DP
distributional DP [69] | B | ✓ | ✓ | ✗ | Dist DP DP
active PK DP [98, 99] | Q,B | ✓ | ✓ | ✗ | APK DP N Pr
passive PK DP [99] | Q,B | | | ✗ | APK DP PPK DP DP
pufferfish Pr [68] | N,B | ✓ | ✓ | ✗ | Gc DP PF Pr N Pr
distribution Pr [101, 74] | N,B | ✓ | ✓ | ✗ | Dist Pr PF Pr
extended DPr [74] | N,V,B | ✓ | ✓ | ✗ | Pr EDist Pr Dist Pr
divergent DPr [74] | Q,N,B | ✓ | ✓ | ✗ | DP DivDist Pr Dist Pr
ext. div. DPr [74] | Q,N,V,B | ✓ | ✓ | ✗ | DivDist Pr EDivDist Pr EDist Pr
positive membership Pr [103] | D | ✓ | ✓ | ✗ | PM Pr DP
adversarial Pr [107] | D | ✓ | ✓ | ✗ | Adv Pr DP
semantic Pr [109, 108] | D | ? | ? | ? | Sem Pr DP
a posteriori noiseless Pr [98] | B,D | ✓ | ✓ | ? | AN Pr N Pr
inference-based dist. DP [69] | B,D | ? | ? | ? | IBD DP Dist DP
information Pr [112] | N,D | ✓ | ✓ | | AN Pr Inf Pr FL Pr
zero-knowledge Pr [114] | R | ✓ | ✓ | ? | ZK Pr DP
coupled-worlds Pr [69] | N,B,R | ✓ | ✓ | ✗ | CW Pr Dist DP
inference-based CW Pr [69] | Q,N,B,D,R | ? | ? | ✗ | IBCW Pr CW Pr
SIM-computational DP [115] | C | ✓ | ✓ | ✓ | Sim CDP DP
IND-computational DP [115] | C | ✓ | ✓ | ✓ | Ind CDP Sim CDP
computational ZK Pr [114] | R,C | ✓ | ✓ | ? | CZK Pr ZK Pr
References
 [1] P. Samarati and L. Sweeney, “Protecting privacy when disclosing information: k-anonymity and its enforcement through generalization and suppression,” technical report, SRI International, Tech. Rep., 1998.
 [2] L. Sweeney, “k-anonymity: A model for protecting privacy,” International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems, vol. 10, no. 05, pp. 557–570, 2002.
 [3] A. Machanavajjhala, J. Gehrke, D. Kifer, and M. Venkitasubramaniam, “ℓ-diversity: Privacy beyond k-anonymity,” in Data Engineering, 2006. ICDE’06. Proceedings of the 22nd International Conference on. IEEE, 2006, pp. 24–24.
 [4] N. Li, T. Li, and S. Venkatasubramanian, “t-closeness: Privacy beyond k-anonymity and ℓ-diversity,” in Data Engineering, 2007. ICDE 2007. IEEE 23rd International Conference on. IEEE, 2007, pp. 106–115.
 [5] K. Stokes and V. Torra, “n-confusion: a generalization of k-anonymity,” in Proceedings of the 2012 Joint EDBT/ICDT Workshops. ACM, 2012, pp. 211–215.
 [6] C. Dwork and F. McSherry, “Differential data privacy,” U.S. Patent US7 698 250B2, 2005.
 [7] C. Dwork, “Differential privacy,” in Proceedings of the 33rd international conference on Automata, Languages and Programming. ACM, 2006, pp. 1–12.
 [8] T. Dalenius, “Towards a methodology for statistical disclosure control,” statistik Tidskrift, vol. 15, no. 429444, pp. 2–1, 1977.
 [9] S. Goldwasser and S. Micali, “Probabilistic encryption,” Journal of Computer and System Sciences, vol. 28, no. 2, pp. 270–299, 1984.
 [10] Ú. Erlingsson, V. Pihur, and A. Korolova, “RAPPOR: Randomized aggregatable privacypreserving ordinal response,” in Proceedings of the 2014 ACM SIGSAC conference on computer and communications security. ACM, 2014, pp. 1054–1067.
 [11] Differential Privacy Team, “Learning with privacy at scale,” Apple, 2017.
 [12] B. Ding, J. Kulkarni, and S. Yekhanin, “Collecting telemetry data privately,” in Advances in Neural Information Processing Systems, 2017, pp. 3571–3580.
 [13] D. Kifer and B.R. Lin, “An axiomatic view of statistical privacy and utility,” Journal of Privacy and Confidentiality, vol. 4, no. 1, pp. 5–49, 2012.
 [14] S. L. Warner, “Randomized response: A survey technique for eliminating evasive answer bias,” Journal of the American Statistical Association, vol. 60, no. 309, pp. 63–69, 1965.
 [15] A. Evfimievski, J. Gehrke, and R. Srikant, “Limiting privacy breaches in privacy preserving data mining,” in Proceedings of the twenty-second ACM SIGMOD-SIGACT-SIGART symposium on Principles of database systems. ACM, 2003, pp. 211–222.
 [16] C. Dwork, F. McSherry, K. Nissim, and A. Smith, “Calibrating noise to sensitivity in private data analysis,” in Theory of Cryptography Conference. Springer, 2006, pp. 265–284.
 [17] K. Chaudhuri and N. Mishra, “When random sampling preserves privacy,” in Annual International Cryptology Conference. Springer, 2006, pp. 198–213.
 [18] C. Dwork, A. Roth et al., “The algorithmic foundations of differential privacy,” Foundations and Trends® in Theoretical Computer Science, vol. 9, no. 3–4, pp. 211–407, 2014.
 [19] I. Dinur and K. Nissim, “Revealing information while preserving privacy,” in Proceedings of the twenty-second ACM SIGMOD-SIGACT-SIGART symposium on Principles of database systems. ACM, 2003, pp. 202–210.
 [20] C. Dwork, K. Kenthapadi, F. McSherry, I. Mironov, and M. Naor, “Our data, ourselves: Privacy via distributed noise generation.” in Eurocrypt, vol. 4004. Springer, 2006, pp. 486–503.
 [21] A. Machanavajjhala, D. Kifer, J. Abowd, J. Gehrke, and L. Vilhuber, “Privacy: Theory meets practice on the map,” in Proceedings of the 2008 IEEE 24th International Conference on Data Engineering. IEEE Computer Society, 2008, pp. 277–286.
 [22] S. Canard and B. Olivier, “Differential privacy in distribution and instancebased noise mechanisms.” IACR Cryptology ePrint Archive, vol. 2015, p. 701, 2015.
 [23] S. Meiser, “Approximate and probabilistic differential privacy definitions,” Cryptology ePrint Archive, Report 2018/277, 2018.
 [24] Z. Zhang, Z. Qin, L. Zhu, W. Jiang, C. Xu, and K. Ren, “Toward practical differential privacy in smart grid with capacitylimited rechargeable batteries,” 2015.
 [25] R. F. Barber and J. C. Duchi, “Privacy and statistical risk: Formalisms and minimax bounds,” arXiv preprint arXiv:1412.4451, 2014.
 [26] P. Cuff and L. Yu, “Differential privacy as a mutual information constraint,” in Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security. ACM, 2016, pp. 43–54.
 [27] I. Mironov, “Rényi differential privacy,” in Computer Security Foundations Symposium (CSF), 2017 IEEE 30th. IEEE, 2017, pp. 263–275.
 [28] Y.-X. Wang, B. Balle, and S. Kasiviswanathan, “Subsampled Rényi differential privacy and analytical moments accountant,” arXiv preprint arXiv:1808.00087, 2018.
 [29] L. Colisson, “L3 internship report: Quantum analog of differential privacy in terms of Rényi divergence,” 2016.
 [30] C. Dwork and G. N. Rothblum, “Concentrated differential privacy,” arXiv preprint arXiv:1603.01887, 2016.
 [31] M. Bun and T. Steinke, “Concentrated differential privacy: Simplifications, extensions, and lower bounds,” in Theory of Cryptography Conference. Springer, 2016, pp. 635–658.
 [32] M. Bun, C. Dwork, G. N. Rothblum, and T. Steinke, “Composable and versatile privacy via truncated CDP,” in Proceedings of the 50th Annual ACM SIGACT Symposium on Theory of Computing. ACM, 2018, pp. 74–86.
 [33] D. Sommer, S. Meiser, and E. Mohammadi, “Privacy loss classes: The central limit theorem in differential privacy,” 2018.
 [34] D. Kifer and A. Machanavajjhala, “No free lunch in data privacy,” in Proceedings of the 2011 ACM SIGMOD International Conference on Management of data. ACM, 2011, pp. 193–204.
 [35] C. Dwork, “Differential privacy: A survey of results,” in International Conference on Theory and Applications of Models of Computation. Springer, 2008, pp. 1–19.
 [36] R. Chen, B. C. Fung, P. S. Yu, and B. C. Desai, “Correlated network data publication via differential privacy,” The VLDB Journal—The International Journal on Very Large Data Bases, vol. 23, no. 4, pp. 653–676, 2014.
 [37] C. Liu, S. Chakraborty, and P. Mittal, “Dependence makes you vulnerable: Differential privacy under dependent tuples,” in NDSS, vol. 16, 2016, pp. 21–24.
 [38] X. Wu, W. Dou, and Q. Ni, “Game theory based privacy preserving analysis in correlated data publication,” in Proceedings of the Australasian Computer Science Week Multiconference. ACM, 2017, p. 73.
 [39] X. Wu, T. Wu, M. Khan, Q. Ni, and W. Dou, “Game theory based correlated privacy preserving analysis in big data,” IEEE Transactions on Big Data, 2017.
 [40] B. Yang, I. Sato, and H. Nakagawa, “Bayesian differential privacy on correlated data,” in Proceedings of the 2015 ACM SIGMOD international conference on Management of Data. ACM, 2015, pp. 747–762.
 [41] S. Doudalis, I. Kotsogiannis, S. Haney, A. Machanavajjhala, and S. Mehrotra, “One-sided differential privacy,” arXiv preprint arXiv:1712.05888, 2017.
 [42] M. Kearns, A. Roth, Z. S. Wu, and G. Yaroslavtsev, “Private algorithms for the protected in social network search,” Proceedings of the National Academy of Sciences, vol. 113, no. 4, pp. 913–918, 2016.
 [43] D. M. Bittner, A. D. Sarwate, and R. N. Wright, “Using noisy binary search for differentially private anomaly detection,” in International Symposium on Cyber Security Cryptography and Machine Learning. Springer, 2018, pp. 20–37.
 [44] J. Soria-Comas, J. Domingo-Ferrer, D. Sánchez, and D. Megías, “Individual differential privacy: A utility-preserving formulation of differential privacy guarantees,” IEEE Transactions on Information Forensics and Security, vol. 12, no. 6, pp. 1418–1429, 2017.
 [45] Y.-X. Wang, “Per-instance differential privacy and the adaptivity of posterior sampling in linear and ridge regression,” arXiv preprint arXiv:1707.07708, 2017.
 [46] M. Hay, C. Li, G. Miklau, and D. Jensen, “Accurate estimation of the degree distribution of private networks,” in Data Mining, 2009. ICDM’09. Ninth IEEE International Conference on. IEEE, 2009, pp. 169–178.
 [47] C. Task and C. Clifton, “A guide to differential privacy theory in social network analysis,” in Proceedings of the 2012 International Conference on Advances in Social Networks Analysis and Mining (ASONAM 2012). IEEE Computer Society, 2012, pp. 411–417.
 [48] X. Ding, W. Wang, M. Wan, and M. Gu, “Seamless privacy: Privacy-preserving subgraph counting in interactive social network analysis,” in Cyber-Enabled Distributed Computing and Knowledge Discovery (CyberC), 2013 International Conference on. IEEE, 2013, pp. 97–104.
 [49] A. Sealfon, “Shortest paths and distances with differential privacy,” in Proceedings of the 35th ACM SIGMOD-SIGACT-SIGAI Symposium on Principles of Database Systems. ACM, 2016, pp. 29–41.
 [50] J. Reuben, “Towards a differential privacy theory for edge-labeled directed graphs,” SICHERHEIT 2018, 2018.
 [51] R. Pinot, “Minimum spanning tree release under differential privacy constraints,” arXiv preprint arXiv:1801.06423, 2018.
 [52] C. Dwork, M. Naor, T. Pitassi, and G. N. Rothblum, “Differential privacy under continual observation,” in Proceedings of the forty-second ACM symposium on Theory of computing. ACM, 2010, pp. 715–724.
 [53] C. Dwork, M. Naor, T. Pitassi, G. N. Rothblum, and S. Yekhanin, “Pan-private streaming algorithms,” in ICS, 2010, pp. 66–80.
 [54] C. Dwork, “Differential privacy in new settings,” in Proceedings of the twenty-first annual ACM-SIAM symposium on Discrete Algorithms. SIAM, 2010, pp. 174–183.
 [55] G. Kellaris, S. Papadopoulos, X. Xiao, and D. Papadias, “Differentially private event sequences over infinite streams,” Proceedings of the VLDB Endowment, vol. 7, no. 12, pp. 1155–1166, 2014.
 [56] A. Jones, K. Leahy, and M. Hale, “Towards differential privacy for symbolic systems,” arXiv preprint arXiv:1809.08634, 2018.
 [57] J. Zhang, J. Sun, R. Zhang, Y. Zhang, and X. Hu, “Privacy-preserving social media data outsourcing,” in IEEE INFOCOM 2018 - IEEE Conference on Computer Communications. IEEE, 2018, pp. 1106–1114.
 [58] Z. Yan, J. Liu, G. Li, Z. Han, and S. Qiu, “PrivMin: Differentially private MinHash for Jaccard similarity computation,” arXiv preprint arXiv:1705.07258, 2017.
 [59] X. Ying, X. Wu, and Y. Wang, “On linear refinement of differential privacy-preserving query answering,” in Pacific-Asia Conference on Knowledge Discovery and Data Mining. Springer, 2013, pp. 353–364.
 [60] S. Simmons, C. Sahinalp, and B. Berger, “Enabling privacy-preserving GWASs in heterogeneous human populations,” Cell Systems, vol. 3, no. 1, pp. 54–61, 2016.
 [61] R. Guerraoui, A.-M. Kermarrec, R. Patra, and M. Taziki, “D2P: distance-based differential privacy in recommenders,” Proceedings of the VLDB Endowment, vol. 8, no. 8, pp. 862–873, 2015.
 [62] E. ElSalamouny and S. Gambs, “Differential privacy models for location-based services,” Transactions on Data Privacy, vol. 9, no. 1, pp. 15–48, 2016.
 [63] G. Kellaris, G. Kollios, K. Nissim, and A. O’Neill, “Accessing data while preserving privacy,” arXiv preprint arXiv:1706.01552, 2017.
 [64] S. Wagh, P. Cuff, and P. Mittal, “Differentially private oblivious ram,” arXiv preprint arXiv:1601.03378, 2016.
 [65] T. H. Chan, K.-M. Chung, B. Maggs, and E. Shi, “Foundations of differentially oblivious algorithms,” 2018.
 [66] J. Allen, B. Ding, J. Kulkarni, H. Nori, O. Ohrimenko, and S. Yekhanin, “An algorithmic framework for differentially private data analysis on trusted processors,” arXiv preprint arXiv:1807.00736, 2018.
 [67] R. R. Toledo, G. Danezis, and I. Goldberg, “Lower-cost ε-private information retrieval,” Proceedings on Privacy Enhancing Technologies, vol. 2016, no. 4, pp. 184–201, 2016.
 [68] D. Kifer and A. Machanavajjhala, “A rigorous and customizable framework for privacy,” in Proceedings of the 31st ACM SIGMOD-SIGACT-SIGAI symposium on Principles of Database Systems. ACM, 2012, pp. 77–88.
 [69] R. Bassily, A. Groce, J. Katz, and A. Smith, “Coupled-worlds privacy: Exploiting adversarial uncertainty in statistical data privacy,” in Foundations of Computer Science (FOCS), 2013 IEEE 54th Annual Symposium on. IEEE, 2013, pp. 439–448.
 [70] X. He, A. Machanavajjhala, and B. Ding, “Blowfish privacy: Tuning privacy-utility tradeoffs using policies,” in Proceedings of the 2014 ACM SIGMOD international conference on Management of data. ACM, 2014, pp. 1447–1458.
 [71] S. Haney, A. Machanavajjhala, and B. Ding, “Design of policy-aware differentially private algorithms,” Proceedings of the VLDB Endowment, vol. 9, no. 4, pp. 264–275, 2015.
 [72] C. Fang and E.-C. Chang, “Differential privacy with delta-neighbourhood for spatial and dynamic datasets,” in Proceedings of the 9th ACM symposium on Information, computer and communications security. ACM, 2014, pp. 159–170.
 [73] B. I. Rubinstein and F. Alda, “Pain-free random differential privacy with sensitivity sampling,” arXiv preprint arXiv:1706.02562, 2017.
 [74] Y. Kawamoto and T. Murakami, “Differentially private obfuscation mechanisms for hiding probability distributions,” arXiv preprint arXiv:1812.00939, 2018.
 [75] N. Niknami, M. Abadi, and F. Deldar, “SpatialPDP: A personalized differentially private mechanism for range counting queries over spatial databases,” in Computer and Knowledge Engineering (ICCKE), 2014 4th International e-Conference on. IEEE, 2014, pp. 709–715.
 [76] Z. Jorgensen, T. Yu, and G. Cormode, “Conservative or liberal? personalized differential privacy,” in Data Engineering (ICDE), 2015 IEEE 31st International Conference on. IEEE, 2015, pp. 1023–1034.
 [77] H. Ebadi, D. Sands, and G. Schneider, “Differential privacy: Now it’s getting personal,” in ACM SIGPLAN Notices, vol. 50, no. 1. ACM, 2015, pp. 69–81.
 [78] A. Ghosh and A. Roth, “Selling privacy at auction,” Games and Economic Behavior, vol. 91, pp. 334–346, 2015.
 [79] Z. Liu, Y.-X. Wang, and A. Smola, “Fast differentially private matrix factorization,” in Proceedings of the 9th ACM Conference on Recommender Systems. ACM, 2015, pp. 171–178.
 [80] M. Alaggan, S. Gambs, and A.-M. Kermarrec, “Heterogeneous differential privacy,” arXiv preprint arXiv:1504.06998, 2015.
 [81] E. Lui and R. Pass, “Outlier privacy,” in Theory of Cryptography Conference. Springer, 2015, pp. 277–305.
 [82] M. E. Andrés, N. E. Bordenabe, K. Chatzikokolakis, and C. Palamidessi, “Geo-indistinguishability: Differential privacy for location-based systems,” in Proceedings of the 2013 ACM SIGSAC conference on Computer & communications security. ACM, 2013, pp. 901–914.
 [83] R. Hall, A. Rinaldo, and L. Wasserman, “Random differential privacy,” arXiv preprint arXiv:1112.2680, 2011.
 [84] R. Hall et al., “New statistical applications for differential privacy,” Ph.D. dissertation, Carnegie Mellon University, 2012.
 [85] D. R. McClure, “Relaxations of differential privacy and risk/utility evaluations of synthetic data and fidelity measures,” Ph.D. dissertation, 2015.
 [86] K. Chatzikokolakis, M. E. Andrés, N. E. Bordenabe, and C. Palamidessi, “Broadening the scope of differential privacy using metrics,” in International Symposium on Privacy Enhancing Technologies Symposium. Springer, 2013, pp. 82–102.
 [87] D. Proserpio, S. Goldberg, and F. McSherry, “Calibrating data to sensitivity in private data analysis: a platform for differentially-private analysis of weighted datasets,” Proceedings of the VLDB Endowment, vol. 7, no. 8, pp. 637–648, 2014.
 [88] N. Fernandes, M. Dras, and A. McIver, “Generalised differential privacy for text document processing,” arXiv preprint arXiv:1811.10256, 2018.
 [89] F. Deldar and M. Abadi, “PLDP-TD: Personalized-location differentially private data analysis on trajectory databases,” Pervasive and Mobile Computing, 2018.
 [90] Y. Xiao and L. Xiong, “Protecting locations with differential privacy under temporal correlations,” in Proceedings of the 22nd ACM SIGSAC Conference on Computer and Communications Security. ACM, 2015, pp. 1298–1309.
 [91] S. Zhou, K. Ligett, and L. Wasserman, “Differential privacy with compression,” in Information Theory, 2009. ISIT 2009. IEEE International Symposium on. IEEE, 2009, pp. 2718–2722.
 [92] A. Roth, “New algorithms for preserving differential privacy,” Microsoft Research, 2010.
 [93] S. Krehbiel, “Markets for database privacy,” 2014.
 [94] R. Bassily and Y. Freund, “Typicality-based stability and privacy,” arXiv preprint arXiv:1604.03336, 2016.
 [95] Y.-X. Wang, J. Lei, and S. E. Fienberg, “On-average KL-privacy and its equivalence to generalization for max-entropy mechanisms,” in International Conference on Privacy in Statistical Databases. Springer, 2016, pp. 121–134.
 [96] D. Kifer and B.-R. Lin, “Towards an axiomatization of statistical privacy and utility,” in Proceedings of the twenty-ninth ACM SIGMOD-SIGACT-SIGART symposium on Principles of database systems. ACM, 2010, pp. 147–158.
 [97] Y. Duan, “Privacy without noise,” in Proceedings of the 18th ACM conference on Information and knowledge management. ACM, 2009, pp. 1517–1520.
 [98] R. Bhaskar, A. Bhowmick, V. Goyal, S. Laxman, and A. Thakurta, “Noiseless database privacy,” in International Conference on the Theory and Application of Cryptology and Information Security. Springer, 2011, pp. 215–232.
 [99] D. Desfontaines, E. Krahmer, and E. Mohammadi, “Passive and active attackers in noiseless privacy,” arXiv preprint arXiv:1905.00650, 2019.
 [100] S. Leung and E. Lui, “Bayesian mechanism design with efficiency, privacy, and approximate truthfulness,” in International Workshop on Internet and Network Economics. Springer, 2012, pp. 58–71.
 [101] M. Jelasity and K. P. Birman, “Distributional differential privacy for large-scale smart metering,” in Proceedings of the 2nd ACM workshop on Information hiding and multimedia security. ACM, 2014, pp. 141–146.
 [102] J. Liu, L. Xiong, and J. Luo, “Semantic security: Privacy definitions revisited.” Trans. Data Privacy, vol. 6, no. 3, pp. 185–198, 2013.
 [103] N. Li, W. Qardaji, D. Su, Y. Wu, and W. Yang, “Membership privacy: a unifying framework for privacy definitions,” in Proceedings of the 2013 ACM SIGSAC conference on Computer & communications security. ACM, 2013, pp. 889–900.
 [104] J. Lee and C. Clifton, “Differential identifiability,” in Proceedings of the 18th ACM SIGKDD international conference on Knowledge discovery and data mining. ACM, 2012, pp. 1041–1049.
 [105] N. Li, W. H. Qardaji, and D. Su, “Provably private data anonymization: Or, k-anonymity meets differential privacy,” arXiv preprint, 2011.
 [106] N. Li, W. Qardaji, and D. Su, “On sampling, anonymization, and differential privacy or, k-anonymization meets differential privacy,” in Proceedings of the 7th ACM Symposium on Information, Computer and Communications Security. ACM, 2012, pp. 32–33.
 [107] V. Rastogi, M. Hay, G. Miklau, and D. Suciu, “Relationship privacy: output perturbation for queries with joins,” in Proceedings of the twenty-eighth ACM SIGMOD-SIGACT-SIGART symposium on Principles of database systems. ACM, 2009, pp. 107–116.
 [108] S. P. Kasiviswanathan and A. Smith, “On the ’semantics’ of differential privacy: A bayesian formulation,” Journal of Privacy and Confidentiality, vol. 6, no. 1, 2014.
 [109] S. R. Ganta, S. P. Kasiviswanathan, and A. Smith, “Composition attacks and auxiliary information in data privacy,” in Proceedings of the 14th ACM SIGKDD international conference on Knowledge discovery and data mining. ACM, 2008, pp. 265–273.
 [110] W. Wang, L. Ying, and J. Zhang, “On the tradeoff between privacy and distortion in differential privacy,” arXiv preprint arXiv:1402.3757, 2014.
 [111] G. Wu, X. Xia, and Y. He, “Extending differential privacy for treating dependent records via information theory,” arXiv preprint arXiv:1703.07474, 2017.
 [112] F. du Pin Calmon and N. Fawaz, “Privacy against statistical inference,” in Communication, Control, and Computing (Allerton), 2012 50th Annual Allerton Conference on. IEEE, 2012, pp. 1401–1408.
 [113] S. Goldwasser, S. Micali, and C. Rackoff, “The knowledge complexity of interactive proof systems,” SIAM Journal on computing, vol. 18, no. 1, pp. 186–208, 1989.
 [114] J. Gehrke, E. Lui, and R. Pass, “Towards privacy for social networks: A zeroknowledge based definition of privacy,” in Theory of Cryptography Conference. Springer, 2011, pp. 432–449.
 [115] I. Mironov, O. Pandey, O. Reingold, and S. Vadhan, “Computational differential privacy,” in Advances in Cryptology–CRYPTO 2009. Springer, 2009, pp. 126–142.
 [116] X. He, A. Machanavajjhala, C. Flynn, and D. Srivastava, “Composing differential privacy and secure computation: A case study on scaling private record linkage,” in Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security. ACM, 2017, pp. 1389–1406.
 [117] M. Backes, A. Kate, S. Meiser, and T. Ruffing, “Differential indistinguishability for cryptography with (bounded) weak sources,” Grande Region Security and Reliability Day (GRSRD), 2014.
 [118] M. C. Tschantz, S. Sen, and A. Datta, “Differential privacy as a causal property,” arXiv preprint arXiv:1710.05899, 2017.
 [119] A. Datta, S. Sen, and M. C. Tschantz, “Correspondences between privacy and non-discrimination: Why they should be studied together,” arXiv preprint arXiv:1808.01735, 2018.
 [120] C. Dwork, “The differential privacy frontier,” in Theory of Cryptography Conference. Springer, 2009, pp. 496–502.
 [121] J. Heurix, P. Zimmermann, T. Neubauer, and S. Fenz, “A taxonomy for privacy enhancing technologies,” Computers & Security, vol. 53, pp. 1–17, 2015.
 [122] E. Aghasian, S. Garg, and J. Montgomery, “User’s privacy in recommendation systems applying online social network data, a survey and taxonomy,” arXiv preprint arXiv:1806.07629, 2018.
 [123] I. Wagner and D. Eckhoff, “Technical privacy metrics: a systematic survey,” ACM Computing Surveys (CSUR), vol. 51, no. 3, p. 57, 2018.
 [124] J. C. Duchi, M. I. Jordan, and M. J. Wainwright, “Local privacy and statistical minimax rates,” in Foundations of Computer Science (FOCS), 2013 IEEE 54th Annual Symposium on. IEEE, 2013, pp. 429–438.
 [125] E. Shi, H. Chan, E. Rieffel, R. Chow, and D. Song, “Privacy-preserving aggregation of time-series data,” in Annual Network & Distributed System Security Symposium (NDSS). Internet Society, 2011.
 [126] M. Kearns, M. Pai, A. Roth, and J. Ullman, “Mechanism design in large games: Incentives and privacy,” in Proceedings of the 5th conference on Innovations in theoretical computer science. ACM, 2014, pp. 403–410.
 [127] G. Wu, Y. He, J. Wu, and X. Xia, “Inherit differential privacy in distributed setting: Multiparty randomized function computation,” in Trustcom/BigDataSE/ISPA, 2016 IEEE. IEEE, 2016, pp. 921–928.
 [128] A. Cheu, A. Smith, J. Ullman, D. Zeber, and M. Zhilyaev, “Distributed differential privacy via mixnets,” arXiv preprint arXiv:1808.01394, 2018.
 [129] B. Jiang, M. Li, and R. Tandon, “Context-aware data aggregation with localized information privacy,” arXiv preprint arXiv:1804.02149, 2018.
 [130] T. Murakami and Y. Kawamoto, “Restricted local differential privacy for distribution estimation with high data utility,” arXiv preprint arXiv:1807.11317, 2018.
 [131] Y. Nie, W. Yang, L. Huang, X. Xie, Z. Zhao, and S. Wang, “A utility-optimized framework for personalized private histogram estimation,” IEEE Transactions on Knowledge and Data Engineering, 2018.
 [132] M. S. Alvim, K. Chatzikokolakis, C. Palamidessi, and A. Pazii, “Metric-based local differential privacy for statistical applications,” arXiv preprint arXiv:1805.01456, 2018.
 [133] A. Machanavajjhala, J. Gehrke, and M. Götz, “Data publishing against realistic adversaries,” Proceedings of the VLDB Endowment, vol. 2, no. 1, pp. 790–801, 2009.
 [134] J. Gehrke, M. Hay, E. Lui, and R. Pass, “Crowd-blending privacy,” in Advances in Cryptology–CRYPTO 2012. Springer, 2012, pp. 479–496.
 [135] N. Holohan, S. Antonatos, S. Braghin, and P. Mac Aonghusa, “(k,ε)-anonymity: k-anonymity with ε-differential privacy,” arXiv preprint arXiv:1710.01615, 2017.
 [136] K. Nissim, S. Raskhodnikova, and A. Smith, “Smooth sensitivity and sampling in private data analysis,” in Proceedings of the thirty-ninth annual ACM symposium on Theory of computing. ACM, 2007, pp. 75–84.
 [137] J. Blocki, A. Blum, A. Datta, and O. Sheffet, “Differentially private data analysis of social networks via restricted sensitivity,” in Proceedings of the 4th conference on Innovations in Theoretical Computer Science. ACM, 2013, pp. 87–96.
 [138] S. Chen and S. Zhou, “Recursive mechanism: towards node differential privacy and unrestricted joins,” in Proceedings of the 2013 ACM SIGMOD International Conference on Management of Data. ACM, 2013, pp. 653–664.
 [139] T. Zhu, G. Li, Y. Ren, W. Zhou, and P. Xiong, “Differential privacy for neighborhood-based collaborative filtering,” in Proceedings of the 2013 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining. ACM, 2013, pp. 752–759.
 [140] T. Zhu, P. Xiong, G. Li, and W. Zhou, “Correlated differential privacy: hiding information in non-iid data set,” IEEE Transactions on Information Forensics and Security, vol. 10, no. 2, pp. 229–242, 2015.
 [141] R. Cummings and D. Durfee, “Individual sensitivity preprocessing for data privacy,” arXiv preprint arXiv:1804.08645, 2018.
 [142] N. Johnson, J. P. Near, and D. Song, “Towards practical differential privacy for sql queries,” Proceedings of the VLDB Endowment, vol. 11, no. 5, pp. 526–539, 2018.
 [143] P. Laud, A. Pankova, and P. Martin, “Achieving differential privacy using methods from calculus,” arXiv preprint arXiv:1811.06343, 2018.