1 Introduction
Public use data, often released by government and other data-collecting agencies, typically need to satisfy two competing objectives: maintaining relevant statistical properties of the original data and protecting the privacy of individuals. To address these two goals, various statistical disclosure limitation techniques have been developed (Willenborg and de Waal, 2001). Popular disclosure techniques include data swapping (Dalenius and Reiss, 1978; Fienberg and McIntyre, 2005), top-coding, feature generalization such as k-anonymity (Sweeney, 2002) or l-diversity (Machanavajjhala et al., 2007), and additive random noise with measurement error models (Fuller, 1993). Each method has distinct utility and risk characteristics. In practice, a disclosure limitation technique is carefully chosen by domain experts and statisticians. Sometimes, multiple techniques are combined and applied to a single dataset to achieve better privacy protection before it is released to the public (Centers for Medicare and Medicaid Services, 2013). Such public use datasets have served as valuable information sources for decision making in economics, healthcare, and business analytics.
The generation of synthetic data, proposed by Rubin (1993), is an alternative to data-transforming disclosure techniques. Multiple imputation, which was originally developed to impute missing values in survey responses (Rubin, 1987), is used to generate either partially or fully synthetic data. As synthetic data preserve the structure and resolution of the original data, preprocessing steps and analytical procedures developed on synthetic data can be transferred effortlessly to the original data. This aspect has contributed to the popular adoption of synthetic data in diverse research areas. Thus far, there has been notable progress on valid inference using synthetic data and extensions to different applications: Abowd and Woodcock (2001) synthesized a French longitudinal linked database, and Raghunathan et al. (2003) provided general methods for obtaining valid inferences from multiply imputed data. Beyond the typically used generalized linear models, decision tree models, such as CART and random forests, can also serve as imputation models in multiple imputation (Reiter, 2003; Caiola and Reiter, 2010). Illustrative empirical studies have used U.S. census data (Drechsler and Reiter, 2010), a German business database (Reiter and Drechsler, 2010), and the U.S. American Community Survey (Sakshaug and Raghunathan, 2011).

A very different approach to imputing missing values in binary or binarized datasets uses association rule mining. Vreeken and Siebes (2008) used the minimum description length principle to develop heuristics that (approximately) represent a dataset in terms of a concise set of frequent itemsets. These rules can then be used to impute missing values, and in principle could also be used to generate synthetic data. However, their work does not quantify the privacy afforded by the compressed data, nor does it use a given privacy criterion to determine the derived itemsets.
The two competing requirements for public use data similarly apply to synthetic data disclosure. Synthetic data need to be accurate enough to answer relevant statistical queries without revealing private information to third parties. Statistical properties of synthetic data are primarily determined by imputation models (Reiter, 2005a), and models that are too accurate tend to leak private information (Abowd and Vilhuber, 2008).
The balance between accuracy and privacy can also be addressed by using cryptographic privacy measures such as differential privacy (Dwork, 2006). However, several attempts to achieve such strong privacy guarantees have proven impractical to implement. For example, Barak et al. (2007) showed that it is possible to release contingency tables under the differential privacy regime using the Fourier transform and additive Laplace noise; however, this release mechanism was later criticized for being too conservative and for disrupting the statistical properties of the original data (Yang et al., 2012; Charest, 2012). On the other hand, Soria-Comas and Drechsler (2013) argued that differential privacy can be a useful privacy measure when disclosing large datasets with a limited number of variables. For example, differentially private synthetic data have been demonstrated using the Census Bureau's OnTheMap data, which consist of approximately one million records with two variables (Machanavajjhala et al., 2008).

In this paper, we propose a practical multidimensional categorical data synthesizer that satisfies differential privacy. The proposed synthesizer can handle multidimensional data that are impractical to represent as contingency tables. We demonstrate our algorithm using a subset of the California Patient Discharge data, and generate multiple synthetic discharge datasets. Although differential privacy is used extensively in our algorithm analyses, we note that it is one of many descriptive measures of disclosure risk. Differential privacy is a measure for functions, not for data (Fienberg et al., 2010), and it can be overly pessimistic for data-specific applications. Thus, we also evaluate the disclosure risks of the proposed algorithms using the population uniqueness of synthetic records (Dale and Elliot, 2001) and indirect-matching probabilistic disclosure risks (Duncan and Lambert, 1986). To measure the statistical similarity between the synthetic and the original data, we compare marginal distributions, conditional distributions, and regression coefficients from the synthesized data with those from the original data.
Table 1: Summary of synthesizer models described in this paper.

Name                            Abbreviated Model Equation  Model Parameters
Contingency table               (see text)                  nonparametric
Marginal Bayesian Bootstrap     (see text)                  nonparametric
Multiple imputation             (see text)                  model coefficients for GLM
Perturbed Gibbs Sampler (PeGS)  (see text)                  privacy parameter
Block PeGS with Reset           (see text)                  sample block size
There are two brute-force methods for generating synthetic categorical data. As the statistical properties of categorical data are perfectly captured in contingency tables, in theory a synthetic sample can be drawn directly from a full p-way contingency table, where p is the total number of features. For data with a small number of features, this contingency table can be estimated by either direct counting or log-linear models (Winkler, 2003, 2010). However, this strategy does not scale to multidimensional datasets. As we will see in Section 4, our experimental dataset has 13 features, and the number of possible feature combinations is approximately 2.6 trillion. More importantly, sampling from an exact distribution may reveal too much detail about the original data, so this is not a privacy-safe disclosure method. At the other extreme, one may model the joint distribution as a product of univariate marginal distributions. Although this approach easily achieves differential privacy (McClure and Reiter, 2012), the synthetic data lose critical joint distributional information about the original data.

The proposed algorithm generates realistic but not real synthetic samples by calibrating a privacy parameter. In addition, the exponential number of cells in a contingency table is avoided by using multiple imputation and feature hashing as follows:
(1)  \hat{P}(\mathbf{x}) = \prod_{j=1}^{p} \tilde{P}(x_j \mid h(\mathbf{x}_{-j}))

where \tilde{P}(x_j \mid h(\mathbf{x}_{-j})) is the compressed and perturbed conditional distribution of the j-th feature, h is a hash function, and p is the total number of features. The joint probability distribution is thus represented as p conditional distributions. Note that the conditional distribution in Equation (1) is not exact: the full condition is compressed using the hash function and perturbed by a privacy parameter. Ignoring these two additional components, i.e., the hashing and the perturbation, if the probability is modeled using generalized linear models, then the proposed algorithm is the same as a multiple imputation algorithm for fully synthetic data. The proposed synthesizer is named the Perturbed Gibbs Sampler (PeGS) because, unlike multiple imputation, the sampling procedure can iterate more than once, similarly to a Gibbs sampler. Table 1 summarizes the synthesizer models described in this paper. More details on this list, and privacy guarantees for both one iteration and multiple iterations, are given in Section 3.

The rest of this paper is organized as follows. In Section 2, we cover the basics of multiple imputation and differential privacy. In Section 3, we detail the PeGS algorithms and derive their privacy guarantees. We demonstrate our algorithms on the California Patient Discharge dataset in Section 4. Finally, we discuss the limitations of the proposed methods and future extensions in Section 5.
2 Background
In this section, we review multiple imputation, differential privacy, and l-diversity, which are the primary building blocks of our synthesizer algorithm. We start by describing the original multiple imputation method for missing values, then illustrate its application to generating fully synthetic data. Next, we give the definition of differential privacy and some approaches to implementing differentially private algorithms. Finally, we present the definition of l-diversity and a variant definition for synthetic data.
2.1 Multiple Imputation
Multiple imputation was originally developed to impute missing values in survey responses (Rubin, 1987), and it was later applied to generating synthetic data. Let us start from the missing-value imputation setting. Consider a survey with two variables X and Y, where some of the responses Y are missing. Let D_obs be the subset of records in which both X and Y are observed. The unobserved responses are imputed using samples from a posterior predictive model estimated on the observed subset. Note that the posterior predictive distribution (Rubin (1987) uses a different notation, but the meaning is the same) is often obtained using generalized linear models or Bayesian bootstrap methods (Reiter, 2005a). To generate fully synthetic data, the same sampling process is applied to the observed responses as well. After this sampling step, a subset of the fully synthetic responses is randomly sampled and disclosed as public use data. Typically, this entire process is repeated independently m times to obtain m different synthetic datasets.
Raghunathan et al. (2003) showed that valid inferences can be obtained from multiply imputed synthetic data. Let Q be a function of the data; for example, Q may represent the population mean of Y or the population regression coefficients of Y on X. Let q^{(i)} and v^{(i)} be the estimate of Q and its variance obtained from the i-th synthetic dataset. Then valid inferences on Q can be obtained as follows:

\bar{q}_m = \frac{1}{m}\sum_{i=1}^{m} q^{(i)}, \qquad T_m = \left(1 + \frac{1}{m}\right) b_m - \bar{v}_m,

where b_m = \frac{1}{m-1}\sum_{i=1}^{m}\left(q^{(i)} - \bar{q}_m\right)^2 and \bar{v}_m = \frac{1}{m}\sum_{i=1}^{m} v^{(i)}. The two quantities \bar{q}_m and T_m estimate the original Q and its variance.
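A minimal sketch of these combining rules, assuming m point estimates q and variance estimates v from the synthetic datasets (the function name is ours):

```python
def combine_synthetic_estimates(q, v):
    """Combine point estimates q[i] and variance estimates v[i] from m
    synthetic datasets, following Raghunathan et al. (2003)."""
    m = len(q)
    q_bar = sum(q) / m                                 # overall point estimate
    v_bar = sum(v) / m                                 # mean within-dataset variance
    b = sum((qi - q_bar) ** 2 for qi in q) / (m - 1)   # between-dataset variance
    t = (1 + 1 / m) * b - v_bar                        # variance estimate for q_bar
    return q_bar, t
```

For instance, with three synthetic datasets yielding estimates 1, 2, and 3 with equal within-dataset variances, the combined estimate is their mean and the variance combines the between- and within-dataset components as above.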
2.2 Differential Privacy
Differential privacy (Dwork, 2006) is a mathematical measure of privacy that quantifies the disclosure risks of statistical functions. To satisfy differential privacy, the inclusion or exclusion of any particular record in the data cannot affect the outcome of a function by much. Specifically, a randomized function K provides ε-differential privacy if it satisfies

P[K(D) \in S] \le e^{\epsilon} \, P[K(D') \in S]

for all possible output sets S, where the datasets D and D' differ by at most one element. For a synthetic sample, this definition can be interpreted as follows (McClure and Reiter, 2012):

(2)  P(\tilde{x} = t \mid D) \le e^{\epsilon} \, P(\tilde{x} = t \mid D')

where \tilde{x} represents a random sample from the synthesizer. In other words, a data synthesizer is ε-differentially private if the probabilities of generating \tilde{x} from D and from D' are indistinguishable up to a factor of e^{\epsilon}.
Several mechanisms have been developed to achieve differential privacy. For numeric outputs, the most popular technique is to add Laplace noise with mean 0 and scale \Delta f / \epsilon, where \Delta f is the sensitivity of the function f. The exponential mechanism (McSherry and Talwar, 2007) is a general differential privacy mechanism that can be applied to non-numeric outputs. For categorical data, a Dirichlet prior can be used as a noise mechanism to achieve differential privacy (Machanavajjhala et al., 2008; McClure and Reiter, 2012).
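The Laplace mechanism can be sketched as follows; this is a generic illustration using inverse-CDF sampling, not code from the paper:

```python
import math
import random

def laplace_mechanism(value, sensitivity, epsilon, rng):
    """Release a numeric value with Laplace(0, sensitivity/epsilon) noise,
    the standard mechanism for epsilon-differential privacy on numeric outputs."""
    scale = sensitivity / epsilon
    u = rng.random() - 0.5                                # uniform on [-0.5, 0.5)
    sign = 1.0 if u >= 0 else -1.0
    noise = -scale * sign * math.log(1.0 - 2.0 * abs(u))  # inverse-CDF draw
    return value + noise
```

Smaller ε (stronger privacy) yields a larger noise scale, so repeated releases of the same value scatter more widely around it.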
2.3 l-Diversity
A certain combination of features can identify an individual in an anonymized dataset, even if personal identifiers, such as driver license numbers and social security numbers, are removed. Such threats are commonly prevented by generalizing or suppressing features; for example, ZIP codes with small populations are replaced by the corresponding county names (generalization) or replaced by * (suppression). Sweeney (2002) proposed a privacy definition, k-anonymity, for measuring the degree of such feature generalization and suppression. To adhere to the k-anonymity principle, each row in a dataset should be indistinguishable from at least k-1 other rows.
The definition of k-anonymity, however, does not account for two important aspects of data privacy: feature diversity and attackers' background knowledge. Machanavajjhala et al. (2007) illustrated two corresponding threats to a k-anonymized dataset, then proposed a new privacy criterion, l-diversity. The definition of l-diversity states that sensitive features should remain diverse within each block of samples. There are several ways of achieving l-diversity; in this paper, we use entropy l-diversity. A dataset is entropy l-diverse if

(3)  -\sum_{s} p(s) \log p(s) \ge \log l

where p(s) is the fraction of rows in a block whose sensitive feature takes the value s. This definition originally applies to a dataset with feature generalization or suppression. For a synthetic sample, Park et al. (2013) suggested an analogous definition: a synthetic dataset is synthetically l-diverse if each synthetic sample is drawn from a distribution that satisfies l-diversity.
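A small check of the entropy l-diversity condition for one block of sensitive-value counts might look like this (our illustration; the function name is an assumption):

```python
import math

def entropy_l_diverse(counts, l):
    """Check entropy l-diversity: the Shannon entropy of the sensitive-value
    distribution within a block must be at least log(l)."""
    n = sum(counts)
    probs = [c / n for c in counts if c > 0]
    entropy = -sum(p * math.log(p) for p in probs)
    return entropy >= math.log(l)
```

A block whose sensitive values are nearly all identical fails the check even for small l, while a near-uniform block passes for l up to the number of distinct values.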
3 Perturbed Gibbs Sampler
In this section, we propose the Perturbed Gibbs Sampler (PeGS) for categorical synthetic data. We first overview the algorithm, then describe its three main components: feature hashing, statistical building blocks, and noise mechanism. Next, we illustrate how the PeGS algorithm can be efficiently extended to draw a block of random samples. Finally, we show that multiple imputation can be similarly extended to satisfy differential privacy, which will be used as our baseline model in Section 4.
3.1 Algorithm Overview
Perturbed Gibbs Sampler (PeGS) is a categorical data synthesizer that consists of three main steps:

Disintegrate: The original data are disintegrated into statistical building blocks, i.e., the compressed conditional distributions P(x_j | h(x_{-j})), where h is a suitable hash function. These distributions are estimated by counting the corresponding occurrences in the original data.

Inject Noise: For a specified privacy parameter, the statistical building blocks are modified to satisfy ε-differential privacy or l-diversity.

Synthesize: We first pick a random seed from a predefined pool; this can be regarded as a query to our model. The seed sample is transformed into a synthetic sample by iteratively sampling each feature from the perturbed statistical building blocks.
Figure 1 visualizes the overall sequential steps of the PeGS algorithm, and Figure 2 illustrates the synthesis step. Three components are essential in PeGS: feature hashing, statistical building blocks, and perturbation. The number of possible conditions is exponential in the number of features; therefore, feature hashing is used to compress the space of possible conditions. The statistical building blocks are built on top of this feature hashing; they are essentially hash tables describing the compressed conditional distributions, and they play a key role when we sample a block of synthetic examples. Perturbation is required to guarantee differential privacy: without perturbation, synthetic samples may reveal too much about the original data.
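The three steps can be sketched as follows. This is a minimal illustration, not the authors' implementation: the data layout, the identity-style hash, and all names (`disintegrate`, `perturb`, `synthesize`) are our own assumptions.

```python
import random
from collections import Counter, defaultdict

def disintegrate(data, hash_fn):
    """Step 1: estimate compressed conditionals P(x_j | h(x_-j)) by counting."""
    blocks = defaultdict(Counter)
    for row in data:
        for j, value in enumerate(row):
            rest = row[:j] + row[j + 1:]          # the conditioning features x_-j
            blocks[(j, hash_fn(rest))][value] += 1
    return blocks

def perturb(blocks, alpha, categories):
    """Step 2: add alpha virtual samples to every category (Dirichlet prior)."""
    return {key: {c: counts.get(c, 0) + alpha for c in categories[key[0]]}
            for key, counts in blocks.items()}

def synthesize(seed, blocks, hash_fn, rng):
    """Step 3: transform a seed into a synthetic sample feature by feature."""
    row = list(seed)
    for j in range(len(row)):
        dist = blocks.get((j, hash_fn(tuple(row[:j] + row[j + 1:]))))
        if dist is None:
            continue                              # unseen condition: keep seed value
        cats, weights = zip(*dist.items())
        row[j] = rng.choices(cats, weights=weights)[0]
    return tuple(row)
```

With `hash_fn = lambda rest: rest` (no compression) and `alpha = 0`, this reduces to resampling from the exact conditionals; the hashing and a positive `alpha` supply the compression and perturbation described above.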
3.2 Feature Hashing
The hash function in PeGS maps a conditioning feature vector to an integer key, where the range of the hash key is much smaller than the number of possible conditions (which is exponential in the number of features). Basically, we want a hash function that exhibits good compression while maintaining the statistical properties of the data. The motivation is similar to feature hashing in machine learning, also known as the hashing trick, which is often used to compress sparse high-dimensional feature vectors (Weinberger et al., 2009). For unstructured data such as natural language text, Locality Sensitive Hashing (Indyk and Motwani, 1998) and min-hashing (Gionis et al., 1999) can be good candidates for the PeGS hash function.

In this paper, we use a much simpler approach to compress the feature space. We order the conditioning features by the amount of mutual information they share with the target feature, and divide the feature vector into two parts: the first r features and the rest. The hash function keeps the first r (most informative) features and discards the rest. Letting c_k denote the number of categories of the k-th retained feature, the key space of this simple hash function is upper bounded by \prod_{k=1}^{r} c_k.
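A toy version of this mutual-information-based truncation hash, assuming the empirical (plug-in) estimate of mutual information, could look like the following (names are our own):

```python
import math
from collections import Counter

def mutual_information(xs, ys):
    """Empirical mutual information (in nats) between two discrete sequences."""
    n = len(xs)
    px, py, pxy = Counter(xs), Counter(ys), Counter(zip(xs, ys))
    return sum((c / n) * math.log((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in pxy.items())

def make_truncation_hash(columns, target, r):
    """Return a hash keeping only the r conditioning features (given as
    columns, in the same order as the rows it will hash) that are most
    informative about the target feature."""
    order = sorted(range(len(columns)),
                   key=lambda k: mutual_information(columns[k], target),
                   reverse=True)
    keep = order[:r]
    return lambda row: tuple(row[k] for k in keep)
```

A constant (uninformative) column has zero mutual information with the target and is the first to be discarded, which keeps the key space small while preserving the most predictive conditions.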
The compressed conditional distribution, which is essentially an occurrence-count hash table for a given hash key, can now be stored either in memory or on disk. There are several advantages to using this compressed conditional distribution over parametric modeling. First, building the statistical building blocks does not involve complicated statistical procedures such as parameter estimation and model selection. Second, the resulting statistical building blocks are robust to overfitting: overfitting may occur when there are not enough samples in a table entry, and hashing reduces the number of table cells and smooths the estimated probability vector. Finally, this simple table representation is intuitive, and the process is easily extensible. This aspect is critical in our efficient block sampling scheme, which is illustrated in Section 3.4.

3.3 Perturbed Conditional Distribution
To satisfy differential privacy, a certain amount of noise should be injected into the compressed conditional distributions. The form of the noise may depend on the application and the privacy measure; for example, noise can be added to maximize entropy (Polettini, 2003) or to satisfy l-diversity (Park et al., 2013). In this paper, we use Dirichlet prior perturbation to smooth the raw count-based estimators so as to satisfy differential privacy and l-diversity. Specifically, virtual samples are added to each category of the variable when the conditional distribution is estimated. The amount added is a privacy parameter that controls the degree of differential privacy or l-diversity. To be more precise, our differentially private perturbation requires a single value of this parameter, while our l-diverse perturbation needs a different value for each hashed condition, i.e., the parameter must be indexed by the condition. This reflects the fact that differential privacy is a property of the random function, while l-diversity depends on properties of the dataset, an issue that will be touched upon later. For analytical simplicity, we assume the virtual samples are added uniformly to all categories of the variable (see Equation 4). In practice, different amounts of virtual samples can be added to different categories; for example, the amounts can be made proportional to the corresponding marginal distribution.
We first derive the probability of sampling a synthetic record from the PeGS algorithm. Starting from a random seed sample (or a query), the probability of synthesizing a record factorizes over the features: each feature is drawn conditionally on the features already resampled and the remaining seed features, with null placeholders at the boundaries. For another dataset that differs by at most one element, the probability of sampling can be derived in the same way.
For ε-differential privacy (see Equation 2), the ratio between the two sampling probabilities must be bounded by e^{\epsilon}. Let us focus on the j-th component:
(4)  \tilde{P}(x_j = i \mid h(\mathbf{x}_{-j})) \propto n_i + \alpha

(5)  \tilde{P}(x_j = i \mid h(\mathbf{x}_{-j})) = \frac{n_i + \alpha}{n + \alpha c_j}

where n is the total number of rows that share the hash key h(\mathbf{x}_{-j}), n_i is the count of the i-th category of x_j within those n samples, c_j is the number of categories of x_j, and \alpha is the privacy parameter. In other words, the probability of sampling the i-th category is proportional to the number of original samples that have the i-th category. The privacy parameter \alpha acts as a uniform Dirichlet prior on this raw multinomial count estimate.
The value of the privacy parameter α depends on the privacy criterion. We study two cases: ε-differential privacy and l-diversity.
A. Differential Privacy. The two datasets D and D' used in the definition of differential privacy differ by at most one row; let us assume that D' has one more row than D. Except for the entry with the corresponding hash key, the entries of the two hash tables built from D and D' are identical; only one entry of the hash table differs. For that differing entry, there are two possibilities:
In either case, we obtain an upper bound for the j-th component of the ratio:

\frac{\tilde{P}'(x_j = i \mid h)}{\tilde{P}(x_j = i \mid h)} \le \frac{n_i + 1 + \alpha}{n_i + \alpha} \le \frac{1 + \alpha}{\alpha} = 1 + \frac{1}{\alpha},

where the first inequality holds because the two datasets differ by at most one element, and the second follows from the fact that the counts are nonnegative and the bound is attained at n_i = 0. Multiplying the bounds of the p components, we obtain the relation between ε and α:

e^{\epsilon} = \left(1 + \frac{1}{\alpha}\right)^{p}.

Rearranging the terms, we have:

(6)  \alpha = \frac{1}{e^{\epsilon / p} - 1}
Note that for univariate binary synthetic data, McClure and Reiter (2012) showed a similar relationship between ε and α. Equation (6) says that a higher level of privacy (a lower ε) requires a higher value of α. Intuitively, high values of α mean stronger priors, so the synthetic data are more strongly masked by the priors (or virtual samples).
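The effect of α on the per-component probability ratio can be checked numerically: under the uniform Dirichlet perturbation, adding one record changes any category probability by a factor of at most 1 + 1/α. This is a small self-check we wrote, not the paper's code:

```python
def perturbed(counts, alpha):
    """Dirichlet-perturbed probabilities: (n_i + alpha) / (n + alpha * c)."""
    n, c = sum(counts), len(counts)
    return [(ni + alpha) / (n + alpha * c) for ni in counts]

def max_ratio(counts, alpha):
    """Worst-case probability ratio after adding one record to any category."""
    worst = 1.0
    for i in range(len(counts)):
        bumped = list(counts)
        bumped[i] += 1                         # neighboring dataset: one extra row
        for p, q in zip(perturbed(counts, alpha), perturbed(bumped, alpha)):
            worst = max(worst, p / q, q / p)
    return worst
```

For example, with alpha = 0.5 the per-component ratio never exceeds 1 + 1/0.5 = 3, regardless of the counts.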
B. l-Diversity. For l-diversity (see Equation 3), the perturbed conditional distributions need to satisfy the synthetic l-diversity criterion

H\left(\tilde{P}(x_j \mid h(\mathbf{x}_{-j}))\right) \ge \log l,

where H denotes the Shannon entropy of the perturbed distribution. The entropy is a monotonically increasing function of the perturbation parameter. To satisfy the synthetic l-diversity criterion with minimal perturbation, we set the parameter to the smallest value that meets the entropy threshold, and to zero when the raw conditional distribution already satisfies the criterion. Unlike the single α for differential privacy, the parameter values for l-diversity vary across conditional distributions. This is because l-diversity applies to a dataset, whereas differential privacy applies to a function: l-diversity is data-aware, but may not provide rigorous privacy guarantees. This is also noted by Clifton and Tassa (2013), who observed that syntactic methods such as k-anonymity and l-diversity are designed for privacy-preserving data publishing, while differential privacy is typically applicable to privacy-preserving data mining. Thus the two approaches are not directly competing, and can indeed be used side by side. Clifton and Tassa (2013) also provide a detailed assessment of the limitations and promise of both types of approaches.
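Because the perturbed entropy increases monotonically in the perturbation parameter, the minimal parameter can be found by bisection. A sketch under that assumption (names are ours):

```python
import math

def entropy(probs):
    return -sum(p * math.log(p) for p in probs if p > 0)

def alpha_for_l_diversity(counts, l, tol=1e-9):
    """Smallest alpha whose Dirichlet-perturbed distribution has entropy
    >= log(l), found by bisection on the monotone entropy curve."""
    n, c = sum(counts), len(counts)
    target = math.log(l)
    def h(alpha):
        return entropy([(ni + alpha) / (n + alpha * c) for ni in counts])
    if h(0.0) >= target:
        return 0.0                       # already l-diverse: no perturbation needed
    assert l < c, "need l < number of categories for a finite alpha"
    lo, hi = 0.0, 1.0
    while h(hi) < target:                # find an upper bracket
        hi *= 2.0
    while hi - lo > tol:                 # bisect on the bracket
        mid = (lo + hi) / 2.0
        if h(mid) < target:
            lo = mid
        else:
            hi = mid
    return hi
```

A highly skewed conditional (e.g., counts 97, 1, 1, 1) needs a sizable perturbation to reach entropy log 2, while a uniform conditional needs none.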
3.4 Removing Sampling Footprints
This section illustrates an effective block sampling extension of PeGS, specific to differential privacy. PeGS generates one synthetic sample per seed sample; in other words, each synthetic sample costs ε in the differential privacy regime. We modify the PeGS algorithm to sample a block of k samples from one seed sample while achieving the same ε-differential privacy. One sampling iteration of PeGS is now repeated many times, but each time the visited conditional distributions are reset. The procedure of Block PeGS with Reset (PeGS.rs) is as follows:

1. Pick a random seed from a predefined pool.
2. For each of the k draws in the block:
   (a) Sample a new record using PeGS, seeded by the previous sample (the first draw is seeded by the random seed).
   (b) Reset all visited conditional distributions.

This algorithm produces a block of k synthetic samples at the same privacy cost ε. Figure 3 illustrates the process of PeGS with Reset.
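The reset procedure can be sketched as follows; `sample_once` is a simplified one-pass PeGS using an identity hash, and all names are our own assumptions:

```python
import random

def sample_once(row, blocks, rng):
    """One PeGS pass: resample each feature from its conditional block.
    Returns the new row and the keys of the visited blocks."""
    row, visited = list(row), []
    for j in range(len(row)):
        key = (j, tuple(row[:j] + row[j + 1:]))   # identity "hash" for brevity
        dist = blocks.get(key)
        if dist:
            cats, weights = zip(*dist.items())
            row[j] = rng.choices(cats, weights=weights)[0]
            visited.append(key)
    return tuple(row), visited

def pegs_rs(seed, blocks, categories, k, rng):
    """Block PeGS with Reset: after each draw, reset every visited
    conditional to a uniform distribution, removing sampling footprints."""
    samples, current = [], seed
    for _ in range(k):
        current, visited = sample_once(current, blocks, rng)
        for (j, h) in visited:
            blocks[(j, h)] = {c: 1 for c in categories[j]}  # uniform counts
        samples.append(current)
    return samples
```

After the first draw, every visited conditional has been flattened to uniform, which is exactly why large blocks drift toward uniform samples, as discussed in Section 4.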
To analyze the privacy aspect of this modified PeGS algorithm, we first need the probability of synthesizing a block of samples, which factorizes into a product of transition probabilities between consecutive samples. Note that the conditional distributions used at different iterations are not identical, as visited components are reset to their initial states. The ratio between the block probabilities under two neighboring datasets is then the product of the per-transition ratios. Recall that the statistical building blocks from the two datasets differ in at most p components (one per feature), as the two datasets differ by at most one element. We provide a sketch of the proof that this algorithm satisfies ε-differential privacy:

1. To generate the same block of samples, the sequences of statistical building blocks must be the same as well: since consecutive samples agree under both datasets, the sampler reads building blocks from the same locations at each iteration.

2. There are at most p differing components between the two collections of building blocks; let S denote this set of components. This is because the two datasets differ by at most one row.

3. If the sampler touches components in S, the privacy cost ε is spent in the process (see Section 3.3).

4. After components in S have been touched, the remaining sequence can differ in at most the remaining components of S. This is because touched components are reset to uniform distributions, so the visited components from the two datasets become the same uniform distribution and are thereafter indistinguishable. Every visit to an element of S decreases the number of differing components.

5. Therefore, the whole sequence can differ in at most |S| ≤ p components (an upper bound), and the proposed block sampling algorithm satisfies the same ε-differential privacy for generating a block of samples.
As we obtain more samples for the same cost, the privacy cost per sample can be written as ε/k, where k is the block size. As can be seen, the per-sample privacy cost is smaller by a factor of k. However, the block size cannot be arbitrarily large: as every visited statistical building block is reset, the synthetic samples become noisier as the block size increases. This property will be illustrated using a real dataset in Section 4.
3.5 Perturbed Multiple Imputation
The Dirichlet perturbation can similarly be applied to multiple imputation. Perturbed Multiple Imputation is a naive extension of multiple imputation that satisfies ε-differential privacy. In multiple imputation with generalized linear models, each feature is imputed from the estimated response probabilities of a generalized linear model. We assume that the response is a normalized probability measure, i.e., the predicted class probabilities sum to one. Perturbed multiple imputation perturbs these predicted probabilities with the same Dirichlet-style prior before sampling. The proof of the privacy guarantee is analogous to the proof for the PeGS algorithm. With the perturbation set to zero, this algorithm reduces to multiple imputation with a generalized linear model.
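One plausible form of the output perturbation, assuming the generalized linear model yields normalized class probabilities (the exact formula here is our assumption, written by analogy with Equation (4)):

```python
def perturb_glm_probs(probs, alpha):
    """Mix a GLM's predicted class probabilities with alpha virtual samples
    per category, analogous to the Dirichlet perturbation used in PeGS.
    Assumes sum(probs) == 1; the output is again a probability vector."""
    c = len(probs)
    return [(p + alpha) / (1 + alpha * c) for p in probs]
```

With alpha = 0 the predictions are unchanged; as alpha grows, the imputed categories approach a uniform draw, mirroring the privacy/utility trade-off of PeGS.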
4 Empirical Study
In this section, we evaluate the PeGS algorithm on a real dataset from two perspectives: the utility and the risk of the PeGS-synthesized data. The utility is measured by comparing marginal distributions, conditional distributions, and regression coefficients with those from the original data. The risk is first measured by the differential privacy parameter ε. As this parameter can be too conservative for a real dataset, we also measure population uniqueness and indirect probabilistic disclosure risks. The presented experiments mainly use the differentially private perturbation; experiments with the l-diversity perturbation can be found in Park et al. (2013).
4.1 Dataset Overview
We use public Patient Discharge Data from the California Office of Statewide Health Planning and Development (http://www.oshpd.ca.gov/HID/Data_Request_Center/Manuals_Guides.html). This dataset contains inpatient, emergency care, and ambulatory surgery data collected from licensed California hospitals. Each row of the data represents either one discharge event of a patient or one outpatient encounter. The data are already processed with several disclosure limitation techniques: feature generalization and masking rules are applied based on population uniqueness.
For our experiment, we use the 2011 Los Angeles data. Although there are almost 40 variables in the provided data, we use 13 important variables, listed in Table 2. Numeric variables such as age and charge were transformed into categorical variables by grouping. We subset the data to focus on populous ZIP code areas, to prevent any possible privacy infringement from our experiment. We use this preprocessed dataset as our ground-truth original data. As can be seen, the number of possible category combinations is approximately 2 trillion; a table of this size cannot be stored on a personal computer.

Diagnostic and procedural codes are not included in this experiment. In the original data, diagnoses and procedures are coded following the International Classification of Diseases (ICD-9). Both code sets can specify very fine levels of diagnoses and procedures; for example, an ICD-9 code can encode an underlying disease and a manifestation in a particular organ. These diagnostic and procedural codes can be grouped into a smaller number of categories: Major Diagnostic Categories (MDC) and Medicare Severity Diagnosis-Related Groups (MS-DRG) are two examples of coarser diagnostic codes. In this example, we only include higher-level abstractions of the detailed features. To keep the semantics of the data, we recommend a two-step procedure: first synthesize a higher-level feature, then synthesize the detailed features conditioned on it.
Three numeric variables, age, length-of-stay (los), and charge, are grouped and transformed into categorical features. The age variable is equipartitioned into 5-year bins. The los and charge variables are grouped based on their marginal distributions. For example, almost half of the population stayed fewer than 10 days in a hospital; thus, the los variable is grouped into 1-day bins below the 10-day threshold and 20-day bins above it. The charge variable exhibits a similar marginal distribution: almost half of the population pays less than 20K dollars, and we binned this variable into nearly equal-sized groups. The grouping rules are given in Table 2. In Section 5, we discuss the limitations of, and extensions to, treating numeric variables in the PeGS framework.
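The length-of-stay grouping can be sketched with plain bin edges; note that the exact boundary between the 70-day and 90+ bins is our guess from the 16-level description in Table 2:

```python
import bisect

def bin_value(value, edges, labels):
    """Map a numeric value to a categorical label using sorted bin edges."""
    return labels[bisect.bisect_right(edges, value)]

# Length-of-stay grouping sketched from the paper's description:
# 1-day bins for 0-9 days, then coarse 20-day bins, then 90+.
los_edges = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 30, 50, 70, 90]
los_labels = ['0', '1', '2', '3', '4', '5', '6', '7', '8', '9',
              '10-30', '30-50', '50-70', '70-90', '90+']
```

For example, a 12-day stay falls in the '10-30' bin, while any stay of 90 days or more maps to '90+'.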
Variable Name  Description  Category Values 

typ  Type of care  Acute Care, Skilled Nursing, Psychiatric, etc. (6 levels) 
age.yrs  Age of the patient (5 years bin)  0, 5, 10, 15, …, 80, NA (18 levels) 
sex  Gender of the patient  Male, Female, NA (3 levels) 
ethncty  Ethnicity of the patient  Hispanic, NonHispanic, Unknown, NA (4 levels) 
race  Race of the patient  White, Black, Native American, Asian, etc. (7 levels) 
patzip  Patient ZIP code (in LA)  900xx, 902xx, … , 935xx (16 levels) 
los  Length of stay (in days)  0, 1, 2, … , 9, 10–30, 30–50, 50–70, 70–90, 90+, NA (16 levels) 
disp  The consequent arrangement  Routine, Acute Care, Other Care, etc. (13 levels) 
pay  Payment category  Medicare, Medi-Cal, Private, etc. (9 levels) 
charge  Total hospital charge during the stay  0, 2K, 6K, 8K, 10K, 15K, 20K, …, 100K+ (25 levels) 
MDC  Major diagnostic category  Nervous sys., Eye, ENMT, etc. (25 levels) 
sev  Severity code  0, 1, 2 (3 levels) 
cat  Category code  Medical, Surgical (2 levels) 
4.2 Sampling Demonstration
PeGS transforms each feature one by one conditioned on the rest of the features. This approach differs from a multiple imputation strategy in two aspects. First, PeGS estimates compressed conditional distributions rather than parameterized approximations e.g., generalized linear models. Second, the compressed conditional distributions can be further perturbed by calibrating the privacy parameter, which makes synthetic data differentially private. Table 3 shows how PeGS transforms a random seed into a private synthetic sample. The first row of the table is a random seed, and each consecutive row shows the corresponding sampling step. Note that some features change their values, whereas other features maintain the original values. The final sample is shown in the last row. As can be seen, the final transformed sample is different from the seed; for example, it has a different age, zip code, and disposition code.
sequence  typ  age.yrs  sex  ethncty  race  patzip  los  disp  pay  charge  MDC  sev  cat
seed      4    55   2  1  1  917xx  8  1  3  40K  25  1  M
step 1    5    55   2  1  1  917xx  8  1  3  40K  25  1  M
step 2    5    75   2  1  1  917xx  8  1  3  40K  25  1  M
step 3    5    75   2  1  1  917xx  8  1  3  40K  25  1  M
step 4    5    75   2  2  1  917xx  8  1  3  40K  25  1  M
step 5    5    75   2  2  1  917xx  8  1  3  40K  25  1  M
step 6    5    75   2  2  1  913xx  8  1  3  40K  25  1  M
step 7    5    75   2  2  1  913xx  9  1  3  40K  25  1  M
step 8    5    75   2  2  1  913xx  9  5  3  40K  25  1  M
step 9    5    75   2  2  1  913xx  9  5  3  40K  25  1  M
step 10   5    75   2  2  1  913xx  9  5  3  65K  25  1  M
step 11   5    75   2  2  1  913xx  9  5  3  65K  7   1  M
step 12   5    75   2  2  1  913xx  9  5  3  65K  7   0  M
step 13   5    75   2  2  1  913xx  9  5  3  65K  7   0  M
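The per-feature sampling steps in Table 3 can be sketched as follows. This is a simplified illustration, not the paper's implementation: the conditional distribution of each feature is estimated from exact-matching rows, and the perturbation is reduced to a single pseudo-count parameter (the function names, the pseudo-count scheme, and the toy data layout are all assumptions made for this sketch).

```python
import random
from collections import Counter

def perturbed_conditional(data, j, context, alpha=1.0):
    """Empirical distribution of feature j given the other features,
    with a pseudo-count alpha added to every level as a simple
    stand-in for PeGS's calibrated perturbation."""
    levels = sorted({row[j] for row in data})
    matched = [row[j] for row in data
               if all(row[k] == context[k]
                      for k in range(len(context)) if k != j)]
    counts = Counter(matched)
    weights = [counts[v] + alpha for v in levels]
    total = sum(weights)
    return levels, [w / total for w in weights]

def pegs_pass(data, seed_row, alpha=1.0, rng=None):
    """One PeGS-style sweep: resample each feature in turn,
    conditioned on the current values of all other features."""
    rng = rng or random.Random(0)
    x = list(seed_row)
    for j in range(len(x)):
        levels, probs = perturbed_conditional(data, j, x, alpha)
        x[j] = rng.choices(levels, weights=probs, k=1)[0]
    return x
```

Each iteration of the loop corresponds to one row of Table 3; a larger `alpha` flattens the conditionals, trading utility for privacy.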
Unlike multiple imputation, PeGS can be iterated many times. Without the reset option, however, the extra iterations bring no gain relative to their privacy cost. The reset option in PeGS.rs removes the sampling footprints, but after many iterations the synthetic samples may no longer represent the original data well. Figure 4 shows histograms of the generated samples. As can be seen, the block samples from PeGS.rs are more uniformly distributed than those from PeGS; the distributions from PeGS are actually closer to the distribution of the original data than those from PeGS.rs. It is important to note that PeGS and PeGS.rs have different privacy costs in this experiment: PeGS.rs consumed a much smaller privacy budget than PeGS. The goal of this experiment is to show the limitation of PeGS.rs: although it provides a larger number of samples for the same privacy cost, an arbitrarily large block size may not be useful in practice.
4.3 Risk (ε) vs. Utility
Reducing disclosure risk and improving data utility are two competing objectives when publishing privacy-safe synthetic data. As these two goals cannot be satisfied at the same time, a certain trade-off is necessary when preparing public use data. This trade-off has traditionally been represented using a graphical measure, the R-U confidentiality map (Duncan et al., 2001). The R-U confidentiality map has two axes: typically a risk measure on the x-axis and a utility measure on the y-axis. Note that risk and utility measures can be domain and application specific. In this paper, we first show R-U maps where the risk is measured using differential privacy. The utility is primarily measured by comparing statistics from the original data and the synthetic data.
We use three different algorithms and seven different privacy parameters for each algorithm as follows:

PeGS: Perturbed Gibbs Sampler

PeGS.rs: Perturbed Gibbs Block Sampler with Reset. Block size = 10.

PMI: Perturbed Multiple Imputation (baseline algorithm). With higher values of ε, this is the same as a multiple imputation strategy for fully synthetic data. In PMI, the conditional distributions are modeled using elastic-net regularized multinomial logistic regression, specifically the glmnet package in R 2.15.3 (Friedman et al., 2010). Each variable is regressed on the rest of the variables, the intercept and coefficient parameters are estimated from the data, and the regularization parameter is tuned based on cross-validation.
The privacy parameters are specified per synthetic sample. We generated 1000 samples for each case; as a result, we have 3 × 7 = 21 synthetic datasets and one original dataset.
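The PMI baseline can be sketched in Python as follows. Note the assumptions: the paper used the glmnet package in R, so scikit-learn's elastic-net logistic regression is substituted here as an analogue, the `l1_ratio` value is arbitrary rather than cross-validated, and the array-of-categories data layout is illustrative.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import OneHotEncoder

def fit_conditional(X, j):
    """Fit p(x_j | x_-j) with an elastic-net multinomial model
    (a stand-in for glmnet's regularized multinomial regression)."""
    rest = np.delete(X, j, axis=1)
    enc = OneHotEncoder(handle_unknown="ignore").fit(rest)
    clf = LogisticRegression(penalty="elasticnet", solver="saga",
                             l1_ratio=0.5, max_iter=2000)
    clf.fit(enc.transform(rest), X[:, j])
    return enc, clf

def sample_feature(X, j, rng):
    """Draw a synthetic value of feature j for every row,
    sampling from the model's predicted class probabilities."""
    enc, clf = fit_conditional(X, j)
    rest = np.delete(X, j, axis=1)
    probs = clf.predict_proba(enc.transform(rest))
    idx = [rng.choice(len(clf.classes_), p=p) for p in probs]
    return clf.classes_[idx]
```

In PMI, a perturbation would additionally be applied to these predicted probabilities before sampling; that step is omitted in this sketch.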
The utility is first measured using marginal and conditional distributions. These distributions are computed from the original and the synthetic datasets, and the distance between them is then calculated: the marginal distance aggregates, over the features, the difference between each feature's marginal distribution in the original and the synthetic data, and the conditional distance does the same for the conditional distribution of each feature given the remaining features.
Here the distance is an inverse surrogate for the utility. Figure 5 and Figure 6 show the R-U maps where the utility is measured as the difference in marginal and conditional distributions, respectively. As can be seen, all synthetic datasets become similar to the original data for higher values of ε. However, for smaller values of ε, the synthetic data from PeGS.rs are much more similar to the original than the others. The distributional distances of PeGS are slightly smaller than those of PeGS.rs for higher values of ε. Since the ε values are very small for these privacy parameters, the reset operation of PeGS.rs becomes more noticeable, and it pushes the synthetic samples away from the original distributions.
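The marginal-distance measure described above can be sketched as an average per-feature L1 gap between the two datasets' empirical marginals. The exact formula used in the paper is not reproduced here; this is one common choice, and the row-tuple data layout is assumed.

```python
from collections import Counter

def marginal_distance(original, synthetic):
    """Average L1 distance between per-feature marginal
    distributions of two categorical datasets."""
    n_feat = len(original[0])
    total = 0.0
    for j in range(n_feat):
        p = Counter(row[j] for row in original)
        q = Counter(row[j] for row in synthetic)
        levels = set(p) | set(q)
        total += sum(abs(p[v] / len(original) - q[v] / len(synthetic))
                     for v in levels)
    return total / n_feat
```

Identical datasets give a distance of 0; completely disjoint level sets give the maximum of 2 per feature. The conditional distance is analogous, computed on conditionals instead of marginals.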
Next, we compare the coefficients of regression models learned on the datasets. We learned a logistic and a linear regression model, where some of the features are converted to numeric features based on their actual meaning. The choice of the target variable was arbitrary, as the goal of this illustrative experiment is to show the applicability of synthetic data in predictive modeling tasks. After learning the coefficients of each model, the distance between the coefficient vectors from the original and the synthetic data is measured.
Figure 7 shows the R-U map from the regression experiment. As can be seen, the synthetic samples from PeGS.rs provide the coefficients most similar to those from the original data. Figure 8 shows the deviation of each coefficient in the linear regression example for two different differential privacy levels. Notice that the intercept coefficients from the synthetic datasets tend to overshoot the actual value, while the other feature coefficients tend to undershoot. This is because the perturbation weakens all feature correlations, including the correlation between the target and the independent variables.
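The coefficient-comparison utility measure can be sketched as follows: fit the same model on the original and on a synthetic dataset, then compare the coefficient vectors. Ordinary least squares via numpy stands in for the paper's regression models, and the L2 norm is used as one reasonable choice of distance (the paper's exact metric is not reproduced here).

```python
import numpy as np

def ols_coefficients(X, y):
    """Least-squares fit with an intercept column prepended,
    so beta[0] is the intercept and beta[1:] the feature weights."""
    A = np.column_stack([np.ones(len(X)), np.asarray(X, dtype=float)])
    beta, *_ = np.linalg.lstsq(A, np.asarray(y, dtype=float), rcond=None)
    return beta

def coefficient_distance(orig, synth):
    """L2 distance between coefficient vectors fitted on two datasets."""
    b_orig = ols_coefficients(*orig)
    b_synth = ols_coefficients(*synth)
    return float(np.linalg.norm(b_orig - b_synth))
```

A distance near zero means the synthetic data would support the same regression analysis as the original; the intercept overshoot noted above would show up as a large deviation in `beta[0]`.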
4.4 Estimating Re-identification Risk
Although differential privacy provides a theoretically sound framework for measuring disclosure risks, the measure was originally designed for functions, not data (Dankar and Emam, 2012). In many cases, it can be overly conservative or strict for a real dataset. In the statistical disclosure limitation literature, there have been many attempts to measure disclosure risks for synthetic data. Franconi and Stander (2002) proposed a method to quantify disclosure risks for model-based synthetic data. Their approach checks whether it is possible to recognize a unit in the released data, assuming the original data are given to an intruder. This provides a somewhat conservative measure, but it is still useful for comparing the risks of different release mechanisms. Reiter (2005b) later formalized probabilistic disclosure risk scores for partially and fully synthetic data. Caiola and Reiter (2010) used such probabilistic disclosure risks to assess the risks of fully synthetic data generated with Random Forests.
In this paper, we measure the disclosure risks from two different angles: recoverability of feature values and population uniqueness. First, we examine whether it is possible to infer the values of sensitive features given demographic information. Specifically, if the intruder knows someone's age, sex, los, and zip, we measure the likelihood of inferring the correct sensitive values, where the inferred values are (1) the most frequent MDC category and (2) the sample mean of hospital charge from conditioned synthetic samples. We also measure the population uniqueness based on age, sex, and zip code information. Figure 9 shows the results of this simulated intruder experiment. Private records are more difficult to reconstruct when misclassification rates and absolute errors are high. The probability of recovering MDC is significantly lower than with a simple bootstrap method, but no single method is distinctly better than the others. The absolute distance of hospital charges shows that synthetic data have predictive power comparable to the bootstrap method. Noticeably, the absolute errors are higher when the differential privacy parameters are low, and this finding partially supports our use of differential privacy as a disclosure risk measure. As can be seen in Figure 9 (right), the perturbed synthetic datasets have more unique samples. This is the most distinct characteristic of PeGS compared to other statistical disclosure techniques. Privacy-preserving algorithms such as k-anonymity and ℓ-diversity try to reduce population uniqueness, whereas PeGS increases the diversity of samples: the former apply privacy-preserving transforms to the original data, while the latter synthesizes a diversified dataset.
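The simulated intruder and the population-uniqueness count can be sketched as follows. The field names and dictionary-per-record layout are illustrative, not the paper's data format.

```python
from collections import Counter

def infer_sensitive(synthetic, known, sensitive_key):
    """Intruder's guess: the most frequent sensitive value among
    synthetic rows matching the known quasi-identifiers."""
    matches = [row[sensitive_key] for row in synthetic
               if all(row[k] == v for k, v in known.items())]
    return Counter(matches).most_common(1)[0][0] if matches else None

def population_uniques(data, quasi_keys):
    """Number of records whose quasi-identifier combination
    appears exactly once in the dataset."""
    combos = Counter(tuple(row[k] for k in quasi_keys) for row in data)
    return sum(1 for c in combos.values() if c == 1)
```

Comparing the guesses of `infer_sensitive` against the true values over many target records yields the misclassification rate plotted in Figure 9; a perturbed synthesizer should drive this rate up while a bootstrap of the original data keeps it low.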
5 Concluding Remarks
In this paper, we proposed a categorical data synthesizer that guarantees prescribed differential privacy or ℓ-diversity levels. The use of a hash function allows the Perturbed Gibbs Sampler to handle high-dimensional categorical data. The nonparametric modeling of categorical data provides a flexible alternative to traditional (GLM-based) multiple imputation techniques. Additionally, this simple representation of conditional distributions is a crucial component of our block sampling algorithm, which enhances the utility of synthetic data given a fixed privacy budget.
The California Patient Discharge dataset was used to demonstrate the analytical validity and utility of the proposed synthesis methodologies. Marginal and conditional distributions, as well as regression coefficients of predictive models learned from the synthesized data, were compared to those from the original data to quantify the amount of distortion introduced by the synthesis process. Simulated intruder scenarios were studied to show the confidentiality of the synthesized data. The empirical studies showed that the proposed mechanisms can provide useful risk-calibrated synthetic data.
Currently, PeGS only deals with categorical variables; numeric variables need to be binned to form categorical variables. Although this approach may be good enough for some applications, brute-force binning ignores numeric similarity and ordering information. For example, two consecutive values of an ordinal variable are more similar than two widely separated values. Consider a size variable with three values: small, medium, and large. The ordering implies that similarity(small, medium) > similarity(small, large), but this information is lost if we bin the size variable into three (non-ordered) categories. Such semantic correlation cannot be captured in the current synthesis and perturbation model.
In addition to the perturbation step, the hashing step of PeGS also provides some degree of privacy protection, although it was originally designed for computational efficiency. When building the PeGS statistical building blocks, each row of the original data is hashed and aggregated with the other rows that share the same hash key. Although, in this paper, the privacy guarantee of PeGS is analyzed from the perturbation perspective, this aggregation (or hashing) step should also be incorporated for a tighter privacy guarantee. The privacy guarantee of PeGS would be affected by different hash resolutions and mechanisms, and this topic needs to be covered in future work.
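The hashing-and-aggregation step discussed above can be sketched as follows: the context (all features except the one being modeled) is hashed to a bucket, and conditional counts are aggregated per bucket, so that distinct high-dimensional contexts may share statistics. The bucket count and the function name are illustrative choices, not the paper's configuration.

```python
from collections import Counter, defaultdict

def build_hashed_conditionals(data, j, n_buckets=64):
    """Aggregate counts of feature j per hashed context bucket.
    Rows with different contexts may collide in a bucket, which
    is exactly the aggregation effect discussed in the text."""
    table = defaultdict(Counter)
    for row in data:
        context = tuple(v for k, v in enumerate(row) if k != j)
        bucket = hash(context) % n_buckets
        table[bucket][row[j]] += 1
    return table
```

Fewer buckets mean more collisions, hence coarser conditionals: more aggregation-based privacy, less fidelity, which is why hash resolution matters for the overall privacy guarantee.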
Although the proposed algorithms show better performance on differential privacy and ℓ-diversity measures (experimental results on diversified synthetic data are presented in Park et al. (2013)), they were only marginally better than perturbed multiple imputation on other probabilistic disclosure risk measures. The differential privacy measure may be too conservative for real data, and the probabilistic measure may not exhaustively capture all attack scenarios; this is why we provided multiple risk measures. The connection between differential privacy and disclosure risks should be further investigated to better evaluate the validity and utility of synthetic data.
In practice, multiple disclosure techniques are sequentially combined to achieve better protection of the records. For example, PeGS can be applied on top of feature generalization or masking techniques. Furthermore, some features, such as numeric features, can be modeled using generalized linear models. It would be worthwhile to investigate cocktails of different statistical disclosure limitation techniques.
Acknowledgment
We thank Mallikarjun Shankar for helpful discussions.
References
 Abowd and Vilhuber (2008) J. M. Abowd and L. Vilhuber. How protective are synthetic data? Privacy in Statistical Databases, 5262:239–246, 2008.
 Abowd and Woodcock (2001) J. M. Abowd and S. D. Woodcock. Disclosure limitation in longitudinal linked data. Confidentiality, Disclosure and Data Access: Theory and Practical Applications for Statistical Agencies, pages 215–277, 2001.
 Barak et al. (2007) B. Barak, K. Chaudhuri, C. Dwork, S. Kale, F. McSherry, and K. Talwar. Privacy, accuracy, and consistency too: A holistic solution to contingency table release. In The 26th ACM SIGMOD-SIGACT-SIGART Symposium on Principles of Database Systems, 2007.
 Caiola and Reiter (2010) G. Caiola and J. P. Reiter. Random Forests for Generating Partially Synthetic, Categorical Data. Transactions on Data Privacy, 3:27–42, 2010.

Centers for Medicare and Medicaid Services (2013)
Centers for Medicare and Medicaid Services.
Medicare Claims Synthetic Public Use Files (SynPUFs).
http://www.cms.gov/ResearchStatisticsDataandSystems/
StatisticsTrendsandReports/SynPUFs/, 2013.  Charest (2012) A.S. Charest. Empirical evaluation of statistical inference from differentiallyprivate contingency tables. Privacy in Statistical Databases, 7556:257–272, 2012.
 Clifton and Tassa (2013) C. Clifton and T. Tassa. On syntactic anonymity and differential privacy. Transactions on Data Privacy, 6(2):161–183, 2013.
 Dale and Elliot (2001) A. Dale and M. Elliot. Proposals for 2001 samples of anonymized records: an assessment of disclosure risk. Journal of the Royal Statistical Society: Series A, 2001.
 Dalenius and Reiss (1978) T. Dalenius and S. P. Reiss. Data-swapping: A technique for disclosure control (extended abstract). In Proceedings of the Section on Survey Research Methods, 1978.
 Dankar and Emam (2012) F. K. Dankar and K. E. Emam. The application of differential privacy to health data. In Proceedings of Privacy and Anonymity in the Information Society (PAIS), 2012.
 Drechsler and Reiter (2010) J. Drechsler and J. P. Reiter. Sampling with synthesis: A new approach for releasing public use census microdata. Journal of the American Statistical Association, 105(492):1347–1357, 2010.
 Duncan and Lambert (1986) G. T. Duncan and D. Lambert. Disclosure-limited data dissemination. Journal of the American Statistical Association, 1986.
 Duncan et al. (2001) G. T. Duncan, S. A. Keller-McNulty, and S. L. Stokes. Disclosure risk vs. data utility: The R-U confidentiality map. Technical report, National Institute of Statistical Sciences, 2001.
 Dwork (2006) C. Dwork. Differential privacy. In Proceedings of the 33rd International Colloquium on Automata, Languages and Programming, volume 4052, pages 1–12, 2006.
 Fienberg and McIntyre (2005) S. E. Fienberg and J. McIntyre. Data swapping: Variations on a theme by Dalenius and Reiss. Journal of Official Statistics, 21:309–323, 2005.
 Fienberg et al. (2010) S. E. Fienberg, A. Rinaldo, and X. Yang. Differential privacy and the risk-utility tradeoff for multi-dimensional contingency tables. In The 2010 International Conference on Privacy in Statistical Databases, pages 187–199, 2010.
 Franconi and Stander (2002) L. Franconi and J. Stander. A model-based method for disclosure limitation of business microdata. Journal of the Royal Statistical Society, Series D, 51(1):51–61, 2002.
 Friedman et al. (2010) J. Friedman, T. Hastie, and R. Tibshirani. Regularization paths for generalized linear models via coordinate descent. Journal of Statistical Software, 2010.
 Fuller (1993) W. A. Fuller. Masking procedures for microdata disclosure limitation. Journal of Official Statistics, 9(2):383–406, 1993.
 Gionis et al. (1999) A. Gionis, P. Indyk, and R. Motwani. Similarity search in high dimensions via hashing. In Proceedings of the 25th International Conference on Very Large Data Bases, 1999.

 Indyk and Motwani (1998) P. Indyk and R. Motwani. Approximate nearest neighbors: Towards removing the curse of dimensionality. In Proceedings of the 30th Symposium on Theory of Computing, 1998.
 Machanavajjhala et al. (2007) A. Machanavajjhala, D. Kifer, J. Gehrke, and M. Venkitasubramaniam. ℓ-diversity: Privacy beyond k-anonymity. Transactions on Knowledge Discovery from Data, 1, 2007.
 Machanavajjhala et al. (2008) A. Machanavajjhala, D. Kifer, J. Abowd, J. Gehrke, and L. Vilhuber. Privacy: Theory meets Practice on the Map. In Proceedings of the 24th International Conference on Data Engineering, 2008.
 McClure and Reiter (2012) D. McClure and J. P. Reiter. Differential privacy and statistical disclosure risk measures: An investigation with binary synthetic data. Transactions on Data Privacy, 5:535–552, 2012.
 McSherry and Talwar (2007) F. McSherry and K. Talwar. Mechanism design via differential privacy. In Proceedings of the 48th Annual Symposium of Foundations of Computer Science, 2007.
 Park et al. (2013) Y. Park, J. Ghosh, and M. Shankar. Lugs: A scalable nonparametric data synthesizer for privacy preserving big health data publication. In International Conference on Machine Learning WHEALTH 2013, 2013.
 Polettini (2003) S. Polettini. Maximum entropy simulation for microdata protection. Statistics and Computing, 13:307–320, 2003.
 Raghunathan et al. (2003) T. E. Raghunathan, J. P. Reiter, and D. B. Rubin. Multiple imputation for statistical disclosure limitation. Journal of Official Statistics, 19(1):1–16, 2003.
 Reiter (2003) J. P. Reiter. Using CART to generate partially synthetic, public use microdata. Journal of Official Statistics, 441–462, 2003.
 Reiter (2005a) J. P. Reiter. Releasing multiply imputed, synthetic public use microdata: an illustration and empirical study. Journal of the Royal Statistical Society, Series A, 168:185–205, 2005a.
 Reiter (2005b) J. P. Reiter. Estimating risks of identification disclosure in microdata. Journal of the American Statistical Association, 100(472):1103–1112, 2005b.
 Reiter and Drechsler (2010) J. P. Reiter and J. Drechsler. Releasing multiply imputed synthetic data generated in two stages to protect confidentiality. Statistica Sinica, 20:405–421, 2010.
 Rubin (1987) D. B. Rubin. Multiple Imputation for Nonresponse in Surveys. Wiley, 1987.
 Rubin (1993) D. B. Rubin. Discussion: Statistical disclosure limitation. Journal of Official Statistics, 9:462–468, 1993.
 Sakshaug and Raghunathan (2011) J. W. Sakshaug and T. E. Raghunathan. Synthetic data for small area estimation. Privacy in Statistical Databases, 6344:162–173, 2011.
 Soria-Comas and Drechsler (2013) J. Soria-Comas and J. Drechsler. Evaluating the potential of differential privacy mechanisms for census data. In UNECE Conference of European Statisticians, 2013.
 Sweeney (2002) L. Sweeney. k-anonymity: a model for protecting privacy. Int. J. Uncertain. Fuzziness Knowl.-Based Syst., 10:557–570, October 2002. ISSN 0218-4885.
 Vreeken and Siebes (2008) J. Vreeken and A. Siebes. Filling in the blanks  KRIMP minimisation for missing data. In the 8th IEEE International Conference on Data Mining, pages 1067–1072, 2008.
 Weinberger et al. (2009) K. Weinberger, A. Dasgupta, J. Langford, A. Smola, and J. Attenberg. Feature hashing for large scale multitask learning. In Proceedings of the 26th International Conference on Machine Learning, 2009.
 Willenborg and de Waal (2001) L. Willenborg and T. de Waal. Elements of Statistical Disclosure Control, volume 155. Springer, 2001.
 Winkler (2003) W. E. Winkler. A contingencytable model for imputing data satisfying analytic constraints. U.S. Census Bureau Statistical Research Division Research Report Series, 2003.
 Winkler (2010) W. E. Winkler. General discretedata modeling methods for producing synthetic data with reduced reidentification risk that preserve analytic properties. U.S. Census Bureau Statistical Research Division Research Report Series, 2010.
 Yang et al. (2012) X. Yang, S. E. Fienberg, and A. Rinaldo. Differential privacy for protecting multidimensional contingency table data: Extensions and applications. Journal of Privacy and Confidentiality, 4(1):101–125, 2012.