Differentially Private Hierarchical Group Size Estimation

Consider the problem of estimating, for every integer j, the number of households with j people in them, while protecting the privacy of individuals. Add in a geographical component, so that the household size distribution can be compared at the national, state, and county levels. This is an instance of the private hierarchical group size estimation problem, in which each group is associated with a size and a hierarchical attribute. In this paper, we introduce this problem, along with appropriate error metrics and propose a differentially private solution that generates group size estimates that are consistent across all levels of the hierarchy.

1 Introduction

The publication of differentially private tables is an important area of study, with applications to government statistical agencies, like the U.S. Census Bureau, that collect and publish economic and demographic data about the population. Most work has focused on ordinary histograms – for example, generating counts of how many people are in each combination of age, state, and business sector.

However, an important class of queries that has been under-studied is hierarchical count-of-counts histograms, which are used to study the skewness of a distribution. The 2010 Decennial Census published 33 tables related to such queries [9], but these tables were truncated because formal privacy methods for protecting them did not exist. To get a count-of-counts histogram, one first aggregates records in a table R into groups (e.g., A = SELECT groupid, COUNT(*) AS size FROM R GROUP BY groupid) and then forms a histogram on the groups (H = SELECT size, COUNT(*) FROM A GROUP BY size). Thus $H$ can be treated as an array, where $H[i]$ is the number of groups of size $i$. When there is a hierarchical attribute associated with each group, such as location, the goal is to estimate a histogram for every element in the hierarchy and enforce consistency: the sum of the histograms at the children equals the histogram at the parent. For example, consider a table Persons(personid, groupid, location), with hierarchy national/state/county on the location attribute. A hierarchical count-of-counts histogram on this table would ask: for each geographic region (national, state, county) and every $i$, how many households (i.e., groups) in that region have $i$ people?
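
For concreteness, here is a minimal sketch (ours, not the paper's code) of how the two aggregation queries above translate into Python, assuming the Persons records are available as (personid, groupid) pairs:

    from collections import Counter

    def count_of_counts(person_group_pairs):
        # Build a count-of-counts histogram H, where H[i] is the number of
        # groups containing exactly i entities (mirrors the two GROUP BY queries).
        group_sizes = Counter(gid for _, gid in person_group_pairs)  # A: groupid -> size
        return dict(Counter(group_sizes.values()))                   # H: size -> number of groups

    # Example: persons 1, 2, 3 live in group 'a'; person 4 lives in group 'b'
    print(count_of_counts([(1, 'a'), (2, 'a'), (3, 'a'), (4, 'b')]))  # {3: 1, 1: 1}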

Closely related to count-of-counts histograms are unattributed histograms (also known as frequency lists) [19, 8], which are not hierarchical and whose structure is not well suited to expressing hierarchical constraints on a secondary attribute like location. Unattributed histograms report the number of rows in the smallest group, followed by the number of rows in the second smallest group, etc., i.e., S = SELECT COUNT(*) AS size FROM R GROUP BY groupid ORDER BY size. One can convert unattributed histograms into count-of-counts histograms and vice versa, so differentially private unattributed histograms [19, 24] could be used to generate differentially private count-of-counts histograms (and vice versa; see Section 4.2).

However, the main focus of our work is hierarchies in which every group (e.g., household) belongs to exactly one leaf (e.g., county); thus groupid → location is a functional dependency. Such hierarchies are natural and important for count-of-counts histograms – all of the queries in the 2010 Census were hierarchical (by location). However, they are not well supported by existing methods. Currently, the only known way to get consistent differentially private count-of-counts or unattributed histograms at every level of a hierarchy is to estimate them only at the leaves and then aggregate them up the hierarchy. However, our experiments show this approach introduces high error at non-leaf nodes (as in other hierarchical problems [19, 28]).

An alternative is to separately obtain differentially private estimates at every node in the hierarchy and then postprocess them to be consistent (i.e., aggregating information from the children should produce the corresponding histogram at the parent). However, this is far from trivial. To see why, suppose we aggregate the Persons table by group id (gid) to obtain the following table:

  gid  size  loc.
  1    4     a
  2    2     b
  3    1     a
  4    1     b

At the root, the count-of-counts histogram is $(2, 1, 0, 1)$ (i.e., 2 groups of size one, 1 group of size two, 0 of size three, 1 group of size four) and the corresponding unattributed histogram is $(1, 1, 2, 4)$. At node $a$, the count-of-counts histogram is $(1, 0, 0, 1)$ and the unattributed histogram is $(1, 4)$; at node $b$ they are $(1, 1, 0, 0)$ and $(1, 2)$, respectively.

The unattributed histogram is not additive: the unattributed histograms at $a$ and $b$ have two entries each while the one at the root has four, so the children's histograms cannot simply be summed entry-by-entry to obtain the parent's; thus existing techniques for enforcing consistency in a hierarchy [19, 28] do not apply. The count-of-counts histograms are additive, but consistency is still nontrivial for the following reasons. The standard approach is to formulate "consistency" as an optimization problem [19] (in our case: given differentially private estimates of count-of-counts histograms, modify them as little as possible so that children add up to their parents). Generic solvers are slow due to the number of variables involved, so fast specialized algorithms, like mean-consistency [19, 28], have been proposed. However, mean-consistency cannot solve our problem. First, it can produce negative and fractional answers, both of which are invalid query answers (for a full list of requirements, see Section 3); negative answers can arise from the final step of the mean-consistency algorithm, which subtracts a constant from the counts at each node's children. Second, it relies on estimates of query variances that are difficult to obtain. Instead, we propose a different consistency algorithm based on an efficient and optimal matching of groups at different levels of the hierarchy.

Aside from consistency, there are other challenges for differentially private hierarchical count-of-counts queries. One such challenge is properly designing the error metric (this is true even without a hierarchy). Standard measures like $L_1$ or $L_2$ (sum-squared error) distance between the true count-of-counts histogram and the differentially private histogram are inapplicable. For instance, suppose the true data had 20 groups, all of size 1. Consider two different estimates: estimate $A$ has 20 groups, all of size 2, and estimate $B$ has 20 groups, all of size 10. $A$ and $B$ have the same $L_1$ error and the same $L_2$ error, but clearly $A$ is better than $B$ because its groups are closer in size to those in the true data. Earth-mover's distance [30] is more appropriate and turns out to be efficient to compute for our problem.

Another challenge is obtaining the initial differentially private count-of-counts estimates at each node of the hierarchy (i.e., the initial estimates on which we will run consistency algorithms). Naively adding noise to each element of the histogram results in poor performance. However, we show that some unattributed histogram algorithms [19, 24] can be adapted for this task. Nevertheless, most of the time a different approach based on cumulative histograms works better empirically. (Of course, one could also use generic tools like Pythia [Kotsogiannis2017:Pythia] or the technique of Chaudhuri et al. [11] for selecting between the two approaches for estimating count-of-counts histograms at each level in the hierarchy.) To summarize, our contributions are:

  • We introduce the hierarchical differentially private count-of-counts histogram problem.

  • We propose new and accurate algorithms for the non-hierarchical version of this problem.

  • For the hierarchical version, we propose algorithms that force consistency between estimates at different levels of the hierarchy.

  • An evaluation on a combination of real and partially-synthetic data validates our approach. The partially synthetic data are used to extend a real dataset, in which group sizes were truncated at a small value because, at the time, it was not known how to publish full results while protecting privacy.

This paper is organized as follows. We review related work in Section 2. We formally define the problem and notation in Section 3. We propose algorithms for the non-hierarchical version of the problem in Section 4. We show how to obtain consistency in the hierarchical version in Section 5. We present experiments in Section 6. We discuss conclusions and future work in Section 7.

2 Related Work

Differentially private histograms have been the main focus of private query answering algorithms. Starting with the simplest approach, adding Laplace noise to each count [14], these methods have progressively become more sophisticated and data-aware, to the point where they take advantage of structure in the data (such as clusters) to improve accuracy (e.g., [3, 23, 35, 33]). Extensions for optimizing various count queries have also been proposed [23, 10, 18].

Ordinary histograms have also been extended with hierarchies. The addition of hierarchies makes data release problems more challenging as well as more applicable to real-world uses. One of the earliest examples of hierarchical histograms was introduced by Hay et al. [19], in which the data consist of a one-dimensional histogram that is converted to a tree, where each node represents a range and stores one value (the count of the data points in that range), and the value at a node must equal the sum of the values stored by its children. Qardaji et al. [28] determined a method to compute a fanout of the tree that is approximately optimal for answering range queries. Other, more flexible partitionings of the data have also been studied (e.g., [34, 12, 32]). Ding et al. [13] provided extensions to lattices in order to answer data cube queries instead of just range queries. In our setting, the hierarchies are already provided and the goal is to get consistent count-of-counts histograms at each level of the hierarchy. The algorithms used for consistency in ordinary histograms do not satisfy the requirements of count-of-counts histograms (as explained in Sections 3 and 5), so new consistency algorithms are needed.

The work most closely related to our problem is unattributed histograms [19, 8], which are often used to study degree sequences in social networks [24, 21]. Unattributed histograms are the duals of count-of-counts queries (they count people rather than groups) and can be used to answer queries such as "what is the size of the largest group?" They are much more accurate than the naive strategy of adding noise to each group and selecting the largest noisy group [19]. Although unattributed histograms do not have a hierarchical component, our techniques solve the hierarchical version of this problem because count-of-counts histograms can be converted to unattributed histograms.

3 Problem Definition

Consider a database consisting of these 3 tables: Entities(entity_id, group_id), Groups(group_id, region_id), and Hierarchy(region_id, level$_0$, level$_1$, …, level$_{\ell-1}$), where the entity_id and group_id are randomly generated unique numbers. Every entity belongs to a group and every group is in a region. Regions are organized into a hierarchy (as encoded by the table Hierarchy), where level 0 is the root containing all regions, level 1 subdivides level 0 into a disjoint set of subregions, and, recursively, level $i$ subdivides the regions in level $i-1$. For example, level 0 can be an entire country, level 1 can be the set of states, and level 2 can be the set of counties. We let $\mathcal{T}$ represent the hierarchy and we will use $v$ to denote a node in the hierarchy. For each region $v$, we let Level$_i(v)$ denote its ancestor in level $i$. For example, if $v$ is a region corresponding to Fairfax County, then Level$_1(v)$ = "Virginia" and Level$_2(v)$ = "Fairfax County." We add the restriction that a group cannot span multiple leaves of the region hierarchy (i.e., each group is completely within the boundaries of a leaf node).

We now consider what information is public and what information is private.

  • Hierarchy – this table only defines region boundaries and so is considered public.

  • Groups – the group id is a random number, so the only information this table provides is how many groups are in each region. We consider this table to be public to be consistent with real-world applications such as at the U.S. Census Bureau, where the number of households and group quarters facilities in each Census block is assumed (by the Bureau) to be public knowledge because it is easy to obtain by inspection. (If one wishes to make the Groups table private, our methods can be extended. The most straightforward approach is to first estimate the number of groups in each region by adding Laplace noise to each count. These estimates can be made consistent by solving a nonnegative least-squares optimization problem; since there is only one number per region, it is a relatively small problem that can be solved with off-the-shelf optimizers. Once the counts are generated, they can be used with our algorithm.)

  • Entities – this table contains information about which entities (e.g., people) are in the same group. We treat it as private.

For example, in Census data, groups can be housing facilities (households and group quarters) and entities are people. In taxi data, groups could correspond to taxis and entities to pick-ups of passengers.

Each node $v$ in the hierarchy has an associated group-size histogram $h_v$ (or simply $h$ when $v$ is understood from the context): $h_v[i]$ is the number of groups, in the region associated with $v$, that have $i$ entities in them. We also use $g_v$ to represent the (public) number of groups in $v$, since that can be derived from the public Groups table. There are two other convenient representations of the count-of-counts histogram $h_v$:

  • $C_v$: This is the cumulative sum histogram, defined as $C_v[i] = \sum_{j \le i} h_v[j]$, which is the number of groups of size less than or equal to $i$. Note that the last element of $C_v$ is therefore $g_v$ (the total number of groups in $v$). For example, if $h_v = (2, 1, 2)$ then $C_v = (2, 3, 5)$.

  • $S_v$: This is the unattributed histogram. $S_v[i]$ is the size of the $i$-th smallest group in $v$; thus the dimensionality of $S_v$ is $g_v$. Note that $S_v$ is an unattributed histogram in the terminology of Hay et al. [19]. For example, if $h_v = (2, 1, 2)$ then $S_v = (1, 1, 2, 3, 3)$ (since there are two groups of size 1, one of size 2, and two of size 3).

The conversion in representation from $h_v$ to $C_v$ to $S_v$ (and vice versa) is straightforward and is omitted.
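
As a concrete illustration, a minimal sketch of these conversions (our own helper functions, assuming $h$ is stored as a dense array indexed by group size starting at 1):

    import numpy as np

    def h_to_C(h):
        # Cumulative histogram: C[i] = number of groups of size <= i (1-indexed sizes).
        return np.cumsum(h)

    def h_to_S(h):
        # Unattributed histogram: nondecreasing list of group sizes, one entry per group.
        return np.repeat(np.arange(1, len(h) + 1), h)

    def S_to_h(S, max_size):
        # Inverse conversion: count how many groups have each size 1..max_size.
        return np.bincount(S, minlength=max_size + 1)[1:]

    h = np.array([2, 1, 2])           # two groups of size 1, one of size 2, two of size 3
    print(h_to_C(h))                  # [2 3 5]
    print(h_to_S(h))                  # [1 1 2 3 3]
    print(S_to_h(h_to_S(h), len(h)))  # [2 1 2]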

The privacy definition we will be using is $\epsilon$-differential privacy [14] applied at the entity level. More specifically:

Definition 1 (Differential Privacy)

Given a privacy loss budget $\epsilon > 0$, a mechanism $M$ satisfies $\epsilon$-differential privacy if, for any pair of databases $D_1, D_2$ that contain the public Hierarchy and Groups tables and differ by the presence or absence of one record in the Entities table, and for any possible set $E$ of outputs of $M$, the following is true:

$\Pr[M(D_1) \in E] \le e^{\epsilon} \Pr[M(D_2) \in E].$

Thus the differentially private hierarchical count-of-counts problem can be defined as:

Problem 1

Let $\mathcal{T}$ be a hierarchy such that each node $v$ has an associated count-of-counts histogram $h_v$. Develop an algorithm to release a set of estimates $\{\bar h_v : v \in \mathcal{T}\}$ while satisfying $\epsilon$-differential privacy along with the following desiderata:

  • [Integrality]: $\bar h_v[i]$ is an integer for all $v$ and $i$.

  • [Nonnegativity]: $\bar h_v[i] \ge 0$ for all $v$ and $i$.

  • [Group Size]: $\sum_i \bar h_v[i] = g_v$ for all $v$.

  • [Consistency]: for any non-leaf $v$, $\bar h_v = \sum_{c \in \text{children}(v)} \bar h_c$.

These constraints ensure that the count-of-counts histograms satisfy all publicly known properties of the original data.

3.1 Error Measure

The error measure is an important aspect of the problem, as we would like to quantify the "distance" between $h_v$ and $\bar h_v$. Ideally, we would like to measure this distance as the minimum number of people that must be added to or removed from groups in $\bar h_v$ to get $h_v$.

The standard error measures of Manhattan ($L_1$) distance or sum-squared error do not capture this distance measure. To see why, suppose the true histogram has 100 groups, all of size 1. Consider two estimates: $A$, in which all groups have size 2, and $B$, in which all groups have size 5. We see that $A$ and $B$ have the same $L_1$ error and the same sum-squared error with respect to the true histogram. However, we should consider $A$ to be closer to the true data than $B$ is: if we add one extra person to each group in the true data, we obtain $A$, whereas to obtain $B$ we would need to add 4 people to each group.

Thus, the appropriate way to measure distance between $h_v$ and $\bar h_v$ is the earthmover's distance (emd), as it precisely captures the number of people that must be added to or removed from groups in $\bar h_v$ in order to obtain $h_v$. Normally, computing emd is not linear in the size of an array [30]. However, it can be computed in linear time for our problem by using the cumulative histograms.

Lemma 1 ([27])

The earthmover's distance between $h_v$ and $\bar h_v$ can be computed as $\sum_i |C_v[i] - \bar C_v[i]|$, where $C_v$ (resp., $\bar C_v$) is the cumulative histogram of $h_v$ (resp., $\bar h_v$). It is the same as the $L_1$ norm in the $S$ representation when the number of groups is fixed.

Our algorithms optimize error according to this metric.
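
A small sketch of this computation (our own code; histograms are assumed to be dense arrays over sizes 1..K describing the same set of groups):

    import numpy as np

    def emd(h_true, h_est):
        # Earthmover's distance between two count-of-counts histograms with the same
        # number of groups: the L1 distance between their cumulative sums (Lemma 1).
        K = max(len(h_true), len(h_est))
        C1 = np.cumsum(np.pad(h_true, (0, K - len(h_true))))
        C2 = np.cumsum(np.pad(h_est, (0, K - len(h_est))))
        return int(np.abs(C1 - C2).sum())

    # 20 groups of size 1, versus estimates with 20 groups of size 2 or of size 10:
    h = [20] + [0] * 9
    A = [0, 20] + [0] * 8
    B = [0] * 9 + [20]
    print(emd(h, A), emd(h, B))   # 20 vs. 180: A is much closer, matching the intuition above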

3.2 Privacy Primitives

We now describe the privacy primitives which serve as building blocks of our algorithm. One important concept is the sensitivity of a query.

Definition 2

Given a query $q$ (which outputs a vector), the global sensitivity of $q$, denoted by $\Delta(q)$, is defined as:

$\Delta(q) = \max_{D_1, D_2} \|q(D_1) - q(D_2)\|_1,$

where the maximum is taken over all pairs of databases $D_1, D_2$ that contain the public Hierarchy and Groups tables and differ by the presence or absence of one record in the Entities table.

The sensitivity is used to calibrate the scale of noise needed to achieve differential privacy. We can use the Geometric Mechanism [16] instead of the Laplace Mechanism [14] because we want our final counts to be integers. The Geometric Mechanism is also preferable to the Laplace Mechanism as it has lower variance and is not susceptible to side-channel attacks when implemented in floating-point arithmetic [26].

Definition 3 (Geometric Mechanism [16])

Given a database $D$, a query $q$ that outputs a vector, and a privacy loss budget $\epsilon$, the geometric mechanism adds independent noise to each component of $q(D)$ using the following distribution: $\Pr[Z = z] = \frac{1-\alpha}{1+\alpha}\,\alpha^{|z|}$ (for $z = 0, \pm 1, \pm 2$, etc.), where $\alpha = \exp(-\epsilon/\Delta(q))$. This distribution is known as the double-geometric with scale $\Delta(q)/\epsilon$.
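
A minimal sampler for this distribution (our own helper, reused in later sketches); it uses the fact that the difference of two i.i.d. geometric random variables is double-geometric:

    import numpy as np

    def double_geometric(epsilon, sensitivity, size, rng=None):
        # Integer noise with Pr[Z = z] proportional to alpha^|z|, alpha = exp(-epsilon/sensitivity),
        # i.e. the double-geometric distribution with scale sensitivity/epsilon.
        rng = rng or np.random.default_rng()
        p = 1.0 - np.exp(-epsilon / sensitivity)
        return rng.geometric(p, size) - rng.geometric(p, size)

    # e.g., noise for a histogram of length 5 with sensitivity 2 and budget 0.5:
    print(double_geometric(epsilon=0.5, sensitivity=2, size=5))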

Lemma 2 ([14, 16])

The Geometric Mechanism satisfies $\epsilon$-differential privacy.

4 The Base Case: Non-hierarchical Count-of-Counts

In order to create consistent estimates of $h_v$ for all nodes $v$ in the hierarchy $\mathcal{T}$, we first create estimates of each $h_v$ independently and then post-process them for consistency. In this section, we discuss how to generate a differentially private estimate $\hat h_v$ for a single node (i.e., temporarily ignoring the hierarchy). In Section 5, we show how to combine the estimates (for all $v$) to satisfy consistency.

In the rest of this section, we focus on a single node $v$ in $\mathcal{T}$, so to simplify notation we write $h$, $C$, and $S$ instead of $h_v$, $C_v$, and $S_v$. We first discuss a naive strategy for estimating $h$, along with two more robust strategies.

4.1 Naive Strategy

The naive strategy for estimating $h$ is to use Definition 3 and add double-geometric noise with scale $2/\epsilon$ to each cell of $h$. However, because the maximum group size is not known, the length of $h$ is private. Thus we must determine a maximum non-private size $K$ and make the following modifications to $h$. If all groups have fewer than $K$ people, then $h$ is extended with 0's until its length is $K$. If some groups have size more than $K$, we change the recorded sizes of those groups to $K$. Call the resulting histogram $h^K$. We then add double-geometric noise with scale $2/\epsilon$ to each cell of $h^K$. This strategy satisfies $\epsilon$-differential privacy because the sensitivity of $h^K$ is 2:

Lemma 3

The global sensitivity of $h^K$ is 2.

For any group size $i < K$, adding a person to a group of size $i$ decreases $h^K[i]$ by one and increases $h^K[i+1]$ by one (for a total change of 2). When $i = K$, there is no change to $h^K$. Similarly, removing a person from a non-empty group of size $i \le K$ increases $h^K[i-1]$ by one and decreases $h^K[i]$ by one (for a total change of 2). If the group's size exceeds $K$ (so it is recorded as size $K$), there is no change to $h^K$.

Let $\tilde h$ be the noisy version of $h^K$. The numbers in $\tilde h$ can be negative. Thus we post-process $\tilde h$ to obtain an estimate $\hat h$ by solving the following problem (using a quadratic program solver):

$\hat h = \arg\min_{x} \|x - \tilde h\|_2^2 \quad \text{subject to } x[i] \ge 0 \text{ for all } i \text{ and } \sum_i x[i] = g.$

To get integers, we set $k = \sum_i \hat h[i] - \sum_i \lfloor \hat h[i] \rfloor$, round the $k$ cells with the largest fractional parts up, and round the rest down.
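
This rounding step is the standard largest-remainder rule; a sketch (the function name is ours), assuming the input sums to an integer:

    import numpy as np

    def round_preserving_sum(x):
        # Floor every cell, then round up the cells with the largest fractional
        # parts until the original (integral) total is restored.
        floors = np.floor(x).astype(int)
        k = int(round(x.sum())) - floors.sum()     # number of cells to round up
        order = np.argsort(-(x - floors))          # indices by decreasing fractional part
        floors[order[:k]] += 1
        return floors

    print(round_preserving_sum(np.array([1.6, 2.7, 0.7])))   # [1 3 1], total 5 preserved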

This approach has several weaknesses. First, there are many indexes $i$ where $h^K[i] = 0$ and, after noise addition, many of them (a constant fraction of the indices) will have non-zero entries; roughly half of them end up with positive counts after the nonnegativity constraint is enforced. As a result, in our experiments, this method had several orders of magnitude worse error than the algorithms we describe next. Second, earthmover's distance is equivalent to measuring the $L_1$ error between the cumulative sums of the estimate $\hat h$ and the original data $h$: $\sum_k |\hat C[k] - C[k]|$. Since noise is added to every cell independently, the $k$-th component of this error depends on the sum of $k$ noise random variables (whose total variance grows linearly in $k$). Hence, if $h^K$ has $K$ components, one would expect the error to grow on the order of $K^{3/2}/\epsilon$.

4.2 Unattributed Histograms

The next approach is to use algorithms for unattributed histograms [19]. We can convert $h$ into the representation $S$, where $S[i]$ is the size of the $i$-th smallest group. The length of $S$ is the number of groups $g$, so potentially this can be a very large histogram. One of the properties of $S$ is that it is non-decreasing. Hence we can achieve $\epsilon$-differential privacy by adding independent double-geometric noise with scale $1/\epsilon$ to each element of $S$ to obtain $\tilde S$, because the sensitivity of unattributed histograms is 1 [19]. However, $\tilde S$ is no longer non-decreasing (or even nonnegative); hence, following [19, 24], we post-process it by solving the following optimization problem with either $p = 1$ or $p = 2$:

$\hat S = \arg\min_{x} \|x - \tilde S\|_p \quad \text{subject to } 0 \le x[1] \le x[2] \le \cdots \le x[g].$

Then we round each entry of $\hat S$ to the nearest integer. Since the result was non-decreasing before rounding, it will remain non-decreasing after rounding. From the resulting estimate, we convert back to $\hat h$ by counting, for each $i$, how many estimated groups have size $i$.

This optimization problem is known as isotonic regression. When $p = 2$, it can be solved in linear time using the min-max algorithm [5], pool-adjacent-violators (PAV) [4, 29], or a commercial optimizer such as Gurobi [17]. When $p = 1$, it can be converted to a linear program, but that runs much slower. In our experiments, we used $p = 2$ because $S$ can have length in the hundreds of millions; for these sizes, the quadratic program is much faster to solve using PAV.
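
For reference, a self-contained sketch of the $L_2$ pool-adjacent-violators step (a generic PAV implementation, not the authors' code; they also report using solvers such as Gurobi):

    def pav_isotonic(y):
        # L2 isotonic regression: the nondecreasing sequence minimizing the sum of
        # squared deviations from y. Each block stores [sum, count]; its fitted value
        # is the block mean. Runs in linear time.
        blocks = []
        for value in y:
            blocks.append([float(value), 1])
            # Merge while the previous block's mean exceeds the last block's mean.
            while len(blocks) > 1 and blocks[-2][0] * blocks[-1][1] > blocks[-1][0] * blocks[-2][1]:
                s, c = blocks.pop()
                blocks[-1][0] += s
                blocks[-1][1] += c
        fitted = []
        for s, c in blocks:
            fitted.extend([s / c] * c)
        return fitted

    # A noisy, no-longer-monotone version of S = [1, 1, 2, 3, 3]:
    print(pav_isotonic([2, 0, 3, 2, 4]))   # [1.0, 1.0, 2.5, 2.5, 4.0]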

One observation we had is that this method is very good at estimating large group sizes, and most of its error comes from estimation errors of small group sizes (see Figure 1).

4.3 Cumulative Histograms

Another strategy is to use $C$, the cumulative sum of $h$. Since the cumulative sum is non-decreasing, we can again add noise and use isotonic regression. As in the naive case, we must determine a public upper bound $K$. (Recall $K$ is an upper bound on the maximum number of people in a group. This method is not very sensitive to $K$ – in the experiments we used a value of $K$ an order of magnitude larger than the true maximum group size, which was around 10,000 people, and still the estimated size of the largest group ended up being around 10,000. If we have no prior knowledge, we can estimate $K$ as follows. Set aside a small privacy budget $\epsilon'$, since $K$ does not need much accuracy. Let $m$ be the number of people in the largest group. Estimate $K$ as $m + \text{Laplace}(1/\epsilon') + 5\sqrt{2}/\epsilon'$ – i.e., add 5 standard deviations to a noisy estimate of $m$ – so that $K \ge m$ with high probability.) We then modify $h$ appropriately (as in Section 4.1) before computing the cumulative sum. We saw in Section 3.1 that error in the estimation of count-of-counts histograms is measured as the difference between cumulative size histograms, so it makes sense to privatize these histograms directly.
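
A sketch of that selection rule (our own helper; it assumes the sensitivity of the maximum group size is 1 and that epsilon_prime is the small budget set aside):

    import numpy as np

    def estimate_K(max_group_size, epsilon_prime, rng=None):
        # Noisy upper bound on the largest group size: Laplace noise with scale
        # 1/epsilon_prime plus five standard deviations of padding, so the bound
        # exceeds the true maximum with high probability.
        rng = rng or np.random.default_rng()
        noisy_max = max_group_size + rng.laplace(scale=1.0 / epsilon_prime)
        return int(np.ceil(noisy_max + 5 * np.sqrt(2) / epsilon_prime))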

Lemma 4

The global sensitivity of $C$ is 1.

Adding one person to a group of size $i$ means there is one less group of size $i$ and one more group of size $i+1$. Thus $C[i]$ decreases by one, while every other entry of $C$ remains the same (in particular, the last entry – the number of groups – does not change). Similarly, removing a person from a group of size $i$ means $C[i-1]$ increases by one and no other entry changes. Thus the overall change is 1, and we can satisfy differential privacy by adding independent double-geometric noise with scale $1/\epsilon$ to each cell of $C$ to get $\tilde C$. We then postprocess by solving the following optimization problem (using $p = 1$ or $p = 2$):

$\hat C = \arg\min_{x} \|x - \tilde C\|_p \quad \text{subject to } 0 \le x[1] \le x[2] \le \cdots \le x[K].$

These problems can again be solved using PAV (in the case of $p = 2$) or with a commercial optimizer (for either $p = 1$ or $p = 2$). In our experiments, we found that the $L_1$ version of the problem (with $p = 1$) performs better than the $L_2$ version (with $p = 2$). This is consistent with prior observations on unattributed histograms [24]. A Bayesian post-processing is known to further reduce error, but we did not use it because it scales quadratically with the size of the histogram (the sizes of our histograms make this prohibitively expensive) [24]. Finally, we round $\hat C$ to the nearest integers and convert it back into a count-of-counts histogram $\hat h$. We found that the $L_1$ version of the problem mostly returns integers, so rounding is minimal.
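
Putting the pieces together, an end-to-end sketch of this approach using the $L_2$/PAV variant for simplicity (the experiments favor the $L_1$ variant) and the double_geometric and pav_isotonic helpers sketched earlier; note it does not enforce the public group-count constraint:

    import numpy as np

    def private_count_of_counts_C(h, epsilon, K, rng=None):
        # C method sketch: truncate/pad h to length K, add double-geometric noise
        # (sensitivity 1) to the cumulative sums, restore monotonicity with isotonic
        # regression, round, and difference back into a count-of-counts histogram.
        h = np.asarray(h)
        hK = np.zeros(K, dtype=int)
        hK[:min(len(h), K)] = h[:K]
        hK[K - 1] += h[K:].sum()                      # groups larger than K are recorded as size K
        C = np.cumsum(hK)
        C_noisy = C + double_geometric(epsilon, sensitivity=1, size=K, rng=rng)
        C_hat = np.rint(np.maximum(pav_isotonic(C_noisy), 0)).astype(int)
        return np.diff(np.concatenate(([0], C_hat)))  # back to a count-of-counts histogram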

Unlike the $S$ method, we found that this method is accurate for small group sizes but less accurate for large groups.

Figure 1: Error visualization. x-axis: cumulative sum of group size counts. y-axis: estimation error at these sizes. Top: the $S$ method (Section 4.2). Bottom: the $C$ method (Section 4.3). Errors from the $S$ method are concentrated around small group sizes. Errors from the $C$ method are spread more densely throughout the rest of the sizes.

5 Hierarchical Consistency

Our overall algorithm for the hierarchical problem is shown in Algorithm 1. The hierarchy has $\ell$ levels (including the root), so we use budget $\epsilon/\ell$ for estimating $\hat h_v$ for each node $v$ (using either the $S$ method or the $C$ method from Section 4). Then we run a consistency post-processing algorithm to obtain new histograms $\bar h_v$ such that the histogram at each parent equals the sum of the histograms at its child nodes.

Typically, for a hierarchy with noisy data at each node, one would use the mean-consistency algorithm [19] to get consistent estimates with the property that the sum of the data at child nodes equals the data at the parent node. In the group-size estimation problem, this algorithm will not produce outputs satisfying the problem requirements (Section 3). The reason is the following. The mean-consistency algorithm solves the global optimization

$\min_{\{\bar h_v\}} \sum_{v} \frac{1}{\sigma_v^2} \|\bar h_v - \hat h_v\|_2^2 \quad \text{subject to } \bar h_v = \sum_{c \in \text{children}(v)} \bar h_c \text{ for every non-leaf } v,$

where $\sigma_v^2$ is the variance of the noisy estimate at node $v$.

This algorithm does not meet the necessary requirements for several reasons. First, it outputs real (and even negative) numbers, while our problem requires that each $\bar h_v[i]$ be a nonnegative integer and that $\sum_i \bar h_v[i] = g_v$ (the private estimate must match the publicly known Groups table). (The solution given by mean-consistency can be negative even if all input numbers are positive. We verified this phenomenon using that algorithm and also with commercial optimizers. Intuitively, this happens because the mean-consistency algorithm [19] has a subtraction step in which a constant is subtracted from each child so that the children add up to the parent total; for children with small counts, this subtraction produces negative numbers.) Second, it requires knowledge of query variances [28] – in our case, the variances of $\hat h_v[i]$ for every $v$ and $i$. Not only is this variance different for every $v$ and $i$, but it has no closed form (because of the isotonic regression used to generate $\hat h_v$).

Our proposed solution converts each $\hat h_v$ back into the unattributed histogram $\hat S_v$. It estimates the variance of the size of each group (Section 5.1), i.e., variance estimates for $\hat S_v[i]$, not $\hat h_v[i]$. It then finds a 1-to-1 optimal matching between groups at the child nodes and groups at the parent node (Section 5.2). This means that each group has a size estimate from the child and a size estimate from the parent. It merges those two estimates (Section 5.3). The result is a consistent set of estimates that satisfies all of the constraints in our problem. The overall approach that puts these pieces together is shown in Algorithm 1; the specifics are discussed next.

Input: privacy loss budget $\epsilon$; hierarchy $\mathcal{T}$ with $\ell$ levels and root at level 0
1   for $i = 0, \ldots, \ell-1$ do
2       for each node $v$ in level $i$ of $\mathcal{T}$ do
3           $\hat h_v \leftarrow$ $S$ method or $C$ method with budget $\epsilon/\ell$   // Sec 4
4           convert $\hat h_v$ to $\hat S_v$
    /* Consistency Step */
5   for every node $v$ in $\mathcal{T}$ do
6       $\sigma_v^2 \leftarrow$ EstVariance($\hat S_v$)   // Section 5.1
7   for $i = 0, \ldots, \ell-2$ do
8       for each node $v$ in level $i$ of $\mathcal{T}$ do
9           $M \leftarrow$ Match($\hat S_v$, $\{\hat S_c : c$ a child of $v\}$)   /* Matching alg in Section 5.2 */
10          for each matched pair $((v, j), (c, k))$ in $M$ do   /* Sec 5.3 */
11              update $\hat S_c[k]$ and $\sigma_c^2[k]$ by merging with $\hat S_v[j]$ and $\sigma_v^2[j]$
12  convert $\hat S_v$ to $\bar h_v$ for all leaf nodes $v$
13  for $i = \ell-2, \ldots, 0$ do   /* Back-Substitution */
14      $\bar h_v \leftarrow \sum_{c \text{ child of } v} \bar h_c$ for every node $v$ in level $i$
15  return $\{\bar h_v : v \in \mathcal{T}\}$
Algorithm 1 Top-down Consistency

5.1 Initial Variance Estimation

The first part of our algorithm produces a differentially private count-of-counts histogram $\hat h_v$ for every node $v$. We then convert it into the unattributed histogram $\hat S_v$. For each $i$, we need an estimate of the variance of $\hat S_v[i]$, the estimated size of the $i$-th smallest group. Now, the count-of-counts histogram is obtained either from the $S$ method (Section 4.2) or the cumulative histogram ($C$) method (Section 4.3), and so the variance estimate depends on which method was used. (Generally, the $C$ method works well for all levels. Users preferring fine-grained control can use generic algorithm selection tools [Kotsogiannis2017:Pythia, 11].)

5.1.1 Variance estimation for the $S$ method

Recall that in the unattributed histogram method, we obtained a noisy array $\tilde S_v$ (noise was added to $S_v$) and performed isotonic regression on it to get $\hat S_v$. Since we need an estimate of the variance of each $\hat S_v[i]$, we can use the following special properties of isotonic regression solutions [4]. As shown in Figure 2, isotonic regression is equivalent to taking a noisy 1-d array, partitioning the array, and assigning the same value to each element within a partition. In the case of $L_2$ isotonic regression, this value is the average of the noisy counts from the partition. In the case of $L_1$, this value is the median of the noisy counts in the partition. Thus the variance of each cell in the resulting array depends on the variance due to partitioning and the variance due to averaging the noisy counts. We cannot quantify the variance due to partitioning. However, the variance due to averaging noisy counts can be estimated.

Each noisy count is generated by adding noise from the double-geometric distribution with scale $1/\epsilon_0$, where $\epsilon_0 = \epsilon/\ell$ is the per-level budget. Its variance can be approximated by the variance of the Laplace distribution with the same scale, namely $2/\epsilon_0^2$. In a partition of size $n$ (and $L_2$ isotonic regression), the value assigned to that partition is the average of $n$ noisy values and so has variance $2/(n\epsilon_0^2)$. (In the case of $L_1$ regression, the value assigned to the partition is the median of the noisy values; the variance of the median is difficult to compute, so we again estimate it as $2/(n\epsilon_0^2)$.) The partitions that were created by the solution of the isotonic regression are easy to determine – they are simply the maximal runs of consecutive entries in the solution that have the same value.

Thus, our estimate of the variance for group $i$ in $\hat S_v$ is computed as follows. Let $n_i$ be the number of groups that were in the same partition as group $i$ in the solution (i.e., the number of entries in $\hat S_v$ that equal $\hat S_v[i]$). We set the variance estimate used in Line 6 of Algorithm 1 to:

$\sigma_v^2[i] = \frac{2}{n_i\, \epsilon_0^2}.$
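
A sketch of this computation (our own helper; epsilon_level denotes the per-level budget $\epsilon_0$):

    import numpy as np

    def variance_estimates_S(S_hat, epsilon_level):
        # Variance estimate 2/(n * eps^2) for each entry of the isotonic solution,
        # where n is the size of the partition (run of equal values) containing it.
        S_hat = np.asarray(S_hat)
        change = np.concatenate(([True], S_hat[1:] != S_hat[:-1]))
        run_id = np.cumsum(change) - 1          # which partition each entry belongs to
        run_len = np.bincount(run_id)
        n = run_len[run_id]                     # partition size for every entry
        return 2.0 / (n * epsilon_level ** 2)

    print(variance_estimates_S([1.0, 1.0, 2.5, 2.5, 4.0], epsilon_level=0.5))  # [4. 4. 4. 4. 8.]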

Figure 2: Isotonic Regression converts a noisy histogram that is no longer nondecreasing into a nondecreasing histogram by partitioning and averaging within each partition.

5.1.2 Variance for the cumulative histogram method

The cumulative histogram method also uses isotonic regression, but uses it to create $\hat C_v$, while we need estimates of the variance of $\hat S_v[i]$. Since the $S$ representation is a nonlinear transformation of the $C$ representation, we must use a different way to estimate variance.

We know that before the isotonic regression in the $C$ method, independent noise with scale $1/\epsilon_0$ was added to each cell of $C_v$, so we (over)estimate the variance of each cell of $\hat C_v$ as $2/\epsilon_0^2$. The estimated number of groups of size $j$ is $\hat h_v[j] = \hat C_v[j] - \hat C_v[j-1]$, so it would have variance roughly $4/\epsilon_0^2$. Dividing by the number of estimated groups of size $j$ gives $4/(\epsilon_0^2\, \hat h_v[j])$ as the estimated variability of each group whose size is estimated to be $j$. Thus, for a group $i$ with estimated size $\hat S_v[i] = j$, we set the variance estimate used in Line 6 of Algorithm 1 to:

$\sigma_v^2[i] = \frac{4}{\epsilon_0^2\, \hat h_v[j]}.$

5.2 Optimal Matching

Now, every group belongs to a node in each level of the hierarchy (e.g., a household in Fairfax County is also a household in Virginia). Since group size distributions are estimated independently at all levels of the hierarchy (lines 1–4 in Algorithm 1), this causes several problems. First, each group has several size estimates (a household has a certain estimated size from the Fairfax County estimates and another estimated size from the Virginia estimates). Second, from these separate estimates, we don't know which group in one level of the hierarchy corresponds to which group at a different level of the hierarchy (e.g., is it possible that the largest household in Virginia, with estimated size 12, is the same as the largest household in Fairfax County, with estimated size 13?).

Thus, to make the group size distribution estimates consistent, we need to estimate a matching between groups in $v$ and groups in the children of $v$, as in Figure 3. For privacy reasons, such a matching must only be done using the differentially private data generated in lines 1–4 of Algorithm 1. In this section we explain how to perform this matching, and in the next section we explain how to reconcile the different size estimates.

Formally, a matching is a function that takes a node $v$ and an index $i$ and returns a child $c$ and index $j$, with the semantics that the $i$-th smallest group in $v$ is believed to be the same as the $j$-th smallest group in child $c$ of $v$. We must estimate this matching using only differentially private data (e.g., $\hat h_v$ for each node $v$). We first convert each $\hat h_v$ into the unattributed representation $\hat S_v$, where $\hat S_v[i]$ is the size of the $i$-th smallest group in $v$.

For each node $v$, we set up a bipartite weighted graph as shown in Figure 3. There are $g_v$ nodes on the top half; we label them $(v, 1), \ldots, (v, g_v)$. Each node on the bottom half has the form $(c, j)$, where $c$ is a child of $v$ and $j$ is an index into $\hat S_c$. Between every node $(v, i)$ and $(c, j)$ there is an edge with weight $|\hat S_v[i] - \hat S_c[j]|$, which measures the difference in estimated size between the $i$-th smallest group in $v$ and the $j$-th smallest group in $c$.

Our desired matching is then the least-cost weighted matching on this bipartite graph. Sophisticated matching algorithms (e.g., based on network flows) can find an optimal matching, but their time complexity is at least quadratic in the number of groups [25]. In our case, $g_v$ can be in the hundreds of millions (e.g., there are over 100 million households in the U.S.).

There exists a well-known 2-approximation algorithm for matching, which adds edges to the matching in order of increasing weight [25]. However, on our graph it would run in $O(g_v^2 \log g_v)$ time, as there are $g_v^2$ edges and they need to be sorted. Instead, we take advantage of the special weight structure of our edges to produce an optimal matching algorithm with time complexity $O(g_v \log g_v)$.

Figure 3: Consistency Matching Illustration

5.2.1 Optimal Matching Algorithm

Input: $\hat S_v$ and $\hat S_c$ for every child $c$ of $v$
1   Top $\leftarrow \{(v, i) : 1 \le i \le g_v\}$   // Unmatched nodes from top of bipartite graph
2   Bot $\leftarrow \{(c, j) : c$ a child of $v$, $1 \le j \le g_c\}$   // Unmatched nodes from the bottom
3   while Top $\ne \emptyset$ do
4       $s_{top} \leftarrow$ smallest unmatched group size in Top
5       $s_{bot} \leftarrow$ smallest unmatched group size in Bot
6       $A \leftarrow$ all groups from Top with size $s_{top}$
7       $B \leftarrow$ all groups from Bot with size $s_{bot}$
8       if $|A| \ge |B|$ then
            // all nodes in B can be matched now
9           for each $b \in B$, assign it arbitrarily to a unique $a \in A$
10          remove the matched nodes from Top and Bot
11      else
            // only some of the nodes in B can be matched now
12          for each child $c$ of $v$ do
13              num$_c$ $\leftarrow$ number of nodes in $B$ belonging to child $c$
14              assign $|A| \cdot$ num$_c / |B|$ of the nodes in $A$ (rounded so the shares sum to $|A|$) to arbitrary nodes in $B$ from child $c$
15          remove the matched nodes from Top and Bot
16  return the matching
Algorithm 2 Matching

The algorithm is shown in Algorithm 2. To achieve the desired time complexity, we sort the groups in $v$ in increasing order of estimated size and do the same to the set of all groups in all of the children (hence the $O(g_v \log g_v)$ cost). We then proceed by matching the smallest unmatched group in $v$ to the smallest unmatched group among any of its children.

Normally there are many groups from $v$ with the same size and many groups at the children with the same size that they can match to. For example, there can be 300 groups of size 1 in $v$ while the child nodes together have 200 groups of size 1. Thus we can match 200 of the groups from the parent with the 200 groups at the children. The specifics of which group of size 1 in the parent matches which group of size 1 in the children is completely unimportant (since the groups of size 1 at the parent are indistinguishable from each other) and hence can be done arbitrarily, as in line 9 of Algorithm 2. After this matching, the remaining 300 − 200 = 100 groups of size 1 in the parent will be matched with groups of size 2 in the child nodes, etc.

Sometimes, not all of the groups in the children can be matched at once. For example, the parent may have 300 groups of size 1, but there are 3 children, with child $c_1$ having 200 groups of size 1, $c_2$ having 100 groups of size 1, and $c_3$ having 100 groups of size 1. In this case, the assignments are done proportionally (lines 12–14 of Algorithm 2): 50% of the parent's groups of size 1 are matched to the corresponding groups in $c_1$, 25% are matched to the corresponding groups at $c_2$, and the remaining 25% are matched to the groups in $c_3$. (Sometimes the proportions say that $a_1$ of the parent's groups should be matched with groups at child $c_1$, $a_2$ with groups at child $c_2$, etc., with $\sum_j a_j$ equal to the number of parent groups being matched, but the $a_j$ are real numbers instead of integers. In this case we find the unique $k$ such that rounding up the $a_j$ with the $k$ largest fractional parts and rounding the rest down gives integers that sum to the required total.) In this situation, all of the size 1 groups in the parent will have been matched, but there will be some size 1 groups in the children that are not yet matched. The next iteration of the algorithm will try to match them to size 2 groups in the parent.
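
A simplified sketch of the matching idea (ours; ties are broken arbitrarily rather than proportionally, which does not change the matching cost): because both sides are sorted, pairing the i-th smallest parent group with the i-th smallest group among all children gives a least-cost matching.

    def match_parent_to_children(S_parent, S_children):
        # S_parent: sorted size estimates at the parent (length g_v).
        # S_children: dict child_id -> sorted size estimates; lengths sum to g_v.
        # Returns tuples (parent_index, child_id, child_index, parent_size, child_size).
        bottom = sorted(
            (size, child, j)
            for child, sizes in S_children.items()
            for j, size in enumerate(sizes)
        )
        return [
            (i, child, j, S_parent[i], size)
            for i, (size, child, j) in enumerate(bottom)
        ]

    # The running example from Section 1: parent (root) and children a, b
    for row in match_parent_to_children([1, 1, 2, 4], {'a': [1, 4], 'b': [1, 2]}):
        print(row)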

Lemma 5

If the weight of the edge between $(v, i)$ and $(c, j)$ equals $|\hat S_v[i] - \hat S_c[j]|$, then Algorithm 2 finds an optimal least-cost perfect matching.

We say that two matchings $M_1$ and $M_2$ have a trivial difference at edge $((v, i), (c, j))$ if all of the following hold:

  1. $M_1$ matches $(v, i)$ to $(c, j)$.

  2. there are nodes $(v, i')$ and $(c', j')$ such that $M_2$ matches $(v, i)$ to $(c', j')$ and $(v, i')$ to $(c, j)$.

  3. if $M_2$ is modified to match $(v, i)$ to $(c, j)$ and $(v, i')$ to $(c', j')$, then the cost of $M_2$ doesn't change.

Two matchings $M_1$ and $M_2$ have a non-trivial difference at edge $e$ if $e$ is part of matching $M_1$ but not $M_2$, and it is not a trivial difference.

Let $M$ be the matching returned by Algorithm 2. Assume, by way of contradiction, that $M$ is not optimal. In this case, there will always be optimal matchings with no trivial differences with $M$ (i.e., all differences will be nontrivial). This is because we can make trivial differences at an edge disappear by performing the modification discussed above, which doesn't change the cost.

Examining the order in which pairs of matched nodes are added to $M$ by Algorithm 2, for any optimal matching with no trivial differences there is a first time at which it and $M$ have a non-trivial difference; e.g., they agree on the first $t$ matchings but disagree on the $(t+1)$-st. Let $M^*$ be an optimal matching that has no trivial differences from $M$ and whose first non-trivial difference occurs as late as possible. Let the edge of $M$ on which $M$ and $M^*$ first differ be $((v, i), (c, j))$. This means that $M^*$ matches $(v, i)$ to some $(c', j')$ and matches some $(v, i')$ to $(c, j)$. To shorten notation, write $a = \hat S_v[i]$, $a' = \hat S_v[i']$, $b = \hat S_c[j]$, and $b' = \hat S_{c'}[j']$.

By construction of the algorithm,

$a \le a' \quad \text{and} \quad b \le b',$

otherwise either $(v, i')$ would have been matched by the algorithm before $(v, i)$, triggering a non-trivial difference earlier, or $(c', j')$ would have been matched by the algorithm before $(c, j)$, causing a non-trivial difference earlier; in either case, this would contradict the fact that $((v, i), (c, j))$ is the first non-trivial difference.

Now there are only four possible cases:

  1. $a \le a' \le b \le b'$

  2. $a \le b \le a'$

  3. $b \le b' \le a \le a'$

  4. $b \le a \le b'$

Cases 1 and 3 are symmetric (they involve interchanging the top and bottom of the bipartite graph) and Cases 2 and 4 are also symmetric in the same way. Thus the same proof technique for Case 1 will apply to Case 3, and the same technique for Case 2 will apply to Case 4.

Case 1: $a \le a' \le b \le b'$. Based on this ordering, we first list a few identities:

(1)  $|a - b| = b - a$
(2)  $|a' - b'| = b' - a'$
(3)  $|a - b'| = b' - a$
(4)  $|a' - b| = b - a'$

Now we see that the sum of Equations 1 and 2 equals the sum of Equations 3 and 4. Thus we can change the optimal matching by matching $(v, i)$ to $(c, j)$ (instead of the original connection to $(c', j')$) and matching $(v, i')$ to $(c', j')$, and the cost of the matching will stay the same (contradicting the choice of $M^*$ – that it is not supposed to have trivial differences from $M$; in fact, such reassignment of edges is how trivial differences are removed).

Case 2: $a \le b \le a'$. From this ordering (together with $b \le b'$) it is clear that:

$|a - b| + |a' - b'| \le |a - b'| + |a' - b|.$

Thus we can change the optimal matching by matching $(v, i)$ to $(c, j)$ (instead of the original connection to $(c', j')$) and matching $(v, i')$ to $(c', j')$, and the cost of the matching will either decrease (contradicting optimality of $M^*$) or stay the same (contradicting the choice of $M^*$ – that it is not supposed to have trivial differences from $M$).

Case 3: $b \le b' \le a \le a'$. Symmetric to Case 1, so omitted.

Case 4: $b \le a \le b'$. Symmetric to Case 2, so omitted.

5.3 Merging Estimates

Given a node $v$, the matching algorithm assigns each group in $v$ to one group in some child of $v$ (i.e., it says that, for every $i$, the $i$-th smallest group in $v$ matches the $j$-th smallest group in child $c$ of $v$). This means that for every group we have two estimates of its size, $\hat S_v[i]$ and $\hat S_c[j]$, as well as corresponding estimates of their variances, $\sigma_v^2[i]$ and $\sigma_c^2[j]$. There are two possible ways of reconciling these estimates.

Naive strategy. The simplest way is to simply average the two size estimates. This approach would be appropriate if the variance estimates were not accurate – recall that it is not possible to estimate the variances exactly, so they needed to be approximated. However, as we show in our experiments, a weighted average based on the estimated variances outperforms this strategy.

Variance-based weighted strategy. If we have two noisy estimates $x_1$ and $x_2$ of the same quantity, along with their variances $\sigma_1^2$ and $\sigma_2^2$, it is a well-known statistical fact (e.g., see [19]) that the optimal linear way of combining the estimates is a weighted average in which the weights are inversely proportional to the variances:

$x^* = \frac{x_1/\sigma_1^2 + x_2/\sigma_2^2}{1/\sigma_1^2 + 1/\sigma_2^2},$   (5)

and the variance of this estimator is

$\frac{1}{1/\sigma_1^2 + 1/\sigma_2^2} = \frac{\sigma_1^2 \sigma_2^2}{\sigma_1^2 + \sigma_2^2}.$   (6)

Thus we update the size estimate of the matched group at the child using Equation 5 (with $x_1 = \hat S_v[i]$ and $x_2 = \hat S_c[j]$) and its new variance using Equation 6. This estimator is preferable to the naive strategy when the variance estimates are accurate.
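
A one-function sketch of this merging rule (ours), matching Equations 5 and 6:

    def merge_estimates(x_parent, var_parent, x_child, var_child):
        # Inverse-variance weighted average of two size estimates of the same group
        # (Equation 5) and the variance of the combined estimate (Equation 6).
        w_parent = 1.0 / var_parent
        w_child = 1.0 / var_child
        merged = (w_parent * x_parent + w_child * x_child) / (w_parent + w_child)
        return merged, 1.0 / (w_parent + w_child)

    # A group estimated at size 12 by the parent (variance 4) and 13 by the child (variance 1):
    print(merge_estimates(12, 4.0, 13, 1.0))   # (12.8, 0.8)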

The size estimates are then rounded. After the size estimates at the children of $v$ are updated, the top-down algorithm continues by matching the groups in each child $c$ with the groups at the children of $c$. Once the groups at the leaves are updated, the resulting sizes at the leaves are treated as the final estimates.

5.4 Privacy

Theorem 1

Algorithm 1 satisfies $\epsilon$-differential privacy.

Privacy is easy to analyze because the algorithm separates the differentially private data access from the post-processing. Specifically, the only part of the algorithm that touches the sensitive data occurs in lines 1–4. It uses sequential composition across levels of the hierarchy; thus each of the $\ell$ levels is assigned $\epsilon/\ell$ of the privacy budget. Within each level there is parallel composition, because adding or removing one person from a group only affects the node that contains that group (and none of the sibling nodes). The count-of-counts histograms produced at each node use either the method of Section 4.2 or that of Section 4.3 with privacy budget $\epsilon/\ell$, and they scale the noise correctly to the global sensitivity, as discussed in those sections.

The rest of the top-down algorithm is completely based on the differentially private results of lines 1–4. The conversions between $\hat h_v$ and $\hat S_v$ are trivial manipulations of the histogram format (they do not touch the original data). The variance estimation is based on $\hat h_v$ and $\hat S_v$. The matching algorithm only uses $\hat S_v$ for each node, and the method for merging estimates only uses $\hat S_v$ (for each node) and the associated variance estimates (which were themselves computed from the differentially private histograms). Since post-processing differentially private outputs still satisfies differential privacy [14], the overall algorithm satisfies differential privacy.

6 Experiments

In this section, we present our experiments, which were conducted on a machine with dual 8-core 2.1 GHz Intel(R) Xeon(R) CPUs and 64 GB RAM.

6.1 Datasets

We used 4 large-scale datasets for evaluation.

Partially synthetic housing. Individuals live in households and group quarters. The number of individuals in each facility is important, but this information was truncated past households of size 7 in the 2010 Decennial Census, Summary File 1 [9]. Thus we created a partially synthetic dataset that mirrors the published statistics but adds a heavy tail, as would be expected from group quarters (e.g., dormitories, barracks, correctional facilities). This was done for each state by estimating the ratio (# households of size 7)/(# households of size 6), and then randomly sampling (with a binomial distribution) the number of groups of each larger size so that the same ratio holds (in expectation) between the numbers of groups of neighboring sizes. Then 50 outlier groups are chosen with sizes uniformly distributed between 1 and 10,000. The top levels of the hierarchy are National and State (50 states plus Puerto Rico and the District of Columbia). The third level is County, which we obtained by randomly assigning groups in each state to its counties (the assignment was proportional to county size).

NYC taxi: We use 143,540,889 Manhattan taxi trips from the 2013 New York City taxi dataset [2]. An anonymized taxi medallion (e.g., a taxi) is considered a group and the size of the group is the number of pickups it had in a region. The region hierarchy is the following. Level 0: Manhattan; level 1: upper/lower Manhattan; level 2: 28 neighborhoods from NTA boundary [1].

Race distribution (white and Hawaiian): For each block, based on 2010 Census data (Summary File 1 [9]), we count the number of whites and the number of native Hawaiians that live in the block. Hence a block is treated as a group. The hierarchy is National, State, and County. We performed evaluations on all 6 major race categories recorded by the Census, but omit all except these two due to space restrictions.

The statistics for our datasets are the following.

  Data        # groups      # people/trips   # unique sizes
  Synthetic   240,908,081   605,304,918      2352
  White       11,155,486    226,378,365      1916
  Hawaiian    11,155,486    540,383          224
  Taxi        360,872       130,962,398      3128

For some count-of-counts estimation methods, such as the cumulative sum method, one needs to specify a public maximum group size $K$. We set $K$ conservatively, an order of magnitude larger than the true maximum group size (for example, in our partially synthetic housing dataset, the true maximum size was around 10,000). For our 2-level hierarchy experiments on Census-related data, we used National/State. For 3 levels, we used West Coast/State/County. All numbers plotted are averaged over 10 runs.

Figure 4: Merging estimates using weighted average vs. normal average. x-axis: privacy budget per level.

6.2 Evaluation

Evaluation metric. For each level of the hierarchy, we evaluate earthmover's distance (emd), as discussed in Section 3.1, per node in the level, in order to see the error at each level. We do not aggregate error across all levels of the hierarchy, as there is no principled way of, for example, weighting the importance of error at the state level compared to the national level. Each error measure is averaged across 10 runs. The standard deviation of the average is the empirical standard deviation (which gives the std for one random run) divided by $\sqrt{10}$ (which then gives the std for the mean of 10 runs). We plot 1-std error bars on each figure.

Algorithm selection. We evaluate a variety of choices for generating hierarchical count-of-counts histograms. F