Optimal disclosure risk assessment

Protection against disclosure is a legal and ethical obligation for agencies releasing microdata files for public use. Consider a microdata sample of size n from a finite population of size n̄ = n + λn, with λ > 0, such that each record contains two disjoint types of information: identifying categorical information and sensitive information. Any decision about releasing data is supported by the estimation of measures of disclosure risk, which are functionals of the number of sample records with a unique combination of values of identifying variables. The most common measure is arguably the number τ_1 of sample unique records that are population uniques. In this paper, we first study nonparametric estimation of τ_1 under the Poisson abundance model for sample records. We introduce a class of linear estimators of τ_1 that are simple, computationally efficient and scalable to massive datasets, and we give uniform theoretical guarantees for them. In particular, we show that they provably estimate τ_1 all of the way up to the sampling fraction (λ+1)^{-1} ∝ (log n)^{-1}, with vanishing normalized mean-square error (NMSE) for large n. We then establish a lower bound for the minimax NMSE for the estimation of τ_1, which allows us to show that: i) (λ+1)^{-1} ∝ (log n)^{-1} is the smallest possible sampling fraction; ii) the estimators' NMSE is near optimal, in the sense of matching the minimax lower bound, for large n. This is the main result of our paper, and it provides a precise answer to an open question about the feasibility of nonparametric estimation of τ_1 under the Poisson abundance model and for a sampling fraction (λ+1)^{-1} < 1/2.


1 Introduction

Protection against disclosure is a legal and ethical obligation for agencies releasing microdata files for public use. Any decision about release requires a careful assessment of the risk of disclosure, which is supported by the estimation of measures of disclosure risk (e.g., Willenborg and de Waal [27]). Consider a microdata sample of size n from a finite population of size n̄ = n + λn, with λ > 0, and without loss of generality assume that each sample record contains two disjoint types of information for the corresponding individual: identifying information and sensitive information. Identifying information consists of the values of a set of categorical variables, which might be matchable to known units of the population. A risk of disclosure arises from the possibility that an intruder might succeed in identifying a microdata unit through such a matching and hence be able to disclose the sensitive information on this unit. To quantify the risk of disclosure, microdata sample records are cross-classified according to the potentially identifying variables, i.e., the sample is partitioned into cells with corresponding frequency counts that sum to the sample size, the frequency of a cell being the number of sample records classified into it. A risk of disclosure arises from cells in which both sample frequencies and population frequencies are small. Of special interest are cells with frequency 1 (singletons, or uniques) since, assuming no errors in the matching process or data sources, for these cells the match is guaranteed to be correct. This has motivated inference on measures of disclosure risk that are functionals of the number of singletons, the most common being the number τ_1 of sample singletons which are also population singletons. See, e.g., Bethlehem et al. [2] and Skinner et al. [23] for a thorough discussion on measures of disclosure risk.

The Poisson abundance model is arguably the most natural, and weakest, modeling assumption under which to infer τ_1. With n̄ = n + λn and λ > 0, it assumes that: i) the n̄ population records can be ideally extended to an infinite sequence of records, of which the sample is an observable subsample; ii) the records are independent and identically distributed according to an unknown distribution P = Σ_{j≥1} p_j δ_j, where p_j is the probability of the j-th cell in which a record may be cross-classified; iii) the sample size is a Poisson random variable N with mean n, in symbols N ~ Poisson(n). Then the N sample records result in cells with frequencies Y_j such that Y_j ~ Poisson(n p_j) for j ≥ 1, Y_j is independent of Y_{j′} for any j ≠ j′, and Σ_{j≥1} Y_j = N. As discussed in Section 2.4 of Skinner and Elliot [22], nonparametric estimation of τ_1 under the Poisson abundance model is an intrinsically difficult problem. It shares the well-known difficulties of the classical problem of estimating the number of unseen species (e.g., Good and Toulmin [10], Efron and Thisted [8], Orlitsky et al. [17]). In particular, nonparametric estimators of τ_1 may be "very unreasonable", since they are subject to serious upward bias and high variance for small sampling fractions (λ+1)^{-1} of the population. To overcome these issues, in the last three decades stronger modeling assumptions have been considered. These studies resulted in a range of parametric and semiparametric approaches, both frequentist and Bayesian, to infer τ_1, e.g., Bethlehem et al. [2], Samuels [21], Skinner and Elliot [22], Reiter [18], Rinott and Shlomo [19], Skinner and Shlomo [24], Manrique-Vallier and Reiter [13], Manrique-Vallier and Reiter [14], Carota et al. [4] and Carota et al. [5].
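As a concrete illustration of assumptions i)–iii), the following sketch simulates one sample/population pair under the Poisson abundance model and computes the true τ_1. The uniform cell probabilities, the sizes, and the helper name are illustrative choices, not part of the paper.

```python
import numpy as np

def simulate_tau1(p, n, lam, rng):
    """Simulate the Poisson abundance model: a sample of Poisson(n) records plus
    an unobserved remainder of Poisson(lam * n) records, cross-classified into
    cells with probabilities p. Returns the true tau_1 and the sample frequencies."""
    N = rng.poisson(n)                        # observed sample size
    M = rng.poisson(lam * n)                  # unobserved part of the population
    Y_sample = rng.multinomial(N, p)          # cell frequencies in the sample
    Y_pop = Y_sample + rng.multinomial(M, p)  # cell frequencies in the population
    # sample uniques that are also population uniques
    tau1 = int(np.sum((Y_sample == 1) & (Y_pop == 1)))
    return tau1, Y_sample

rng = np.random.default_rng(0)
p = np.full(1000, 1e-3)   # illustrative: 1000 equiprobable cells
tau1, Y = simulate_tau1(p, n=500, lam=1.0, rng=rng)
```

Conditionally on N, the cell frequencies are multinomial, which is equivalent to the unconditional description of the Y_j as independent Poisson(n p_j) random variables.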

In this paper, we first study nonparametric estimation of τ_1 under the Poisson abundance model for sample records. Given a collection of sample records from the population, we introduce a class of nonparametric linear estimators of τ_1 that are simple, computationally efficient and scalable to massive datasets. We show that our estimators admit an interpretation as (smoothed) nonparametric empirical Bayes estimators in the sense of Robbins [20], and we prove theoretical guarantees for them that hold uniformly over the unknown distribution P. In particular, we show that the proposed estimators provably estimate τ_1 all of the way up to the sampling fraction (λ+1)^{-1} ∝ (log n)^{-1}, with vanishing normalized mean-square error (NMSE) as n becomes large. Then, by relying on recent techniques developed in Wu and Yang [29] in the context of optimal estimation of the support size of discrete distributions, we establish a lower bound for the minimax NMSE for the estimation of τ_1. This result allows us to show that (λ+1)^{-1} ∝ (log n)^{-1} is the smallest possible sampling fraction of the population, and that the estimators' NMSE is near optimal, in the sense of matching the minimax lower bound, for large n. This is the main result of our paper, and it provides a precise answer to the question raised by Skinner and Elliot [22] about the feasibility of nonparametric estimation of τ_1 under the Poisson abundance model and for a sampling fraction (λ+1)^{-1} < 1/2. Indeed our result shows that nonparametric estimation of τ_1 has uniformly provable guarantees, in terms of vanishing NMSE for large n, if and only if λ grows no faster than log n.

The paper is structured as follows. In Section 2 we introduce a class of nonparametric estimators for τ_1, and we show that they provably estimate τ_1 all of the way up to the sampling fraction (λ+1)^{-1} ∝ (log n)^{-1}, with vanishing NMSE as n becomes large. In Section 3 we show that (λ+1)^{-1} ∝ (log n)^{-1} is the smallest possible sampling fraction of the population, and that the estimators' NMSE is near optimal for large n. Section 4 contains a numerical illustration of the proposed estimators. Proofs are deferred to the Appendix.

2 A nonparametric estimator of τ1

We consider an infinite sequence of observations X = (X_1, X_2, …), and we assume that X(N) = (X_1, …, X_N) is the microdata sample of random size N ~ Poisson(n) under the Poisson abundance model. We suppose that X(N) is a subsample of X(N+M), where M ~ Poisson(λn), with λ > 0 and M independent of N. In the present framework X(N+M) may be seen as the unobservable population. When the sample records are cross-classified according to the potentially identifying variables, the sample is partitioned into cells with corresponding frequency counts Y_j(X,N) such that Σ_{j≥1} Y_j(X,N) = N. Hereafter we denote by Z_i(X,N) the number of cells with frequency i, and by Z̄_i(X,N) the number of cells with frequency greater than or equal to i, for any index i ≥ 1. We are interested in estimating the number of sample uniques which are also population uniques, namely the functional

 τ_1(X,N,M) = Σ_{j≥1} 1{Y_j(X,N) = 1} 1{Y_j(X,N+M) = 1}.

We recall that the frequency counts Y_j(X,N) are independent, and that they are Poisson distributed with parameter n p_j, where p_j is the unknown probability associated to the j-th cell, that is P = Σ_{j≥1} p_j δ_j with Σ_{j≥1} p_j = 1. We will denote by Y(X,N) the whole sequence of cell frequency counts when we are provided with a sample of size N.

To fix the notation, in the sequel we will write f ≲ g, for two generic functions f and g, iff there exists a universal constant C > 0 such that f ≤ C g; we will further write f ≍ g whenever both f ≲ g and g ≲ f are satisfied. Let us denote by P the set of all possible distributions over the cells, i.e. P = {P = Σ_{j≥1} p_j δ_j}, where δ_j denotes the Dirac measure centered at j. An estimator ρ̂_1(X(N),N) of τ_1 is understood to be a measurable function depending on the available sample X(N) and the actual size N of the observed sample. We will evaluate the performance of a generic estimator ρ̂_1 of τ_1 by its worst-case NMSE, defined as

 E_{λ,n}(ρ̂_1(X(N),N)) := sup_{P∈P} E[(ρ̂_1(X(N),N) − τ_1(X,N,M))²] / n²,   (1)

where E[(ρ̂_1(X(N),N) − τ_1(X,N,M))²] is the mean squared error (MSE) of ρ̂_1, also denoted by MSE[ρ̂_1(X(N),N)]. The use of the NMSE (1) has been recently proposed in Orlitsky et al. [17] in the context of the estimation of the number of unseen species.

A nonparametric estimator for τ_1 may be deduced by comparing expectations; indeed it is easy to see that:

 E[τ_1(X,N,M)] = E[Σ_{i≥0} (−1)^i (i+1) λ^i Z_{i+1}(X,N)],   (2)

from which we may define the following estimator

 τ̂_1(X(N),N) = Σ_{i≥0} (−1)^i (i+1) λ^i Z_{i+1}(X,N),   (3)

which turns out to be unbiased by construction. See Appendix A.1 for the derivation of (2). The estimator τ̂_1 admits a natural interpretation as a nonparametric empirical Bayes estimator in the sense of Robbins [20]. More precisely, τ̂_1 is the posterior expectation of τ_1 with respect to an unknown prior distribution on the p_j's that is estimated from the observed frequencies. See Appendix A.2 for details. The next theorem legitimates the use of τ̂_1 as an estimator of τ_1 for λ ≤ 1, i.e. when the size λn of the unobserved part of the population is less than or equal to n, the size of the observed sample.
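Before stating the theorem, here is a minimal computational sketch of (3), assuming the observed cell frequencies are available as an integer array; the function name is illustrative.

```python
import numpy as np

def tau1_hat(Y, lam):
    """Estimator (3): sum_{i>=0} (-1)^i (i+1) lam^i Z_{i+1}(X,N), where
    Z_i is the number of cells with sample frequency exactly i.
    Y: array of observed cell frequencies (zeros allowed); lam: lambda > 0."""
    Y = np.asarray(Y, dtype=np.int64)
    if Y.size == 0 or Y.max() == 0:
        return 0.0
    Z = np.bincount(Y[Y > 0])          # Z[i] = number of cells with frequency i
    est = 0.0
    for i in range(len(Z) - 1):        # term i involves Z_{i+1}
        est += (-1) ** i * (i + 1) * lam ** i * Z[i + 1]
    return est

# frequencies 1,1,2,3 give Z_1 = 2, Z_2 = 1, Z_3 = 1, so for lam = 1 the
# estimate is 1*1*2 - 2*1*1 + 3*1*1 = 3
```

For λ ≤ 1 the alternating terms stay bounded; for λ > 1 the coefficients (i+1)λ^i blow up with i, which is exactly the variance issue the smoothing in (7) addresses.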

Theorem 1.

For any positive real numbers x and y, let ⌊x⌋ denote the integer part of x and let x ∨ y denote the maximum between x and y. If λ ≤ 1, then for any n we get

 E[τ̂_1(X(N),N)] = E[τ_1(X,N,M)] = Σ_{j≥1} n p_j e^{−(λ+1) n p_j}   (4)

and

 Var[τ_1(X,N,M) − τ̂_1(X(N),N)] ≤ Ψ²(λ) (E[Z̄_1(X,N)] − E[Z_1(X,N+M)]) / (λ+1),   (5)

where Ψ(λ) in (5) is an explicit function of λ, finite for λ ≤ 1, defined in the proof.

See Appendix A.3 for the proof of Theorem 1. According to Theorem 1, for λ ≤ 1 the estimator τ̂_1 is unbiased, and its variance is of order at most n, upon noticing that E[Z̄_1(X,N)] ≤ E[N] = n. That is, in expectation, τ̂_1 approximates τ_1 to within an error of order √n. Hence we formalize our considerations in the following.

Corollary 1.

Assume that λ ≤ 1. Then the nonparametric estimator τ̂_1 defined in (3) satisfies

 Eλ,n(^τ1(X(N),N))≲1n (6)

for any n ≥ 1.
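The bound (6) follows by combining (4) and (5): unbiasedness reduces the MSE to the variance, and the variance bound is at most linear in n. A short derivation in the paper's notation:

```latex
\begin{aligned}
\mathbb{E}\big[(\hat\tau_1(X(N),N)-\tau_1(X,N,M))^2\big]
  &= \mathrm{Var}\big[\tau_1(X,N,M)-\hat\tau_1(X(N),N)\big]
     && \text{by unbiasedness, (4)}\\
  &\le \Psi^2(\lambda)\,\frac{\mathbb{E}[\bar Z_1(X,N)]}{\lambda+1}
     && \text{dropping the negative term in (5)}\\
  &\le \Psi^2(\lambda)\,\frac{n}{\lambda+1},
     && \text{since } \bar Z_1(X,N)\le N \text{ and } \mathbb{E}[N]=n,
\end{aligned}
```

and dividing by n² yields E_{λ,n}(τ̂_1(X(N),N)) ≲ 1/n uniformly in P, which is (6).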

This legitimates the use of τ̂_1 as an estimator of τ_1 under the hypothesis λ ≤ 1, which unfortunately is a quite restrictive assumption within the framework of disclosure risk: indeed the size of the unobserved part of the population is usually much bigger than the size of the available sample. However, the derivation of a variance bound for τ̂_1 is a crucial step for our study. Indeed it reveals that the assumption λ ≤ 1 is necessary to obtain a finite bound on the variance. This variance issue of τ̂_1 is determined by the geometrically increasing magnitude of the coefficients (i+1)λ^i. Indeed, for λ > 1, the estimator grows superlinearly as λ^i for the largest i such that Z_{i+1}(X,N) > 0, thus eventually far exceeding τ_1, which grows at most linearly. This is the main reason why τ̂_1 becomes useless for λ > 1, thus requiring an adjustment via suitable smoothing techniques. Hereafter we follow ideas originally developed by Good and Toulmin [10], Efron and Thisted [8] and Orlitsky et al. [17] for nonparametric estimators of the number of unseen species. Specifically, we propose a smoothed version of τ̂_1 obtained by truncating the series (3) at an independent random location L and averaging over the distribution of L, i.e.,

 τ̂_1^L(X(N),N) = E_L[Σ_{i=0}^L (−1)^i (i+1) λ^i Z_{i+1}(X,N)] = Σ_{i≥0} (−1)^i (i+1) λ^i P(L ≥ i) Z_{i+1}(X,N).   (7)

For any λ > 0, as the index i in (7) increases, the tail probability P(L ≥ i) compensates for the exponential growth of λ^i, thereby stabilizing the variance. In the next theorem we show that the estimator τ̂_1^L is biased for τ_1, and we provide a bound for the MSE of τ̂_1^L.

Theorem 2.

Suppose that λ > 0. Then τ̂_1^L(X(N),N) is a biased estimator of τ_1(X,N,M), with

 E[τ̂_1^L(X(N),N)] = E[τ_1(X,N,M)] + Σ_{j≥1} e^{−p_j n (λ+1)} p_j n ∫_0^{λ n p_j} e^s E_L[(−s)^L / L!] ds   (8)

and

 MSE[τ̂_1^L(X(N),N)] ≤ (Σ_{j≥1} e^{−p_j n (λ+1)} p_j n ∫_0^{λ n p_j} e^s E_L[(−s)^L / L!] ds)² + (E_L[(L+1) λ^L])² (E[Z̄_1(X,N)] − E[Z_1(X,N+M)]) / (λ+1).   (9)
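A minimal sketch of the smoothed estimator (7) under a Poisson smoothing distribution for L (one of the choices discussed next); the function name and the test frequencies are illustrative, and the tail probabilities P(L ≥ i) are computed directly from the Poisson pmf.

```python
import numpy as np

def tau1_hat_smoothed(Y, lam, beta):
    """Smoothed estimator (7): each term of the series (3) is damped by the
    tail probability P(L >= i), here with L ~ Poisson(beta).
    Y: observed cell frequencies; lam: lambda > 0; beta: smoothing parameter."""
    Y = np.asarray(Y, dtype=np.int64)
    if Y.size == 0 or Y.max() == 0:
        return 0.0
    Z = np.bincount(Y[Y > 0])              # Z[i] = number of cells with frequency i
    imax = len(Z) - 1
    i = np.arange(imax + 1)
    # Poisson(beta) pmf on 0..imax computed in log space, then tail[i] = P(L >= i)
    log_pmf = i * np.log(beta) - beta - np.cumsum(np.log(np.maximum(i, 1)))
    pmf = np.exp(log_pmf)
    tail = 1.0 - np.concatenate(([0.0], np.cumsum(pmf)[:-1]))
    est = 0.0
    for k in range(imax):                  # term k involves Z_{k+1}
        est += (-1) ** k * (k + 1) * lam ** k * tail[k] * Z[k + 1]
    return est
```

As β → ∞ the tails P(L ≥ i) tend to one and (7) recovers the unsmoothed (3); a small β suppresses the high-order, high-variance terms of the series.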

Choosing different smoothing distributions for the random variable L yields different estimators for τ_1. Following Orlitsky et al. [17], three possible choices for the distribution of L are: i) a Poisson distribution with parameter β; ii) a Binomial distribution with parameters (x_0, 1/2); iii) a Binomial distribution with parameters (x_0, 2/(λ+2)). In particular, it can be shown that the choice of the Binomial distribution with parameters (x_0, 2/(λ+2)) corresponds to truncating at the point x_0 the Euler transformation of the estimator (3). To choose the parameter β of the Poisson distribution and the parameter x_0 of the Binomial distribution, one should look for the β and x_0 which minimize the MSE bound (9). Once the values of β and x_0 are determined explicitly, we are able to obtain limits of predictability for τ̂_1^L. That is, for some δ > 0 we are able to specify the maximum value of λ for which E_{n,λ}(τ̂_1^L(X(N),N)) ≤ δ. This gives a provable (performance) guarantee for the estimation of τ_1 in terms of the sampling fraction (λ+1)^{-1}. The next proposition specifies the limit of predictability for the estimator τ̂_1^L under the choice of a Poisson distribution with parameter β for the smoothing distribution L.

Proposition 1.

Let L be a Poisson random variable with parameter β > 0. Then

 MSE[τ̂_1^L(X(N),N)] ≤ e^{−2β} n² + n e^{2β(2λ−1)}   (10)

whose upper bound is minimized when

 β̃ = (1/(4λ)) log(n/(2λ−1))

for any λ > 1/2. Moreover, if L is a Poisson random variable with parameter β̃, then

 E_{n,λ}(τ̂_1^L(X(N),N)) ≤ A(λ) / n^{1/(2λ)},   (11)

and for any δ ∈ (0,1)

 lim_{n→+∞} max{λ : E_{n,λ}(τ̂_1^L(X(N),N)) ≤ δ} / log(n) ≥ 1/(2 log(A/δ)),   (12)

where A(λ) is continuous and bounded in λ, and A denotes an upper bound for A(λ).
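The form of β̃ in Proposition 1 follows by minimizing the right-hand side of (10) in β; a short check by elementary calculus:

```latex
\frac{d}{d\beta}\Big(e^{-2\beta}n^{2} + n\,e^{2\beta(2\lambda-1)}\Big)
 = -2e^{-2\beta}n^{2} + 2(2\lambda-1)\,n\,e^{2\beta(2\lambda-1)} = 0
\;\Longleftrightarrow\;
e^{4\lambda\beta} = \frac{n}{2\lambda-1}
\;\Longleftrightarrow\;
\beta = \frac{1}{4\lambda}\log\frac{n}{2\lambda-1} = \tilde\beta .
```

Substituting β̃ back into (10) makes both terms of order n^{2−1/(2λ)}, and dividing by n² gives the n^{−1/(2λ)} rate in (11).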

See Appendix A.5 for the proof of Proposition 1. Similar results hold true when L is assumed to be a Binomial random variable: the derivation follows along similar lines as the proof of Proposition 1. Hence we state the following result, in the presence of a Binomial smoothing, without proof.

Proposition 2.

Let L be a Binomial random variable with parameters (x_0, 2/(λ+2)). Then

 (13)

whose upper bound is minimized when

 x̃_0 = ⌊(3/10) log_3( n λ² (λ+1) / (λ²(3^{10/3} − 1) − 4λ − 4) )⌋

for any λ > 0. Moreover, if L is a Binomial random variable with parameters (x̃_0, 2/(λ+2)), then

 E_{n,λ}(τ̂_1^L(X(N),N)) ≤ C(λ) / n^{3 log_3(1+2/λ)/5},   (14)

and for any δ ∈ (0,1)

 lim_{n→+∞} max{λ : E_{n,λ}(τ̂_1^L) ≤ δ} / log(n) ≥ 6/(5 log(3) log(C/δ)),   (15)

where C(λ) is continuous and bounded in λ, and C denotes an upper bound for C(λ).

3 Optimality of the proposed estimators

In Section 2 we have defined two different estimators of τ_1, providing guarantees on their performance in terms of the NMSE. We have already remarked that the case λ > 1 is the most interesting one for estimating the disclosure risk τ_1: indeed the unobserved part of the population is usually much larger than the observed sample. Throughout this section we assume that λ > 1, and we prove that the proposed estimator is essentially optimal. More precisely, we determine a lower bound for the best worst-case NMSE, defined by

 E(λ,n) := inf_{ρ̂_1} E_{λ,n}(ρ̂_1(X(N),N)),   (16)

where the infimum in the previous definition runs over all possible estimators ρ̂_1 of τ_1. We will then see that this lower bound essentially matches the upper bound (11). In the sequel we refer to E(λ,n) as the (normalized) minimax risk.

The theorem we are going to state below provides us with a lower bound for the minimax risk.

Theorem 3.

Assume that λ > 1. Then there exists a universal constant K > 0 such that, for any sufficiently large n, we have

 E(λ,n) ≥ K · { 1, if λ+1 > log(n);  ((1+λ)/log(n)) (√(log(n)/(n(1+λ))))^{e²/(1+λ)}, if λ+1 ≤ log(n) }.   (17)

From Theorem 3, it is clear that the minimax risk can go to zero only if λ + 1 ≤ log(n), and the rate is provided by the following Corollary.

Corollary 2.

Assume that 1 < λ + 1 ≤ log(n). Then there exist universal constants c_1 > 0 and c′ > 0 such that, for any sufficiently large n,

 E(λ,n) ≥ c_1 / n^{c′/λ}.   (18)

Corollary 2 is an easy consequence of Theorem 3: indeed, when λ + 1 > log(n) the two lower bounds in (17)–(18) are constants, whereas if λ + 1 ≤ log(n) it is easy to observe that the leading term in (17) (as n → +∞) is of order n^{−c′/λ}, as in (18), for some c′ > 0. Corollary 2 provides us with a lower bound for the NMSE of any estimator of the disclosure risk τ_1. The lower bound (18) has an important implication: without imposing any parametric assumption on the model, one can estimate τ_1 with vanishing NMSE all the way up to λ ∝ log(n). It is then impossible to determine an estimator having provable guarantees (in terms of vanishing NMSE) when λ goes to +∞ much faster than log(n), as a function of n. By the limit of predictability (12) determined for the estimator τ̂_1^L, we conclude that the proposed estimator is near optimal, because its limit of predictability matches (asymptotically) its maximum possible value λ ∝ log(n).

3.1 Guideline for the proof of Theorem 3

In the present section we provide the main ingredients for the proof of Theorem 3; technical results and related proofs are deferred to the Appendix. In the sequel we will write E^n_P to make explicit the dependence of the expected value on P and on the parameter n of the Poisson random variable N.

The starting point for the proof of Theorem 3 is the next Lemma 1, which is an interesting result in its own right and a key ingredient in the proof. Remark that the definition of the minimax risk in (16) allows for estimators depending on the whole sample X(N), while τ̂_1 depends only on the frequencies Y(X,N) and N. Thus, there should be no gain of information in using estimators depending on X(N) over estimators depending only on the frequencies Y(X,N). This is made formal in the next lemma, proved in Section B.1. Note that this is convenient, since Y(X,N) is nicely distributed under the Poisson model.

Lemma 1.

The following equality is true

 E(λ,n) = inf_ρ̂ sup_{P∈P} n^{−2} E^n_P[(τ_1(X,N,M) − ρ̂(Y(X,N)))²],

where the infimum in the previous equation is understood to be taken with respect to all measurable maps ρ̂ of the frequencies Y(X,N).

The next step is to use Jensen’s inequality to deduce that

 E(λ,n) = inf_ρ̂ sup_{P∈P} n^{−2} E^n_P[ E^n_P[(τ_1(X,N,M) − ρ̂(Y(X,N)))² | Y(X,N)] ]
        ≥ inf_ρ̂ sup_{P∈P} n^{−2} E^n_P[(E^n_P[τ_1(X,N,M) | Y(X,N)] − ρ̂(Y(X,N)))²].

Note that there is no explicit dependency on X(N) and M anymore in the last display, but only on the random variable Y(X,N) which, under E^n_P, is distributed as an infinite vector of independent Poisson random variables with parameters (n p_j)_{j≥1}. Besides, observe also that N = Σ_{j≥1} Y_j(X,N) is itself a function of Y(X,N). For the sake of notational simplicity, in the sequel Y will stand for the random variable Y(X,N), and we also let

 τ̃_1(Y,P,n) := E^n_P[τ_1(X,N,M) | Y(X,N)] = Σ_{j≥1} 1{Y_j(X,N)=1} E^n_P[1{Y_j(X,N+M) − Y_j(X,N) = 0} | Y(X,N)].

Remark that Y(X,N+M) − Y(X,N) is independent of Y(X,N) and is a collection of independent Poisson random variables with intensities (λ n p_j)_{j≥1}. Hence, we get

 τ̃_1(Y,P,n) = Σ_{j≥1} e^{−λ n p_j} 1{Y_j(X,N) = 1},   (19)

and besides

 E(λ,n) ≥ inf_ρ̂ sup_{P∈P} n^{−2} E^n_P[(τ̃_1(Y,P,n) − ρ̂(Y))²].   (20)

We now trade τ̃_1(Y,P,n) for its expectation. Let us introduce τ̄_1(P,n) := E^n_P[τ̃_1(Y,P,n)]. Recall that under E^n_P the vector Y is distributed as independent Poisson random variables with parameters (n p_j)_{j≥1}. Hence,

 τ̄_1(P,n) = Σ_{j≥1} e^{−λ n p_j} E^n_P[1{Y_j(X,N)=1}] = n Σ_{j≥1} p_j e^{−(1+λ) n p_j}.

Similarly, for any P ∈ P,

 Var(τ̃_1(Y,P,n)) = Σ_{j≥1} n p_j e^{−(1+2λ) n p_j} {1 − n p_j e^{−n p_j}} ≤ n.   (21)

Thus, from (20) and Young's inequality, we find that

 E(λ,n) ≥ (1/2) inf_ρ̂ sup_{P∈P} n^{−2} E^n_P[(τ̄_1(P,n) − ρ̂(Y))²] − n^{−1}.   (22)
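The step from (20) to (22) rests on the elementary consequence of Young's inequality (x+y)² ≤ 2x² + 2y², i.e. x² ≥ ½(x+y)² − y²; spelled out in the paper's notation:

```latex
\begin{aligned}
(\tilde\tau_1(Y,P,n)-\hat\rho(Y))^2
  &\ge \tfrac{1}{2}\,(\bar\tau_1(P,n)-\hat\rho(Y))^2
       -(\tilde\tau_1(Y,P,n)-\bar\tau_1(P,n))^2,\\
n^{-2}\,\mathbb{E}^n_P\big[(\tilde\tau_1-\hat\rho(Y))^2\big]
  &\ge \tfrac{1}{2}\,n^{-2}\,\mathbb{E}^n_P\big[(\bar\tau_1-\hat\rho(Y))^2\big]
       - n^{-2}\,\mathrm{Var}\big(\tilde\tau_1(Y,P,n)\big)\\
  &\ge \tfrac{1}{2}\,n^{-2}\,\mathbb{E}^n_P\big[(\bar\tau_1-\hat\rho(Y))^2\big] - n^{-1},
\end{aligned}
```

where the last line uses the variance bound (21); taking the infimum over ρ̂ and the supremum over P gives (22).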

The remainder of the proof mostly follows the reduction scheme used in Wu and Yang [28, 29], which consists in reducing the problem to finding the best polynomial approximation (in uniform norm) to a suitable function.

The first step of the reduction scheme is to trade the set P in (16) for a slightly more convenient set. We let S be the only integer satisfying

 n(1+λ) ≤ S ≤ n(1+λ) + 1,   (23)

and we also let, for some constant to be determined later,

 ξ := (2c_0/e) min{(1+λ) log n, log² n}.   (24)

Then, for another constant c_1 > 0 and for ε > 0 to be determined later, we define

 P′ := { Σ_{k=1}^S p_k δ_k : p_k ∈ [0, ξ S^{−1}], |Σ_{k=1}^S p_k − 1| ≤ c_1 ε/ξ }.   (25)

Remark that P′ contains measures that are not probability measures, and hence it is not clear a priori that we can lower bound the supremum over P by the supremum over P′. The next proposition shows that this is fine as long as ε is not too large. Here and after, under E′^n_P, Y is understood as a vector of independent Poisson random variables with intensities (n p_j)_{j≥1}, with Σ_j p_j not necessarily equal to one, and τ̄_1(·,n) is extended trivially from P to P′ by letting τ̄_1(P,n) = n Σ_{j≥1} p_j e^{−(1+λ) n p_j}, P ∈ P′. The next proposition is proved in Section B.2.

Proposition 3.

Assume that ε → 0 as n → +∞, with P′ defined as in (25). Then, as n → +∞,

 E(λ,n) ≥ (1/4) inf_ρ̂ sup_{P∈P′} n^{−2} E′^n_P[(τ̄_1(P,n) − ρ̂(Y))²] − (n^{−1} + 9c_1²ε²/2)
        ≥ (ε²/4) { inf_ρ̂ sup_{P∈P′} P′^n_P(|τ̄_1(P,n) − ρ̂(Y)| > nε) − 18c_1² } − n^{−1}.

We are now in position to lower bound the risk by the Bayes risk. To do so, we follow the prior construction of Wu and Yang [28, 29]. For some integer L to be determined later, we let U and V be two random variables taking values in [0, ξ S^{−1}] such that, when n is large enough,

 E[U^k] = E[V^k]  ∀ k ∈ {0, …, L+1},
 E[U] = E[V] = S^{−1},  Var(U) ≤ ξ S^{−2},  Var(V) ≤ ξ S^{−2},
 E[U e^{−n(1+λ)U}] ≥ E[V e^{−n(1+λ)V}] + S^{−1} K min{1, √(ξ/L²) exp(−L²/ξ)}.

The existence of such random variables is guaranteed by Theorem C.1 for a universal constant K > 0. Then we let U = (U_1, …, U_S), respectively V = (V_1, …, V_S), be an independent vector of i.i.d. copies of U, respectively V. Denoting by M(ℕ) the space of all measures on ℕ, we construct the random variable Q(U), taking values in M(ℕ), such that Q(U) = Σ_{k=1}^S U_k δ_k. Then, from Proposition 3 and Hölder's inequality, we find that E(λ,n) is bounded from below by −n^{−1} plus

 (ε²/4) { inf_ρ̂ ( (1/2) E[P′^n_{Q(U)}(|τ̄_1(Q(U),n) − ρ̂(Y)| > nε) 1_{P′}(Q(U))] + (1/2) E[P′^n_{Q(V)}(|τ̄_1(Q(V),n) − ρ̂(Y)| > nε) 1_{P′}(Q(V))] ) − 18c_1² },

which is in turn lower bounded by

 (ε²/4) { inf_ρ̂ ( (1/2) E[P′^n_{Q(U)}(|τ̄_1(Q(U),n) − ρ̂(Y)| > nε)] + (1/2) E[P′^n_{Q(V)}(|τ̄_1(Q(V),n) − ρ̂(Y)| > nε)] ) − 18c_1² − (1/2) P(Q(U) ∉ P′) − (1/2) P(Q(V) ∉ P′) } − n^{−1}.

The last display follows because we do not have Q(U) ∈ P′ nor Q(V) ∈ P′ almost surely, but it is clear from the strong law of large numbers that they should concentrate on P′. Formally, an application of Bernstein's inequality (see Section B.3 below) leads to the following proposition.

Proposition 4.

Assume that ε → 0 as n → +∞. Then, there exists a constant n_0, depending only on c_1, such that for n ≥ n_0, P(Q(U) ∉ P′) ≤ c_1² and P(Q(V) ∉ P′) ≤ c_1².

Thus under the conditions of Proposition 4, we get

 E(λ,n) ≥ (ε²/4) { inf_ρ̂ ( (1/2) E[P′^n_{Q(U)}(|τ̄_1(Q(U),n) − ρ̂(Y)| > nε)] + (1/2) E[P′^n_{Q(V)}(|τ̄_1(Q(V),n) − ρ̂(Y)| > nε)] ) − 19c_1² } − n^{−1}.   (26)

We now wish to trade τ̄_1(Q(U),n) and τ̄_1(Q(V),n) for their expectations in the last equation. Intuitively, this should not be problematic: since they are sums of i.i.d. random variables, they should concentrate near their expectations for n large enough. We make this formal using a Hoeffding argument in the next proposition, proved in Section B.4.

Proposition 5.

Let everything be as above. Then,

 ε² ≥ 2ξ log(1/c_1²)/(n(1+λ)) ⟹ P(|τ̄_1(Q(U),n) − E[τ̄_1(Q(U),n)]| > nε/2) ≤ 2c_1².

Obviously, Proposition 5 is also true for Q(V). We now assume that the conditions of Propositions 3, 4 and 5 are met, and thus we obtain from (26) that

 E(λ,n) ≥ (ε²/4) { inf_ρ̂ ( (1/2) E[P′^n_{Q(U)}(|E[τ̄_1(Q(U),n)] − ρ̂(Y)| > nε/2)] + (1/2) E[P′^n_{Q(V)}(|E[τ̄_1(Q(V),n)] − ρ̂(Y)| > nε/2)] ) − 21c_1² } − n^{−1}.

Now remark that,

 E[τ̄_1(Q(U),n)] = nS · E[U exp{−n(1+λ)U}],

and besides observe that, whenever n is large enough, we will have E[τ̄_1(Q(U),n)] − E[τ̄_1(Q(V),n)] ≥ nK min{1, √(ξ/L²) exp(−L²/ξ)}. We furthermore assume that ε satisfies

 max{8n