Learning discrete distributions: user vs item-level privacy

07/27/2020 ∙ by Yuhan Liu, et al.

Much of the literature on differential privacy focuses on item-level privacy, where, loosely speaking, the goal is to provide privacy for each individual item or training example. However, many practical applications, such as federated learning, require preserving privacy for all items of a single user, which is much harder to achieve. Understanding the theoretical limits of user-level privacy therefore becomes crucial. We study the fundamental problem of learning discrete distributions over $k$ symbols with user-level differential privacy. If each user has $m$ samples, we show that straightforward applications of the Laplace or Gaussian mechanisms require the number of users to be $\mathcal{O}(k/(m\alpha^2) + k/(\epsilon\alpha))$ to achieve an $\ell_1$ distance of $\alpha$ between the true and estimated distributions, with the privacy-induced penalty $k/(\epsilon\alpha)$ independent of the number of samples per user $m$. Moreover, we show that any mechanism that operates only on the final aggregate requires a user complexity of the same order. We then propose a mechanism for which the number of users scales as $\tilde{\mathcal{O}}(k/(m\alpha^2) + k/(\sqrt{m}\,\epsilon\alpha))$, and further show that it is nearly optimal in certain regimes. The privacy penalty is thus a factor of $\mathcal{O}(\sqrt{m})$ smaller than that of the standard mechanisms. We also propose general techniques for obtaining lower bounds on restricted differentially private estimators, and a lower bound on the total variation distance between binomial distributions, both of which may be of independent interest.
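For concreteness, below is a minimal sketch of the baseline Laplace-mechanism estimator the abstract critiques (not the paper's improved mechanism). The function name, the sensitivity argument, and the simplex projection are illustrative assumptions: replacing one user's $m$ samples changes the aggregate count vector by at most $2m$ in $\ell_1$, so per-bucket Laplace noise of scale $2m/\epsilon$ suffices for user-level $\epsilon$-DP.

```python
import numpy as np

def user_level_laplace_estimator(user_samples, k, epsilon, rng=None):
    """Baseline user-level eps-DP estimator via the Laplace mechanism.

    user_samples: list of n arrays, each holding one user's m samples
                  drawn from symbols {0, ..., k-1}.
    Replacing one user's m samples changes the count vector by at most
    2m in l1 norm, so adding Lap(2m/eps) noise to each bucket gives
    user-level eps-differential privacy.
    """
    rng = np.random.default_rng() if rng is None else rng
    m = len(user_samples[0])

    # Aggregate counts across all users.
    counts = np.zeros(k)
    for s in user_samples:
        counts += np.bincount(np.asarray(s), minlength=k)

    # Add Laplace noise calibrated to the user-level l1 sensitivity 2m.
    noisy = counts + rng.laplace(scale=2.0 * m / epsilon, size=k)

    # Project back to the probability simplex: clip negatives, renormalize.
    est = np.clip(noisy, 0.0, None)
    total = est.sum()
    return est / total if total > 0 else np.full(k, 1.0 / k)
```

This sketch makes the abstract's point visible: the noise scale grows linearly with $m$, so after normalizing by the $nm$ total samples the per-bucket noise is $\approx 2/(\epsilon n)$, and the resulting $k/(\epsilon\alpha)$ user requirement does not improve as each user contributes more samples.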
