Mixtures of Gaussians are Privately Learnable with a Polynomial Number of Samples

09/07/2023
by Mohammad Afzali, et al.

We study the problem of estimating mixtures of Gaussians under the constraint of differential privacy (DP). Our main result is that Õ(k^2 d^4 log(1/δ) / (α^2 ε)) samples are sufficient to estimate a mixture of k d-dimensional Gaussians up to total variation distance α while satisfying (ε, δ)-DP. This is the first finite sample complexity upper bound for the problem that makes no structural assumptions on the GMMs. To solve the problem, we devise a new framework that may be useful for other tasks. At a high level, we show that if a class of distributions (such as Gaussians) is (1) list decodable and (2) admits a "locally small" cover [BKSW19] with respect to total variation distance, then the class of its mixtures is privately learnable. Because the locally small cover is required only for the component class rather than for the mixtures themselves, the proof circumvents a known barrier: unlike Gaussians, GMMs do not admit a locally small cover [AAL21].
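The second ingredient, privately selecting a good hypothesis from a finite cover, can be illustrated with a toy sketch. The Python example below is not the paper's algorithm: the candidate grid, the log-likelihood score, and the sensitivity bound are all illustrative assumptions. It uses the standard exponential mechanism to pick a single 1-D Gaussian from a grid cover according to its fit to the data.

```python
# Toy sketch: private selection from a finite cover of 1-D Gaussians via the
# exponential mechanism. Illustrative only; not the paper's algorithm.
import numpy as np

rng = np.random.default_rng(0)

def avg_log_likelihood(x, mu, sigma):
    """Average Gaussian log-likelihood of the sample under N(mu, sigma^2)."""
    return np.mean(-0.5 * ((x - mu) / sigma) ** 2 - np.log(sigma * np.sqrt(2 * np.pi)))

def exponential_mechanism(scores, epsilon, sensitivity):
    """Sample an index with probability proportional to exp(eps * score / (2 * sens))."""
    logits = epsilon * scores / (2 * sensitivity)
    logits -= logits.max()          # shift for numerical stability
    probs = np.exp(logits)
    probs /= probs.sum()
    return rng.choice(len(scores), p=probs)

# Data drawn from a "secret" Gaussian.
x = rng.normal(loc=1.5, scale=0.8, size=2000)

# Finite cover: a grid of (mu, sigma) candidates (an illustrative assumption).
cover = [(mu, sigma) for mu in np.linspace(-3.0, 3.0, 25)
                     for sigma in np.linspace(0.2, 2.0, 10)]

scores = np.array([avg_log_likelihood(x, mu, sigma) for mu, sigma in cover])

# Crude sensitivity bound for the average log-likelihood when one point changes;
# a real analysis would need clipping/truncation to bound this rigorously.
sensitivity = 50.0 / len(x)

idx = exponential_mechanism(scores, epsilon=1.0, sensitivity=sensitivity)
print("privately selected candidate (mu, sigma):", cover[idx])
```

The exponential mechanism's utility degrades with the log of the number of candidates, which is why cover size matters; private hypothesis selection in [BKSW19] refines this so that, for locally small covers, the cost depends on the local rather than the total size of the cover.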


