Agnostic Distribution Learning via Compression

10/14/2017
by Hassan Ashtiani, et al.

We study sample-efficient distribution learning, in which a learner is given an i.i.d. sample from an unknown target distribution and aims to approximate that distribution. Assuming the target distribution can be approximated by a member of some predetermined class of distributions, we analyze how large a sample must be for the learner to find a distribution that is close to the target in total variation distance. In this work, we introduce a novel method for distribution learning via a form of "compression." Given a large enough sample from a target distribution, can one compress that sample by retaining only a few of its instances, in a way that allows recovery of (an approximation to) the target distribution from the compressed set? We prove that if this is possible for every member of a class of distributions, then the class admits sample-efficient distribution learning. As an application of our approach, we give a sample-efficient method for agnostic distribution learning with respect to the class of mixtures of k axis-aligned Gaussian distributions over R^n. This method uses only O(kn/ϵ^2) samples (to guarantee, with high probability, an error of at most ϵ). This is the first sample complexity upper bound that is tight in k, n, and ϵ, up to logarithmic factors. Along the way, we prove several properties of compression schemes: in particular, if there is a compression scheme for a base class of distributions, then there are compression schemes for the class of mixtures of that base class and for the class of its products. These closure properties make compression schemes a powerful tool. For example, the problem of learning mixtures of axis-aligned Gaussians reduces to that of "robustly" compressing one-dimensional Gaussians, which we show is possible using a compressed set of constant size.
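To make the notion of compression concrete, here is a minimal, hypothetical sketch of the encode/decode interface for a single one-dimensional Gaussian: the encoder keeps only two points from the sample, and the decoder rebuilds a candidate (mean, standard deviation) pair from those points alone. This is an illustration of the interface only, not the robust compression scheme constructed in the paper; the percentile-based choice of retained points is an assumption made for the example.

```python
import numpy as np

def encode(sample):
    """Compress the sample: keep two instances near the 16th and 84th
    percentiles, which for a Gaussian lie roughly one standard deviation
    below and above the mean (a hypothetical choice for illustration)."""
    s = np.sort(np.asarray(sample))
    lo = s[int(0.16 * (len(s) - 1))]
    hi = s[int(0.84 * (len(s) - 1))]
    return lo, hi  # the "compressed set": two points taken from the sample

def decode(compressed):
    """Reconstruct a candidate Gaussian (mean, std) from the compressed set."""
    lo, hi = compressed
    mu = (lo + hi) / 2.0
    sigma = max((hi - lo) / 2.0, 1e-12)  # guard against a degenerate sample
    return mu, sigma

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    sample = rng.normal(loc=3.0, scale=2.0, size=10_000)
    mu_hat, sigma_hat = decode(encode(sample))
    print(mu_hat, sigma_hat)  # roughly (3.0, 2.0)
```

The paper shows that a constant-size compressed set suffices for "robustly" compressing one-dimensional Gaussians; the sketch above is only meant to convey what recovering a distribution from a few retained sample points can look like.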

