Sample Amplification: Increasing Dataset Size even when Learning is Impossible

04/26/2019
by Brian Axelrod, et al.

Given data drawn from an unknown distribution D, to what extent is it possible to "amplify" this dataset and output an even larger set of samples that appear to have been drawn from D? We formalize this question as follows: an (n, m) amplification procedure takes as input n independent draws from an unknown distribution D and outputs a set of m > n "samples". An amplification procedure is valid if no algorithm can distinguish the set of m samples produced by the amplifier from a set of m independent draws from D with probability greater than 2/3.

Perhaps surprisingly, in many settings a valid amplification procedure exists even when the size of the input dataset, n, is significantly less than what would be necessary to learn D to non-trivial accuracy. Specifically, we consider two fundamental settings: the case where D is an arbitrary discrete distribution supported on at most k elements, and the case where D is a d-dimensional Gaussian with unknown mean and fixed covariance.

In the discrete case, we show that an (n, n + Θ(n/√k)) amplifier exists. In particular, given n = O(√k) samples from D, one can output a set of m = n + 1 datapoints whose total variation distance from the distribution of m i.i.d. draws from D is a small constant, despite the fact that one would need quadratically more data, n = Θ(k), to learn D up to small constant total variation distance. In the Gaussian case, we show that an (n, n + Θ(n/√d)) amplifier exists, even though learning the distribution to small constant total variation distance requires Θ(d) samples. In both the discrete and Gaussian settings, we show that these results are tight to constant factors. Beyond these results, we formalize a number of curious directions for future research along this vein.
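The basic flavor of the discrete-case result can be illustrated with a minimal sketch: return the n input samples together with a few extra draws from the empirical distribution, then shuffle so the resampled points are not identifiable by position. This is an illustrative toy, not the paper's exact procedure or analysis; the function name `amplify` and the specific parameter values are hypothetical choices for the example.

```python
import random

def amplify(samples, m):
    """Toy sketch of the discrete-case idea: output the n input samples
    plus m - n extra draws from the empirical distribution, shuffled
    together. (Illustrative only; the paper analyzes when roughly
    n/sqrt(k) such extra "samples" are indistinguishable from fresh
    i.i.d. draws.)"""
    n = len(samples)
    # Draw the extra points by resampling from the observed data.
    extra = [random.choice(samples) for _ in range(m - n)]
    out = samples + extra
    random.shuffle(out)  # hide which points were resampled
    return out

# Example: n = 9 draws from a support of size k = 81 (so n is on the
# order of sqrt(k)), amplified to m = n + 1 "samples".
data = [random.randrange(81) for _ in range(9)]
amplified = amplify(data, 10)
```

Note that every output point lies in the observed dataset, so the amplifier never needs to learn anything about the unseen part of D's support.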
