Learning Entangled Single-Sample Distributions via Iterative Trimming

04/20/2020
by Hui Yuan, et al.

In the setting of entangled single-sample distributions, the goal is to estimate some common parameter shared by a family of distributions, given a single sample from each distribution. We study mean estimation and linear regression under general conditions and analyze a simple, computationally efficient method that iteratively trims samples and re-estimates the parameter on the trimmed sample set. We show that, within a logarithmic number of iterations, the method outputs an estimate whose error depends only on the noise level of the ⌈αn⌉-th noisiest data point, where α is a constant and n is the sample size; in other words, it tolerates a constant fraction of high-noise points. These are the first such results under these general conditions for computationally efficient estimators, and they help justify the wide use and empirical success of iterative trimming in practice. Our theoretical results are complemented by experiments on synthetic data.
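To make the idea concrete, below is a minimal sketch of an iterative trimming loop for mean estimation, in the spirit of the method described above. The function name, the trimming fraction keep_frac, and the iteration count n_iters are illustrative assumptions; this is not the paper's exact algorithm or its constants.

import numpy as np

def iterative_trimmed_mean(x, keep_frac=0.8, n_iters=20):
    """Estimate a common mean from one sample per distribution.

    x         : 1-D array, one sample drawn from each distribution
    keep_frac : fraction of samples kept after trimming (assumed constant)
    n_iters   : number of trim/re-estimate rounds (logarithmic in practice)
    """
    x = np.asarray(x, dtype=float)
    n_keep = max(1, int(keep_frac * len(x)))
    estimate = x.mean()  # initial estimate on all samples
    for _ in range(n_iters):
        # Trim: keep the n_keep samples closest to the current estimate.
        residuals = np.abs(x - estimate)
        kept = np.argsort(residuals)[:n_keep]
        # Re-estimate the parameter on the trimmed sample set.
        estimate = x[kept].mean()
    return estimate

# Illustrative usage on synthetic data: most points have low noise,
# while a constant fraction are high-noise.
rng = np.random.default_rng(0)
low_noise = rng.normal(0.0, 1.0, size=800)
high_noise = rng.normal(0.0, 50.0, size=200)
sample = np.concatenate([low_noise, high_noise])
print(iterative_trimmed_mean(sample))

For linear regression, the same loop applies with the mean replaced by an ordinary least-squares fit and residuals measured against the current fit.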



Related research

07/10/2020: Learning Entangled Single-Sample Gaussians in the Subset-of-Signals Model
In the setting of entangled single-sample distributions, the goal is to ...

02/13/2023: Trimmed sample means for robust uniform mean estimation and regression
It is well-known that trimmed sample means are robust against heavy tail...

07/06/2019: Estimating location parameters in entangled single-sample distributions
We consider the problem of estimating the common mean of independently s...

10/06/2021: Robust Generalized Method of Moments: A Finite Sample Viewpoint
For many inference problems in statistics and econometrics, the unknown ...

10/01/2022: Pitfalls of Gaussians as a noise distribution in NCE
Noise Contrastive Estimation (NCE) is a popular approach for learning pr...

07/15/2020: Sketching for Two-Stage Least Squares Estimation
When there is so much data that they become a computation burden, it is ...

11/26/2019: Learning sparse linear dynamic networks in a hyper-parameter free setting
We address the issue of estimating the topology and dynamics of sparse l...
