Robust Learning of Discrete Distributions from Batches

11/19/2019
by Alon Orlitsky, et al.

Let d be the smallest L_1 distance to which a k-symbol distribution p can be estimated from m batches of n samples each, when up to βm of the batches may be adversarial. For β < 1/2, Qiao and Valiant (2017) showed that d = Ω(β/√n) and that estimating p requires m = Ω(k/β^2) batches. For β < 1/900, they gave an algorithm with order-optimal d and m, but with run time exponential in k. For any β < 1/2, we propose an algorithm with comparably optimal d and m whose run time is polynomial in k and all other parameters.
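To make the batch model concrete, here is a minimal simulation sketch (not the paper's algorithm): honest batches draw n i.i.d. samples from a k-symbol distribution p, an adversary controls a β fraction of the batches, and a naive estimator that pools all samples incurs L_1 error on the order of β, far above the β/√n limit that robust algorithms can achieve. The distribution p and the adversary's strategy below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

k, m, n, beta = 10, 2000, 50, 0.2  # symbols, batches, batch size, corruption rate

# True k-symbol distribution p (illustrative choice, not from the paper).
p = rng.dirichlet(np.ones(k))

# Honest batches: n i.i.d. samples from p each.
num_bad = int(beta * m)
honest = rng.choice(k, size=(m - num_bad, n), p=p)

# Adversarial batches: here, every sample is symbol 0 (one simple attack).
bad = np.zeros((num_bad, n), dtype=int)
samples = np.concatenate([honest, bad])

# Naive (non-robust) estimate: empirical frequencies over all pooled samples.
p_hat = np.bincount(samples.ravel(), minlength=k) / samples.size

# The adversarial batches bias the naive estimate by Theta(beta),
# much larger than the beta/sqrt(n) error achievable by robust estimators.
l1 = np.abs(p_hat - p).sum()
print(round(l1, 3))
```

The point of the sketch is only the gap it exhibits: the pooled estimate's L_1 error scales with β, whereas the batch structure lets robust algorithms drive the error down to O(β/√n).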


