Statistically Near-Optimal Hypothesis Selection

08/17/2021
by Olivier Bousquet, et al.

Hypothesis Selection is a fundamental distribution-learning problem: given a comparator class Q = {q_1, …, q_n} of distributions and sampling access to an unknown target distribution p, the goal is to output a distribution q such that 𝖳𝖵(p,q) is close to opt, where opt = min_i{𝖳𝖵(p,q_i)} and 𝖳𝖵(·,·) denotes the total-variation distance. Although this problem has been studied since the 19th century, its complexity in terms of basic resources, such as the number of samples and the approximation guarantee, remains unsettled (as discussed, e.g., in the charming book by Devroye and Lugosi '00). This is in stark contrast with other (younger) learning settings, such as PAC learning, for which these complexities are well understood. We derive an optimal 2-approximation learning strategy for the Hypothesis Selection problem, outputting q such that 𝖳𝖵(p,q) ≤ 2 · opt + ϵ, with a (nearly) optimal sample complexity of Õ(log n / ϵ^2). This is the first algorithm that simultaneously achieves the best approximation factor and sample complexity: previously, Bousquet, Kane, and Moran (COLT '19) gave a learner achieving the optimal 2-approximation but with an exponentially worse sample complexity of Õ(√n / ϵ^2.5), and Yatracos (Annals of Statistics '85) gave a learner with the optimal sample complexity of O(log n / ϵ^2) but with a sub-optimal approximation factor of 3.
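
To make the setup concrete, below is a minimal sketch of Yatracos's minimum-distance learner mentioned in the abstract, specialized to distributions over a finite domain {0, …, k−1}. The function name yatracos_select and the dense-array representation of the candidates are illustrative assumptions; the selection rule (minimize the worst-case discrepancy with the empirical measure over the Yatracos sets A_jl = {x : q_j(x) > q_l(x)}) is the classical one achieving the 3-approximation, not the paper's new 2-approximation strategy.

import numpy as np

def yatracos_select(samples, qs):
    # Minimum-distance (Yatracos) hypothesis selection over a finite domain.
    # samples: 1-D integer array of i.i.d. draws from the unknown target p.
    # qs:      (n, k) array; row i is candidate distribution q_i on {0, ..., k-1}.
    # Returns the index of the selected candidate.
    n, k = qs.shape
    p_hat = np.bincount(samples, minlength=k) / len(samples)  # empirical p

    # Yatracos class: A_jl = {x : q_j(x) > q_l(x)} for every ordered pair j != l.
    yatracos_sets = [qs[j] > qs[l] for j in range(n) for l in range(n) if j != l]

    def discrepancy(i):
        # Worst-case gap between q_i and the empirical measure over the class.
        return max((abs(qs[i][A].sum() - p_hat[A].sum()) for A in yatracos_sets),
                   default=0.0)

    return min(range(n), key=discrepancy)

For example, with qs = rng.dirichlet(np.ones(20), size=8) and samples = rng.choice(20, size=4000, p=qs[3]), the selector returns index 3 with high probability. With O(log n / ϵ^2) samples, a union bound over Hoeffding's inequality makes the empirical masses of all O(n^2) Yatracos sets ϵ-accurate simultaneously, which yields the 𝖳𝖵(p,q) ≤ 3 · opt + O(ϵ) guarantee; the paper's contribution is a learner achieving 2 · opt + ϵ at essentially the same sample cost.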
