Approximate is Good Enough: Probabilistic Variants of Dimensional and Margin Complexity

03/09/2020
by Pritish Kamath, et al.

We present and study approximate notions of dimensional and margin complexity, which correspond to the minimal dimension or norm of an embedding required to approximate, rather than exactly represent, a given hypothesis class. We show that such notions are not only sufficient for learning with linear predictors or a kernel, but, unlike the exact variants, are also necessary. They are therefore better suited for discussing the limitations of linear or kernel methods.
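
For orientation, the exact notion of dimension complexity and one natural probabilistic relaxation can be sketched as follows; the notation is illustrative and may differ in detail from the paper's formal definitions. For a hypothesis class \(\mathcal{H}\) of sign functions over a domain \(\mathcal{X}\):

\[
\mathrm{dc}(\mathcal{H}) \;=\; \min\Bigl\{\, d \;:\; \exists\, \psi:\mathcal{X}\to\mathbb{R}^d \ \text{s.t.}\ \forall h\in\mathcal{H}\ \exists\, w_h\in\mathbb{R}^d,\ \operatorname{sign}\langle w_h,\psi(x)\rangle = h(x)\ \ \forall x\in\mathcal{X} \,\Bigr\}
\]

\[
\mathrm{dc}_\varepsilon(\mathcal{H}) \;=\; \min\Bigl\{\, d \;:\; \exists\, \psi:\mathcal{X}\to\mathbb{R}^d \ \text{s.t.}\ \forall h\in\mathcal{H},\ \forall\,\text{distributions } D \text{ on } \mathcal{X},\ \exists\, w,\ \Pr_{x\sim D}\bigl[\operatorname{sign}\langle w,\psi(x)\rangle \neq h(x)\bigr] \le \varepsilon \,\Bigr\}
\]

The margin variants are defined analogously, with a bound on the norms of the embedding and the linear predictors playing the role of the dimension \(d\). The relaxation to \(\varepsilon\)-accuracy under an arbitrary distribution is what makes these quantities necessary, and not merely sufficient, for learning with linear predictors or kernels.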
