Learning Sparse Low-Threshold Linear Classifiers

12/13/2012
by Sivan Sabato, et al.

We consider the problem of learning a non-negative linear classifier with a 1-norm of at most k and a fixed threshold, under the hinge loss. This problem generalizes the problem of learning a k-monotone disjunction. We prove that in this setting one can learn efficiently at a rate that is linear in both k and the size of the threshold, and that this is the best possible rate. We provide an efficient online learning algorithm that achieves the optimal rate, and show that in the batch case, empirical risk minimization achieves this rate as well. The rates we show are tighter than the uniform convergence rate, which grows with k^2.
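To make the setting concrete, below is a minimal sketch (in Python/NumPy) of the learning problem the abstract describes: minimizing the empirical hinge loss of a classifier x -> sign(<w, x> - theta) over non-negative w with ||w||_1 <= k, where the threshold theta is fixed in advance. The optimizer used here (projected subgradient descent) and all names (erm_sketch, project_nonneg_l1, the toy disjunction data) are illustrative assumptions, not the algorithm analyzed in the paper.

```python
import numpy as np

def project_nonneg_l1(v, k):
    """Euclidean projection onto {w : w >= 0, ||w||_1 <= k}.
    Clip negatives; if the 1-norm is already at most k we are done,
    otherwise subtract the threshold tau found by the standard sorting trick."""
    w = np.maximum(v, 0.0)
    if w.sum() <= k:
        return w
    u = np.sort(w)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u * np.arange(1, len(u) + 1) > (css - k))[0][-1]
    tau = (css[rho] - k) / (rho + 1.0)
    return np.maximum(w - tau, 0.0)

def avg_hinge_loss(w, theta, X, y):
    """Average hinge loss of x -> sign(<w, x> - theta), with labels y in {-1, +1}."""
    margins = y * (X @ w - theta)
    return np.mean(np.maximum(0.0, 1.0 - margins))

def erm_sketch(X, y, k, theta, lr=0.1, epochs=200):
    """Projected subgradient descent on the empirical hinge loss over the
    class {w : w >= 0, ||w||_1 <= k} with a fixed threshold theta.
    Illustrative only; not the learning algorithm analyzed in the paper."""
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(epochs):
        margins = y * (X @ w - theta)
        active = margins < 1.0  # examples currently incurring hinge loss
        grad = -(y[active][:, None] * X[active]).sum(axis=0) / n
        w = project_nonneg_l1(w - lr * grad, k)
    return w

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n, d = 500, 50
    X = rng.binomial(1, 0.3, size=(n, d)).astype(float)
    # Label by a 3-monotone disjunction: positive iff any of the first 3 features is on.
    y = np.where(X[:, :3].sum(axis=1) >= 1, 1.0, -1.0)
    # w = (2, 2, 2, 0, ..., 0) with threshold 1 separates with margin 1, so k = 6 suffices.
    w_hat = erm_sketch(X, y, k=6.0, theta=1.0)
    print("train hinge loss: %.3f" % avg_hinge_loss(w_hat, 1.0, X, y))
```

The toy data illustrates the connection stated in the abstract: a k-monotone disjunction is recovered as a non-negative linear classifier with small 1-norm compared against a fixed threshold.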

