Coresets for Classification – Simplified and Strengthened

06/08/2021
by Tung Mai, et al.

We give relative-error coresets for training linear classifiers with a broad class of loss functions, including the logistic loss and hinge loss. Our construction achieves (1±ϵ) relative error with Õ(d ·μ_y(X)^2/ϵ^2) points, where μ_y(X) is a natural complexity measure of the data matrix X ∈ℝ^{n × d} and label vector y ∈{-1,1}^n, introduced by Munteanu et al. (2018). Our result is based on subsampling data points with probabilities proportional to their ℓ_1 Lewis weights. It significantly improves on existing theoretical bounds and performs well in practice, outperforming uniform subsampling as well as other importance sampling methods. Our sampling distribution does not depend on the labels, so it can be used for active learning. It also does not depend on the specific loss function, so a single coreset can be used in multiple training scenarios.
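The sampling scheme the abstract describes can be sketched in a few lines of NumPy. The sketch below is ours, not the authors' code: it approximates the ℓ_1 Lewis weights of the rows of X by the standard fixed-point iteration of Cohen and Peng (2015), then samples rows with probability proportional to those weights. Dense data, exact matrix inverses, and the function names (`l1_lewis_weights`, `lewis_weight_coreset`) are our assumptions; the paper's precise sample size and reweighting constants may differ.

```python
import numpy as np

def l1_lewis_weights(A, num_iters=50):
    """Approximate the l1 Lewis weights of the rows of A via the
    fixed-point iteration  w_i <- (a_i^T (A^T W^{-1} A)^{-1} a_i)^{1/2}.
    At the fixed point, w_i equals the leverage score of row i of
    W^{-1/2} A, and the weights sum to d."""
    n, d = A.shape
    w = np.ones(n)
    for _ in range(num_iters):
        M = A.T @ (A / w[:, None])               # A^T W^{-1} A
        # lev[i] = a_i^T M^{-1} a_i
        lev = np.einsum('ij,jk,ik->i', A, np.linalg.inv(M), A)
        w = np.sqrt(np.maximum(lev, 0.0))
    return w

def lewis_weight_coreset(A, m, rng=None):
    """Sample m rows with probability proportional to their l1 Lewis
    weights. Returns the chosen row indices and the usual 1/(m p_i)
    importance-sampling weights for reweighting the training loss."""
    rng = np.random.default_rng(rng)
    w = l1_lewis_weights(A)
    p = w / w.sum()
    idx = rng.choice(len(p), size=m, replace=True, p=p)
    return idx, 1.0 / (m * p[idx])
```

Note that, as the abstract points out, the probabilities depend only on the data matrix `A`, never on the labels, so the same coreset indices can be reused across loss functions or in an active-learning loop where labels are queried only for the sampled points.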
