L1 Regression with Lewis Weights Subsampling

05/19/2021
by Aditya Parulekar, et al.

We consider the problem of finding an approximate solution to ℓ_1 regression while observing only a small number of labels. Given an n × d unlabeled data matrix X, we must choose a small set of m ≪ n rows whose labels we observe, then output an estimate β whose error on the original problem is within a 1 + ε factor of optimal. We show that sampling rows of X according to its Lewis weights and outputting the empirical minimizer succeeds with probability 1 − δ for m = O((d/ε²) log(d/(εδ))). This is analogous to the performance of sampling according to leverage scores for ℓ_2 regression, but with an exponentially better dependence on δ. We also give a corresponding lower bound of Ω(d/ε² + (d + 1/ε²) log(1/δ)).
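The pipeline described in the abstract can be sketched in a few lines. Below is an illustrative Python sketch, not the authors' implementation: `lewis_weights_l1` runs the standard fixed-point iteration of Cohen and Peng for ℓ_1 Lewis weights, rows are sampled proportionally to the weights, and the reweighted ℓ_1 regression on the m observed labels is solved as a small linear program with SciPy. All function names and the demo parameters are invented for this example.

```python
import numpy as np
from scipy.optimize import linprog

def lewis_weights_l1(X, n_iter=30):
    """Approximate l1 Lewis weights via the Cohen-Peng fixed-point iteration:
    w_i <- sqrt( x_i^T (X^T diag(w)^{-1} X)^{-1} x_i )."""
    n, d = X.shape
    w = np.ones(n)
    for _ in range(n_iter):
        Minv = np.linalg.inv(X.T @ (X / w[:, None]))   # (X^T W^{-1} X)^{-1}
        w = np.sqrt(np.einsum('ij,jk,ik->i', X, Minv, X))
    return w

def solve_weighted_l1(A, b, s):
    """min over beta of sum_i s_i |a_i . beta - b_i|, cast as a linear program
    with one slack variable t_i per sampled row."""
    m, d = A.shape
    SA, sb = A * s[:, None], b * s
    c = np.concatenate([np.zeros(d), np.ones(m)])      # minimize sum of slacks
    I = np.eye(m)
    A_ub = np.block([[SA, -I], [-SA, -I]])             # t_i >= +/- s_i (a_i beta - b_i)
    b_ub = np.concatenate([sb, -sb])
    bounds = [(None, None)] * d + [(0, None)] * m
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method='highs')
    return res.x[:d]

# Demo: observe only m << n labels, chosen by Lewis-weight sampling,
# then solve the importance-reweighted l1 regression on those rows.
rng = np.random.default_rng(0)
n, d, m = 200, 3, 80
X = rng.standard_normal((n, d))
beta_true = np.array([1.0, -2.0, 0.5])
y = X @ beta_true                                      # noiseless labels for the demo

w = lewis_weights_l1(X)
p = w / w.sum()                                        # sampling distribution
idx = rng.choice(n, size=m, p=p)                       # sample rows (with replacement)
s = 1.0 / (m * p[idx])                                 # reweight so the sampled objective is unbiased
beta_hat = solve_weighted_l1(X[idx], y[idx], s)
```

A useful sanity check on the iteration: at the fixed point the ℓ_1 Lewis weights sum to exactly d, since Σ_i x_iᵀM⁻¹x_i / w_i = tr(M⁻¹ XᵀW⁻¹X) = d. The 1/(m·p_i) reweighting makes the subsampled objective an unbiased estimate of the full ℓ_1 loss, which is what allows the empirical minimizer to be near-optimal on the original problem.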


Related research

- Online Active Regression (07/13/2022): Active regression considers a linear regression problem where the learne...
- Sharpened Error Bounds for Random Sampling Based ℓ_2 Regression (03/30/2014): Given a data matrix X ∈ R^n×d and a response vector y ∈ R^n, suppose n>...
- Active Learning with Importance Sampling (10/10/2019): We consider an active learning setting where the algorithm has access to...
- Near Optimal Stratified Sampling (06/26/2019): The performance of a machine learning system is usually evaluated by usi...
- Tail bounds for volume sampled linear regression (02/19/2018): The n × d design matrix in a linear regression problem is given, but the...
- Adaptive and Optimal Online Linear Regression on L1-balls (05/20/2011): We consider the problem of online linear regression on individual sequen...
- L_0 Isotonic Regression With Secondary Objectives (06/01/2021): We provide algorithms for isotonic regression minimizing L_0 error (Hamm...
