L1 Regression with Lewis Weights Subsampling

05/19/2021
by Aditya Parulekar, et al.

We consider the problem of finding an approximate solution to ℓ_1 regression while only observing a small number of labels. Given an n × d unlabeled data matrix X, we must choose a small set of m ≪ n rows to observe the labels of, then output an estimate β whose error on the original problem is within a 1 + ε factor of optimal. We show that sampling from X according to its Lewis weights and outputting the empirical minimizer succeeds with probability 1 − δ for m > O((1/ε^2) d log(d/(εδ))). This is analogous to the performance of sampling according to leverage scores for ℓ_2 regression, but with exponentially better dependence on δ. We also give a corresponding lower bound of Ω(d/ε^2 + (d + 1/ε^2) log(1/δ)).
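
To make the sampling scheme concrete, here is a minimal sketch in Python. It assumes the standard fixed-point iteration for ℓ_1 Lewis weights, w_i ← (x_i^T (X^T W^{-1} X)^{-1} x_i)^{1/2}, and solves the reweighted ℓ_1 regression on the subsample as a linear program; the function names lewis_weights_l1 and sample_and_solve are illustrative and not taken from the paper.

```python
import numpy as np
from scipy.optimize import linprog

def lewis_weights_l1(X, n_iter=20):
    """Approximate ell_1 Lewis weights of X via the fixed-point iteration
    w_i <- (x_i^T (X^T W^{-1} X)^{-1} x_i)^{1/2}."""
    n, d = X.shape
    w = np.ones(n)
    for _ in range(n_iter):
        # M = X^T diag(1/w) X
        M = X.T @ (X / w[:, None])
        Minv = np.linalg.inv(M)
        # quadratic forms q_i = x_i^T M^{-1} x_i
        q = np.einsum("ij,jk,ik->i", X, Minv, X)
        w = np.sqrt(np.maximum(q, 0.0))
    return w

def sample_and_solve(X, y, m, rng=None):
    """Sample m rows with probability proportional to Lewis weights, then
    solve the importance-reweighted ell_1 regression on the subsample."""
    rng = np.random.default_rng(rng)
    n, d = X.shape
    w = lewis_weights_l1(X)
    p = w / w.sum()
    idx = rng.choice(n, size=m, replace=True, p=p)
    scale = 1.0 / (m * p[idx])          # importance-sampling reweighting
    Xs, ys = X[idx], y[idx]
    # LP: minimize sum_i scale_i * t_i  s.t.  -t_i <= x_i^T beta - y_i <= t_i
    c = np.concatenate([np.zeros(d), scale])
    A_ub = np.block([[ Xs, -np.eye(m)],
                     [-Xs, -np.eye(m)]])
    b_ub = np.concatenate([ys, -ys])
    bounds = [(None, None)] * d + [(0, None)] * m
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    return res.x[:d]
```

Rescaling each sampled residual by 1/(m p_i) keeps the subsampled objective an unbiased estimate of the full ℓ_1 loss, which is what makes the empirical minimizer on the subsample a candidate for the (1 + ε)-approximation guarantee stated above.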
