Sharpened Error Bounds for Random Sampling Based ℓ_2 Regression

03/30/2014
by   Shusen Wang, et al.
Given a data matrix X ∈ R^{n×d} and a response vector y ∈ R^n with n > d, it costs O(n d^2) time and O(n d) space to solve the least squares regression (LSR) problem exactly. When n and d are both large, exactly solving the LSR problem is very expensive. When n ≫ d, one feasible approach to speeding up LSR is to randomly embed y and all columns of X into a smaller subspace R^c; the induced LSR problem has the same number of columns but far fewer rows, and it can be solved in O(c d^2) time and O(c d) space. In this paper we discuss two random sampling based methods for solving LSR more efficiently. Previous work showed that leverage-score sampling based LSR achieves 1+ϵ accuracy when c ≥ O(d log d · ϵ^{-2}). In this paper we sharpen this error bound, showing that c = O(d log d + d ϵ^{-1}) is enough to achieve 1+ϵ accuracy. We also show that when c ≥ O(μ d log d · ϵ^{-2}), uniform sampling based LSR attains a 2+ϵ bound with positive probability.
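To make the leverage-score sampling approach concrete, the following is a minimal NumPy sketch of the standard algorithm the abstract refers to, not the paper's exact implementation: compute the leverage scores of X, sample c rows of (X, y) with probabilities proportional to those scores, rescale the sampled rows, and solve the small induced LSR problem. The exact QR step used here to obtain the scores itself costs O(n d^2); in practice one would use a fast approximation, but the sketch keeps it simple.

```python
import numpy as np

def leverage_sampled_lsr(X, y, c, seed=None):
    """Approximate LSR by leverage-score row sampling (illustrative sketch).

    Samples c rows of [X, y] with probabilities proportional to the
    leverage scores of X, rescales them to keep the sketched objective
    an unbiased estimate, and solves the induced c-by-d LSR problem.
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    # Leverage scores are the squared row norms of an orthonormal
    # basis Q for the column space of X; they sum to d.
    Q, _ = np.linalg.qr(X)
    lev = np.sum(Q**2, axis=1)
    p = lev / lev.sum()                 # sampling probabilities
    idx = rng.choice(n, size=c, replace=True, p=p)
    # Rescaling by 1/sqrt(c * p_i) makes the sketched Gram matrix
    # an unbiased estimator of X^T X.
    scale = 1.0 / np.sqrt(c * p[idx])
    X_s = X[idx] * scale[:, None]
    y_s = y[idx] * scale
    # Solve the induced small LSR problem: O(c d^2) time, O(c d) space.
    w, *_ = np.linalg.lstsq(X_s, y_s, rcond=None)
    return w
```

With c on the order of d log d + d/ϵ rows (per the sharpened bound), the residual of the returned solution is within a 1+ϵ factor of the optimal residual with high probability; uniform sampling replaces `p` with the constant vector 1/n but needs c to scale with the matrix coherence μ.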
