
Space Lower Bounds for Graph Stream Problems
This work concerns proving space lower bounds for graph problems in...

Tight Dimension Independent Lower Bound on Optimal Expected Convergence Rate for Diminishing Step Sizes in SGD
We study convergence of Stochastic Gradient Descent (SGD) for strongly c...

The Convergence Rate of SGD's Final Iterate: Analysis on Dimension Dependence
Stochastic Gradient Descent (SGD) is among the simplest and most popular...

The aBc Problem and Equator Sampling Renyi Divergences
We investigate the problem of approximating the product a^TBc, where a,c...

How Good is SGD with Random Shuffling?
We study the performance of stochastic gradient descent (SGD) on smooth ...

On the streaming complexity of fundamental geometric problems
In this paper, we focus on lower bounds and algorithms for some basic ge...

How to trap a gradient flow
We consider the problem of finding an ε-approximate stationary point of ...

Streaming Complexity of SVMs
We study the space complexity of solving the bias-regularized SVM problem in the streaming model. This is a classic supervised learning problem that has drawn significant attention, including for developing fast algorithms that solve it approximately. One of the most widely used algorithms for approximately optimizing the SVM objective is Stochastic Gradient Descent (SGD), which requires only O(1/λϵ) random samples and immediately yields a streaming algorithm that uses O(d/λϵ) space. For related problems, better streaming algorithms are known only for smooth functions, unlike the SVM objective we focus on in this work. We initiate an investigation of the space complexity both of finding an approximate optimum of this objective and of the related “point estimation” problem of sketching the data set so as to evaluate the function value F_λ on any query (θ, b). We show that, for both problems, in dimensions d=1,2, one can obtain streaming algorithms with space polynomially smaller than 1/λϵ, which is the complexity of SGD for strongly convex functions like the bias-regularized SVM and which is known to be tight in general, even for d=1. We also prove polynomial lower bounds for both point estimation and optimization. In particular, for point estimation we obtain a tight bound of Θ(1/√(ϵ)) for d=1 and a nearly tight lower bound of Ω(d/ϵ^2) for d=Ω(log(1/ϵ)). Finally, for optimization, we prove an Ω(1/√(ϵ)) lower bound for d=Ω(log(1/ϵ)), and show similar bounds when d is constant.
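To make the O(d/λϵ)-space SGD baseline concrete, here is a minimal one-pass, Pegasos-style SGD sketch for a bias-regularized SVM objective (average hinge loss plus λ/2(‖w‖² + b²)). It stores only the current iterate, so its memory is O(d). The function name, data format, and diminishing step-size schedule η_t = 1/(λt) are illustrative assumptions, not the paper's own algorithms.

```python
import numpy as np

def pegasos_stream(samples, lam, d):
    """One-pass Pegasos-style SGD sketch for a bias-regularized SVM.

    samples: iterable of (x, y) pairs with x a length-d array, y in {-1, +1}.
    Only the current iterate (w, b) is kept, i.e. O(d) memory.
    (Illustrative sketch, not the algorithm from the paper.)
    """
    w = np.zeros(d)
    b = 0.0
    for t, (x, y) in enumerate(samples, start=1):
        eta = 1.0 / (lam * t)            # diminishing step size 1/(λt)
        if y * (w @ x + b) < 1.0:        # hinge loss active: shrink + move
            w = (1 - eta * lam) * w + eta * y * x
            b = (1 - eta * lam) * b + eta * y
        else:                            # hinge inactive: only shrink
            w = (1 - eta * lam) * w
            b = (1 - eta * lam) * b
    return w, b
```

On a linearly separable stream the returned (w, b) separates the two classes after a modest number of updates; the point of the abstract is that for d=1,2 one can do polynomially better than this generic O(d/λϵ)-space baseline.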