Never Go Full Batch (in Stochastic Convex Optimization)

06/29/2021
by Idan Amir, et al.

We study the generalization performance of full-batch optimization algorithms for stochastic convex optimization: first-order methods that access only the exact gradient of the empirical risk (rather than gradients with respect to individual data points), a class that includes gradient descent, mirror descent, and their regularized and/or accelerated variants. We provide a new separation result showing that, while algorithms such as stochastic gradient descent can generalize and optimize the population risk to within ϵ after O(1/ϵ^2) iterations, full-batch methods either need at least Ω(1/ϵ^4) iterations or exhibit a dimension-dependent sample complexity.
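To make the distinction between the two oracle models concrete, here is a minimal sketch (not from the paper, and using least squares purely as an illustrative convex objective) contrasting a full-batch method, which queries only the exact empirical-risk gradient each iteration, with stochastic gradient descent, which queries gradients of individual data points. All names and hyperparameters below are illustrative assumptions.

```python
# Illustrative sketch: full-batch gradient access vs. per-sample stochastic access.
import numpy as np

rng = np.random.default_rng(0)
n, d = 200, 10
X = rng.normal(size=(n, d))
w_star = rng.normal(size=d)
y = X @ w_star + 0.1 * rng.normal(size=n)

def empirical_risk_grad(w):
    # Full-batch oracle: one exact gradient of the empirical risk per call.
    return X.T @ (X @ w - y) / n

def sample_grad(w, i):
    # Stochastic oracle: gradient of the loss on a single data point.
    return X[i] * (X[i] @ w - y[i])

eta = 0.05

# Full-batch gradient descent: each iteration uses only the exact empirical-risk gradient.
w_gd = np.zeros(d)
for _ in range(100):
    w_gd -= eta * empirical_risk_grad(w_gd)

# One-pass SGD: each iteration touches a single example.
w_sgd = np.zeros(d)
for i in rng.permutation(n):
    w_sgd -= eta * sample_grad(w_sgd, i)

print("GD  empirical risk:", np.mean((X @ w_gd - y) ** 2) / 2)
print("SGD empirical risk:", np.mean((X @ w_sgd - y) ** 2) / 2)
```

Both procedures are first-order methods; the paper's separation concerns which gradient oracle they are allowed to query and how that affects generalization, not this toy objective.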
