Robust stochastic optimization with the proximal point method

07/31/2019
by Damek Davis, et al.

Standard results in stochastic convex optimization bound the number of samples that an algorithm needs to generate a point with small function value in expectation. In this work, we show that a wide class of such algorithms on strongly convex problems can be augmented with sub-exponential confidence bounds at an overhead cost that is only polylogarithmic in the condition number and the confidence level. We discuss consequences both for streaming and offline algorithms.
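To make the setting concrete, the following sketch shows a stochastic proximal point step for least-squares losses, one of the simplest instances of the method named in the title. This is an illustration of the generic proximal point update, not the paper's confidence-boosting construction; the function name, step size, and problem setup are choices made here for the example.

```python
import numpy as np

def stochastic_prox_point_ls(A, b, eta=1.0, n_iters=2000, seed=0):
    """Stochastic proximal point method for 0.5 * (a_i @ x - b_i)**2 losses.

    Each iteration exactly solves the proximal subproblem
        x_{k+1} = argmin_x 0.5*(a_i @ x - b_i)**2 + (1/(2*eta)) * ||x - x_k||**2,
    which for a single quadratic loss has a closed form (via Sherman-Morrison).
    """
    rng = np.random.default_rng(seed)
    n, d = A.shape
    x = np.zeros(d)
    for _ in range(n_iters):
        i = rng.integers(n)          # sample one data point uniformly
        a, bi = A[i], b[i]
        resid = a @ x - bi
        # Closed-form prox step: a damped projection toward {x : a @ x = bi}.
        x = x - eta * resid / (1.0 + eta * (a @ a)) * a
    return x

# Usage: recover x_star from noiseless linear measurements.
rng = np.random.default_rng(1)
A = rng.standard_normal((200, 5))
x_star = rng.standard_normal(5)
b = A @ x_star
x_hat = stochastic_prox_point_ls(A, b)
```

Because each subproblem is solved exactly, the step remains stable for large `eta`, which is one reason proximal point updates are considered more robust than plain stochastic gradient steps.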
