Robust stochastic optimization with the proximal point method

07/31/2019
by   Damek Davis, et al.

Standard results in stochastic convex optimization bound the number of samples that an algorithm needs to generate a point with small function value in expectation. In this work, we show that a wide class of such algorithms on strongly convex problems can be augmented with sub-exponential confidence bounds at an overhead cost that is only polylogarithmic in the condition number and the confidence level. We discuss consequences both for streaming and offline algorithms.
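The abstract's core idea, turning an in-expectation guarantee into a high-confidence one at small extra cost, can be illustrated with a standard confidence-boosting device: run several independent copies of a base stochastic algorithm and return the candidate closest (in median distance) to the others. This is only a sketch of the generic trick on a toy strongly convex problem, not the paper's proximal-point construction; the problem instance, step sizes, and helper names below are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sgd_strongly_convex(n_samples):
    """Averaged SGD on a toy 1-strongly-convex problem:
    minimize E[(x - z)^2 / 2] with z ~ N(1, 1), whose minimizer is x* = 1.
    Returns an estimate that is accurate in expectation, not w.h.p."""
    x, avg = 0.0, 0.0
    for t in range(1, n_samples + 1):
        z = rng.normal(1.0, 1.0)
        grad = x - z          # stochastic gradient of (x - z)^2 / 2
        x -= grad / t         # step size 1/(mu*t) with mu = 1
        avg += (x - avg) / t  # running average of the iterates
    return avg

def boost_confidence(candidates):
    """Robust aggregation: pick the candidate whose median distance to the
    other candidates is smallest. Running k independent copies and
    aggregating this way upgrades an expectation bound to a bound that
    fails with probability exponentially small in k."""
    c = np.asarray(candidates)
    dists = np.abs(c[:, None] - c[None, :])   # pairwise distances
    scores = np.median(dists, axis=1)          # robustness score per run
    return c[np.argmin(scores)]

runs = [sgd_strongly_convex(2000) for _ in range(11)]
estimate = boost_confidence(runs)
```

The overhead is a logarithmic number of extra runs in the target confidence level, which matches the flavor (though not the specifics) of the polylogarithmic overhead claimed in the abstract.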


Related research

- 07/05/2019: Algorithms of Robust Stochastic Optimization Based on Mirror Descent Method
- 06/22/2021: A stochastic linearized proximal method of multipliers for convex stochastic optimization with expectation constraints
- 06/10/2021: Near-Optimal High Probability Complexity Bounds for Non-Smooth Stochastic Optimization with Heavy-Tailed Noise
- 09/13/2019: A Stochastic Proximal Point Algorithm for Saddle-Point Problems
- 06/17/2020: Nearly Optimal Robust Method for Convex Compositional Problems with Heavy-Tailed Noise
- 02/25/2018: An Accelerated Method for Derivative-Free Smooth Stochastic Convex Optimization
- 05/26/2023: Computation of Reliability Statistics for Finite Samples of Success-Failure Experiments
