Regret minimization in stochastic non-convex learning via a proximal-gradient approach

10/13/2020
by Nadav Hallak, et al.

Motivated by applications in machine learning and operations research, we study regret minimization with stochastic first-order oracle feedback in online, constrained, and possibly non-smooth, non-convex problems. In this setting, minimizing the standard external regret is beyond the reach of first-order methods, so we focus instead on a local regret measure defined via a proximal-gradient mapping. To achieve no (local) regret, we develop a prox-grad method driven by stochastic first-order feedback, as well as a simpler method for the case where access to a perfect first-order oracle is possible. Both methods are min-max order-optimal, and we also establish a bound on the number of prox-grad queries these methods require. As an important application of our results, we also obtain a link between online and offline non-convex stochastic optimization, in the form of a new prox-grad scheme with complexity guarantees matching those obtained via variance reduction techniques.
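For intuition, the sketch below is a minimal, generic stochastic proximal-gradient loop on a toy box-constrained non-convex problem; it is not the authors' algorithm, and the names (prox_box, stochastic_grad, the step size eta, the toy loss) are hypothetical placeholders. It only illustrates the kind of object the abstract refers to: the prox-gradient mapping, whose norm serves as a stationarity surrogate in local-regret analyses.

```python
import numpy as np

def prox_box(y, lo=-1.0, hi=1.0):
    # Prox of the indicator of the box [lo, hi]^d, i.e. Euclidean projection.
    return np.clip(y, lo, hi)

def stochastic_grad(x, rng, noise=0.1):
    # Noisy first-order oracle for a toy smooth non-convex loss f(x) = sum(cos(x_i)).
    return -np.sin(x) + noise * rng.standard_normal(x.shape)

def prox_grad_online(x0, T=1000, eta=0.1, seed=0):
    # Run T rounds of a stochastic prox-grad step and track the norm of the
    # prox-gradient mapping G_eta(x) = (x - prox(x - eta*g)) / eta, a common
    # surrogate for local (stationarity-based) regret in non-convex settings.
    rng = np.random.default_rng(seed)
    x = x0.copy()
    residuals = []
    for _ in range(T):
        g = stochastic_grad(x, rng)
        x_next = prox_box(x - eta * g)
        residuals.append(np.linalg.norm((x - x_next) / eta))
        x = x_next
    return x, residuals

x_final, res = prox_grad_online(np.full(5, 2.0))
print(f"final prox-gradient residual: {res[-1]:.3f}")
```

A small prox-gradient residual certifies approximate stationarity of the current iterate, which is the sense in which "no local regret" is meaningful when external regret is out of reach for first-order methods.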


Related research

07/31/2017  Efficient Regret Minimization in Non-Convex Games
We consider regret minimization in repeated games with non-convex loss f...

08/15/2023  Projection-Free Methods for Stochastic Simple Bilevel Optimization with Convex Lower-level Problem
In this paper, we study a class of stochastic bilevel optimization probl...

03/09/2020  Robust Learning from Discriminative Feature Feedback
Recent work introduced the model of learning from discriminative feature...

09/27/2018  On the Regret Minimization of Nonconvex Online Gradient Ascent for Online PCA
Non-convex optimization with global convergence guarantees is gaining si...

04/17/2018  Two-Player Games for Efficient Non-Convex Constrained Optimization
In recent years, constrained optimization has become increasingly releva...

06/14/2022  Lazy Queries Can Reduce Variance in Zeroth-order Optimization
A major challenge of applying zeroth-order (ZO) methods is the high quer...

01/21/2020  SA vs SAA for population Wasserstein barycenter calculation
In Machine Learning and Optimization community there are two main approa...
