Efficient Regret Minimization in Non-Convex Games

07/31/2017
by Elad Hazan, et al.

We consider regret minimization in repeated games with non-convex loss functions. Minimizing the standard notion of regret is computationally intractable. Thus, we define a natural notion of regret which permits efficient optimization and generalizes offline guarantees for convergence to an approximate local optimum. We give gradient-based methods that achieve optimal regret, which in turn guarantee convergence to equilibrium in this framework.
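As a rough illustration of the gradient-based approach described above, the sketch below implements a time-smoothed online gradient descent: at each round it takes projected gradient steps on the average of the last w observed losses, which is one natural way to drive a windowed, local notion of regret down. The window size w, step size eta, fixed inner-step count, unit-ball feasible set, and toy quadratic losses are illustrative assumptions, not the paper's exact algorithm or constants.

# A minimal sketch (not the authors' exact algorithm) of time-smoothed online
# gradient descent for local-regret minimization with non-convex online losses.
# The window size w, step size eta, inner-step count, and feasible set (unit
# Euclidean ball) are illustrative placeholders.
import numpy as np

def project_to_ball(x, radius=1.0):
    """Project x onto the Euclidean ball of the given radius (assumed feasible set)."""
    norm = np.linalg.norm(x)
    return x if norm <= radius else x * (radius / norm)

def time_smoothed_ogd(grads, d, w=5, eta=0.1, inner_steps=20):
    """Play x_t, then descend on the averaged gradient of the last w observed losses.

    grads: list of callables, grads[t](x) returns the gradient of loss f_t at x.
    Returns the sequence of iterates x_1, ..., x_T.
    """
    T = len(grads)
    x = np.zeros(d)
    iterates = []
    for t in range(T):
        iterates.append(x.copy())
        # Window-averaged gradient over the last w losses (fewer at the start).
        window = grads[max(0, t - w + 1): t + 1]
        for _ in range(inner_steps):
            g = sum(gr(x) for gr in window) / len(window)
            x = project_to_ball(x - eta * g)
    return iterates

# Toy usage: quadratic losses with drifting centers stand in for the adversarial
# non-convex losses purely to exercise the code path.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    centers = [rng.normal(size=3) for _ in range(50)]
    grads = [lambda x, c=c: 2.0 * (x - c) for c in centers]
    xs = time_smoothed_ogd(grads, d=3)
    print("final iterate:", xs[-1])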


Related research:
- Regret minimization in stochastic non-convex learning via a proximal-gradient approach (10/13/2020)
- Online Non-convex Learning for River Pollution Source Identification (05/22/2020)
- Regret Minimization and Convergence to Equilibria in General-sum Markov Games (07/28/2022)
- A Local Regret in Nonconvex Online Learning (11/13/2018)
- Regret Minimization in Repeated Games: A Set-Valued Dynamic Programming Approach (03/16/2016)
- Combining No-regret and Q-learning (10/07/2019)
- On the Regret Minimization of Nonconvex Online Gradient Ascent for Online PCA (09/27/2018)
