## I Introduction

With machine learning algorithms increasingly being placed in more complex, real-world settings, there has been a renewed interest in continuous games [mertikopoulos:2019aa, zhang:2010aa, mazumdar:2018aa], and particularly zero-sum continuous games [mazumdar:2019aa, daskalakis:2018aa, goodfellow:2014aa, jin:2019aa]. Adversarial learning [daskalakis:2017aa, mertikopoulos:2018aa], robust reinforcement learning [li:2019aa, pinto:2017aa], and generative adversarial networks [goodfellow:2014aa] all make use of zero-sum games played on highly non-convex functions to achieve remarkable results. Though progress is being made, a theoretical understanding of the equilibria of such games is lacking.
In particular, many of the approaches to learning equilibria in these machine learning applications are gradient-based.
For instance, consider an adversarial learning setting where the goal is to learn a model or network by optimizing a function $f(x, y)$ over $x$, where $y$ is chosen by an adversary. A general approach to this problem is to study the coupled learning dynamics that arise when one *player* is descending $f$ and the other is ascending it---e.g., the simultaneous gradient dynamics $\dot{x} = -\nabla_x f(x, y)$, $\dot{y} = \nabla_y f(x, y)$.
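These coupled descent-ascent dynamics can be sketched in a few lines. The following is a minimal illustration, not taken from the paper: it runs simultaneous gradient descent-ascent on the classic bilinear zero-sum objective $f(x, y) = xy$ (a standard toy example whose unique Nash equilibrium is the origin), and checks the well-known behavior that the discrete-time iterates spiral *away* from the equilibrium.

```python
import numpy as np

# Toy zero-sum objective f(x, y) = x * y (illustrative choice, not from the
# source text). The x-player descends f; the y-player ascends f.
def grad_f(x, y):
    # df/dx = y, df/dy = x
    return y, x

def gda_step(x, y, lr=0.1):
    """One step of simultaneous gradient descent-ascent."""
    gx, gy = grad_f(x, y)
    return x - lr * gx, y + lr * gy  # x descends, y ascends

x, y = 1.0, 1.0
trajectory = [(x, y)]
for _ in range(100):
    x, y = gda_step(x, y)
    trajectory.append((x, y))

# For this bilinear game, each step multiplies the squared distance to the
# equilibrium (0, 0) by (1 + lr**2), so the iterates diverge.
print(np.hypot(*trajectory[-1]) > np.hypot(*trajectory[0]))  # True
```

This divergence on even a bilinear game is one concrete reason the equilibria of gradient-based learning in zero-sum settings require careful theoretical treatment.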
