Local Nash Equilibria are Isolated, Strict Local Nash Equilibria in `Almost All' Zero-Sum Continuous Games

02/03/2020 ∙ by Eric Mazumdar, et al. ∙ UC Berkeley ∙ University of Washington

We prove that differential Nash equilibria are generic amongst local Nash equilibria in continuous zero-sum games. That is, there exists an open-dense subset of zero-sum games for which local Nash equilibria are non-degenerate differential Nash equilibria. The result extends previous genericity results to the zero-sum setting, where we obtain an even stronger conclusion: local Nash equilibria are generically hyperbolic critical points. We further show that differential Nash equilibria of zero-sum games are structurally stable. These extensions are motivated by the recent renewed interest in zero-sum games within machine learning and optimization. Adversarial learning and generative adversarial network approaches, often touted as more robust than their non-adversarial counterparts, have zero-sum games at their heart, and many works on such methods proceed under the assumption that critical points are hyperbolic. Our results justify this assumption by showing that `almost all' zero-sum games admit local Nash equilibria that are hyperbolic.
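For reference, a minimal sketch of the conditions involved, using our own notation $f$, $x$, $y$ (the abstract itself fixes no notation): in a zero-sum game where player 1 minimizes a twice continuously differentiable cost $f(x, y)$ over $x$ and player 2 maximizes it over $y$, a point $(x^*, y^*)$ is a differential Nash equilibrium when

    $\nabla_x f(x^*, y^*) = 0, \quad \nabla_y f(x^*, y^*) = 0,$
    $\nabla^2_{xx} f(x^*, y^*) \succ 0, \quad \nabla^2_{yy} f(x^*, y^*) \prec 0.$

Such a critical point is hyperbolic when the Jacobian of the gradient dynamics $(-\nabla_x f, \nabla_y f)$, evaluated at $(x^*, y^*)$, has no eigenvalues on the imaginary axis.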


I Introduction

With machine learning algorithms increasingly being placed in complex, real-world settings, there has been a renewed interest in continuous games [mertikopoulos:2019aa, zhang:2010aa, mazumdar:2018aa], and particularly in zero-sum continuous games [mazumdar:2019aa, daskalakis:2018aa, goodfellow:2014aa, jin:2019aa]. Adversarial learning [daskalakis:2017aa, mertikopoulos:2018aa], robust reinforcement learning [li:2019aa, pinto:2017aa], and generative adversarial networks [goodfellow:2014aa] all make use of zero-sum games played on highly non-convex functions to achieve remarkable results.

Though progress is being made, a theoretical understanding of the equilibria of such games is lacking. In particular, many of the approaches to learning equilibria in these machine learning applications are gradient-based. For instance, consider an adversarial learning setting where the goal is to learn a model or network parametrized by $x$ by optimizing a function $f(x, y)$ over $x$, where $y$ is chosen by an adversary. A general approach to this problem is to study the coupled learning dynamics that arise when one player is descending $f$ and the other is ascending it---e.g., the gradient descent--ascent dynamics $\dot{x} = -\nabla_x f(x, y)$, $\dot{y} = \nabla_y f(x, y)$.
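As a concrete illustration, here is a minimal sketch of simultaneous gradient descent--ascent (not the authors' implementation; the quadratic cost $f(x, y) = x^2/2 + 2xy - y^2/2$, the step size, and all names are our own choices for this example):

    import numpy as np

    def f_grad(x, y):
        """Gradients of the illustrative zero-sum cost
        f(x, y) = 0.5*x**2 + 2*x*y - 0.5*y**2."""
        df_dx = x + 2 * y   # partial derivative with respect to x
        df_dy = 2 * x - y   # partial derivative with respect to y
        return df_dx, df_dy

    def gradient_descent_ascent(x0, y0, step=0.05, iters=500):
        """Simultaneous updates: x descends f while y ascends f."""
        x, y = x0, y0
        for _ in range(iters):
            df_dx, df_dy = f_grad(x, y)
            x, y = x - step * df_dx, y + step * df_dy
        return x, y

    if __name__ == "__main__":
        x_star, y_star = gradient_descent_ascent(1.0, 1.0)
        print(x_star, y_star)  # approaches the critical point (0, 0)

For this choice of $f$, the point $(0, 0)$ satisfies the differential Nash conditions above ($\nabla^2_{xx} f = 1 > 0$, $\nabla^2_{yy} f = -1 < 0$), and the Jacobian of the dynamics has eigenvalues $-1 \pm 2i$, so the equilibrium is hyperbolic and the iteration converges for small step sizes.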