Newton-type Methods for Minimax Optimization

by Guojun Zhang et al.

Differential games, in particular two-player sequential games (a.k.a. minimax optimization), have been an important modelling tool in applied science and have received renewed interest in machine learning due to many recent applications. To account for the sequential and nonconvex nature of these problems, new solution concepts and algorithms have been developed. In this work, we provide a detailed analysis of existing algorithms and relate them to two novel Newton-type algorithms. We argue that our Newton-type algorithms nicely complement existing ones in that (a) they converge faster to (strict) local minimax points; (b) they are much more effective when the problem is ill-conditioned; (c) their computational complexity remains similar. We verify our theoretical results by conducting experiments on training GANs.
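As a rough illustration of the idea (not the paper's specific algorithms), a Newton-type method for min_x max_y f(x, y) can be sketched as a Newton step on the joint gradient field F(z) = (∇_x f, -∇_y f): solve J_F(z) dz = -F(z) and update z ← z + dz. The toy quadratic saddle below is a hypothetical example chosen only so the step can be checked by hand.

```python
import numpy as np

# Toy saddle: f(x, y) = x**2 / 2 + 2*x*y - y**2 / 2, whose unique
# (local and global) minimax point is (0, 0).

def field(z):
    # Joint gradient field F(z) = (∇_x f, -∇_y f)
    x, y = z
    return np.array([x + 2.0 * y, -(2.0 * x - y)])

def field_jac(z):
    # Jacobian of F; constant because f is quadratic
    return np.array([[1.0, 2.0],
                     [-2.0, 1.0]])

def newton_step(z):
    # Solve J_F(z) dz = -F(z), then update z <- z + dz
    return z - np.linalg.solve(field_jac(z), field(z))

z = np.array([3.0, -4.0])
z = newton_step(z)
# For a quadratic f, the field is linear, so a single Newton step
# lands exactly on the saddle point (0, 0).
```

For nonquadratic f the step is applied iteratively, typically with damping or Hessian modification; the appeal over gradient descent-ascent is that the step is invariant to the conditioning of the quadratic model, which is one intuition behind claim (b) above.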


On Solving Minimax Optimization Locally: A Follow-the-Ridge Approach

Many tasks in modern machine learning can be formulated as finding equil...

A J-Symmetric Quasi-Newton Method for Minimax Problems

Minimax problems have gained tremendous attention across the optimizati...

Distributed Asynchronous Policy Iteration for Sequential Zero-Sum Games and Minimax Control

We introduce a contractive abstract dynamic programming framework and re...

HessianFR: An Efficient Hessian-based Follow-the-Ridge Algorithm for Minimax Optimization

Wide applications of differentiable two-player sequential games (e.g., i...

The Landscape of Nonconvex-Nonconcave Minimax Optimization

Minimax optimization has become a central tool for modern machine learni...

Efficient Implementation Of Newton-Raphson Methods For Sequential Data Prediction

We investigate the problem of sequential linear data prediction for real...

Minimax Optimization with Smooth Algorithmic Adversaries

This paper considers minimax optimization min_x max_y f(x, y) in the cha...