Newton-type Methods for Minimax Optimization

06/25/2020
by Guojun Zhang, et al.

Differential games, in particular two-player sequential games (a.k.a. minimax optimization), have long been an important modelling tool in applied science and have received renewed interest in machine learning due to many recent applications. To account for the sequential and nonconvex nature of these problems, new solution concepts and algorithms have been developed. In this work, we provide a detailed analysis of existing algorithms and relate them to two novel Newton-type algorithms. We argue that our Newton-type algorithms nicely complement existing ones in that (a) they converge faster to (strict) local minimax points; (b) they are much more effective when the problem is ill-conditioned; and (c) their computational complexity remains comparable. We verify our theoretical results with experiments on training GANs.
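To illustrate the general idea (not the paper's specific algorithms), here is a minimal sketch of a Newton-type update for a toy minimax problem min_x max_y f(x, y). The objective f(x, y) = x^2 + 2xy - y^2 and all function names are illustrative assumptions; the method solves a Newton system on the signed gradient field v = (df/dx, -df/dy), whose zeros include local minimax points.

```python
import numpy as np

# Toy objective (an assumption for illustration): f(x, y) = x^2 + 2xy - y^2.
# We minimize over x and maximize over y; (0, 0) is a strict local minimax
# point since f_xx = 2 > 0 and f_yy = -2 < 0.

def grad_field(z):
    x, y = z
    # v(z) = (df/dx, -df/dy): descend on x, ascend on y
    return np.array([2 * x + 2 * y, -(2 * x - 2 * y)])

def jacobian(z):
    # Jacobian of v with respect to (x, y); constant for this quadratic
    return np.array([[2.0, 2.0], [-2.0, 2.0]])

def newton_minimax(z0, steps=10, tol=1e-12):
    """Newton iteration on the gradient field: z <- z - J(z)^{-1} v(z)."""
    z = np.asarray(z0, dtype=float)
    for _ in range(steps):
        v = grad_field(z)
        if np.linalg.norm(v) < tol:
            break
        z = z - np.linalg.solve(jacobian(z), v)
    return z

print(newton_minimax([3.0, -1.5]))  # reaches the saddle (0, 0) in one step
```

Because the toy objective is quadratic, v is linear and a single Newton step lands exactly on the solution; on nonquadratic problems the iteration only converges locally, and practical variants modify the Hessian blocks to avoid attracting undesired stationary points.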

10/16/2019

On Solving Minimax Optimization Locally: A Follow-the-Ridge Approach

Many tasks in modern machine learning can be formulated as finding equil...
02/04/2022

A J-Symmetric Quasi-Newton Method for Minimax Problems

Minimax problems have gained tremendous attentions across the optimizati...
07/22/2021

Distributed Asynchronous Policy Iteration for Sequential Zero-Sum Games and Minimax Control

We introduce a contractive abstract dynamic programming framework and re...
05/23/2022

HessianFR: An Efficient Hessian-based Follow-the-Ridge Algorithm for Minimax Optimization

Wide applications of differentiable two-player sequential games (e.g., i...
06/15/2020

The Landscape of Nonconvex-Nonconcave Minimax Optimization

Minimax optimization has become a central tool for modern machine learni...
01/19/2017

Efficient Implementation Of Newton-Raphson Methods For Sequential Data Prediction

We investigate the problem of sequential linear data prediction for real...
06/02/2021

Minimax Optimization with Smooth Algorithmic Adversaries

This paper considers minimax optimization min_x max_y f(x, y) in the cha...