Newton-type Methods for Minimax Optimization
Differential games, in particular two-player sequential games (a.k.a. minimax optimization), have long been an important modelling tool in applied science and have received renewed interest in machine learning due to many recent applications. To account for the sequential and nonconvex nature of such problems, new solution concepts and algorithms have been developed. In this work, we provide a detailed analysis of existing algorithms and relate them to two novel Newton-type algorithms. We argue that our Newton-type algorithms nicely complement existing ones in that (a) they converge faster to (strict) local minimax points; (b) they are much more effective when the problem is ill-conditioned; (c) their computational complexity remains similar. We verify our theoretical results by conducting experiments on training GANs.
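The abstract does not spell out the paper's exact update rules, but the general idea behind Newton-type methods for minimax can be sketched as a Newton iteration on the game's gradient vector field. The sketch below assumes a simple quadratic toy objective (the coupling constant `b` and all function names are illustrative, not from the paper): for min_x max_y f(x, y), the field v = (∇_x f, −∇_y f) vanishes at stationary points, and each step solves a linear system with the field's Jacobian.

```python
import numpy as np

# Hypothetical toy game f(x, y) = 0.5*x^2 + b*x*y - 0.5*y^2,
# chosen so the gradient field is linear and easy to check by hand.
b = 2.0

def grad_field(z):
    """v(z) = (df/dx, -df/dy); its zeros are the stationary points of the game."""
    x, y = z
    return np.array([x + b * y, -(b * x - y)])

def jacobian(z):
    """Jacobian of v; constant for this quadratic example."""
    return np.array([[1.0, b], [-b, 1.0]])

def newton_step(z):
    """One Newton-type step on the field: z <- z - J(z)^{-1} v(z)."""
    return z - np.linalg.solve(jacobian(z), grad_field(z))

z = np.array([1.0, -0.5])
z = newton_step(z)
print(z)  # the field is linear, so a single step lands on the equilibrium (0, 0)
```

Because the step rescales the gradient field by the inverse Jacobian, it is insensitive to how differently curved the two players' objectives are, which is one way to read the abstract's claim about ill-conditioned problems.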