Zeroth-Order Algorithms for Smooth Saddle-Point Problems

09/21/2020
by Abdurakhmon Sadiev, et al.

In recent years, the importance of saddle-point problems in machine learning has grown, largely due to the popularity of generative adversarial networks (GANs). In this paper, we solve stochastic smooth (strongly) convex-concave saddle-point problems using zeroth-order oracles. Theoretical analysis shows that when the optimization set is a simplex, we lose only a factor of log n in the stochastic convergence term. The paper also provides an approach to saddle-point problems in which the oracle for one of the variables is zeroth-order and the oracle for the other is first-order. Finally, we implement zeroth-order and 1/2th-order methods to solve practical problems.
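To illustrate the general idea behind zeroth-order saddle-point methods, the sketch below runs gradient descent-ascent where both gradients are replaced by two-point finite-difference estimates along random directions. This is a generic sketch on a toy strongly convex-concave objective, not the algorithm analyzed in the paper; the objective, step size, and smoothing parameter are all illustrative choices.

```python
import numpy as np

# Toy strongly convex-concave objective with its saddle point at (0, 0):
#   f(x, y) = 0.5||x||^2 + x^T y - 0.5||y||^2
def f(x, y):
    return 0.5 * x @ x + x @ y - 0.5 * y @ y

def zo_grad(func, z, tau, rng):
    """Two-point zeroth-order gradient estimate along a random unit direction:
    g = d * (func(z + tau*u) - func(z - tau*u)) / (2*tau) * u,
    which is an unbiased estimate of the gradient up to O(tau) smoothing error."""
    d = z.size
    u = rng.standard_normal(d)
    u /= np.linalg.norm(u)
    return d * (func(z + tau * u) - func(z - tau * u)) / (2 * tau) * u

def zo_gda(dim=5, steps=2000, lr=0.05, tau=1e-4, seed=0):
    """Zeroth-order gradient descent-ascent: descend in x, ascend in y,
    using only function evaluations of f."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(dim)
    y = rng.standard_normal(dim)
    for _ in range(steps):
        gx = zo_grad(lambda v: f(v, y), x, tau, rng)
        gy = zo_grad(lambda v: f(x, v), y, tau, rng)
        x = x - lr * gx  # descent step on the minimization variable
        y = y + lr * gy  # ascent step on the maximization variable
    return x, y

x, y = zo_gda()
print(np.linalg.norm(x), np.linalg.norm(y))  # both norms should be close to zero
```

Because the objective is strongly convex in x and strongly concave in y, plain descent-ascent with a small step size contracts toward the saddle point even though each gradient estimate is noisy; the residual error is governed by the smoothing parameter tau and the step size.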


Related research

10/10/2021: Finding Second-Order Stationary Point for Nonconvex-Strongly-Concave Minimax Problem
We study the smooth minimax optimization problem of the form min_x max_y ...

03/29/2021: Saddle Point Optimization with Approximate Minimization Oracle
A major approach to saddle point optimization min_x max_y f(x, y) is a gr...

10/25/2016: Frank-Wolfe Algorithms for Saddle Point Problems
We extend the Frank-Wolfe (FW) optimization algorithm to solve constrain...

10/25/2020: Local SGD for Saddle-Point Problems
GAN is one of the most popular and commonly used neural network models. ...

05/12/2020: Gradient-Free Methods for Saddle-Point Problem
In the paper, we generalize the approach Gasnikov et. al, 2017, which al...

02/11/2022: Distributed saddle point problems for strongly concave-convex functions
In this paper, we propose GT-GDA, a distributed optimization method to s...
