Semi-Implicit Hybrid Gradient Methods with Application to Adversarial Robustness

02/21/2022
by Beomsu Kim et al.

Adversarial examples, crafted by adding imperceptible perturbations to natural inputs, can easily fool deep neural networks (DNNs). One of the most successful methods for training adversarially robust DNNs is solving a nonconvex-nonconcave minimax problem with an adversarial training (AT) algorithm. However, among the many AT algorithms, only Dynamic AT (DAT) and You Only Propagate Once (YOPO) guarantee convergence to a stationary point. In this work, we generalize the stochastic primal-dual hybrid gradient algorithm to develop semi-implicit hybrid gradient methods (SI-HGs) for finding stationary points of nonconvex-nonconcave minimax problems. SI-HGs have the convergence rate O(1/K), which improves upon the rate O(1/√K) of DAT and YOPO. We devise a practical variant of SI-HGs, and show that it outperforms other AT algorithms in terms of convergence speed and robustness.
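To make the setting concrete: adversarial training solves the minimax problem min_θ max_{‖δ‖∞ ≤ ε} E[ℓ(f_θ(x + δ), y)], alternating an inner maximization over the perturbation δ with an outer minimization over the network weights θ. The sketch below is a minimal, hypothetical PyTorch-style rendering of this generic stochastic gradient descent-ascent template (projected gradient ascent inner loop, SGD outer step); it is not the paper's SI-HG update, and all names (model, loss_fn, opt, eps, alpha, inner_steps) are illustrative assumptions.

import torch

def adversarial_training_step(model, x, y, loss_fn, opt,
                              eps=8/255, alpha=2/255, inner_steps=10):
    # Inner maximization: approximate max over delta of loss(f_theta(x + delta), y)
    # with projected gradient ascent on the L_inf ball of radius eps.
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(inner_steps):
        loss = loss_fn(model(x + delta), y)
        grad, = torch.autograd.grad(loss, delta)
        with torch.no_grad():
            delta += alpha * grad.sign()   # ascent step on the perturbation
            delta.clamp_(-eps, eps)        # project back onto the eps-ball
    # Outer minimization: one stochastic gradient step on the weights theta
    # at the (approximately) worst-case perturbation.
    opt.zero_grad()
    loss_fn(model(x + delta.detach()), y).backward()
    opt.step()

Plain PGD-based AT of this form carries no convergence guarantee in the nonconvex-nonconcave setting; per the abstract, DAT and YOPO modify it to reach a stationary point at rate O(1/√K), and the paper's semi-implicit hybrid gradient update improves this to O(1/K).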


Related research

07/02/2019 · Efficient Algorithms for Smooth Minimax Optimization
This paper studies first order methods for solving smooth minimax optimi...

01/23/2020 · Towards Robust DNNs: An Taylor Expansion-Based Method for Generating Powerful Adversarial Examples
Although deep neural networks (DNNs) have achieved successful applicatio...

04/21/2023 · Near-Optimal Decentralized Momentum Method for Nonconvex-PL Minimax Problems
Minimax optimization plays an important role in many machine learning ta...

10/14/2021 · Escaping Saddle Points in Nonconvex Minimax Optimization via Cubic-Regularized Gradient Descent-Ascent
The gradient descent-ascent (GDA) algorithm has been widely applied to s...

02/08/2023 · Decentralized Riemannian Algorithm for Nonconvex Minimax Problems
The minimax optimization over Riemannian manifolds (possibly nonconvex c...

09/12/2022 · Gradient-Free Methods for Deterministic and Stochastic Nonsmooth Nonconvex Optimization
Nonsmooth nonconvex optimization problems broadly emerge in machine lear...

12/10/2021 · Faster Single-loop Algorithms for Minimax Optimization without Strong Concavity
Gradient descent ascent (GDA), the simplest single-loop algorithm for no...
