BiAdam: Fast Adaptive Bilevel Optimization Methods

06/21/2021
by Feihu Huang, et al.

Bilevel optimization has recently attracted increased interest in machine learning due to its many applications, such as hyper-parameter optimization and policy optimization. Although several methods have recently been proposed to solve bilevel problems, they do not use adaptive learning rates. To fill this gap, in this paper we propose a class of fast and effective adaptive methods for solving bilevel optimization problems in which the outer problem is possibly nonconvex and the inner problem is strongly convex. Specifically, we propose a fast single-loop BiAdam algorithm based on the basic momentum technique, which achieves a sample complexity of Õ(ϵ^-4) for finding an ϵ-stationary point. We also propose an accelerated version of the BiAdam algorithm (VR-BiAdam) that uses a variance-reduction technique and reaches the best-known sample complexity of Õ(ϵ^-3). To further reduce the computation needed to estimate derivatives, we propose a fast single-loop stochastic approximated BiAdam algorithm (saBiAdam) that avoids the Hessian inverse and still achieves a sample complexity of Õ(ϵ^-4) without large batches. We further present an accelerated version of saBiAdam (VR-saBiAdam), which also reaches the best-known sample complexity of Õ(ϵ^-3). Our methods use the unified adaptive matrices of SUPER-ADAM <cit.>, which cover many types of adaptive learning rates, and our framework can flexibly incorporate momentum and variance-reduction techniques. In particular, we provide a convergence analysis framework for both constrained and unconstrained bilevel optimization. To the best of our knowledge, this is the first study of bilevel optimization methods with adaptive learning rates.
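For illustration, below is a minimal sketch of the general single-loop scheme the abstract describes: one stochastic gradient step on the inner variable, a hypergradient estimate for the outer variable, and then a momentum update scaled by an Adam-style diagonal adaptive matrix. The toy quadratic problem, the helper names (`inner_grad`, `hypergradient`), and all hyper-parameter values are assumptions chosen for demonstration; this is a sketch of the generic recipe, not the authors' exact BiAdam update or analysis.

```python
import numpy as np

# Sketch of a single-loop, momentum-based bilevel step with an Adam-style
# diagonal adaptive matrix. Toy problem and hyper-parameters are assumed
# for demonstration only.

rng = np.random.default_rng(0)
d = 5

def inner_grad(x, y):
    # Inner problem g(x, y) = 0.5 * ||y - x||^2, strongly convex in y,
    # with minimizer y*(x) = x.
    return y - x

def hypergradient(x, y):
    # Outer problem f(x, y) = 0.5 * ||y||^2. For this toy problem the
    # implicit-function hypergradient reduces to y, because the inner
    # Hessian is the identity and its inverse is trivial. saBiAdam-style
    # variants replace this Hessian inverse with a stochastic approximation.
    return y

x = rng.normal(size=d)   # outer variable
y = rng.normal(size=d)   # inner variable
m = np.zeros(d)          # momentum buffer (first moment)
v = np.zeros(d)          # second-moment buffer (diagonal adaptive matrix)
lam, beta1, beta2, eps = 0.5, 0.9, 0.999, 1e-8

for t in range(1000):
    y -= lam * inner_grad(x, y)         # one inner gradient step (single loop)
    g = hypergradient(x, y)             # hypergradient estimate
    m = beta1 * m + (1 - beta1) * g     # basic momentum
    v = beta2 * v + (1 - beta2) * g**2  # Adam-style second moment
    eta = 0.1 / np.sqrt(t + 1)          # decaying outer step size
    x -= eta * m / (np.sqrt(v) + eps)   # adaptive outer update

print(np.linalg.norm(x))  # the outer minimizer is x = 0
```

Running this drives ||x|| close to zero: the inner variable y tracks the inner minimizer y*(x) = x while the adaptively scaled momentum update moves x toward the outer minimizer, mirroring the single-loop structure described above.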



Related research

AdaGDA: Faster Adaptive Gradient Descent Ascent Methods for Minimax Optimization (06/30/2021)
In the paper, we propose a class of faster adaptive gradient descent asc...

Enhanced Bilevel Optimization via Bregman Distance (07/26/2021)
Bilevel optimization has been widely applied to many machine learning probl...

Bregman Gradient Policy Optimization (06/23/2021)
In this paper, we design a novel Bregman gradient policy optimization fr...

Momentum-Based Policy Gradient Methods (07/13/2020)
In the paper, we propose a class of efficient momentum-based policy grad...

A Fully Single Loop Algorithm for Bilevel Optimization without Hessian Inverse (12/09/2021)
In this paper, we propose a new Hessian inverse free Fully Single Loop A...

Breaking the Complexity Barrier in Compositional Minimax Optimization (08/18/2023)
Compositional minimax optimization is a pivotal yet under-explored chall...

SUPER-ADAM: Faster and Universal Framework of Adaptive Gradients (06/15/2021)
Adaptive gradient methods have shown excellent performance for solving m...
