Randomized Stochastic Variance-Reduced Methods for Stochastic Bilevel Optimization

05/05/2021
by Zhishuai Guo, et al.

In this paper, we consider non-convex stochastic bilevel optimization (SBO) problems, which have many applications in machine learning. Although numerous studies have proposed stochastic algorithms for solving these problems, they are limited in two respects: (i) their sample complexities are high and do not match the state-of-the-art result for non-convex stochastic optimization; (ii) their algorithms are tailored to problems with only one lower-level problem. When there are many lower-level problems, it could be prohibitive to process all of them at each iteration. To address these limitations, this paper proposes fast randomized stochastic algorithms for non-convex SBO problems. First, we present a stochastic method for non-convex SBO with a single lower-level problem and establish its sample complexity of O(1/ϵ^3) for finding an ϵ-stationary point under appropriate conditions, matching the lower bound for stochastic smooth non-convex optimization. Second, we present a randomized stochastic method for non-convex SBO with m>1 lower-level problems that processes only one lower-level problem at each iteration, and we establish a sample complexity no worse than O(m/ϵ^3), which can be better than simply processing all m lower-level problems at each iteration. To the best of our knowledge, this is the first work to consider SBO with many lower-level problems and to establish state-of-the-art sample complexity.
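The abstract does not spell out the update rules, but the two ingredients it names, variance reduction and processing only one randomly chosen lower-level problem per iteration, can be illustrated with a toy sketch. Everything below is a hypothetical illustration rather than the paper's algorithm: the quadratic lower-level objectives, the upper objective used in hypergrad, the STORM-style momentum parameter beta, and the step sizes eta_x and eta_y are all made-up placeholders, and a real SBO hypergradient would also involve second-order (Hessian/Jacobian) terms that this toy omits.

    # Hypothetical sketch, NOT the paper's algorithm: randomized, variance-reduced
    # updates for a bilevel problem with m lower-level problems. Each iteration
    # touches ONE lower-level problem chosen uniformly at random, and the
    # upper-level gradient estimate uses a STORM-style recursive momentum.
    import numpy as np

    rng = np.random.default_rng(0)
    m, d = 5, 10                                   # number of lower-level problems, dimension
    A = [np.eye(d) * (1.0 + i) for i in range(m)]  # toy strongly convex lower-level curvatures

    def grad_lower(i, x, y):
        # Stochastic gradient of the i-th (toy, quadratic) lower-level objective in y.
        noise = 0.01 * rng.standard_normal(d)
        return A[i] @ y - x + noise

    def hypergrad(x, ys, noise):
        # Stochastic upper-level gradient for a toy objective f(x, Y) = 0.5*||x - mean_i y_i||^2.
        # A genuine SBO hypergradient would also need implicit second-order terms (omitted here).
        return x - ys.mean(axis=0) + noise

    x = np.zeros(d)
    ys = np.zeros((m, d))                          # one maintained solution per lower-level problem
    x_prev = x.copy()
    v = hypergrad(x, ys, 0.01 * rng.standard_normal(d))   # momentum estimate of the hypergradient
    eta_x, eta_y, beta = 0.05, 0.1, 0.1            # placeholder step sizes / momentum parameter

    for t in range(1000):
        i = rng.integers(m)                        # sample ONE lower-level problem this iteration
        ys_new = ys.copy()
        ys_new[i] = ys[i] - eta_y * grad_lower(i, x, ys[i])

        noise = 0.01 * rng.standard_normal(d)      # shared sample for both evaluations below
        g_new = hypergrad(x, ys_new, noise)
        g_old = hypergrad(x_prev, ys, noise)
        v = g_new + (1.0 - beta) * (v - g_old)     # recursive momentum (variance reduction)

        x_prev, ys = x.copy(), ys_new
        x = x - eta_x * v

    print("final norm of hypergradient estimate:", np.linalg.norm(v))

The point of the sketch is only to show why the per-iteration cost is independent of m (one lower-level update per step) while the recursive momentum term reuses the same stochastic sample at consecutive iterates, which is the usual mechanism behind O(1/ϵ^3)-type rates in variance-reduced non-convex optimization.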


