
Efficient Gradient Approximation Method for Constrained Bilevel Optimization

02/03/2023
by Siyuan Xu, et al.

Bilevel optimization has been developed for many machine learning tasks with large-scale, high-dimensional data. This paper considers a constrained bilevel optimization problem in which the lower-level problem is convex with equality and inequality constraints and the upper-level problem is non-convex, so the overall objective function is non-convex and non-differentiable. To solve the problem, we develop a gradient-based approach, called the gradient approximation method, which determines the descent direction by computing several representative gradients of the objective function inside a neighborhood of the current estimate. We show that the algorithm asymptotically converges to the set of Clarke stationary points, and demonstrate its efficacy through experiments on hyperparameter optimization and meta-learning.
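The descent-direction computation described here resembles classical gradient sampling for nonsmooth optimization: draw points in a small ball around the current estimate, collect their gradients, and step along the minimum-norm element of their convex hull. The sketch below is an illustrative assumption, not the paper's exact algorithm; the test objective, sampling radius, and Frank-Wolfe solver for the min-norm subproblem are all choices made here for concreteness.

```python
import numpy as np

def min_norm_point(G, iters=300):
    """Approximate the minimum-norm point of the convex hull of the
    rows of G via Frank-Wolfe over the probability simplex."""
    m = G.shape[0]
    w = np.ones(m) / m
    for t in range(iters):
        grad_w = 2.0 * G @ (G.T @ w)       # gradient of ||G.T w||^2 in w
        i = int(np.argmin(grad_w))         # best simplex vertex
        gamma = 2.0 / (t + 2.0)            # standard FW step size
        e = np.zeros(m)
        e[i] = 1.0
        w = (1.0 - gamma) * w + gamma * e
    return G.T @ w                         # candidate descent direction

def gradient_sampling(grad, x0, eps=0.1, m=10, steps=100, lr=0.1, seed=0):
    """Gradient-sampling descent: at each step, sample m points in an
    eps-ball around x, gather their gradients (plus the gradient at x),
    and move against the min-norm convex combination."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        pts = x + eps * rng.uniform(-1.0, 1.0, size=(m, x.size))
        G = np.vstack([grad(p) for p in pts] + [grad(x)])
        d = min_norm_point(G)
        if np.linalg.norm(d) < 1e-8:       # approximately Clarke stationary
            break
        x = x - lr * d
    return x

# Nonsmooth test objective f(x) = |x_0| + (x_1 - 1)^2, minimized at (0, 1).
def grad_f(x):
    return np.array([np.sign(x[0]), 2.0 * (x[1] - 1.0)])

x_star = gradient_sampling(grad_f, x0=[1.0, 0.0])
```

Near the kink at x_0 = 0 the sampled gradients carry opposite signs, so their min-norm combination nearly cancels in that coordinate; this is what lets the method settle at a nonsmooth minimizer where a single subgradient would keep oscillating.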
