
Efficient Gradient Approximation Method for Constrained Bilevel Optimization

by Siyuan Xu, et al.

Bilevel optimization has been developed for many machine learning tasks with large-scale and high-dimensional data. This paper considers a constrained bilevel optimization problem in which the lower-level problem is convex with equality and inequality constraints and the upper-level problem is non-convex; the overall objective function is non-convex and non-differentiable. To solve the problem, we develop a gradient-based approach, called the gradient approximation method, which determines a descent direction by computing several representative gradients of the objective function within a neighborhood of the current estimate. We show that the algorithm asymptotically converges to the set of Clarke stationary points, and demonstrate its efficacy through experiments on hyperparameter optimization and meta-learning.
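The abstract's core idea, computing several gradients in a neighborhood of the current iterate and combining them into a descent direction for a nonsmooth objective, is in the spirit of classical gradient-sampling methods. The sketch below is not the authors' algorithm; it is a minimal gradient-sampling step on a simple nonsmooth test function, assuming an explicit gradient oracle `grad` and using the minimum-norm element of the convex hull of sampled gradients (a standard approximation of the Clarke subdifferential) as the negative descent direction.

```python
import numpy as np
from scipy.optimize import minimize

def gradient_sampling_step(grad, x, radius=1e-2, n_samples=10, seed=0):
    """One gradient-sampling step: sample gradients near x, return a descent
    direction. `grad`, `radius`, and `n_samples` are illustrative choices,
    not parameters from the paper."""
    rng = np.random.default_rng(seed)
    # Gradients at x and at random points inside a box of the given radius.
    pts = x + radius * rng.uniform(-1.0, 1.0, size=(n_samples, x.size))
    G = np.vstack([grad(x)] + [grad(p) for p in pts])  # shape (m, d)

    m = G.shape[0]
    # Minimum-norm point of conv{g_1, ..., g_m}: minimize ||G^T w||^2
    # over the probability simplex (a small QP, solved here with SLSQP).
    obj = lambda w: float(np.sum((G.T @ w) ** 2))
    cons = {"type": "eq", "fun": lambda w: w.sum() - 1.0}
    w0 = np.full(m, 1.0 / m)
    res = minimize(obj, w0, bounds=[(0.0, 1.0)] * m, constraints=cons)
    g = G.T @ res.x
    return -g  # near-zero norm signals approximate Clarke stationarity

# Illustrative nonsmooth objective f(x) = |x_0| + 0.5 * x_1^2.
f = lambda x: abs(x[0]) + 0.5 * x[1] ** 2
grad = lambda x: np.array([np.sign(x[0]), x[1]])

x = np.array([0.3, 1.0])
d = gradient_sampling_step(grad, x)
```

A full method would pair this step with a line search and a shrinking sampling radius; here a fixed small step along `d` already decreases `f` at the chosen point.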



