Delta-STN: Efficient Bilevel Optimization for Neural Networks using Structured Response Jacobians

10/26/2020
by Juhan Bae, et al.

Hyperparameter optimization of neural networks can be elegantly formulated as a bilevel optimization problem. While research on bilevel optimization of neural networks has been dominated by implicit differentiation and unrolling, hypernetworks such as Self-Tuning Networks (STNs) have recently gained traction due to their ability to amortize the optimization of the inner objective. In this paper, we diagnose several subtle pathologies in the training of STNs. Based on these observations, we propose the Δ-STN, an improved hypernetwork architecture which stabilizes training and optimizes hyperparameters much more efficiently than STNs. The key idea is to focus on accurately approximating the best-response Jacobian rather than the full best-response function; we achieve this by reparameterizing the hypernetwork and linearizing the network around the current parameters. We demonstrate empirically that our Δ-STN can tune regularization hyperparameters (e.g. weight decay, dropout, number of cutout holes) with higher accuracy, faster convergence, and improved stability compared to existing approaches.
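As a point of reference for the abstract's terminology, the bilevel formulation and the best-response Jacobian can be sketched as follows (the notation is ours, not taken from the paper):

% Outer problem: choose hyperparameters \lambda to minimize validation loss,
% subject to the inner problem defining the best-response weights w^*(\lambda).
\min_{\lambda} \; \mathcal{L}_{\mathrm{val}}\big(w^{*}(\lambda), \lambda\big)
\quad \text{s.t.} \quad
w^{*}(\lambda) \in \operatorname*{arg\,min}_{w} \; \mathcal{L}_{\mathrm{train}}(w, \lambda)

% A hypernetwork amortizes the inner problem by approximating w^*(\lambda).
% The idea highlighted in the abstract is to approximate the best response
% only locally, to first order around the current iterate (w_0, \lambda_0):
w^{*}(\lambda) \;\approx\; w_0 + J_0\,(\lambda - \lambda_0),
\qquad
J_0 = \left.\frac{\partial w^{*}}{\partial \lambda}\right|_{\lambda_0}
\;\;\text{(the best-response Jacobian)}

Under this first-order view, the hypernetwork only needs to capture J_0 accurately near the current parameters, rather than fit the full best-response function globally.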

Related research

03/07/2019
Self-Tuning Networks: Bilevel Optimization of Hyperparameters using Structured Best-Response Functions
Hyperparameter optimization can be formulated as a bilevel optimization ...

12/11/2022
CPMLHO: Hyperparameter Tuning via Cutting Plane and Mixed-Level Optimization
The hyperparameter optimization of neural networks can be expressed as a ...

07/05/2023
Implicit Differentiation for Hyperparameter Tuning the Weighted Graphical Lasso
We provide a framework and algorithm for tuning the hyperparameters of t...

12/20/2018
Calibrating Lévy Process from Observations Based on Neural Networks and Automatic Differentiation with Convergence Proofs
The Lévy process has been widely applied to mathematical finance, quantu...

05/04/2021
Implicit differentiation for fast hyperparameter selection in non-smooth convex learning
Finding the optimal hyperparameters of a model can be cast as a bilevel ...

01/26/2019
A Practical Bandit Method with Advantages in Neural Network Tuning
Stochastic bandit algorithms can be used for challenging non-convex opti...
