
DrMAD: Distilling Reverse-Mode Automatic Differentiation for Optimizing Hyperparameters of Deep Neural Networks

01/05/2016
by Jie Fu, et al.

The performance of deep neural networks is well-known to be sensitive to the setting of their hyperparameters. Recent advances in reverse-mode automatic differentiation allow for optimizing hyperparameters with gradients. The standard way of computing these gradients involves a forward and a backward pass. However, the backward pass typically requires a prohibitive amount of memory to store all the intermediate variables needed to exactly reverse the forward training procedure. In this work we propose a simple but effective method, DrMAD, which distills the knowledge of the forward pass into a shortcut path, through which we approximately reverse the training trajectory. Experiments on several image benchmark datasets show that DrMAD is at least 45 times faster than, and consumes 100 times less memory than, state-of-the-art methods for optimizing hyperparameters with gradients, with minimal loss in effectiveness. To the best of our knowledge, DrMAD is the first attempt to make it practical to automatically tune thousands of hyperparameters of deep neural networks. The code can be downloaded from https://github.com/bigaidream-projects/drmad
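
Concretely, the shortcut works as follows: the forward pass stores only the initial weights w0 and the final weights wT, and the backward pass replays an approximate trajectory by linearly interpolating between them while accumulating the hypergradient. Below is a minimal sketch in NumPy, assuming plain SGD (no momentum) and a single L2-regularization hyperparameter on a toy quadratic loss; the problem setup and variable names are illustrative, not the repository's implementation.

import numpy as np

# Toy regression problem (illustrative; DrMAD targets deep networks).
rng = np.random.default_rng(0)
X, Xv = rng.normal(size=(80, 5)), rng.normal(size=(20, 5))
w_true = rng.normal(size=5)
y = X @ w_true + 0.1 * rng.normal(size=80)
yv = Xv @ w_true + 0.1 * rng.normal(size=20)

def train_grad(w, lam):
    # Gradient of the L2-regularized training loss.
    return X.T @ (X @ w - y) / len(y) + lam * w

def hvp(v, lam):
    # Exact Hessian-vector product of the training loss (quadratic case).
    return X.T @ (X @ v) / len(y) + lam * v

lam, lr, T = 0.1, 0.05, 200

# Forward pass: ordinary SGD; keep only the endpoints w0 and wT.
w0 = np.zeros(5)
w = w0.copy()
for _ in range(T):
    w = w - lr * train_grad(w, lam)
wT = w

# Backward pass: reverse-mode hypergradient along the interpolated
# trajectory w_t ~ (1 - t/T) * w0 + (t/T) * wT, instead of stored iterates.
dw = Xv.T @ (Xv @ wT - yv) / len(yv)  # d(validation loss)/dw at wT
dlam = 0.0
for t in reversed(range(T)):
    wt = (1 - t / T) * w0 + (t / T) * wT  # DrMAD shortcut path
    dlam -= lr * (dw @ wt)                # d(grad)/d(lam) = w for L2
    dw -= lr * hvp(dw, lam)               # propagate adjoint through one SGD step

print("approximate hypergradient d(val loss)/d(lambda):", dlam)

A hyperparameter optimizer would then take a gradient step on lam using dlam and repeat the forward/backward cycle. Because only w0 and wT are stored, the memory cost is independent of the number of training iterations T, which is what makes tuning thousands of hyperparameters feasible.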


Related Research

02/06/2018 - Automatic differentiation of ODE integration
We discuss the calculation of the derivatives of ODE systems with the au...

11/02/2021 - Source-to-Source Automatic Differentiation of OpenMP Parallel Loops
This paper presents our work toward correct and efficient automatic diff...

04/22/2022 - You Only Linearize Once: Tangents Transpose to Gradients
Automatic differentiation (AD) is conventionally understood as a family ...

06/03/2020 - Adaptive Checkpoint Adjoint Method for Gradient Estimation in Neural ODE
Neural ordinary differential equations (NODEs) have recently attracted i...

03/25/2018 - Neural Nets via Forward State Transformation and Backward Loss Transformation
This article studies (multilayer perceptron) neural networks with an emp...

03/06/2017 - Forward and Reverse Gradient-Based Hyperparameter Optimization
We study two procedures (reverse-mode and forward-mode) for computing th...

01/28/2020 - f-BRS: Rethinking Backpropagating Refinement for Interactive Segmentation
Deep neural networks have become a mainstream approach to interactive se...

Code Repositories

drmad: Hyper-parameter Optimization with DrMAD and Hypero
https://github.com/bigaidream-projects/drmad