Tricks from Deep Learning

11/10/2016
by Atilim Gunes Baydin, et al.

The deep learning community has devised a diverse set of methods that make it practical to optimize, using gradients and large datasets, large and highly complex models with deeply cascaded nonlinearities. Taken as a whole, these methods constitute a breakthrough, allowing computational structures which are quite wide, very deep, and with an enormous number and variety of free parameters to be effectively optimized. The result now dominates much of practical machine learning, with applications in machine translation, computer vision, and speech recognition. Many of these methods, viewed through the lens of algorithmic differentiation (AD), can be seen as either addressing issues with the gradient itself or finding ways of achieving increased efficiency using tricks that are AD-related but not provided by current AD systems. The goal of this paper is to explain not just those methods of most relevance to AD, but also the technical constraints and mindset which led to their discovery. After explaining this context, we present a "laundry list" of methods developed by the deep learning community. Two of these are discussed in further mathematical detail: a way to dramatically reduce the size of the tape when performing reverse-mode AD on a (theoretically) time-reversible process like an ODE integrator, and a new mathematical insight that allows for the implementation of a stochastic Newton's method.
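
The first of those two ideas, shrinking the reverse-mode tape for a time-reversible process, can be illustrated with a rough sketch. The example below is an illustration only, not code from the paper: it uses JAX, a leapfrog integrator as the reversible process, and made-up names (`integrate`, `leapfrog_step`, `grad_U`). Because each leapfrog step can be inverted algebraically, the backward pass reconstructs earlier states by stepping backwards instead of storing every intermediate state, so only the final state is kept.

```python
# Sketch: O(1)-memory reverse-mode AD through a time-reversible integrator.
# Illustrative assumption of how the trick might be realized; not the paper's code.
import jax
import jax.numpy as jnp
from functools import partial

def leapfrog_step(q, p, h, grad_U):
    """One leapfrog step of Hamiltonian dynamics (time-reversible)."""
    p = p - 0.5 * h * grad_U(q)
    q = q + h * p
    p = p - 0.5 * h * grad_U(q)
    return q, p

def leapfrog_step_inverse(q, p, h, grad_U):
    """Exact algebraic inverse of leapfrog_step (up to floating-point round-off)."""
    p = p + 0.5 * h * grad_U(q)
    q = q - h * p
    p = p + 0.5 * h * grad_U(q)
    return q, p

@partial(jax.custom_vjp, nondiff_argnums=(2, 3, 4))
def integrate(q0, p0, h, n_steps, grad_U):
    for _ in range(n_steps):
        q0, p0 = leapfrog_step(q0, p0, h, grad_U)
    return q0, p0

def integrate_fwd(q0, p0, h, n_steps, grad_U):
    qf, pf = integrate(q0, p0, h, n_steps, grad_U)
    # Tape only the final state: O(1) memory instead of O(n_steps).
    return (qf, pf), (qf, pf)

def integrate_bwd(h, n_steps, grad_U, residuals, cotangents):
    q, p = residuals
    q_bar, p_bar = cotangents
    step = lambda q, p: leapfrog_step(q, p, h, grad_U)
    for _ in range(n_steps):
        # Reconstruct the state *before* this step by inverting the step...
        q_prev, p_prev = leapfrog_step_inverse(q, p, h, grad_U)
        # ...then pull the cotangent back through the forward step.
        _, vjp = jax.vjp(step, q_prev, p_prev)
        q_bar, p_bar = vjp((q_bar, p_bar))
        q, p = q_prev, p_prev
    return q_bar, p_bar

integrate.defvjp(integrate_fwd, integrate_bwd)

# Example: gradient of a function of the final state w.r.t. the initial state,
# without ever storing the trajectory.
grad_U = jax.grad(lambda q: 0.5 * jnp.sum(q ** 2))   # harmonic potential
loss = lambda q0, p0: jnp.sum(integrate(q0, p0, 0.01, 100, grad_U)[0] ** 2)
dq0, dp0 = jax.grad(loss, argnums=(0, 1))(jnp.ones(3), jnp.zeros(3))
```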
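The stochastic Newton's method can likewise be sketched. The snippet below is a generic truncated-Newton (Hessian-free) style step built from minibatch Hessian-vector products and a few conjugate-gradient iterations; it is an assumption about what such a step could look like, not the specific mathematical insight the paper derives, and the names (`hvp`, `stochastic_newton_step`, `damping`, `cg_iters`) are illustrative.

```python
# Sketch: one stochastic Newton-type step using AD Hessian-vector products.
# Illustrative only; parameters are assumed flattened into a single 1-D array.
import jax
import jax.numpy as jnp

def hvp(loss, params, batch, v):
    """Hessian-vector product via forward-over-reverse AD (no explicit Hessian)."""
    grad_fn = lambda p: jax.grad(loss)(p, batch)
    return jax.jvp(grad_fn, (params,), (v,))[1]

def stochastic_newton_step(loss, params, batch, damping=1e-3, cg_iters=10):
    """Approximately solve (H + damping*I) d = -g on one minibatch with CG."""
    g = jax.grad(loss)(params, batch)
    matvec = lambda v: hvp(loss, params, batch, v) + damping * v
    d = jnp.zeros_like(g)
    r = -g - matvec(d)          # residual of the damped Newton system
    p = r
    rs = jnp.dot(r, r)
    for _ in range(cg_iters):   # plain conjugate gradient
        Ap = matvec(p)
        alpha = rs / jnp.dot(p, Ap)
        d = d + alpha * p
        r = r - alpha * Ap
        rs_new = jnp.dot(r, r)
        p = r + (rs_new / rs) * p
        rs = rs_new
    return params + d

# Toy least-squares example (illustrative only).
def loss(w, batch):
    x, y = batch
    return jnp.mean((x @ w - y) ** 2)

key = jax.random.PRNGKey(0)
x = jax.random.normal(key, (32, 5))
y = x @ jnp.arange(1.0, 6.0)
w = stochastic_newton_step(loss, jnp.zeros(5), (x, y))
```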


Related research

Demystifying Differentiable Programming: Shift/Reset the Penultimate Backpropagator (03/27/2018)
Forward-Mode Automatic Differentiation in Julia (07/26/2016)
A Benchmark of Selected Algorithmic Differentiation Tools on Some Problems in Computer Vision and Machine Learning (07/26/2018)
Differentiate Everything with a Reversible Domain-Specific Language (03/10/2020)
DiffSharp: An AD Library for .NET Languages (11/10/2016)
Forward- or Reverse-Mode Automatic Differentiation: What's the Difference? (12/21/2022)
Computing Sparse Jacobians and Hessians Using Algorithmic Differentiation (11/09/2021)
