The role of a layer in deep neural networks: a Gaussian Process perspective

02/06/2019
by Oded Ben-David, et al.

A fundamental question in deep learning concerns the role played by individual layers in a deep neural network (DNN) and the transferable properties of the data representations they learn. To the extent that layers have clear roles, one should be able to optimize them separately using layer-wise loss functions. Such loss functions would characterize the set of good data representations at each depth of the network and provide a target for layer-wise greedy optimization (LEGO). Here we introduce the Deep Gaussian Layer-wise loss functions (DGLs), which we believe are the first supervised layer-wise loss functions that are both explicit and competitive in terms of accuracy. The DGLs have a solid theoretical foundation: they become exact for wide DNNs, and we find that they can monitor standard end-to-end training. Being highly structured and symmetric, the DGLs provide a promising analytic route to understanding the internal representations generated by DNNs.
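To make the layer-wise greedy optimization (LEGO) scheme the abstract refers to concrete, the sketch below trains each layer of a small network against its own layer-wise loss while keeping earlier layers frozen. The abstract does not give the DGL formula, so layerwise_loss, the linear probes, and all sizes and hyperparameters here are hypothetical stand-ins (a cross-entropy linear-probe loss), not the paper's method.

import torch
import torch.nn as nn

def layerwise_loss(features, labels, probe):
    # Hypothetical stand-in: the abstract does not state the DGL form,
    # so we score a layer's representation with a linear probe and
    # cross-entropy instead.
    return nn.functional.cross_entropy(probe(features), labels)

def greedy_layerwise_train(layers, probes, loader, epochs=1, lr=1e-3):
    # LEGO-style loop: optimize one layer at a time against its own
    # layer-wise loss, keeping all previously trained layers frozen.
    trained = []
    for layer, probe in zip(layers, probes):
        params = list(layer.parameters()) + list(probe.parameters())
        opt = torch.optim.Adam(params, lr=lr)
        for _ in range(epochs):
            for x, y in loader:
                with torch.no_grad():  # earlier layers stay frozen
                    for prev in trained:
                        x = prev(x)
                loss = layerwise_loss(layer(x), y, probe)
                opt.zero_grad()
                loss.backward()
                opt.step()
        trained.append(layer)
    return nn.Sequential(*trained)

# Example wiring (all sizes illustrative): three layers for flattened
# 28x28 inputs, each with its own linear probe into the 10 classes.
layers = [nn.Sequential(nn.Linear(784, 256), nn.ReLU()),
          nn.Sequential(nn.Linear(256, 256), nn.ReLU()),
          nn.Linear(256, 10)]
probes = [nn.Linear(256, 10), nn.Linear(256, 10), nn.Identity()]

Because the frozen layers run inside torch.no_grad(), their outputs are detached, so each layer's gradient step depends only on its own loss; this independence is the defining property of greedy layer-wise training.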
