How deep is deep enough? - Optimizing deep neural network architecture

11/05/2018
by Achim Schilling, et al.

Deep neural networks use stacked layers of feature detectors to repeatedly transform the input data, so that structurally different classes of input become well separated in the final layer. While the method has proven extremely powerful in many applications, its success depends critically on the correct choice of hyperparameters, in particular the number of network layers. Here we introduce a new measure, called the generalized discrimination value (GDV), which quantifies how well different object classes separate in each layer. By construction, the GDV is invariant to translation and scaling of the input data, independent of the number of features, and independent of the number and permutation of the neurons within a layer. We compute the GDV in each layer of a Deep Belief Network trained unsupervised on the MNIST data set. Strikingly, we find that the GDV first improves with each successive network layer, but then degrades again beyond layer 30, thereby indicating the optimal network depth for this classification task. Our further investigations suggest that the GDV can serve as a universal tool to determine the optimal number of layers in deep neural networks for any type of input data.
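The abstract states the GDV's invariance properties but not its formula. Below is a minimal NumPy sketch of a GDV-style class-separability measure, assuming the construction those properties imply: per-dimension z-scoring gives invariance to translation and scaling of the input, averaging over pairwise Euclidean distances within and between classes gives invariance to neuron permutation, and a 1/sqrt(D) factor removes the dependence on layer width. The 0.5 rescaling and the sign convention (more negative = better separation) are assumptions made for this sketch, not details confirmed by the abstract.

```python
import numpy as np

def gdv_sketch(features, labels):
    """GDV-style separability measure (assumed formulation, see lead-in).

    features: (n_samples, n_dims) activations of one network layer
    labels:   (n_samples,) integer class labels
    Returns a scalar; more negative is taken to mean better separation.
    """
    X = np.asarray(features, dtype=float)
    y = np.asarray(labels)
    # z-score each dimension: yields invariance to translation and
    # scaling of the input data
    X = (X - X.mean(axis=0)) / (X.std(axis=0) + 1e-12)
    X *= 0.5  # rescaling assumed for this sketch
    classes = np.unique(y)
    D = X.shape[1]

    def mean_dist(A, B=None):
        # mean pairwise Euclidean distance within A, or between A and B
        if B is None:
            d = np.sqrt(((A[:, None, :] - A[None, :, :]) ** 2).sum(-1))
            return d[np.triu_indices(len(A), k=1)].mean()
        return np.sqrt(((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)).mean()

    # mean intra-class distance, averaged over classes
    intra = np.mean([mean_dist(X[y == c]) for c in classes])
    # mean inter-class distance, averaged over class pairs
    inter = np.mean([mean_dist(X[y == c1], X[y == c2])
                     for i, c1 in enumerate(classes)
                     for c2 in classes[i + 1:]])
    # 1/sqrt(D) makes values comparable across layers of different width
    return (intra - inter) / np.sqrt(D)
```

Applied to the activations of each layer in turn, the layer yielding the most negative value would mark the depth beyond which additional layers no longer improve class separation, which is how the abstract describes locating the optimum at layer 30.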

Related research

12/14/2016 - Permutation-equivariant neural networks applied to dynamics prediction
The introduction of convolutional layers greatly advanced the performanc...

10/16/2018 - Deep Neural Maps
We introduce a new unsupervised representation learning and visualizatio...

11/23/2021 - Critical initialization of wide and deep neural networks through partial Jacobians: general theory and applications to LayerNorm
Deep neural networks are notorious for defying theoretical treatment. Ho...

07/09/2019 - Characterizing Inter-Layer Functional Mappings of Deep Learning Models
Deep learning architectures have demonstrated state-of-the-art performan...

05/20/2017 - Forward Thinking: Building Deep Random Forests
The success of deep neural networks has inspired many to wonder whether ...

12/14/2016 - Deep Function Machines: Generalized Neural Networks for Topological Layer Expression
In this paper we propose a generalization of deep neural networks called...

06/08/2021 - The Randomness of Input Data Spaces is an A Priori Predictor for Generalization
Over-parameterized models can perfectly learn various types of data dist...
