Identity Matters in Deep Learning

11/14/2016
by Moritz Hardt, et al.

An emerging design principle in deep learning is that each layer of a deep artificial neural network should be able to easily express the identity transformation. This idea not only motivated various normalization techniques, such as batch normalization, but was also key to the immense success of residual networks. In this work, we put the principle of identity parameterization on a more solid theoretical footing alongside further empirical progress. We first give a strikingly simple proof that arbitrarily deep linear residual networks have no spurious local optima. The same result for linear feed-forward networks in their standard parameterization is substantially more delicate. Second, we show that residual networks with ReLU activations have universal finite-sample expressivity in the sense that the network can represent any function of its sample provided that the model has more parameters than the sample size. Directly inspired by our theory, we experiment with a radically simple residual architecture consisting of only residual convolutional layers and ReLU activations, but no batch normalization, dropout, or max pooling. Our model improves significantly on previous all-convolutional networks on the CIFAR10, CIFAR100, and ImageNet classification benchmarks.
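
To make the identity parameterization concrete, the following is a minimal sketch (an illustration assuming PyTorch, not the authors' released code; the class and variable names are hypothetical) of a residual convolutional block in the spirit of the architecture described above: each block computes its input plus a ReLU-based convolutional residual, so driving the convolution weights toward zero recovers the identity transformation.

```python
# Minimal sketch of an identity-friendly residual block (assumes PyTorch;
# names are illustrative, not the authors' released code).
import torch
import torch.nn as nn

class IdentityResidualBlock(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        # Two 3x3 convolutions; weights near zero keep the block close
        # to the identity transformation.
        self.conv1 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.conv2 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Residual branch: conv -> ReLU -> conv, with no batch normalization,
        # dropout, or max pooling, matching the all-convolutional design.
        return x + self.conv2(torch.relu(self.conv1(x)))

# Usage: stack a few blocks and run a CIFAR-sized input through them.
if __name__ == "__main__":
    net = nn.Sequential(*[IdentityResidualBlock(16) for _ in range(3)])
    x = torch.randn(1, 16, 32, 32)
    print(net(x).shape)  # torch.Size([1, 16, 32, 32])
```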

