Energy Propagation in Deep Convolutional Neural Networks

04/12/2017
by Thomas Wiatowski, et al.

Many practical machine learning tasks employ very deep convolutional neural networks. Such large depths pose formidable computational challenges in training and operating the network. It is therefore important to understand how fast the energy contained in the propagated signals (a.k.a. feature maps) decays across layers. In addition, it is desirable that the feature extractor generated by the network be informative in the sense of the only signal mapping to the all-zeros feature vector being the zero input signal. This "trivial null-space" property can be accomplished by asking for "energy conservation" in the sense of the energy in the feature vector being proportional to that of the corresponding input signal. This paper establishes conditions for energy conservation (and thus for a trivial null-space) for a wide class of deep convolutional neural network-based feature extractors and characterizes corresponding feature map energy decay rates. Specifically, we consider general scattering networks employing the modulus non-linearity and we find that under mild analyticity and high-pass conditions on the filters (which encompass, inter alia, various constructions of Weyl-Heisenberg filters, wavelets, ridgelets, (α)-curvelets, and shearlets) the feature map energy decays at least polynomially fast. For broad families of wavelets and Weyl-Heisenberg filters, the guaranteed decay rate is shown to be exponential. Moreover, we provide handy estimates of the number of layers needed to have at least ((1-ε)· 100)% of the input signal energy be contained in the feature vector.
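The mechanism described above — feature maps propagated through filters and a modulus non-linearity, with energy gradually emitted into the feature vector — can be illustrated numerically. The sketch below is a toy 1-D construction, not the paper's general semi-discrete frame setup: it uses a Haar-type Parseval filter pair (low-pass output channel, high-pass propagation channel) and circular convolutions, so that at every layer the captured and remaining energies sum exactly to the input energy, while the modulus pushes energy toward low frequencies and makes the remaining (propagated) energy decay across layers.

```python
import numpy as np

def cconv(x, h):
    """Circular convolution via FFT (keeps Parseval's identity exact)."""
    return np.fft.ifft(np.fft.fft(x) * np.fft.fft(h, n=len(x))).real

def energy(x):
    return float(np.sum(np.abs(x) ** 2))

# Haar-type Parseval pair: |H_lo(w)|^2 + |H_hi(w)|^2 = 1 for all w,
# so each layer splits every map's energy exactly into an output part
# (low-pass) and a propagated part (modulus of high-pass).
h_lo = np.array([0.5, 0.5])
h_hi = np.array([0.5, -0.5])

rng = np.random.default_rng(0)
x = rng.standard_normal(512)
E_in = energy(x)

maps = [x]       # feature maps propagated to the current layer
captured = 0.0   # energy emitted into the feature vector so far
for n in range(1, 8):
    out = [cconv(u, h_lo) for u in maps]           # layer-n output features
    maps = [np.abs(cconv(u, h_hi)) for u in maps]  # propagate |u * h_hi|
    captured += sum(energy(v) for v in out)
    remaining = sum(energy(u) for u in maps)
    print(f"layer {n}: captured {captured / E_in:.4f}, "
          f"remaining {remaining / E_in:.4f}")
```

Running this shows the two effects the abstract describes: energy conservation (captured + remaining stays equal to the input energy at every depth, i.e. a trivial null-space) and fast decay of the propagated energy, since the modulus demodulates each map toward low frequencies where the high-pass filter annihilates it.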


