On the Stability of Deep Networks

12/18/2014
by Raja Giryes, et al.

In this work we study the properties of deep neural networks (DNNs) with random weights. We formally prove that these networks perform a distance-preserving embedding of the data. Based on this result, we draw conclusions about the required size of the training data and the networks' structure. A longer version of this paper, with additional results and details, can be found in (Giryes et al., 2015); there we prove that DNNs with random Gaussian weights perform a distance-preserving embedding of the data, with a special treatment for in-class and out-of-class data.
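The distance-preserving behavior of a random Gaussian layer can be illustrated with a short numerical sketch. This is a Johnson-Lindenstrauss-style demonstration of the linear part only, not the paper's full construction: the dimensions, scaling, and omission of the nonlinearity are assumptions made here for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

n, m = 64, 2048          # input dimension, layer width (illustrative choices)
x = rng.normal(size=n)
y = rng.normal(size=n)

# Random Gaussian layer, scaled by 1/sqrt(m) so the linear map is
# approximately norm-preserving for wide layers. The paper's analysis
# also handles the pointwise nonlinearity, which this sketch omits.
W = rng.normal(size=(m, n)) / np.sqrt(m)

d_in = np.linalg.norm(x - y)
d_out = np.linalg.norm(W @ x - W @ y)
ratio = d_out / d_in     # concentrates near 1 as the width m grows
```

For wide layers the distortion `|ratio - 1|` shrinks on the order of `1/sqrt(m)`, which is the sense in which the random embedding preserves pairwise distances.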


