Deep Neural Networks with Random Gaussian Weights: A Universal Classification Strategy?

04/30/2015
by   Raja Giryes, et al.

Three important properties of a classification machinery are: (i) the system preserves the core information of the input data; (ii) the training examples convey information about unseen data; and (iii) the system is able to treat points from different classes differently. In this work we show that these fundamental properties are satisfied by the architecture of deep neural networks. We formally prove that these networks with random Gaussian weights perform a distance-preserving embedding of the data, with a special treatment for in-class and out-of-class data. Similar points at the input of the network are likely to have a similar output. The theoretical analysis of deep networks presented here exploits tools used in the compressed sensing and dictionary learning literature, thereby making a formal connection between these important topics. The derived results allow drawing conclusions on the metric learning properties of the network and their relation to its structure, as well as providing bounds on the required size of the training set such that the training examples would faithfully represent the unseen data. The results are validated with state-of-the-art trained networks.
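As a rough empirical companion to the abstract's claim that networks with random Gaussian weights act as approximate distance-preserving embeddings, the sketch below builds a small ReLU network with i.i.d. Gaussian weights and compares pairwise distances before and after the mapping. It is a minimal illustration, not the paper's formal construction: the layer widths, the 1/sqrt(fan_in) scaling, and the use of plain NumPy are all assumptions made for the example.

import numpy as np

rng = np.random.default_rng(0)

def random_relu_network(dims):
    """Return random Gaussian weight matrices for the given layer sizes."""
    # Weights are i.i.d. Gaussian, scaled by 1/sqrt(fan_in) so layer norms stay comparable.
    return [rng.normal(0.0, 1.0 / np.sqrt(d_in), size=(d_out, d_in))
            for d_in, d_out in zip(dims[:-1], dims[1:])]

def forward(weights, x):
    """Apply each random linear layer followed by a ReLU nonlinearity."""
    for W in weights:
        x = np.maximum(W @ x, 0.0)
    return x

# Embed a small set of random points and compare pairwise distances
# at the input and at the output of the random network.
dims = [64, 256, 256, 128]          # input dimension and three hidden widths (illustrative)
weights = random_relu_network(dims)

X = rng.normal(size=(20, dims[0]))  # 20 random input points
Y = np.stack([forward(weights, x) for x in X])

def pairwise_dists(A):
    diffs = A[:, None, :] - A[None, :, :]
    return np.linalg.norm(diffs, axis=-1)

D_in, D_out = pairwise_dists(X), pairwise_dists(Y)
mask = ~np.eye(len(X), dtype=bool)
ratios = D_out[mask] / D_in[mask]
print("output/input distance ratio: min %.2f, mean %.2f, max %.2f"
      % (ratios.min(), ratios.mean(), ratios.max()))

Under these assumptions the printed ratios stay within a fairly narrow band, which is the qualitative behavior the distance-preservation result describes; the paper's theorems make this precise and distinguish how in-class and out-of-class distances are treated.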


Related research

12/18/2014 - On the Stability of Deep Networks
In this work we study the properties of deep neural networks (DNN) with ...

01/08/2019 - Comments on "Deep Neural Networks with Random Gaussian Weights: A Universal Classification Strategy?"
In a recently published paper [1], it is shown that deep neural networks...

02/05/2021 - Hyperspherical embedding for novel class classification
Deep learning models have become increasingly useful in many different i...

08/25/2019 - RandNet: deep learning with compressed measurements of images
Principal component analysis, dictionary learning, and auto-encoders are...

02/09/2018 - Information Planning for Text Data
Information planning enables faster learning with fewer training example...

07/11/2023 - Fundamental limits of overparametrized shallow neural networks for supervised learning
We carry out an information-theoretical analysis of a two-layer neural n...

07/14/2022 - In-memory Realization of In-situ Few-shot Continual Learning with a Dynamically Evolving Explicit Memory
Continually learning new classes from a few training examples without fo...
