Over-Sampling in a Deep Neural Network

02/12/2015
by Andrew J. R. Simpson, et al.

Deep neural networks (DNN) are the state of the art on many engineering problems such as computer vision and audition. A key factor in the success of the DNN is scalability: bigger networks work better. However, the reason for this scalability is not yet well understood. Here, we interpret the DNN as a discrete system of linear filters followed by nonlinear activations that is subject to the laws of sampling theory. In this context, we demonstrate that over-sampled networks are more selective, learn faster, and learn more robustly. Our findings may ultimately generalize to the human brain.
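As a minimal sketch of the abstract's framing (not the paper's experiment): each hidden unit below is a linear filter followed by a sigmoid nonlinear activation, and over-sampling is modeled simply as giving the hidden layer more units than the task strictly needs. The XOR task, layer sizes, learning rate, and epoch count are all illustrative assumptions.

```python
# Minimal sketch: a critically sampled vs. an over-sampled hidden layer
# on XOR. All hyperparameters are illustrative assumptions, not taken
# from the paper.
import numpy as np

# XOR toy task: inputs and targets.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train(n_hidden, epochs=5000, lr=1.0, seed=0):
    """Train a 2-n_hidden-1 sigmoid MLP with plain gradient descent."""
    rng = np.random.default_rng(seed)
    W1 = rng.normal(0.0, 1.0, (2, n_hidden))
    b1 = np.zeros(n_hidden)
    W2 = rng.normal(0.0, 1.0, (n_hidden, 1))
    b2 = np.zeros(1)
    for _ in range(epochs):
        # Forward pass: linear filters, then nonlinear activations.
        h = sigmoid(X @ W1 + b1)
        out = sigmoid(h @ W2 + b2)
        # Backward pass for a squared-error loss.
        d_out = (out - y) * out * (1 - out)
        d_h = (d_out @ W2.T) * h * (1 - h)
        W2 -= lr * h.T @ d_out
        b2 -= lr * d_out.sum(axis=0)
        W1 -= lr * X.T @ d_h
        b1 -= lr * d_h.sum(axis=0)
    return np.mean((out - y) ** 2)

# Widen the hidden layer to mimic increasing over-sampling.
for n_hidden in (2, 8, 32):
    print(f"hidden units = {n_hidden:3d}  final MSE = {train(n_hidden):.5f}")
```

Under this toy setup the wider (over-sampled) networks typically reach a lower training error within the same number of epochs, loosely echoing the abstract's claim that over-sampled networks learn faster and more robustly.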

Related research:

- Abstract Learning via Demodulation in a Deep Neural Network (02/13/2015): Inspired by the brain, deep neural networks (DNN) are thought to learn a...
- Ikshana: A Theory of Human Scene Understanding Mechanism (01/21/2021): In recent years, deep neural networks achieved state-of-the-art performa...
- Analysis on the Nonlinear Dynamics of Deep Neural Networks: Topological Entropy and Chaos (04/03/2018): The theoretical explanation for deep neural network (DNN) is still an op...
- Frequency learning for image classification (06/28/2020): Machine learning applied to computer vision and signal processing is ach...
- Qualitative Projection Using Deep Neural Networks (10/19/2015): Deep neural networks (DNN) abstract by demodulating the output of linear...
- How to Train your DNN: The Network Operator Edition (04/21/2020): Deep Neural Nets have hit quite a crest, But physical networks are where...
- Mixture of Linear Models Co-supervised by Deep Neural Networks (08/05/2021): Deep neural network (DNN) models have achieved phenomenal success for ap...