Sparse Neural Network Topologies

06/18/2017
by Alfred Bourely, et al.

We propose Sparse Neural Network architectures that are based on random or structured bipartite graph topologies. Sparse architectures compress the learned models and speed up computation; they can also surpass their unstructured or fully connected counterparts. As we show, even more compact topologies of the so-called SNN (Sparse Neural Network) can be achieved with the use of structured graphs of connections between consecutive layers of neurons. In this paper, we investigate how the accuracy and training speed of the models depend on the topology and sparsity of the neural network. Previous approaches to sparsity are all based on fully connected neural network models and introduce sparsity during the training phase; instead, we explicitly define sparse architectures of connections before training. Building compact neural network models is consistent with empirical observations showing that there is much redundancy in learned neural network models. We show experimentally that the accuracy of the models learned with neural networks depends on expander-like properties of the underlying topologies, such as the spectral gap and algebraic connectivity, rather than on the density of the graphs of connections.
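The two core ideas in the abstract — fixing a sparse bipartite connectivity pattern between consecutive layers before training, and measuring expander-like properties of that pattern such as algebraic connectivity — can be sketched as follows. This is an illustrative example only, not the authors' implementation; the function names and the choice of a uniformly random mask are assumptions.

```python
import numpy as np

def random_bipartite_mask(n_in, n_out, density, seed=0):
    """Fixed sparse bipartite connectivity mask between two layers,
    defined before training (1 = connection present, 0 = absent)."""
    rng = np.random.default_rng(seed)
    return (rng.random((n_in, n_out)) < density).astype(float)

def algebraic_connectivity(mask):
    """Second-smallest eigenvalue of the Laplacian of the bipartite
    graph whose biadjacency matrix is `mask` (an expander-like measure)."""
    n_in, n_out = mask.shape
    n = n_in + n_out
    adj = np.zeros((n, n))
    adj[:n_in, n_in:] = mask          # layer-1 -> layer-2 edges
    adj[n_in:, :n_in] = mask.T        # symmetric counterpart
    lap = np.diag(adj.sum(axis=1)) - adj
    return np.sort(np.linalg.eigvalsh(lap))[1]

# A sparse "layer": the mask zeroes out weights during the forward pass,
# so only the predefined connections ever carry signal.
mask = random_bipartite_mask(16, 16, density=0.3)
W = np.random.default_rng(1).normal(size=(16, 16))
x = np.ones(16)
h = x @ (W * mask)

print(f"density={mask.mean():.2f}, "
      f"algebraic connectivity={algebraic_connectivity(mask):.3f}")
```

For a complete bipartite graph K_{m,n}, the algebraic connectivity equals min(m, n), which gives a quick sanity check that denser topologies score higher on this measure than sparse ones of the same size.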


