
Characterizing Sparse Connectivity Patterns in Neural Networks

by Sourya Dey, et al.
University of Southern California

We propose a novel way of reducing the number of parameters in the storage-hungry fully connected classification layers of a neural network by using pre-defined sparsity, where the majority of connections are absent prior to starting training. Our results indicate that convolutional neural networks can operate without any loss of accuracy at less than 0.5% classification layer connection density, or less than 5% overall network connection density. We also investigate the effects of pre-defining the sparsity of networks with only fully connected layers. Based on our sparsifying technique, we introduce the 'scatter' metric to characterize the quality of a particular connection pattern. As proof of concept, we show results on CIFAR, MNIST and a new dataset on classifying Morse code symbols, which highlights some interesting trends and limits of sparse connection patterns.
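The core idea of pre-defined sparsity is that a fixed binary connection mask is chosen before training and never changes, so the absent weights are never stored or updated. A minimal sketch of that idea, using NumPy: the helper `make_sparse_mask` and the equal fan-in layout are illustrative assumptions, not the paper's specific connection patterns.

```python
import numpy as np

def make_sparse_mask(n_in, n_out, density, seed=None):
    """Build a fixed binary mask before training (pre-defined sparsity).

    Hypothetical helper: gives every output neuron the same number of
    incoming connections, chosen at random. The paper's actual structured
    patterns (and its 'scatter' metric for ranking them) may differ.
    """
    rng = np.random.default_rng(seed)
    mask = np.zeros((n_in, n_out), dtype=bool)
    fan_in = max(1, round(density * n_in))  # connections per output neuron
    for j in range(n_out):
        idx = rng.choice(n_in, size=fan_in, replace=False)
        mask[idx, j] = True
    return mask

# A fully connected layer at 5% connection density: the mask is applied
# once at initialization; masked-out weights stay exactly zero forever.
mask = make_sparse_mask(1024, 16, density=0.05, seed=0)
W = np.random.default_rng(0).standard_normal(mask.shape) * mask
print(mask.mean())  # actual connection density, close to 0.05
```

During training, the same mask would also be applied to the gradient update so that pruned connections never reappear; only the surviving ~5% of weights ever need to be stored.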


