Ramanujan Bipartite Graph Products for Efficient Block Sparse Neural Networks

06/24/2020
by Dharma Teja Vooturi, et al.

Sparse neural networks have been shown to give predictions competitive with their dense counterparts while requiring far fewer arithmetic operations. However, current hardware such as GPUs can exploit only structured sparsity patterns for better efficiency, so the runtime of a sparse neural network may not scale with its arithmetic operation count. In this work, we propose the RBGP (Ramanujan Bipartite Graph Product) framework for generating structured, multi-level block-sparse neural networks using the theory of graph products. We also propose using products of Ramanujan graphs, which give the best connectivity for a given level of sparsity. This ensures that (i) the network has a structured block sparsity for which runtime-efficient algorithms exist, (ii) the model achieves high prediction accuracy, owing to the expressive power derived from the graph's connectivity, and (iii) the graph has a succinct representation that can be stored efficiently in memory. We use our framework to design a specific connectivity pattern called RBGP4, which makes efficient use of the GPU memory hierarchy. We benchmark our approach on image classification over the CIFAR dataset using VGG19 and WideResNet-40-4 networks, and achieve 5-9x and 2-5x runtime gains over unstructured and block-sparsity patterns, respectively, at the same level of accuracy.
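The core construction can be illustrated with a small sketch. The Python snippet below (not the authors' implementation) shows how the Kronecker product of two small bipartite-graph biadjacency matrices yields a two-level block-sparse connectivity mask for a weight matrix. The circulant factor graphs used here are simple placeholders; the RBGP framework would instead choose Ramanujan bipartite graphs, whose spectral gap gives the best connectivity at a given sparsity.

```python
# Minimal sketch of a graph-product block-sparse mask (assumed, illustrative only).
import numpy as np

def circulant_biadjacency(n, degree):
    """d-regular bipartite biadjacency matrix built from cyclic shifts.
    Placeholder factor graph; RBGP would use a Ramanujan bipartite graph here."""
    A = np.zeros((n, n), dtype=np.float32)
    for shift in range(degree):
        # np.eye(n, k=shift) plus its wrap-around part gives one cyclic permutation.
        A += np.eye(n, k=shift, dtype=np.float32) + np.eye(n, k=shift - n, dtype=np.float32)
    return np.clip(A, 0, 1)

# Two small factor graphs; their Kronecker product gives the two-level structure.
outer = circulant_biadjacency(n=4, degree=2)   # coarse level: which blocks are present
inner = circulant_biadjacency(n=8, degree=3)   # fine level: pattern inside each block

mask = np.kron(outer, inner)                   # 32x32 two-level block-sparse mask
weights = np.random.randn(*mask.shape).astype(np.float32)
sparse_weights = weights * mask                # keep only connections allowed by the mask

print(f"mask shape {mask.shape}, density {mask.mean():.2f}")
```

The RBGP4 pattern described in the abstract presumably extends this idea to more product levels, with the factor graphs sized to match tiles of the GPU memory hierarchy; the sketch above shows only the two-level case.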


Related research

11/23/2017  Deep Expander Networks: Efficient Deep Networks from Graph Theory
Deep Neural Networks, while being unreasonably effective for several vis...

05/24/2017  Exploring the Regularity of Sparse Structure in Convolutional Neural Networks
Sparsity helps reduce the computational complexity of deep neural networ...

12/20/2021  Load-balanced Gather-scatter Patterns for Sparse Deep Neural Networks
Deep neural networks (DNNs) have been proven to be effective in solving ...

06/16/2021  Algorithm to Compilation Co-design: An Integrated View of Neural Network Sparsity
Reducing computation cost, inference latency, and memory footprint of ne...

09/05/2023  Sparse Partitioning Around Medoids
Partitioning Around Medoids (PAM, k-Medoids) is a popular clustering tec...

11/07/2018  FLOPs as a Direct Optimization Objective for Learning Sparse Neural Networks
There exists a plethora of techniques for inducing structured sparsity i...

11/30/2021  Pixelated Butterfly: Simple and Efficient Sparse training for Neural Network Models
Overparameterized neural networks generalize well but are expensive to t...
