GPU Acceleration of Sparse Neural Networks

05/09/2020
by Aavaas Gajurel, et al.

In this paper, we use graphics processing units (GPUs) to accelerate sparse and arbitrarily structured neural networks. Sparse networks contain nodes that are not fully connected to the nodes in the preceding and following layers, and arbitrarily structured networks have a different number of nodes in each layer. Sparse neural networks with arbitrary structures are typically produced by processes such as neural network pruning and evolutionary machine learning strategies. We show that a full activation pass of such networks can be significantly sped up on graphics processing units. In a preprocessing step, we determine dependency groups for all the nodes in a network and use that information to guide the progression of activation through the network. We then compute the activation of each node in its own GPU thread, which allows for massive parallelization. We implement our approach in the CUDA framework and compare the results of the sequential and GPU implementations. Our results show that the activation of sparse neural networks lends itself very well to GPU acceleration and can help speed up machine learning strategies that generate such networks, as well as other processes with similar structure.
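The abstract's preprocessing step, grouping nodes by dependency so that all nodes in a group can be activated concurrently, amounts to a level-by-level topological partition of the network graph. A minimal sketch of such a grouping is below; the function name and graph representation are illustrative assumptions, not the paper's exact algorithm:

```python
from collections import defaultdict

def dependency_groups(num_nodes, edges):
    """Partition nodes 0..num_nodes-1 into dependency groups: every node
    in group k depends only on nodes in earlier groups, so all nodes in a
    group can be activated in parallel (e.g. one GPU thread per node).
    `edges` is a list of (src, dst) pairs. Hypothetical helper for
    illustration; the paper's actual preprocessing may differ."""
    indegree = [0] * num_nodes
    successors = defaultdict(list)
    for src, dst in edges:
        successors[src].append(dst)
        indegree[dst] += 1

    # Nodes with no incoming edges (e.g. input nodes) form the first group.
    current = [n for n in range(num_nodes) if indegree[n] == 0]
    groups = []
    while current:
        groups.append(sorted(current))
        nxt = []
        for n in current:
            for m in successors[n]:
                indegree[m] -= 1
                if indegree[m] == 0:  # all of m's inputs are now resolved
                    nxt.append(m)
        current = nxt
    return groups

# A small sparse network: nodes 0 and 1 are inputs, node 4 is the output.
print(dependency_groups(5, [(0, 2), (0, 3), (1, 3), (2, 4), (3, 4)]))
# → [[0, 1], [2, 3], [4]]
```

Each inner list is a set of nodes with no mutual dependencies, so a GPU launch could assign one thread per node in a group and synchronize between groups, mirroring the guided progression of activation the abstract describes.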


