Block-wise Dynamic Sparseness

01/14/2020
by Amir Hadifar, et al.

Neural networks have achieved state-of-the-art performance across a wide variety of machine learning tasks, often with large and computation-heavy models. Inducing sparseness as a way to reduce the memory and computation footprint of these models has received significant research attention in recent years. In this paper, we present a new method for dynamic sparseness, whereby part of the computations is omitted dynamically, based on the input. For efficiency, we combine the idea of dynamic sparseness with block-wise matrix-vector multiplications. In contrast to static sparseness, which permanently zeroes out selected positions in weight matrices, our method preserves the full network capabilities by potentially accessing any trained weight. Yet, matrix-vector multiplications are accelerated by omitting a predefined fraction of weight blocks from the matrix, based on the input. Experimental results on the task of language modeling, using recurrent and quasi-recurrent models, show that the proposed method can outperform a magnitude-based static sparseness baseline. In addition, our method achieves language modeling perplexities similar to the dense baseline, at half the computational cost at inference time.
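To make the idea more concrete, the following PyTorch sketch shows one possible reading of block-wise dynamic sparseness: the weight matrix of a linear layer is partitioned into column blocks, and a small gating network selects, for each input, which blocks take part in the matrix-vector product. The class name BlockSparseLinear, the gating network, and the hyperparameters (block size, keep ratio) are illustrative assumptions rather than the paper's actual implementation, and a dense masked matmul stands in for the block-skipping kernel that would deliver the real speed-up.

```python
import torch
import torch.nn as nn


class BlockSparseLinear(nn.Module):
    """Hypothetical sketch of a block-wise dynamically sparse linear layer.

    The weight matrix is split into column blocks. A gating network scores
    the blocks per input; only the top-k blocks participate in the
    matrix-vector product, the rest are skipped for that input.
    """

    def __init__(self, in_features, out_features, block_size=64, keep_ratio=0.5):
        super().__init__()
        assert in_features % block_size == 0
        self.block_size = block_size
        self.n_in_blocks = in_features // block_size
        self.keep = max(1, int(self.n_in_blocks * keep_ratio))
        self.weight = nn.Parameter(torch.randn(out_features, in_features) * 0.02)
        self.bias = nn.Parameter(torch.zeros(out_features))
        # Gating network: one score per column block, computed from the input.
        self.gate = nn.Linear(in_features, self.n_in_blocks)

    def forward(self, x):
        # x: (batch, in_features)
        scores = self.gate(x)                                   # (batch, n_in_blocks)
        topk = scores.topk(self.keep, dim=-1).indices           # blocks kept per input
        mask = torch.zeros_like(scores).scatter_(-1, topk, 1.0)
        # Expand the block mask to a feature mask and zero out skipped blocks.
        feat_mask = mask.repeat_interleave(self.block_size, dim=-1)
        # Dense matmul for clarity; a real kernel would skip the masked
        # blocks entirely instead of multiplying by zeros.
        return (x * feat_mask) @ self.weight.t() + self.bias


# Example usage: keep half of the input blocks per example.
layer = BlockSparseLinear(in_features=256, out_features=512, keep_ratio=0.5)
y = layer(torch.randn(8, 256))  # (8, 512)
```

With a keep ratio of 0.5, roughly half of the weight blocks are touched per input, which matches the reported setting of similar perplexity at half the inference cost; unlike static pruning, every block remains trained and can be selected for some other input.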

