Algorithm to Compilation Co-design: An Integrated View of Neural Network Sparsity

06/16/2021
by Fu-Ming Guo, et al.

Reduced computation cost, inference latency, and memory footprint of neural networks are frequently cited as research motivations for pruning and sparsity. However, operationalizing those benefits and understanding the end-to-end effect of algorithm design and regularization on runtime execution are rarely examined in depth. Here we apply structured and unstructured pruning to the attention weights of transformer blocks in the BERT language model, while also expanding block sparse representation (BSR) operations in the TVM compiler. Integrating BSR operations enables the TVM runtime to exploit the structured pattern sparsity induced by model regularization. This integrated view of pruning algorithms lets us study the relationships between modeling decisions and their direct impact on sparsity-enhanced execution. Our main findings are: 1) we validate that the performance benefits of structured block-sparsity regularization are realized only with the BSR augmentations to TVM, yielding a 4x speedup over vanilla PyTorch and a 2.2x speedup over standard TVM compilation (without expanded BSR support); 2) for BERT attention weights, the end-to-end optimal block shape in this CPU inference context is not a square block (as in <cit.>) but a linear 32x1 block; and 3) the relationship between performance and block size/shape suggests how model regularization parameters interact with task-scheduler optimizations to produce the observed end-to-end performance.
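To make the pipeline concrete, here is a minimal sketch (not the authors' code) of the first half of such a co-design loop: pruning a BERT-sized attention weight matrix into the 32x1 block pattern reported above and packing it into the block sparse row (BSR) layout that the expanded TVM operators consume. The helper name block_prune, the magnitude-based block scoring, and the 90% sparsity target are illustrative assumptions.

# Sketch: induce 32x1 block sparsity on an attention weight matrix and pack it as BSR.
# block_prune, BLOCK_SHAPE, and SPARSITY are illustrative, not from the paper's code.
import numpy as np
from scipy.sparse import bsr_matrix

BLOCK_SHAPE = (32, 1)   # linear block reported as end-to-end optimal for CPU inference
SPARSITY = 0.9          # fraction of blocks to zero out (assumed for illustration)

def block_prune(weight, block_shape=BLOCK_SHAPE, sparsity=SPARSITY):
    """Zero out the lowest-magnitude blocks so the survivors form a BSR-friendly pattern."""
    rows, cols = weight.shape
    br, bc = block_shape
    assert rows % br == 0 and cols % bc == 0, "block shape must tile the weight matrix"
    # Tile the matrix into (rows//br, cols//bc) blocks and score each block by its L1 norm.
    blocks = weight.reshape(rows // br, br, cols // bc, bc)
    scores = np.abs(blocks).sum(axis=(1, 3))
    threshold = np.quantile(scores, sparsity)
    mask = (scores > threshold)[:, None, :, None]   # keep only the highest-scoring blocks
    return (blocks * mask).reshape(rows, cols)

# Example: a 768x768 attention projection, as in BERT-base.
dense = np.random.randn(768, 768).astype(np.float32)
pruned = block_prune(dense)
sparse = bsr_matrix(pruned, blocksize=BLOCK_SHAPE)   # BSR storage of the structured pattern
print(f"stored blocks: {sparse.nnz // (32 * 1)}, density: {sparse.nnz / dense.size:.2%}")

Packing the pruned weights into BSR is what lets a compiler runtime skip whole zero blocks rather than individual zero weights, which is why the reported speedups appear only once the BSR-aware operators are available in TVM.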


Related research

07/20/2020 - Achieving Real-Time Execution of 3D Convolutional Neural Networks on Mobile Devices
Mobile devices are becoming an important carrier for deep learning tasks...

11/08/2017 - Block-Sparse Recurrent Neural Networks
Recurrent Neural Networks (RNNs) are used in state-of-the-art models in ...

05/30/2021 - MLPruning: A Multilevel Structured Pruning Framework for Transformer-based Models
Pruning is an effective method to reduce the memory footprint and comput...

05/31/2021 - 1×N Block Pattern for Network Sparsity
Though network sparsity emerges as a promising direction to overcome the...

06/24/2020 - Ramanujan Bipartite Graph Products for Efficient Block Sparse Neural Networks
Sparse neural networks are shown to give accurate predictions competitiv...

10/21/2020 - Adaptive Structured Sparse Network for Efficient CNNs with Feature Regularization
Neural networks have made great progress in pixel to pixel image process...

11/03/2021 - Weight, Block or Unit? Exploring Sparsity Tradeoffs for Speech Enhancement on Tiny Neural Accelerators
We explore network sparsification strategies with the aim of compressing...
