STen: Productive and Efficient Sparsity in PyTorch

04/15/2023
by Andrei Ivanov et al.

As deep learning models grow, sparsity is becoming an increasingly critical component of deep neural networks, enabling improved performance and reduced storage. However, existing frameworks offer poor support for sparsity. Specialized sparsity engines focus exclusively on sparse inference, while general frameworks primarily focus on sparse tensors in classical formats and neglect the broader sparsification pipeline necessary for using sparse models, especially during training. Further, existing frameworks are not easily extensible: adding a new sparse tensor format or operator is challenging and time-consuming. To address this, we propose STen, a sparsity programming model and interface for PyTorch, which incorporates sparsity layouts, operators, and sparsifiers, in an efficient, customizable, and extensible framework that supports virtually all sparsification methods. We demonstrate this by developing a high-performance grouped n:m sparsity layout for CPU inference at moderate sparsity. STen brings high performance and ease of use to the ML community, making sparsity easily accessible.
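The grouped n:m layout mentioned above keeps at most n nonzero weights in every group of m consecutive weights (2:4 being the common case). As a concrete illustration, here is a minimal magnitude-based n:m sparsifier sketched in plain PyTorch; the function name, the grouping over the flattened weight, and the n=2, m=4 defaults are illustrative assumptions, not STen's actual API.

```python
import torch

def nm_sparsify(weight: torch.Tensor, n: int = 2, m: int = 4) -> torch.Tensor:
    """Zero all but the n largest-magnitude entries in each group of m.

    Hypothetical helper for illustration; STen's real interface differs.
    """
    assert weight.numel() % m == 0, "number of elements must be divisible by m"
    groups = weight.reshape(-1, m)               # one row per group of m weights
    topk = groups.abs().topk(n, dim=-1).indices  # top-n |w| positions per group
    mask = torch.zeros_like(groups, dtype=torch.bool)
    mask.scatter_(-1, topk, True)                # keep only the selected entries
    return (groups * mask).reshape(weight.shape)

w = torch.randn(8, 16)
w_sparse = nm_sparsify(w)  # each group of 4 consecutive weights has <= 2 nonzeros
```

At n=2, m=4 this yields a fixed 50% sparsity with a regular structure, which is what makes such layouts amenable to efficient kernels; STen's grouped variant targets CPU inference at moderate sparsity.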


