ALTO: Adaptive Linearized Storage of Sparse Tensors

02/20/2021
by Ahmed E. Helal et al.

The analysis of high-dimensional sparse data is becoming increasingly popular in many important domains. However, real-world sparse tensors are challenging to process due to their irregular shapes and data distributions. We propose the Adaptive Linearized Tensor Order (ALTO) format, a novel mode-agnostic (general) representation that keeps neighboring nonzero elements in the multi-dimensional space close to each other in memory. To generate the indexing metadata, ALTO uses an adaptive bit-encoding scheme that trades off index computations for lower memory usage and more effective use of memory bandwidth. Moreover, by decoupling its sparse representation from the irregular spatial distribution of nonzero elements, ALTO eliminates workload imbalance and greatly reduces the synchronization overhead of tensor computations. As a result, the parallel performance of ALTO-based tensor operations becomes a function of their inherent data reuse. When used in key tensor decomposition operations on a gamut of tensor datasets, ALTO outperforms an oracle that selects the best state-of-the-art format for each dataset. Specifically, ALTO achieves a geometric-mean speedup of 8X over the best mode-agnostic (coordinate and hierarchical coordinate) formats, while delivering a geometric-mean compression ratio of 4.3X relative to the best mode-specific (compressed sparse fiber) formats.
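The abstract does not spell out the encoding itself, but the core idea is a linearized, mode-agnostic ordering of nonzero elements. The sketch below is a simplified, hypothetical illustration of such a scheme, not the authors' implementation: each mode contributes roughly ceil(log2(dim)) bits to a single interleaved key, and sorting nonzeros by that key keeps spatially neighboring elements close in memory. The struct name Nonzero, the helpers linearize and sort_by_alto_key, and the fixed round-robin bit layout are all assumptions made for illustration; ALTO's actual adaptive encoding is more involved.

```cpp
// Hypothetical sketch of a mode-agnostic, bit-interleaved linearization,
// loosely in the spirit of ALTO's adaptive encoding. Names and bit layout
// are illustrative assumptions, not the authors' implementation.
#include <algorithm>
#include <cmath>
#include <cstdint>
#include <vector>

struct Nonzero {
    std::vector<uint32_t> idx;  // one coordinate per mode
    double val;
};

// Bits needed to address a mode of the given size (at least 1).
static int mode_bits(uint32_t dim) {
    return dim > 1 ? static_cast<int>(std::ceil(std::log2(dim))) : 1;
}

// Interleave the bits of all mode indices into one linearized key.
// Each mode contributes only as many bits as its size requires, so short
// modes stop contributing early -- a simplified stand-in for ALTO's
// adaptive encoding. Assumes the total bit count fits in 64 bits.
uint64_t linearize(const std::vector<uint32_t>& idx,
                   const std::vector<uint32_t>& dims) {
    uint64_t key = 0;
    int out_bit = 0;
    uint32_t max_dim = *std::max_element(dims.begin(), dims.end());
    for (int b = 0; b < mode_bits(max_dim); ++b) {     // bit position, LSB first
        for (size_t m = 0; m < dims.size(); ++m) {     // round-robin over modes
            if (b < mode_bits(dims[m])) {
                key |= static_cast<uint64_t>((idx[m] >> b) & 1u) << out_bit++;
            }
        }
    }
    return key;
}

// Sorting by the linearized key places nonzeros that are close in the
// multi-dimensional space close together in memory, independent of any
// particular mode ordering.
void sort_by_alto_key(std::vector<Nonzero>& nnz,
                      const std::vector<uint32_t>& dims) {
    std::sort(nnz.begin(), nnz.end(),
              [&](const Nonzero& a, const Nonzero& b) {
                  return linearize(a.idx, dims) < linearize(b.idx, dims);
              });
}
```

Under such an ordering, each thread can be handed an equal-sized contiguous slice of the sorted nonzero array, which is one way a linearized format can decouple parallel work partitioning from the irregular spatial distribution of the nonzeros.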
