High Performance Rearrangement and Multiplication Routines for Sparse Tensor Arithmetic

02/07/2018
by Adam P. Harrison, et al.

Researchers are increasingly incorporating numeric high-order data, i.e., numeric tensors, within their practice. Just like the matrix/vector (MV) paradigm, the development of multi-purpose, but high-performance, sparse data structures and algorithms for arithmetic calculations, e.g., those found in Einstein-like notation, is crucial for the continued adoption of tensors. We use the example of high-order differential operators to illustrate this need. As sparse tensor arithmetic is an emerging research topic, with challenges distinct from the MV paradigm, many aspects require further articulation. We focus on three core facets. First, aligning with prominent voices in the field, we emphasise the importance of data structures able to accommodate the operational complexity of tensor arithmetic; however, we describe a linearised coordinate (LCO) data structure that provides faster and more memory-efficient sorting performance. Second, flexible data structures, like the LCO, rely heavily on sorts and permutations. We introduce an innovative permutation algorithm, based on radix sort, that is tailored to rearrange already-sorted sparse data, producing significant performance gains. Third, we introduce a novel poly-algorithm for sparse tensor products, where hyper-sparsity is a possibility. Different manifestations of hyper-sparsity demand their own approach, which our poly-algorithm is the first to provide. These developments are incorporated within our LibNT and NTToolbox software libraries. Benchmarks, frequently drawn from the high-order differential operators example, demonstrate the practical impact of our routines, with speed-ups of 40% or higher compared to alternative high-performance implementations. Comparisons against the MATLAB Tensor Toolbox show speed improvements of over 10 times. Thus, these advancements produce significant practical improvements for sparse tensor arithmetic.
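The core idea of a linearised coordinate scheme can be sketched as follows. This is an illustrative reconstruction, not LibNT's actual implementation: nonzeros are stored as (linear index, value) pairs kept sorted by index, so that permuting tensor modes reduces to delinearising each index, reordering coordinates, re-linearising under the permuted shape, and re-sorting. All names here (`LCOTensor`, `linearize`, `delinearize`) are hypothetical.

```python
def linearize(coords, dims):
    """Map a multi-index to a single row-major linear offset."""
    idx = 0
    for c, d in zip(coords, dims):
        idx = idx * d + c
    return idx

def delinearize(idx, dims):
    """Invert linearize: recover the multi-index from a linear offset."""
    coords = []
    for d in reversed(dims):
        coords.append(idx % d)
        idx //= d
    return tuple(reversed(coords))

class LCOTensor:
    """Sketch of a linearised-coordinate sparse tensor: nonzeros held
    as (linear index, value) pairs, sorted by linear index."""

    def __init__(self, dims, entries):
        self.dims = list(dims)
        self.data = sorted((linearize(c, dims), v) for c, v in entries)

    def permute(self, order):
        """Rearrange tensor modes by re-linearising and re-sorting.
        A generic sort is used here; a tailored radix-sort-style pass
        could instead exploit the fact that the input is already sorted."""
        new_dims = [self.dims[m] for m in order]
        pairs = []
        for idx, val in self.data:
            coords = delinearize(idx, self.dims)
            new_coords = tuple(coords[m] for m in order)
            pairs.append((linearize(new_coords, new_dims), val))
        out = LCOTensor(new_dims, [])
        out.data = sorted(pairs)
        return out
```

For example, transposing a sparse 2x3 tensor with `permute((1, 0))` re-linearises every nonzero under the 3x2 shape; the single sorted array of linear indices is what makes the structure compact and sort-friendly, in contrast to storing one coordinate array per mode.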
