Performance of the low-rank tensor-train SVD (TT-SVD) for large dense tensors on modern multi-core CPUs

01/29/2021
by Melven Röhrig-Zöllner, et al.

There are several factorizations of multi-dimensional tensors into lower-dimensional components, known as "tensor networks". We consider the popular "tensor-train" (TT) format and ask: How efficiently can we compute a low-rank approximation from a full tensor on current multi-core CPUs? Compared to sparse and dense linear algebra, kernel libraries for multi-linear algebra are rare and typically not as well optimized. Linear algebra libraries like BLAS and LAPACK may provide the required operations in principle, but often at the cost of additional data movements for rearranging memory layouts. Furthermore, these libraries are typically optimized for the compute-bound case (e.g., square matrix operations), whereas low-rank tensor decompositions lead to memory-bandwidth-limited operations. We propose a "tensor-train singular value decomposition" (TT-SVD) algorithm based on two building blocks: a "Q-less tall-skinny QR" factorization, and a fused tall-skinny matrix-matrix multiplication and reshape operation. We analyze the performance of the resulting TT-SVD algorithm using the Roofline performance model. In addition, we present performance results for different algorithmic variants on shared-memory as well as distributed-memory architectures. Our experiments show that commonly used TT-SVD implementations suffer severe performance penalties. We conclude that a dedicated library for tensor factorization kernels would benefit the community: computing a low-rank approximation can be as cheap as reading the data twice from main memory. As a consequence, an implementation that achieves realistic performance will move the limit at which one has to resort to randomized methods that only process part of the data.
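To make the two building blocks concrete, here is a minimal NumPy sketch of a right-to-left TT-SVD in the spirit described above. It is an illustration under stated assumptions, not the authors' optimized implementation: the function name `tt_svd` and the truncation tolerance are our own, `np.linalg.qr(..., mode='r')` merely stands in for a dedicated Q-less tall-skinny QR kernel, and the plain matrix product stands in for the fused tall-skinny matrix-matrix multiplication (TSMM) and reshape kernel.

```python
import numpy as np

def tt_svd(x, max_rank):
    """Right-to-left TT-SVD sketch: each step computes only the small
    triangular factor R of a tall-skinny QR, takes the SVD of R, and
    applies the truncated right singular vectors in a single tall-skinny
    matrix product (the role of the fused TSMM + reshape kernel)."""
    dims = x.shape
    cores = []
    r_right = 1                              # TT rank to the right of core k
    work = x.reshape(-1, dims[-1])           # (n_1*...*n_{d-1}, n_d)
    for k in range(len(dims) - 1, 0, -1):
        mat = work.reshape(-1, dims[k] * r_right)  # tall-skinny matrix
        r_fac = np.linalg.qr(mat, mode='r')        # R only, Q is never formed
        _, s, vt = np.linalg.svd(r_fac)
        # truncate (tolerance chosen ad hoc for this sketch)
        r_left = min(max_rank, int(np.sum(s > s[0] * 1e-12)))
        cores.insert(0, vt[:r_left].reshape(r_left, dims[k], r_right))
        work = mat @ vt[:r_left].T           # tall-skinny matmul ("TSMM")
        r_right = r_left
    cores.insert(0, work.reshape(1, dims[0], r_right))
    return cores
```

A quick check that the decomposition reproduces the input when the ranks are not truncated:

```python
x = np.random.rand(8, 9, 10, 11)
cores = tt_svd(x, max_rank=100)              # TT ranks (8, 72, 11) here
full = cores[0]
for c in cores[1:]:
    full = np.tensordot(full, c, axes=1)     # contract the shared TT rank
print(np.linalg.norm(full.reshape(x.shape) - x) / np.linalg.norm(x))
```

Even this sketch shows the point of the Q-less variant: the large matrix `mat` is touched only twice per step (once by the QR, once by the final matrix product), while all SVD work happens on the small triangular factor R, which matches the abstract's claim that the decomposition can be as cheap as reading the data twice from main memory.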


