Cost-efficient Gaussian Tensor Network Embeddings for Tensor-structured Inputs

05/26/2022
by Linjian Ma, et al.

This work discusses tensor network embeddings: random matrices S with tensor network structure. Such embeddings have been used to perform dimensionality reduction of tensor-network-structured inputs x and to accelerate applications such as tensor decomposition and kernel regression. Existing works design embeddings for inputs x with specific structures, so that the product Sx can be computed efficiently. We provide a systematic way to design tensor network embeddings consisting of Gaussian random tensors such that, for inputs with more general tensor network structures, both the sketch size (the number of rows of S) and the sketching computational cost are low. We analyze general tensor network embeddings that can be reduced to a sequence of sketching matrices. We provide a sufficient condition that quantifies the accuracy of such embeddings, and we derive asymptotic lower bounds on the sketching cost for embeddings that satisfy this condition and have a sketch size smaller than any input dimension. We then provide an algorithm that efficiently sketches input data using such embeddings. The sketch size of the embedding used in the algorithm depends linearly on the number of sketched dimensions of the input. Assuming tensor contractions are performed with classical dense matrix multiplication algorithms, the algorithm achieves an asymptotic cost within a factor of O(√m) of our lower bound, where m is the sketch size. Further, when each tensor in the input has a dimension that needs to be sketched, the algorithm attains the optimal asymptotic sketching cost. We apply this sketching analysis to inexact tensor decomposition optimization algorithms: we provide a sketching algorithm for CP decomposition that is asymptotically faster than existing work in multiple regimes, and we show the optimality of an existing algorithm for tensor train rounding.
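To make the structured-sketch idea concrete, below is a minimal illustrative sketch, not the paper's construction: a Khatri-Rao-structured Gaussian embedding, one common instance of a Gaussian tensor network embedding, applied to an input x with Kronecker (rank-1 tensor network) structure. Exploiting the structure of both S and x lets Sx be computed in O(mn) time instead of the O(mn²) cost of a dense Gaussian embedding. All names and dimensions here are hypothetical.

```python
import numpy as np

# Illustrative sketch of a Khatri-Rao-structured Gaussian embedding applied
# to a Kronecker-structured input x = kron(x1, x2). Dimensions are made up.
rng = np.random.default_rng(0)
n, m = 200, 50  # mode dimension n, sketch size m

x1 = rng.standard_normal(n)
x2 = rng.standard_normal(n)

# Row i of the embedding S is kron(A[i], B[i]); the 1/sqrt(m) scaling makes
# E[||S x||^2] = ||x||^2 for standard Gaussian factors.
A = rng.standard_normal((m, n)) / np.sqrt(m)
B = rng.standard_normal((m, n))

# Structured application: S @ kron(x1, x2) == (A @ x1) * (B @ x2) elementwise,
# costing O(m n) rather than the O(m n^2) cost of forming S densely.
sx_fast = (A @ x1) * (B @ x2)

# Dense check (only feasible for small n).
S = np.einsum('ij,ik->ijk', A, B).reshape(m, n * n)
assert np.allclose(sx_fast, S @ np.kron(x1, x2))

# Norm preservation: ||Sx|| should be close to ||x|| = ||x1|| * ||x2||.
print(np.linalg.norm(sx_fast), np.linalg.norm(x1) * np.linalg.norm(x2))
```

Note that such Khatri-Rao Gaussian rows have heavier-tailed inner products than those of a dense Gaussian S, so the sketch size m required for a given accuracy can be larger; quantifying this trade-off between sketch size and sketching cost for general tensor network structures is the subject of the paper.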


Related research

06/24/2021 · Efficient Tensor Contraction via Fast Count Sketch
Sketching uses randomized Hash functions for dimensionality reduction an...

06/22/2021 · Lower and Upper Bounds on the VC-Dimension of Tensor Network Models
Tensor network methods have been a key ingredient of advances in condens...

10/22/2020 · Efficient parallel CP decomposition with pairwise perturbation and multi-sweep dimension tree
CP tensor decomposition with alternating least squares (ALS) is dominate...

02/28/2022 · On the Robustness of CountSketch to Adaptive Inputs
CountSketch is a popular dimensionality reduction technique that maps ve...

06/14/2022 · Permutation Search of Tensor Network Structures via Local Sampling
Recent works put much effort into tensor network structure search (TN-SS...

07/21/2022 · Communication Lower Bounds and Optimal Algorithms for Multiple Tensor-Times-Matrix Computation
Multiple Tensor-Times-Matrix (Multi-TTM) is a key computation in algorit...

07/14/2017 · Communication Lower Bounds of Bilinear Algorithms for Symmetric Tensor Contractions
Accurate numerical calculations of electronic structure are often domina...
