Enabling Distributed-Memory Tensor Completion in Python using New Sparse Tensor Kernels

10/06/2019
by Zecheng Zhang et al.

Tensor computations are increasingly prevalent numerical techniques in data science. However, innovation and deployment of methods on large sparse tensor datasets are made challenging by the difficulty of their efficient implementation. We provide a Python extension to the Cyclops tensor algebra library, which fully automates the management of distributed-memory parallelism and sparsity for NumPy-style operations on multidimensional arrays. We showcase this functionality with novel high-level implementations of three algorithms for the tensor completion problem: alternating least squares (ALS) with an implicit conjugate gradient method, stochastic gradient descent (SGD), and coordinate descent (CCD++). To make tensor completion possible for very sparse tensors, we introduce a new multi-tensor routine that is asymptotically more efficient than pairwise tensor contraction for key components of the tensor completion methods. Further, we add support for hypersparse matrix representations to Cyclops. We provide microbenchmarking results on the Stampede2 supercomputer to demonstrate the efficiency of this functionality. Finally, we study the accuracy and performance of the tensor completion methods for a synthetic tensor with 10 billion nonzeros and the Netflix dataset.
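To give a sense of the NumPy-style interface the abstract refers to, the sketch below uses the Cyclops Python module (ctf) with illustrative dimensions, rank, and sparsity; it is a minimal sketch under the assumption of an MPI-enabled ctf build, not the paper's implementation, and the use of ctf.TTTP for the sampled reconstruction follows the multi-tensor routine the abstract describes.

```python
# Minimal sketch of NumPy-style sparse tensor operations with the Cyclops
# Python interface (ctf). Dimensions, rank, and density are illustrative only.
import ctf

n, r = 1000, 16

# Sparse order-3 tensor of observed entries (1% density); Cyclops
# distributes it automatically across MPI processes.
T = ctf.tensor([n, n, n], sp=True)
T.fill_sp_random(0., 1., 0.01)

# Dense CP factor matrices.
A = ctf.random.random([n, r])
B = ctf.random.random([n, r])
C = ctf.random.random([n, r])

# An MTTKRP-style contraction expressed as an einsum; Cyclops manages the
# parallelism and exploits the sparsity of T.
M = ctf.einsum("ijk,ja,ka->ia", T, B, C)

# Residual restricted to the observed entries: TTTP computes the CP model
# only at the nonzeros of T, avoiding densification (assumed signature).
R = T - ctf.TTTP(T, [A, B, C])
print(ctf.vecnorm(R))
```

Launched, for example, as `mpirun -np 4 python example.py`, the same high-level code runs unchanged on a single node or across a distributed-memory machine.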

