
Generalized Canonical Polyadic Tensor Decomposition

by David Hong, et al.

Tensor decomposition is a fundamental unsupervised machine learning method in data science, with applications including network analysis and sensor data processing. This work develops a generalized canonical polyadic (GCP) low-rank tensor decomposition that allows other loss functions besides squared error. For instance, we can use logistic loss or Kullback-Leibler divergence, enabling tensor decomposition for binary or count data. We present a variety of statistically-motivated loss functions for various scenarios. We provide a generalized framework for computing gradients and handling missing data that enables the use of standard optimization methods for fitting the model. We demonstrate the flexibility of GCP on several real-world examples including interactions in a social network, neural activity in a mouse, and monthly rainfall measurements in India.
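To make the gradient framework concrete, here is a minimal NumPy sketch of the idea for a third-order tensor. It is an illustrative reconstruction, not the authors' implementation: the model is the usual CP form m_ijk = Σ_r a_ir b_jr c_kr, an elementwise loss derivative tensor Y with y_ijk = ∂f(x_ijk, m_ijk)/∂m is formed, and the chain rule reduces each factor gradient to a matricized-tensor-times-Khatri-Rao product (MTTKRP). The function names and the two example loss derivatives (squared error, and a Bernoulli-odds-style loss) are assumptions chosen for illustration.

```python
import numpy as np

def cp_model(A, B, C):
    # Reconstruct the low-rank model tensor from factor matrices:
    # m_ijk = sum_r A[i,r] * B[j,r] * C[k,r]
    return np.einsum('ir,jr,kr->ijk', A, B, C)

def gcp_factor_gradients(X, A, B, C, dloss):
    # Elementwise derivative tensor: y_ijk = d f(x_ijk, m_ijk) / dm
    M = cp_model(A, B, C)
    Y = dloss(X, M)
    # Chain rule: each factor gradient is an MTTKRP with Y,
    # e.g. gA[i,r] = sum_{j,k} Y[i,j,k] * B[j,r] * C[k,r]
    gA = np.einsum('ijk,jr,kr->ir', Y, B, C)
    gB = np.einsum('ijk,ir,kr->jr', Y, A, C)
    gC = np.einsum('ijk,ir,jr->kr', Y, A, B)
    return gA, gB, gC

# Example elementwise loss derivatives (illustrative):
# squared error f(x, m) = (m - x)^2
dloss_gaussian = lambda X, M: 2.0 * (M - X)
# Bernoulli-odds-style loss f(x, m) = log(1 + m) - x * log(m), for binary x
dloss_bernoulli = lambda X, M: 1.0 / (1.0 + M) - X / M
```

Swapping in a different `dloss` changes the statistical assumption on the data without touching the optimization machinery, which is the flexibility the abstract describes; missing entries can be handled by zeroing the corresponding entries of `Y` before the MTTKRP.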
