Stochastic Mirror Descent for Low-Rank Tensor Decomposition Under Non-Euclidean Losses

by Wenqiang Pu et al.

This work considers low-rank canonical polyadic decomposition (CPD) under a class of non-Euclidean loss functions that frequently arise in statistical machine learning and signal processing. These losses are natural for certain types of tensor data, e.g., count and binary tensors, for which the least squares loss is considered unnatural. Compared to the least squares loss, non-Euclidean losses are generally more challenging to handle. Non-Euclidean CPD has attracted considerable interest, and a number of prior works exist; however, pressing computational and theoretical challenges, such as scalability and convergence issues, remain. This work offers a unified stochastic algorithmic framework for large-scale CPD under a variety of non-Euclidean loss functions. Our key contribution is a flexible stochastic mirror descent framework built on a tensor fiber sampling strategy. Leveraging the sampling scheme and the multilinear algebraic structure of low-rank tensors, the proposed lightweight algorithm ensures global convergence to a stationary point under reasonable conditions. Numerical results show that the framework attains promising non-Euclidean CPD performance while achieving substantial computational savings over state-of-the-art methods.
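To make the fiber-sampling-plus-mirror-descent idea concrete, below is a minimal illustrative sketch, not the authors' implementation: stochastic mirror descent for a rank-R CPD of a count tensor under the Poisson/KL loss, with an entropy mirror map (which yields exponentiated-gradient, i.e., multiplicative, updates on the nonnegative factors) and randomly sampled tensor fibers per mode. All function names, step sizes, and data sizes here are assumptions chosen for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def cpd_reconstruct(A, B, C):
    # Full tensor from CPD factors: X[i,j,k] = sum_r A[i,r] B[j,r] C[k,r]
    return np.einsum('ir,jr,kr->ijk', A, B, C)

def smd_fiber_step(X, A, B, C, batch, eta, eps=1e-12):
    """One stochastic mirror descent step on the mode-0 factor A, using
    sampled mode-0 fibers X[:, j, k] and the Poisson loss sum(m - x log m).
    Entropy mirror map => exponentiated-gradient (multiplicative) update."""
    G = np.zeros_like(A)
    for j, k in batch:
        h = B[j] * C[k]                        # R-vector induced by this fiber
        m = A @ h + eps                        # model fiber, length I
        G += np.outer(1.0 - X[:, j, k] / m, h) # gradient of Poisson loss w.r.t. A
    G = np.clip(G / len(batch), -50.0, 50.0)   # clip for numerical safety (heuristic)
    return A * np.exp(-eta * G)                # mirror step under the entropy map

# Toy count tensor with true rank-2 structure (synthetic, for illustration only)
I, J, K, R = 15, 12, 10, 2
At, Bt, Ct = (rng.gamma(2.0, 1.0, (n, R)) for n in (I, J, K))
X = rng.poisson(cpd_reconstruct(At, Bt, Ct)).astype(float)

A, B, C = (rng.gamma(2.0, 1.0, (n, R)) for n in (I, J, K))
eta, batch_size = 0.05, 16
for it in range(300):
    # Cycle over modes; fiber sampling draws index pairs from the other two modes.
    jk = list(zip(rng.integers(0, J, batch_size), rng.integers(0, K, batch_size)))
    A = smd_fiber_step(X, A, B, C, jk, eta)
    ki = list(zip(rng.integers(0, K, batch_size), rng.integers(0, I, batch_size)))
    B = smd_fiber_step(X.transpose(1, 2, 0), B, C, A, ki, eta)
    ij = list(zip(rng.integers(0, I, batch_size), rng.integers(0, J, batch_size)))
    C = smd_fiber_step(X.transpose(2, 0, 1), C, A, B, ij, eta)
```

Each step touches only a small batch of fibers, so the per-iteration cost is independent of the full tensor size; the transposes simply reuse the same mode-0 update for the other two factors.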







Stochastic Gradients for Large-Scale Tensor Decomposition

Tensor decomposition is a well-known tool for multiway data analysis. Th...

Block-Randomized Stochastic Proximal Gradient for Low-Rank Tensor Factorization

This work considers the problem of computing the canonical polyadic deco...

Nonlinear Least Squares for Large-Scale Machine Learning using Stochastic Jacobian Estimates

For large nonlinear least squares loss functions in machine learning we ...

Generalized Canonical Polyadic Tensor Decomposition

Tensor decomposition is a fundamental unsupervised machine learning meth...

The Epsilon-Alternating Least Squares for Orthogonal Low-Rank Tensor Approximation and Its Global Convergence

The epsilon alternating least squares (ϵ-ALS) is developed and analyzed ...

Model-Free State Estimation Using Low-Rank Canonical Polyadic Decomposition

As electric grids experience high penetration levels of renewable genera...

Sketchy Empirical Natural Gradient Methods for Deep Learning

In this paper, we develop an efficient sketchy empirical natural gradien...