
On the Relationships between Transform-Learning NMF and Joint-Diagonalization

by   Sixin Zhang, et al.

Non-negative matrix factorization with transform learning (TL-NMF) is a recent idea that aims at learning data representations suited to NMF. In this work, we relate TL-NMF to the classical matrix joint-diagonalization (JD) problem. We show that, when the number of data realizations is sufficiently large, TL-NMF can be replaced by a two-step approach, termed JD+NMF, that estimates the transform through JD prior to the NMF computation. In contrast, we find that when the number of data realizations is limited, not only is JD+NMF no longer equivalent to TL-NMF, but the inherent low-rank constraint of TL-NMF turns out to be an essential ingredient for learning meaningful transforms for NMF.
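The two-step JD+NMF idea can be sketched as follows. This is a minimal illustration, not the paper's algorithm: it assumes only two groups of data realizations, so the joint diagonalization reduces to a generalized eigendecomposition of the two empirical covariances (the resulting transform is not orthogonal in general, unlike the one learned by TL-NMF), and a standard off-the-shelf NMF is then run on the transformed power representation. All variable names and dimensions are made up for the example.

```python
import numpy as np
from scipy.linalg import eigh
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)

# Hypothetical data: two groups of realizations, each column a vector in R^d.
d, n = 8, 200
X1 = rng.standard_normal((d, n))
X2 = rng.standard_normal((d, n))

# Step 1 (JD): jointly diagonalize the two empirical covariances.
# For exactly two symmetric matrices, the generalized eigendecomposition
# yields V with V.T @ C2 @ V = I and V.T @ C1 @ V diagonal.
C1 = X1 @ X1.T / n
C2 = X2 @ X2.T / n + 1e-6 * np.eye(d)  # regularize for positive definiteness
_, V = eigh(C1, C2)
Phi = V.T  # estimated transform

# Step 2 (NMF): factor the nonnegative power representation of the
# transformed data, analogous to a learned "power spectrogram".
S = (Phi @ np.hstack([X1, X2])) ** 2
model = NMF(n_components=3, init="nndsvda", max_iter=500, random_state=0)
W = model.fit_transform(S)
H = model.components_

print(W.shape, H.shape)  # factor shapes: (d, rank) and (rank, 2n)
```

With many realizations per group the covariances are well estimated and this decoupled pipeline is the regime where, per the abstract, JD+NMF matches TL-NMF; with few realizations the transform estimated in step 1 degrades, which is where the joint (low-rank-constrained) TL-NMF formulation matters.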


A particle-based variational approach to Bayesian Non-negative Matrix Factorization

Bayesian Non-negative Matrix Factorization (NMF) is a promising approach...

A Quasi-Newton algorithm on the orthogonal manifold for NMF with transform learning

Nonnegative matrix factorization (NMF) is a popular method for audio spe...

Permuted NMF: A Simple Algorithm Intended to Minimize the Volume of the Score Matrix

Non-Negative Matrix Factorization, NMF, attempts to find a number of arc...

Deep Recurrent NMF for Speech Separation by Unfolding Iterative Thresholding

In this paper, we propose a novel recurrent neural network architecture ...

There and Back Again: A General Approach to Learning Sparse Models

We propose a simple and efficient approach to learning sparse models. Ou...