Non-negative matrix and tensor factorisations with a smoothed Wasserstein loss
Non-negative matrix and tensor factorisations are a classical tool in machine learning and data science for finding low-dimensional representations of high-dimensional datasets. In applications such as imaging, datasets can often be regarded as distributions in a space with metric structure. In such a setting, a Wasserstein loss function based on optimal transportation theory is a natural choice since it incorporates knowledge about the geometry of the underlying space. We introduce a general mathematical framework for computing non-negative factorisations of matrices and tensors with respect to an optimal transport loss, and derive an efficient method for its solution using a convex dual formulation. We demonstrate the applicability of this approach with several numerical examples.
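To make the idea concrete, the sketch below illustrates a non-negative matrix factorisation fitted under an entropy-smoothed Wasserstein loss on the columns of the data matrix. It is not the authors' algorithm: the paper derives an efficient solver from a convex dual formulation, whereas this example uses a plain Sinkhorn loop and projected gradient descent, and all function names and parameters (`sinkhorn_loss_and_grad`, `wasserstein_nmf`, `eps`, `lr`) are illustrative assumptions.

```python
import numpy as np

def sinkhorn_loss_and_grad(a, b, C, eps=0.05, n_iter=200):
    """Entropy-smoothed Wasserstein cost between histograms a and b with
    ground cost C, plus its gradient w.r.t. a (given by the dual potential f)."""
    K = np.exp(-C / eps)
    u = np.ones_like(a)
    v = np.ones_like(b)
    for _ in range(n_iter):           # Sinkhorn fixed-point iterations
        u = a / (K @ v)
        v = b / (K.T @ u)
    f = eps * np.log(u + 1e-300)      # dual potential for the first marginal
    P = u[:, None] * K * v[None, :]   # approximate transport plan
    return np.sum(P * C), f

def wasserstein_nmf(X, rank, C, eps=0.05, n_outer=100, lr=1e-2, seed=0):
    """Toy NMF  X ≈ D @ H  with a column-wise smoothed Wasserstein loss,
    fitted by projected gradient descent (a simplification: the Jacobian of
    the column renormalisation is ignored)."""
    rng = np.random.default_rng(seed)
    m, n = X.shape
    D = rng.random((m, rank)); D /= D.sum(axis=0)
    H = rng.random((rank, n)); H /= H.sum(axis=0)
    for _ in range(n_outer):
        R = D @ H
        R /= R.sum(axis=0)            # keep reconstructed columns on the simplex
        G = np.zeros_like(R)          # per-column gradients (dual potentials)
        for j in range(n):
            _, f = sinkhorn_loss_and_grad(R[:, j], X[:, j], C, eps)
            G[:, j] = f - f.mean()    # centre to respect the simplex constraint
        D = np.maximum(D - lr * G @ H.T, 1e-12)   # chain rule through R = D @ H
        H = np.maximum(H - lr * D.T @ G, 1e-12)
        D /= D.sum(axis=0); H /= H.sum(axis=0)
    return D, H

# Toy example: columns of X are histograms on a 1-D grid.
grid = np.linspace(0, 1, 40)
C = (grid[:, None] - grid[None, :]) ** 2     # squared-distance ground cost
X = np.exp(-((grid[:, None] - np.array([0.2, 0.5, 0.8])[None, :]) ** 2) / 0.005)
X /= X.sum(axis=0)
D, H = wasserstein_nmf(X, rank=2, C=C)
print(D.shape, H.shape)
```

The point of the sketch is only to show why the geometry of the ground cost matters: the gradient of the smoothed transport loss is expressed through the Sinkhorn dual potentials, which is the quantity the paper's convex dual formulation works with directly.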