Linear Time Sinkhorn Divergences using Positive Features
Although Sinkhorn divergences are now routinely used in data science to compare probability distributions, the computational effort required to compute them remains expensive, growing in general quadratically in the size n of the support of these distributions. Indeed, solving optimal transport (OT) with an entropic regularization requires computing an n × n kernel matrix (the neg-exponential of an n × n pairwise ground cost matrix) that is repeatedly applied to a vector. We propose to use instead ground costs of the form c(x, y) = -log⟨φ(x), φ(y)⟩, where φ is a map from the ground space onto the positive orthant ℝ^r_+, with r ≪ n. This choice yields, equivalently, a kernel k(x, y) = ⟨φ(x), φ(y)⟩, and ensures that the cost of Sinkhorn iterations scales as O(nr). We show that usual cost functions can be approximated using this form. Additionally, we take advantage of the fact that our approach yields approximations that remain fully differentiable with respect to input distributions, as opposed to previously proposed adaptive low-rank approximations of the kernel matrix, to train a faster variant of OT-GAN <cit.>.
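The key computational point is that the kernel matrix K = Φ_x Φ_y^T never needs to be formed explicitly: each Sinkhorn iteration only applies K (or its transpose) to a vector, which factorizes through the r-dimensional features. The following is a minimal NumPy sketch of this idea, not the paper's implementation; the function name and the assumption that positive feature matrices phi_x, phi_y have already been computed are illustrative.

```python
import numpy as np

def sinkhorn_positive_features(phi_x, phi_y, a, b, n_iters=100):
    """Sinkhorn iterations with kernel k(x, y) = <phi(x), phi(y)>.

    phi_x : (n, r) array of positive features for the source points.
    phi_y : (m, r) array of positive features for the target points.
    a, b  : marginal weights of the two distributions (sum to 1).

    The full n x m kernel matrix K = phi_x @ phi_y.T is never built;
    each application of K or K.T costs O((n + m) r) instead of O(n m).
    """
    u = np.ones_like(a)
    v = np.ones_like(b)
    for _ in range(n_iters):
        # K v = phi_x @ (phi_y.T @ v), computed in linear time in n and m
        u = a / (phi_x @ (phi_y.T @ v))
        # K.T u = phi_y @ (phi_x.T @ u)
        v = b / (phi_y @ (phi_x.T @ u))
    return u, v

# Toy usage with random positive features (for illustration only)
rng = np.random.default_rng(0)
n, m, r = 500, 400, 16
phi_x = rng.random((n, r)) + 1e-3   # entries kept strictly positive
phi_y = rng.random((m, r)) + 1e-3
a = np.full(n, 1.0 / n)
b = np.full(m, 1.0 / m)
u, v = sinkhorn_positive_features(phi_x, phi_y, a, b)
```

Since the iterations only involve matrix-vector products with phi_x and phi_y, the whole computation stays differentiable with respect to the inputs when written in an autodiff framework.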