Rethinking Initialization of the Sinkhorn Algorithm
Computing an optimal transport (OT) coupling between distributions plays an increasingly important role in machine learning. While OT problems can be solved as linear programs, adding an entropic smoothing term is known to result in solvers that are faster, more robust to outliers, differentiable, and easier to parallelize. The Sinkhorn fixed-point algorithm is the cornerstone of these approaches, and, as a result, multiple attempts have been made to shorten its runtime using, for instance, annealing, momentum, or acceleration. The premise of this paper is that initialization of the Sinkhorn algorithm has received comparatively little attention, possibly due to two preconceptions: first, since the regularized OT problem is convex, it may not seem worth crafting a tailored initialization, as any is guaranteed to work; second, because the Sinkhorn algorithm is often differentiated in end-to-end pipelines, data-dependent initializations could potentially bias gradient estimates obtained by unrolling iterations. We challenge this conventional wisdom and show that carefully chosen initializations can result in dramatic speed-ups, and do not bias gradients computed with implicit differentiation. We detail how initializations can be recovered from closed-form or approximate OT solutions, using known results in the 1D or Gaussian settings. We show empirically that these initializations can be used off-the-shelf, with little to no tuning, and result in consistent speed-ups for a variety of OT problems.
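To make the warm-start idea concrete, the sketch below pairs a minimal log-domain Sinkhorn loop with a 1D-informed initialization in the spirit the abstract describes: unregularized dual potentials are recovered from the closed-form monotone matching in one dimension, then used to seed the iterations. All names (`sinkhorn_log`, `duals_1d`), the squared cost, the trapezoidal integration, and the parameter choices are illustrative assumptions, not the paper's exact recipe or API.

```python
import numpy as np
from scipy.special import logsumexp

def sinkhorn_log(a, b, C, eps, g_init=None, n_iters=10_000, tol=1e-6):
    """Log-domain Sinkhorn iterations; returns dual potentials and iteration count.

    The entropic coupling is P = exp((f[:, None] + g[None, :] - C) / eps).
    `g_init` is an optional warm start for the second potential.
    """
    log_a, log_b = np.log(a), np.log(b)
    g = np.zeros_like(b) if g_init is None else g_init
    for it in range(n_iters):
        # each update exactly restores one marginal constraint
        f = eps * (log_a - logsumexp((g[None, :] - C) / eps, axis=1))
        g_new = eps * (log_b - logsumexp((f[:, None] - C) / eps, axis=0))
        if np.max(np.abs(g_new - g)) < tol:  # approximate fixed point reached
            return f, g_new, it + 1
        g = g_new
    return f, g, n_iters

def duals_1d(x, y):
    """Hypothetical 1D warm start: unregularized duals for cost 0.5*(x - y)**2.

    Assumes sorted samples of equal size with uniform weights, so the optimal
    unregularized map is the monotone matching T(x_i) = y_i. The potential f
    is recovered by integrating f'(x) = x - T(x) (trapezoidal rule), and g by
    c-transform; an illustrative discretization, not the paper's construction.
    """
    fp = x - y                                       # f'(x_i) = x_i - T(x_i)
    f = np.concatenate(([0.0], np.cumsum(0.5 * (fp[1:] + fp[:-1]) * np.diff(x))))
    C = 0.5 * (x[:, None] - y[None, :]) ** 2
    g = np.min(C - f[:, None], axis=0)               # g = f^c (c-transform)
    return f, g

# toy comparison: default (zero) vs. 1D-informed initialization
rng = np.random.default_rng(0)
n = 256
x = np.sort(rng.normal(size=n))
y = np.sort(rng.normal(loc=1.5, scale=0.5, size=n))
a = b = np.full(n, 1.0 / n)
C = 0.5 * (x[:, None] - y[None, :]) ** 2

_, g_warm = duals_1d(x, y)
f0, g0, it_cold = sinkhorn_log(a, b, C, eps=0.05)
f1, g1, it_warm = sinkhorn_log(a, b, C, eps=0.05, g_init=g_warm)
print(f"iterations: cold start = {it_cold}, 1D warm start = {it_warm}")
```

Working in the log domain keeps the warm start numerically safe for small eps (the exponentiated scalings exp(g / eps) can otherwise overflow), and it operates directly on the dual potentials, the quantities for which, as the abstract notes, data-dependent initialization leaves implicitly differentiated gradients unbiased.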