# Smooth and Sparse Optimal Transport

Entropic regularization is quickly emerging as a new standard in optimal transport (OT). It enables the OT computation to be cast as a differentiable, unconstrained convex optimization problem, which can be efficiently solved using the Sinkhorn algorithm. However, the entropy term keeps the transportation plan strictly positive and therefore completely dense, unlike unregularized OT. This lack of sparsity can be problematic for applications where the transportation plan itself is of interest. In this paper, we explore regularizing both the primal and dual original formulations with an arbitrary strongly convex term. We show that this corresponds to relaxing dual and primal constraints with approximate smooth constraints. We show how to incorporate squared 2-norm and group lasso regularizations within that framework, leading to sparse and group-sparse transportation plans. On the theoretical side, we bound the approximation error introduced by smoothing the original primal and dual formulations. Our results suggest that, for the smoothed dual, the approximation error can often be smaller with squared 2-norm regularization than with entropic regularization. We showcase our proposed framework on the task of color transfer.


## 1 Introduction

Optimal transport (OT) distances (a.k.a. Wasserstein or earth mover’s distances) are a powerful computational tool to compare probability distributions and have recently found widespread use in machine learning (Cuturi, 2013; Solomon et al., 2014; Kusner et al., 2015; Courty et al., 2016; Arjovsky et al., 2017). While OT distances exhibit a unique ability to capture the geometry of the data, their application to large-scale problems has been largely hampered by their high computational cost. Indeed, computing OT distances involves a linear program, which takes super-cubic time in the data size to solve using state-of-the-art network-flow algorithms. Related to the Schrödinger problem (Schrödinger, 1931; Léonard, 2012), entropy-regularized OT distances have recently gained popularity due to their desirable properties (Cuturi, 2013). Their computation involves a comparatively easier differentiable and unconstrained convex optimization problem, which can be solved using the Sinkhorn algorithm (Sinkhorn and Knopp, 1967). Unlike unregularized OT distances, entropy-regularized OT distances are also differentiable w.r.t. their inputs, enabling their use as a loss function in a machine learning pipeline (Frogner et al., 2015; Rolet et al., 2016).

Despite its considerable merits, however, entropy-regularized OT has some limitations, such as introducing blurring in the optimal transportation plan. While this nuisance can be reduced by using small regularization, this requires a carefully engineered implementation, since the naive Sinkhorn algorithm is numerically unstable in this regime (Schmitzer, 2016). More importantly, the entropy term keeps the transportation plan strictly positive and therefore completely dense, unlike unregularized OT. This lack of sparsity can be problematic when the optimal transportation plan itself is of interest, e.g., in color transfer (Pitié et al., 2007), domain adaptation (Courty et al., 2016) and ecological inference (Muzellec et al., 2017). Sparsity in these applications is motivated by the principle of parsimony (simple solutions should be preferred) and by the enhanced interpretability of transportation plans.

Our contributions. This background motivates us to study regularization schemes that lead to smooth optimization problems (i.e., differentiable everywhere and with Lipschitz continuous gradient) while retaining the desirable property of sparse transportation plans. To do so, we make the following contributions.

We regularize the primal with an arbitrary strongly convex term and derive the corresponding smoothed dual and semi-dual. Our derivations abstract away regularization-specific terms in an intuitive way (§3). We show how incorporating squared 2-norm and group-lasso regularizations within that framework leads to sparse solutions. This is illustrated in Figure 1 for squared 2-norm regularization.

Next, we explore the opposite direction: replacing one or both of the primal marginal constraints with approximate smooth constraints. When using the squared Euclidean distance to approximate the constraints, we show that this can be interpreted as adding squared 2-norm regularization to the dual (§4). As illustrated in Figure 1, that approach also produces sparse transportation plans.

For both directions, we bound the approximation error caused by regularizing the original OT problem. For the regularized primal, we show that the approximation error of squared 2-norm regularization can be smaller than that of entropic regularization (§5). Finally, we showcase the proposed approaches empirically on the task of color transfer (§6).

An open-source Python implementation is available at https://github.com/mblondel/smooth-ot.

Notation. We denote scalars, vectors and matrices using lower-case, bold lower-case and upper-case letters, e.g., $t$, $\mathbf{t}$ and $T$, respectively. Given a matrix $T$, we denote its elements by $t_{i,j}$ and its columns by $t_j$. We denote the set $\{1, \dots, m\}$ by $[m]$. We use $\|\cdot\|_p$ to denote the $p$-norm. When $p = 2$, we simply write $\|\cdot\|$. We denote the $(m-1)$-dimensional probability simplex by $\triangle^m \coloneqq \{y \in \mathbb{R}^m_+ \colon \|y\|_1 = 1\}$ and the Euclidean projection onto it by $P_{\triangle^m}(x) \coloneqq \operatorname*{argmin}_{y \in \triangle^m} \|y - x\|$. We denote $[x]_+ \coloneqq \max(x, 0)$, performed element-wise.

## 2 Background

Convex analysis. The convex conjugate of a function $f$ is defined by

$$f^*(x) \coloneqq \sup_{y \in \mathrm{dom}\, f} \; y^\top x - f(y). \tag{1}$$

If $f$ is strictly convex, then the supremum in (1) is uniquely achieved. Then, from Danskin’s theorem (1966), it is equal to the gradient of $f^*$:

$$\nabla f^*(x) = \operatorname*{argmax}_{y \in \mathrm{dom}\, f} \; y^\top x - f(y). \tag{2}$$

The dual of a norm $\|\cdot\|$ is defined by $\|x\|^* \coloneqq \sup_{\|y\| \le 1} y^\top x$. We say that a function is $\beta$-smooth w.r.t. a norm if it is differentiable everywhere and its gradient is $\beta$-Lipschitz continuous w.r.t. that norm. Strong convexity plays a crucial role in this paper due to its well-known duality with smoothness: $f$ is $\gamma$-strongly convex w.r.t. a norm if and only if $f^*$ is $\frac{1}{\gamma}$-smooth w.r.t. the dual norm (Kakade et al., 2012).

Optimal transport. We focus throughout this paper on OT between discrete probability distributions $a \in \triangle^m$ and $b \in \triangle^n$. Rather than performing a pointwise comparison of the distributions, OT distances compute the minimal effort, according to some ground cost, for moving the probability mass of one distribution to the other. The modern OT formulation, due to Kantorovich (1942), is cast as a linear program (LP):

$$\mathrm{OT}(a,b) \coloneqq \min_{T \in \mathcal{U}(a,b)} \langle T, C\rangle, \tag{3}$$

where $\mathcal{U}(a,b)$ is the transportation polytope

$$\mathcal{U}(a,b) \coloneqq \left\{T \in \mathbb{R}^{m \times n}_+ \colon T\mathbf{1}_n = a,\; T^\top \mathbf{1}_m = b\right\} \tag{4}$$

and $C \in \mathbb{R}^{m \times n}_+$ is a cost matrix. The former can be interpreted as the set of all joint probability distributions with marginals $a$ and $b$. Without loss of generality, we will assume $a > 0$ and $b > 0$ throughout this paper (if $a_i = 0$ or $b_j = 0$, then the $i$th row or the $j$th column of any $T \in \mathcal{U}(a,b)$ is zero). When $m = n$ and $C$ is a distance matrix raised to the power $p$, $\mathrm{OT}(a,b)^{1/p}$ is a distance on $\triangle^m$, called the Wasserstein distance of order $p$ (Villani, 2003, Theorem 7.3). The dual LP is

$$\mathrm{OT}(a,b) = \max_{(\alpha,\beta) \in \mathcal{P}(C)} \alpha^\top a + \beta^\top b, \tag{5}$$

where $\mathcal{P}(C) \coloneqq \{(\alpha, \beta) \in \mathbb{R}^m \times \mathbb{R}^n \colon \alpha_i + \beta_j \le c_{i,j}\}$. Keeping $\alpha$ fixed, an optimal solution w.r.t. $\beta$ is

$$\beta_j = \min_{i \in [m]} c_{i,j} - \alpha_i, \quad \forall j \in [n], \tag{6}$$

which is the so-called c-transform. Plugging it back into the dual, we get the “semi-dual”

$$\mathrm{OT}(a,b) = \max_{\alpha \in \mathbb{R}^m} \alpha^\top a - \sum_{j=1}^n b_j \max_{i \in [m]} (\alpha_i - c_{i,j}). \tag{7}$$

For a recent and comprehensive survey of computational OT, see (Peyré and Cuturi, 2017).
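To make the formulations above concrete, here is a minimal sketch, not the paper’s implementation, that solves the LP (3) with SciPy’s general-purpose solver and evaluates the semi-dual (7) via the c-transform (6). The function names are ours; this is only practical for small $m$, $n$:

```python
import numpy as np
from scipy.optimize import linprog

def ot_lp(a, b, C):
    """Solve the Kantorovich LP (3) directly: min_{T in U(a,b)} <T, C>.

    The equality constraints encode the transportation polytope (4):
    row sums equal a, column sums equal b."""
    m, n = C.shape
    A_eq = np.zeros((m + n, m * n))
    for i in range(m):
        A_eq[i, i * n:(i + 1) * n] = 1.0   # row sums: sum_j t_ij = a_i
    for j in range(n):
        A_eq[m + j, j::n] = 1.0            # column sums: sum_i t_ij = b_j
    res = linprog(C.ravel(), A_eq=A_eq, b_eq=np.concatenate([a, b]),
                  bounds=(0, None), method="highs")
    return res.x.reshape(m, n), res.fun

def semi_dual_value(alpha, a, b, C):
    """Evaluate the semi-dual objective (7) at alpha, using the
    c-transform (6): beta_j = min_i c_ij - alpha_i."""
    beta = (C - alpha[:, None]).min(axis=0)
    return alpha @ a + beta @ b
```

By LP duality, `semi_dual_value` lower-bounds the value returned by `ot_lp` for any `alpha`, with equality at an optimal `alpha`.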

## 3 Strong primal ↔ Relaxed dual

We study in this section adding strongly convex regularization to the primal problem (3). We define:

###### Definition 1

Strongly convex primal

$$\mathrm{OT}_{\Omega}(a,b) \coloneqq \min_{T \in \mathcal{U}(a,b)} \langle T, C\rangle + \sum_{j=1}^{n} \Omega(t_j), \tag{8}$$

where we assume that $\Omega$ is strongly convex over the intersection of $\mathrm{dom}\,\Omega$ and either $\mathbb{R}^m_+$ or $\triangle^m$.

These assumptions are sufficient for (8) to be strongly convex w.r.t. $T$. At first sight, solving (8) does not seem easier than (3). As we shall now see, the main benefit occurs when switching to the dual.

### 3.1 Smooth relaxed dual formulation

Let the (non-smooth) indicator function of the non-positive orthant be defined as

$$\delta(x) \coloneqq \begin{cases} 0, & \text{if } x \le 0 \\ \infty, & \text{otherwise} \end{cases} \;=\; \sup_{y \ge 0} \, y^\top x. \tag{9}$$

To define a smoothed version of $\delta$, we take the convex conjugate of $\Omega$, restricted to the non-negative orthant:

$$\delta_{\Omega}(x) \coloneqq \sup_{y \ge 0} \, y^\top x - \Omega(y). \tag{10}$$

If $\Omega$ is $\gamma$-strongly convex over $\mathbb{R}^m_+$, then $\delta_{\Omega}$ is $\frac{1}{\gamma}$-smooth and its gradient is $\nabla \delta_{\Omega}(x) = y^\star(x)$, where $y^\star(x)$ is the maximizer in (10). We next show that $\delta_{\Omega}$ plays a crucial role in expressing the dual of (8), which is a smooth optimization problem in $\alpha \in \mathbb{R}^m$ and $\beta \in \mathbb{R}^n$.
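As a concrete instance, for squared 2-norm regularization $\Omega(y) = \frac{\gamma}{2}\|y\|^2$ (introduced in §3.3), the smoothed indicator (10) has a simple closed form, obtained coordinate-wise from the definition; a minimal sketch:

```python
import numpy as np

def delta_l2(x, gamma=1.0):
    """delta_Omega of (10) for Omega(y) = (gamma/2) * ||y||^2.

    Coordinate-wise, sup_{y >= 0} y*x - gamma*y^2/2 is attained at
    y* = [x]_+ / gamma, giving the value ||[x]_+||^2 / (2*gamma).
    The gradient [x]_+ / gamma vanishes wherever x <= 0, i.e. wherever
    the original constraint is satisfied -- the source of sparsity."""
    xp = np.maximum(x, 0.0)
    return (xp @ xp) / (2.0 * gamma), xp / gamma
```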

###### Proposition 1

Smooth relaxed dual

$$\mathrm{OT}_{\Omega}(a,b) = \max_{\alpha \in \mathbb{R}^m,\, \beta \in \mathbb{R}^n} \alpha^\top a + \beta^\top b - \sum_{j=1}^n \delta_{\Omega}(\alpha + \beta_j \mathbf{1}_m - c_j). \tag{11}$$

The optimal solution $T^\star$ of (8) can be recovered from $(\alpha^\star, \beta^\star)$ by $t^\star_j = y^\star(\alpha^\star + \beta^\star_j \mathbf{1}_m - c_j)$.

For a proof, see Appendix A.1. Intuitively, the hard dual constraints $\alpha_i + \beta_j \le c_{i,j}$, which we can write as $\alpha + \beta_j \mathbf{1}_m - c_j \le 0$, are now relaxed with soft ones by substituting $\delta$ with $\delta_{\Omega}$.

### 3.2 Smoothed semi-dual formulation

We now derive the semi-dual of (8), i.e., the dual (11) with one of the two variables eliminated. Without loss of generality, we proceed to eliminate $\beta$. To do so, we use the notion of smoothed max operator. Notice that

$$\max(x) \coloneqq \max_{i \in [m]} x_i = \sup_{y \in \triangle^m} y^\top x \quad \forall x \in \mathbb{R}^m. \tag{12}$$

This is indeed true, since the supremum of a linear form over the simplex is always achieved at one of its vertices. To define a smoothed max operator (Nesterov, 2005), we take the conjugate of $\Omega$, this time restricted to the simplex:

$$\max{}_{\Omega}(x) \coloneqq \sup_{y \in \triangle^m} y^\top x - \Omega(y). \tag{13}$$

If $\Omega$ is $\gamma$-strongly convex over $\triangle^m$, then $\max_{\Omega}$ is $\frac{1}{\gamma}$-smooth and its gradient is $\nabla \max_{\Omega}(x) = y^\star(x)$, where $y^\star(x)$ is the maximizer in (13). We next show that $\max_{\Omega}$ plays a crucial role in expressing the conjugate of $\mathrm{OT}_{\Omega}$.

###### Lemma 1

Conjugate of $\mathrm{OT}_{\Omega}$ w.r.t. its first argument

$$\mathrm{OT}^*_{\Omega}(\alpha, b) = \sup_{a \in \triangle^m} \alpha^\top a - \mathrm{OT}_{\Omega}(a,b) = \sum_{j=1}^n b_j \max{}_{\Omega_j}(\alpha - c_j), \tag{14}$$

where $\Omega_j(y) \coloneqq \Omega(b_j y)/b_j$.

A proof is given in Appendix A.2. With the conjugate, we can now easily express the semi-dual of (8), which involves a smooth optimization problem in $\alpha \in \mathbb{R}^m$.

###### Proposition 2

Smoothed semi-dual

$$\mathrm{OT}_{\Omega}(a,b) = \max_{\alpha \in \mathbb{R}^m} \alpha^\top a - \mathrm{OT}^*_{\Omega}(\alpha, b). \tag{15}$$

The optimal solution $T^\star$ of (8) can be recovered from $\alpha^\star$ by $t^\star_j = b_j \nabla \max_{\Omega_j}(\alpha^\star - c_j)$.

Proof. $\mathrm{OT}_{\Omega}(a,b)$ is a closed and convex function of $a$. It is therefore equal to its biconjugate, which is precisely (15).

We can interpret this semi-dual as a variant of (7), where the max operator has been replaced with its smoothed counterpart, $\max_{\Omega}$. Note that $\alpha^\star$, as obtained by solving the smoothed dual (11) or semi-dual (15), is the gradient of $\mathrm{OT}_{\Omega}$ w.r.t. $a$ when it is unique, or a sub-gradient otherwise. This is useful when learning with $\mathrm{OT}_{\Omega}$ as a loss, as done with entropic regularization in (Frogner et al., 2015).

Solving the optimization problems. The dual and semi-dual we derived are unconstrained, differentiable and concave optimization problems. They can therefore be solved using gradient-based algorithms, as long as we know how to compute $\delta_{\Omega}$, $\max_{\Omega}$ and their gradients. In our experiments, we use L-BFGS (Liu and Nocedal, 1989) for both the dual and semi-dual formulations.

### 3.3 Closed-form expressions

We derive in this section closed-form expressions for $\delta_{\Omega}$, $\max_{\Omega}$ and their gradients, for specific choices of $\Omega$.

Negative entropy. We choose $\Omega(y) = -\gamma H(y)$, where $H(y) = -\sum_i y_i \log y_i$ is the entropy. For that choice, we get analytical expressions for $\delta_{\Omega}$, $\max_{\Omega}$ and their gradients (cf. Table 1). Since $\Omega$ is $\gamma$-strongly convex w.r.t. the 1-norm over $\triangle^m$ (Shalev-Shwartz, 2007, Lemma 16), $\max_{\Omega}$ is $\frac{1}{\gamma}$-strongly smooth w.r.t. the $\infty$-norm. However, since $\Omega$ is only strictly convex over $\mathbb{R}^m_+$, $\delta_{\Omega}$ is differentiable but not smooth. The dual and semi-dual with this $\Omega$ were derived in (Cuturi and Doucet, 2014) and (Genevay et al., 2016), respectively.
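For intuition, with this choice the smoothed max reduces to a scaled log-sum-exp with a softmax gradient, a standard identity; a minimal sketch:

```python
import numpy as np

def max_entropy(x, gamma=1.0):
    """Entropic smoothed max (13) with Omega(y) = -gamma * H(y):
    max_Omega(x) = gamma * log(sum_i exp(x_i / gamma)),
    with gradient softmax(x / gamma). The gradient is strictly
    positive everywhere -- hence dense transportation plans."""
    z = x / gamma
    m = z.max()                       # log-sum-exp stabilization
    e = np.exp(z - m)
    return gamma * (m + np.log(e.sum())), e / e.sum()
```

As `gamma` shrinks, the value approaches the hard max and the gradient concentrates on the argmax, but it never becomes exactly sparse.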

Next, we present two choices of $\Omega$ that induce sparsity in transportation plans. The resulting dual and semi-dual expressions are new, to our knowledge.

Squared 2-norm. We choose $\Omega(y) = \frac{\gamma}{2}\|y\|^2$. We again obtain closed-form expressions for $\delta_{\Omega}$, $\max_{\Omega}$ and their gradients (cf. Table 1). Since $\Omega$ is $\gamma$-strongly convex w.r.t. the 2-norm over $\mathbb{R}^m$, both $\delta_{\Omega}$ and $\max_{\Omega}$ are $\frac{1}{\gamma}$-strongly smooth w.r.t. the 2-norm. Projecting a vector onto the simplex, as required to compute $\max_{\Omega}$ and its gradient, can be done exactly using the algorithm of (Michelot, 1986) and in $O(m)$ expected time using the randomized pivot algorithm of (Duchi et al., 2008). Squared 2-norm regularization can output exactly sparse transportation plans (the primal-dual relationship for (11) is $t^\star_j = [(\alpha^\star + \beta^\star_j \mathbf{1}_m - c_j)/\gamma]_+$) and is numerically stable without any particular implementation trick.
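The simplex projection and the resulting smoothed max for this $\Omega$ can be sketched as follows; for simplicity we use a sort-based projection rather than the algorithms cited above:

```python
import numpy as np

def projection_simplex(v):
    """Euclidean projection onto the probability simplex via sorting
    (O(m log m)); exact, and the result can contain exact zeros."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    idx = np.arange(1, len(v) + 1)
    rho = np.nonzero(u * idx > css - 1.0)[0][-1]
    theta = (css[rho] - 1.0) / (rho + 1.0)
    return np.maximum(v - theta, 0.0)

def max_l2(x, gamma=1.0):
    """Smoothed max (13) with Omega(y) = (gamma/2) * ||y||^2.
    The maximizer is y* = proj_simplex(x / gamma), which can have
    exact zeros -- hence sparse transportation plans."""
    y = projection_simplex(x / gamma)
    return x @ y - 0.5 * gamma * (y @ y), y
```

Unlike the entropic smoothed max, `max_l2` satisfies $\max(x) - \gamma/2 \le \max_\Omega(x) \le \max(x)$ and its gradient is typically sparse.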

Group lasso. Courty et al. (2016) recently proposed to use $\Omega(y) = -\gamma H(y) + \mu \sum_{G \in \mathcal{G}} \|y_G\|$, where $y_G$ denotes the subvector of $y$ restricted to the set of indices $G$, and showed that this regularization improves accuracy in the context of domain adaptation. Since $\Omega$ includes a negative entropy term, the same remarks as for negative entropy apply regarding the differentiability of $\delta_{\Omega}$ and the smoothness of $\max_{\Omega}$. Unfortunately, a closed-form solution is available for neither (10) nor (13). However, since the log keeps $y$ in the strictly positive orthant and $\Omega$ is differentiable everywhere in that orthant, we can use any proximal gradient algorithm to solve these problems to arbitrary precision.

A drawback of this choice of $\Omega$, however, is that group sparsity is never truly achieved. To address this issue, we propose to use $\Omega(y) = \frac{\gamma}{2}\|y\|^2 + \mu \sum_{G \in \mathcal{G}} \|y_G\|$ instead. For that choice, $\delta_{\Omega}$ is smooth and is equal to

$$\delta_{\Omega}(x) = x^\top y^\star - \Omega(y^\star), \tag{16}$$

where $y^\star$ decomposes over groups and equals

$$y^\star_G = \operatorname*{argmin}_{y_G \ge 0}\; \frac{1}{2}\left\|y_G - x_G/\gamma\right\|^2 + \mu \|y_G\| = \nabla \delta_{\Omega}(x)_G. \tag{17}$$

As noted in the context of group-sparse NMF (Kim et al., 2012), (17) admits a closed-form solution

$$y^\star_G = \operatorname*{argmin}_{y_G}\; \frac{1}{2}\left\|y_G - x^+_G\right\|^2 + \mu \|y_G\| = \left[1 - \frac{\mu}{\|x^+_G\|}\right]_+ x^+_G, \tag{18}$$

where we defined $x^+_G \coloneqq [x_G]_+$. We have thus obtained an efficient way to compute exact gradients of $\delta_{\Omega}$, making it possible to solve the dual using gradient-based algorithms. In contrast, Courty et al. (2016) use a generalized conditional gradient algorithm whose iterations require expensive calls to Sinkhorn. Finally, because $t^\star_j = y^\star(\alpha^\star + \beta^\star_j \mathbf{1}_m - c_j)$, the obtained transportation plan will be truly group-sparse.
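The closed form (18) is a per-group soft-thresholding of $[x_G]_+$; a minimal sketch (our own helper, with groups given as index lists):

```python
import numpy as np

def group_shrink(x, groups, mu):
    """Block soft-thresholding as in (18): for each group G,
    y*_G = [1 - mu / ||[x_G]_+||]_+ [x_G]_+.
    Groups whose positive part has norm at most mu are zeroed out
    entirely -- the source of group sparsity."""
    y = np.zeros_like(x)
    for G in groups:
        xp = np.maximum(x[G], 0.0)        # [x_G]_+
        norm = np.linalg.norm(xp)
        if norm > mu:
            y[G] = (1.0 - mu / norm) * xp
    return y
```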

## 4 Relaxed primal ↔ Strong dual

We now explore the opposite direction for defining smooth OT problems that retain sparse transportation plans: replacing the marginal constraints in the primal with approximate constraints. When relaxing both marginal constraints, we define the following formulation:

###### Definition 2

Relaxed smooth primal

$$\mathrm{ROT}_{\Phi}(a,b) \coloneqq \min_{T \ge 0} \; \langle T, C\rangle + \frac{1}{2}\Phi(T\mathbf{1}_n, a) + \frac{1}{2}\Phi(T^\top \mathbf{1}_m, b), \tag{19}$$

where $\Phi$ is a smooth divergence measure.

We may also relax only one of the marginal constraints:

###### Definition 3

Semi-relaxed smooth primal

$$\widetilde{\mathrm{ROT}}_{\Phi}(a,b) \coloneqq \min_{T \ge 0,\; T^\top \mathbf{1}_m = b} \; \langle T, C\rangle + \Phi(T\mathbf{1}_n, a), \tag{20}$$

where $\Phi$ is defined as in Definition 2.

For both (19) and (20), the transportation plans will typically be sparse. As discussed in more detail in §7, these formulations are similar to (Frogner et al., 2015; Chizat et al., 2016), with the key difference that we do not regularize $T$ with an entropic term. In addition, for $\Phi$, we propose to use the squared Euclidean distance $\Phi(x,y) = \frac{1}{2\gamma}\|x - y\|^2$, which is $\frac{1}{\gamma}$-smooth, while these works use a generalized Kullback-Leibler (KL) divergence, which is not smooth. Relaxing the marginal constraints is useful when normalizing input measures to unit mass is not suitable (Gramfort et al., 2015), or to allow for only partial displacement of mass. Relaxing only one of the two constraints is useful in color transfer (Rabin et al., 2014), where we would like all the probability mass of the source image to be accounted for, but not necessarily that of the reference image.

Dual interpretation. As we show in Appendix A.3, in the case $\Phi(x,y) = \frac{1}{2\gamma}\|x - y\|^2$, the dual of (19) can be interpreted as the original dual with additional squared 2-norm regularization on the dual variables $\alpha$ and $\beta$. For the dual of (20), the additional regularization is on $\alpha$ only (on the original dual or, equivalently, on the original semi-dual). For that choice of $\Phi$, the duals of (19) and (20) are strongly convex. The dual formulations are crucial to derive our bounds in §5.

Solving the optimization problems. While the relaxed and semi-relaxed primals (19) and (20) are still constrained problems, it is much easier to project on their constraint domains than on $\mathcal{U}(a,b)$. For the relaxed primal, in our experiments we use L-BFGS-B, a variant of L-BFGS suitable for box-constrained problems (Byrd et al., 1995). For the semi-relaxed primal, we use FISTA (Beck and Teboulle, 2009). Since the constraint domain of (20) is a Cartesian product of scaled simplices, we can easily project any $T$ on it by column-wise projection on the (scaled) simplex. Although not explored in this paper, the block Frank-Wolfe algorithm (Lacoste-Julien et al., 2012) is also a good fit for the semi-relaxed primal.
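As an illustration of how (19) can be fed to such a box-constrained solver, the sketch below computes the objective and gradient for the choice $\Phi(x,y) = \frac{1}{2\gamma}\|x - y\|^2$ (the exact scaling of $\Phi$ here is ours, for illustration):

```python
import numpy as np

def rot_obj_grad(T_flat, C, a, b, gamma):
    """Objective and gradient of the relaxed primal (19) with
    Phi(x, y) = ||x - y||^2 / (2 * gamma); suitable for
    scipy.optimize.minimize(..., method="L-BFGS-B") with bounds
    enforcing T >= 0."""
    m, n = C.shape
    T = T_flat.reshape(m, n)
    ra = T.sum(axis=1) - a            # row-marginal residual T 1_n - a
    rb = T.sum(axis=0) - b            # column-marginal residual T^T 1_m - b
    obj = (T * C).sum() + (ra @ ra + rb @ rb) / (4.0 * gamma)
    grad = C + (ra[:, None] + rb[None, :]) / (2.0 * gamma)
    return obj, grad.ravel()
```

The gradient is dense but cheap, an $m \times n$ array of marginal residuals added to $C$, which is what makes quasi-Newton solvers practical here.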

## 5 Theoretical bounds

Convergence rates. The dual (11) is not smooth in $\alpha$ and $\beta$ when using entropic regularization, but it is when using the squared 2-norm, with smoothness constant proportional to $\frac{1}{\gamma}$. The semi-dual (15) is smooth for both regularizations, with the same constant of $\frac{1}{\gamma}$, albeit not in the same norm. The relaxed and semi-relaxed primals (19) and (20) are both smooth when using $\Phi(x,y) = \frac{1}{2\gamma}\|x - y\|^2$. However, none of these problems are strongly convex. From standard convergence analysis of (projected) gradient descent for smooth but non-strongly convex problems, the number of iterations to reach an $\epsilon$-accurate solution w.r.t. the smoothed problems is $O(1/\epsilon)$, or $O(1/\sqrt{\epsilon})$ with Nesterov acceleration.

Approximation error. Because the smoothed problems approach unregularized OT as $\gamma \to 0$, there is a trade-off between the convergence rate w.r.t. the smoothed problem and the approximation error w.r.t. unregularized OT. A natural question is then which smoothed formulations and which regularizations achieve a better approximation error. Our first theorem bounds $\mathrm{OT}_{\Omega}(a,b) - \mathrm{OT}(a,b)$ in the case of entropic and squared 2-norm regularization.

###### Theorem 1

Approximation error of $\mathrm{OT}_{\Omega}$

Let $\Omega$ be either negative-entropy or squared 2-norm regularization with weight $\gamma > 0$. Then,

$$\gamma L \le \mathrm{OT}_{\Omega}(a,b) - \mathrm{OT}(a,b) \le \gamma U, \tag{21}$$

where $L$ and $U$ are constants that depend on the regularization (negative entropy vs. squared 2-norm).

Proof is given in Appendix A.4. Our result suggests that, for the same $\gamma$, the approximation error can often be smaller with squared 2-norm than with entropic regularization. In particular, this holds whenever the squared 2-norm constants are smaller than the entropic ones, which is often the case in practice, since $\|T\|^2 \le 1$ for any $T \in \mathcal{U}(a,b)$, while the entropy of a transportation plan can be as large as $\log(mn)$. Our second theorem bounds $\mathrm{OT}(a,b) - \mathrm{ROT}_{\Phi}(a,b)$ and $\mathrm{OT}(a,b) - \widetilde{\mathrm{ROT}}_{\Phi}(a,b)$ when $\Phi$ is the squared Euclidean distance.

###### Theorem 2

Approximation error of $\mathrm{ROT}_{\Phi}$, $\widetilde{\mathrm{ROT}}_{\Phi}$

Let $\Phi(x, y) = \frac{1}{2\gamma}\|x - y\|^2$ with $\gamma > 0$. Then,

$$0 \le \mathrm{OT}(a,b) - \mathrm{ROT}_{\Phi}(a,b) \le \gamma L \tag{22}$$

$$0 \le \mathrm{OT}(a,b) - \widetilde{\mathrm{ROT}}_{\Phi}(a,b) \le \gamma \tilde{L} \tag{23}$$

where we defined

$$L \coloneqq \frac{\|C\|_\infty^2 \min\{\nu_1 + n,\, \nu_2 + m\}}{2}, \qquad \nu_1 \coloneqq \max\Big\{(2 + n/m)\,\big\|a^{-1}\big\|_\infty,\; \big\|b^{-1}\big\|_\infty\Big\},$$

$$\nu_2 \coloneqq \max\Big\{\big\|a^{-1}\big\|_\infty,\; (2 + m/n)\,\big\|b^{-1}\big\|_\infty\Big\}, \qquad \tilde{L} \coloneqq 2\,\|C\|_\infty^2\, \big\|a^{-1}\big\|_\infty^2, \tag{24}$$

where $a^{-1}$ and $b^{-1}$ are taken element-wise.

Proof is given in Appendix A.5. While the bound for $\widetilde{\mathrm{ROT}}_{\Phi}$ is better than that for $\mathrm{ROT}_{\Phi}$, both are worse than that for $\mathrm{OT}_{\Omega}$, suggesting that the smoothed dual formulations are the way to go when low approximation error w.r.t. unregularized OT is important.

## 6 Experimental results

We showcase our formulations on color transfer, which is a classical OT application (Pitié et al., 2007). More experimental results are presented in Appendix C.

### 6.1 Application to color transfer

Experimental setup. Given an image of height $h$ and width $w$, we represent its pixels in RGB color space. We apply k-means clustering to quantize the image down to $m$ colors. This produces $m$ color centroids $x_1, \dots, x_m \in \mathbb{R}^3$. Counting how many pixels were assigned to each centroid and normalizing by the total number of pixels gives us a color histogram $a \in \triangle^m$. We repeat the same process with a second image to obtain $y_1, \dots, y_n \in \mathbb{R}^3$ and $b \in \triangle^n$. Next, we apply any of the proposed methods with cost matrix $c_{i,j} = d(x_i, y_j)$, where $d$ is some discrepancy measure, to obtain a (possibly relaxed) transportation plan $T^\star$. For each color centroid $x_i$, we apply a barycentric projection to obtain a new color centroid

$$\hat{x}_i \coloneqq \operatorname*{argmin}_{x \in \mathbb{R}^3}\; \sum_{j=1}^n t^\star_{i,j}\, d(x, y_j). \tag{25}$$

When $d(x, y) = \|x - y\|^2$, as used in our experiments, the above admits the closed-form solution $\hat{x}_i = \sum_{j=1}^n t^\star_{i,j}\, y_j \,\big/ \sum_{j=1}^n t^\star_{i,j}$. Finally, we use the new color $\hat{x}_i$ for all pixels assigned to $x_i$. The same process can be performed with respect to the $y_j$, in order to transfer the colors in the other direction. We use two public domain images, “fall foliage” by Bernard Spragg and “comunion” by Abel Maestro Garcia, and reduce the number of colors of each image with k-means. We compare smoothed dual approaches and (semi-)relaxed primal approaches. For the semi-relaxed primal, we also compared with $\Phi$ set to the generalized KL divergence $\mathrm{KL}(x\,\|\,y) = \sum_i x_i \log(x_i/y_i) - x_i + y_i$, a choice that is differentiable but not smooth. We ran the aforementioned solvers for a fixed maximum number of epochs.
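For the squared Euclidean $d$, the barycentric projection (25) reduces to a row-normalized matrix product; a minimal sketch:

```python
import numpy as np

def barycentric_projection(T, Y):
    """Closed-form solution of (25) for d(x, y) = ||x - y||^2:
    x_hat_i = sum_j t_ij * y_j / sum_j t_ij, i.e. each source centroid
    moves to the T-weighted average of the target centroids."""
    row_sums = T.sum(axis=1, keepdims=True)
    return (T @ Y) / np.maximum(row_sums, 1e-16)  # guard empty rows
```

With a relaxed plan, rows of `T` need not sum to the source histogram, which is why the normalization uses the actual row sums.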

Results. Our results are presented in Figure 2. All formulations clearly produced better results than unregularized OT. With the exception of the entropy-smoothed semi-dual formulation, all formulations produced extremely sparse transportation plans. The semi-relaxed primal formulation with $\Phi$ set to the squared Euclidean distance was the only one to produce colors with a darker tone.

### 6.2 Solver and objective comparison

We compared the smoothed dual and semi-dual when using squared 2-norm regularization. In addition to L-BFGS on both objectives, we also compared with alternating minimization in the dual. As we show in Appendix B, exact block minimization w.r.t. $\alpha$ and $\beta$ can be carried out by projection onto the simplex.

Results. We ran the comparison using the same data as in §6.1. Results are indicated in Figure 3. When the problem is loosely regularized, we made two key findings: i) L-BFGS converges much faster in the semi-dual than in the dual, and ii) alternating minimization converges extremely slowly. The reason for i) could be the better smoothness constant of the semi-dual (cf. §5). Since alternating minimization and the semi-dual have roughly the same cost per iteration (cf. Appendix B), the reason for ii) is not iteration cost but a convergence issue of alternating minimization itself. When using larger regularization, L-BFGS appears to converge slightly faster on the dual than on the semi-dual, likely thanks to the dual's cheaper-to-compute gradients.

### 6.3 Approximation error comparison

We compared empirically the approximation error of smoothed formulations w.r.t. unregularized OT according to four criteria: transportation plan error, marginal constraint error, value error and regularized value error (cf. Figure 4 for precise definitions). For the dual approaches, we solved the smoothed semi-dual objective (15), since, as discussed in §5, it has the same smoothness constant of $\frac{1}{\gamma}$ for both entropic and squared 2-norm regularizations, implying similar convergence rates in theory. In addition, in the case of entropic regularization, the expressions of $\max_{\Omega}$ and its gradient are trivial to stabilize numerically using standard log-sum-exp implementation tricks.

Results. We ran the comparison using the same data as in §6.1. Results are indicated in Figure 4. For the transportation plan error and the (regularized) value error, entropic regularization required a 100 times smaller $\gamma$ to achieve the same error. This confirms, as suggested by Theorem 1, that squared 2-norm regularization is typically tighter. Unsurprisingly, the semi-relaxed primal was tighter than the relaxed primal on all four criteria. A runtime comparison of smoothed formulations is also important; however, a rigorous comparison would require carefully engineered implementations and is therefore left for future work.

## 7 Related work

Regularized OT. Problems similar to (8) for general $\Omega$ were considered in (Dessein et al., 2016). Their work focuses on strictly convex and differentiable $\Omega$ for which there exists an associated Bregman divergence. Following (Benamou et al., 2015), they show that (8) can then be reformulated as a Bregman projection onto the transportation polytope and solved using Dykstra's algorithm (1985). While Dykstra's algorithm can be interpreted implicitly as a two-block alternating minimization scheme on the dual problem, neither the dual nor the semi-dual expressions were derived. These expressions allow us to make use of arbitrary solvers, including quasi-Newton ones like L-BFGS, which, as we showed empirically, converge much faster on loosely regularized problems. Our framework can also accommodate non-differentiable regularizations for which there does not exist an associated Bregman divergence, such as those that include a group-lasso term. Squared 2-norm regularization was recently considered in (Li et al., 2016), as well as in (Essid and Solomon, 2017), but for a reformulation of the Wasserstein distance of order 1 as a min-cost flow problem along the edges of a graph.

Relaxed OT. There have been a large number of proposals to extend OT to unbalanced positive measures. Static formulations with approximate marginal constraints based on the KL divergence have been proposed in (Frogner et al., 2015; Chizat et al., 2016). The main difference with our work is that these formulations include an additional entropic regularization on $T$. While this entropic term enables a Sinkhorn-like algorithm, it also prevents obtaining a sparse $T^\star$ and requires the tuning of an additional hyper-parameter. Relaxing only one of the two marginal constraints with an inequality was investigated for color transfer in (Rabin et al., 2014). Benamou (2003) considered an interpolation between OT and squared Euclidean distances:

$$\min_{x \in \triangle^m}\; \mathrm{OT}(x, b) + \frac{1}{2\gamma}\|x - a\|^2. \tag{26}$$

While at first sight this looks quite different, it is in fact equivalent to our semi-relaxed primal formulation when $\Phi(x, y) = \frac{1}{2\gamma}\|x - y\|^2$, since (26) is equal to

$$\min_{x \in \triangle^m}\; \min_{\substack{T \ge 0 \\ T\mathbf{1}_n = x \\ T^\top \mathbf{1}_m = b}} \langle T, C\rangle + \frac{1}{2\gamma}\|x - a\|^2 \tag{27}$$

$$= \min_{\substack{T \ge 0 \\ T^\top \mathbf{1}_m = b}} \langle T, C\rangle + \frac{1}{2\gamma}\|T\mathbf{1}_n - a\|^2 = \widetilde{\mathrm{ROT}}_{\Phi}(a,b). \tag{28}$$

However, the bounds in §5 are, to our knowledge, new. A similar formulation, but with the squared penalty replaced by a group-lasso penalty, was considered in the context of convex clustering (Carli et al., 2013).

Smoothed LPs. Smoothed linear programs have been investigated in other contexts. The two closest works to ours are (Meshi et al., 2015b) and (Meshi et al., 2015a), in which smoothed LP relaxations based on the squared 2-norm are proposed for maximum a-posteriori inference. One innovation we make compared to these works is to abstract away the regularization by introducing the $\delta_{\Omega}$ and $\max_{\Omega}$ functions.

## 8 Conclusion

We proposed in this paper to regularize both the primal and dual OT formulations with a strongly convex term, and showed that this corresponds to relaxing the dual and primal constraints with smooth approximations. There are several important avenues for future work. The conjugate expression (14) should be useful for barycenter computation (Cuturi and Peyré, 2016) or dictionary learning (Rolet et al., 2016) with squared 2-norm instead of entropic regularization. On the theoretical side, while we provided convergence guarantees w.r.t. the OT distance value as the regularization vanishes, which suggested the advantage of squared 2-norm regularization, it would also be important to study convergence w.r.t. the transportation plan, as was done for entropic regularization by Cominetti and San Martín (1994). Finally, studying optimization algorithms that can cope with large-scale data is important. We believe SAGA (Defazio et al., 2014) is a good candidate, since it is stochastic, supports proximity operators, is adaptive to non-strongly convex problems and can be parallelized (Leblond et al., 2017).

## Acknowledgements

We thank Arthur Mensch and the anonymous reviewers for constructive comments.

## References

• Kantorovich (1942) Leonid Kantorovich. On the transfer of masses (in Russian). Doklady Akademii Nauk, 37(2):227–229, 1942.
• Altschuler et al. (2017) Jason Altschuler, Jonathan Weed, and Philippe Rigollet. Near-linear time approximation algorithms for optimal transport via Sinkhorn iteration. In Proc. of NIPS, 2017.
• Arjovsky et al. (2017) Martin Arjovsky, Soumith Chintala, and Léon Bottou. Wasserstein generative adversarial networks. In Proc. of ICML, volume 70, pages 214–223, 2017.
• Beck and Teboulle (2009) Amir Beck and Marc Teboulle. A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM Journal on Imaging Sciences, 2(1):183–202, 2009.
• Benamou (2003) Jean-David Benamou. Numerical resolution of an “unbalanced” mass transport problem. ESAIM: Mathematical Modelling and Numerical Analysis, 37(5):851–868, 2003.
• Benamou et al. (2015) Jean-David Benamou, Guillaume Carlier, Marco Cuturi, Luca Nenna, and Gabriel Peyré. Iterative bregman projections for regularized transportation problems. SIAM Journal on Scientific Computing, 37(2):A1111–A1138, 2015.
• Boyd and Vandenberghe (2004) Stephen Boyd and Lieven Vandenberghe. Convex Optimization. Cambridge University Press, 2004.
• Byrd et al. (1995) Richard H Byrd, Peihuang Lu, Jorge Nocedal, and Ciyou Zhu. A limited memory algorithm for bound constrained optimization. SIAM Journal on Scientific Computing, 16(5):1190–1208, 1995.
• Calvillo and Romero (2016) Gilberto Calvillo and David Romero. On the closest point to the origin in transportation polytopes. Discrete Appl. Math., 210:88–102, 2016.
• Carli et al. (2013) Francesca P Carli, Lipeng Ning, and Tryphon T Georgiou. Convex clustering via optimal mass transport. arXiv preprint arXiv:1307.5459, 2013.
• Chizat et al. (2016) Lenaic Chizat, Gabriel Peyré, Bernhard Schmitzer, and François-Xavier Vialard. Scaling algorithms for unbalanced transport problems. arXiv preprint arXiv:1607.05816, 2016.
• Cominetti and San Martín (1994) Roberto Cominetti and Jaime San Martín. Asymptotic analysis of the exponential penalty trajectory in linear programming. Mathematical Programming, 67(1-3):169–187, 1994.
• Courty et al. (2016) Nicolas Courty, Rémi Flamary, Devis Tuia, and Alain Rakotomamonjy. Optimal transport for domain adaptation. IEEE transactions on pattern analysis and machine intelligence, 2016.
• Cover and Thomas (2006) Thomas M. Cover and Joy A. Thomas. Elements of Information Theory. Wiley, 2006.
• Cuturi (2013) Marco Cuturi. Sinkhorn distances: Lightspeed computation of optimal transport. In Proc. of NIPS, pages 2292–2300. 2013.
• Cuturi and Doucet (2014) Marco Cuturi and Arnaud Doucet. Fast computation of wasserstein barycenters. In International Conference on Machine Learning, pages 685–693, 2014.
• Cuturi and Peyré (2016) Marco Cuturi and Gabriel Peyré. A smoothed dual approach for variational wasserstein problems. SIAM Journal on Imaging Sciences, 9(1):320–343, 2016.
• Danskin (1966) John M Danskin. The theory of max-min, with applications. SIAM Journal on Applied Mathematics, 14(4):641–664, 1966.
• Defazio et al. (2014) Aaron Defazio, Francis Bach, and Simon Lacoste-Julien. Saga: A fast incremental gradient method with support for non-strongly convex composite objectives. In Proc. of NIPS, pages 1646–1654, 2014.
• Dessein et al. (2016) Arnaud Dessein, Nicolas Papadakis, and Jean-Luc Rouas. Regularized optimal transport and the rot mover’s distance. arXiv preprint arXiv:1610.06447, 2016.
• Duchi et al. (2008) John Duchi, Shai Shalev-Shwartz, Yoram Singer, and Tushar Chandra. Efficient projections onto the l1-ball for learning in high dimensions. In Proc. of ICML, 2008.
• Dykstra (1985) Richard L Dykstra. An iterative procedure for obtaining i-projections onto the intersection of convex sets. The annals of Probability, pages 975–984, 1985.
• Essid and Solomon (2017) Montacer Essid and Justin Solomon. Quadratically-regularized optimal transport on graphs. arXiv preprint arXiv:1704.08200, 2017.
• Frogner et al. (2015) Charlie Frogner, Chiyuan Zhang, Hossein Mobahi, Mauricio Araya, and Tomaso A Poggio. Learning with a Wasserstein loss. In Proc. of NIPS, pages 2053–2061, 2015.
• Genevay et al. (2016) Aude Genevay, Marco Cuturi, Gabriel Peyré, and Francis Bach. Stochastic optimization for large-scale optimal transport. In Proc. of NIPS, pages 3440–3448. 2016.
• Gramfort et al. (2015) Alexandre Gramfort, Gabriel Peyré, and Marco Cuturi. Fast optimal transport averaging of neuroimaging data. In Proc. of International Conference on Information Processing in Medical Imaging, pages 261–272, 2015.
• Kakade et al. (2012) Sham M Kakade, Shai Shalev-Shwartz, and Ambuj Tewari. Regularization techniques for learning with matrices. Journal of Machine Learning Research, 13:1865–1890, 2012.
• Kim et al. (2012) Jingu Kim, Renato DC Monteiro, and Haesun Park. Group sparsity in nonnegative matrix factorization. In Proc. of SIAM International Conference on Data Mining, pages 851–862, 2012.
• Kusner et al. (2015) Matt Kusner, Yu Sun, Nicholas Kolkin, and Kilian Weinberger. From word embeddings to document distances. In Proc. of ICML, pages 957–966, 2015.
• Lacoste-Julien et al. (2012) Simon Lacoste-Julien, Martin Jaggi, Mark Schmidt, and Patrick Pletscher. Block-coordinate Frank-Wolfe optimization for structural SVMs. In Proc. of ICML, 2012.
• Leblond et al. (2017) Rémi Leblond, Fabian Pedregosa, and Simon Lacoste-Julien. ASAGA: Asynchronous parallel SAGA. In Proceedings of the 20th International Conference on Artificial Intelligence and Statistics, volume 54, pages 46–54, 2017.
• Léonard (2012) Christian Léonard. A survey of the Schrödinger problem and some of its connections with optimal transport. Discrete and Continuous Dynamical Systems, 34:1879–1920, 2012.
• Li et al. (2016) Wuchen Li, Stanley Osher, and Wilfrid Gangbo. A fast algorithm for earth mover’s distance based on optimal transport and l1 type regularization. arXiv preprint arXiv:1609.07092, 2016.
• Liu and Nocedal (1989) Dong C Liu and Jorge Nocedal. On the limited memory BFGS method for large scale optimization. Mathematical Programming, 45(1):503–528, 1989.
• Meshi et al. (2012) Ofer Meshi, Amir Globerson, and Tommi S. Jaakkola. Convergence rate analysis of MAP coordinate minimization algorithms. In Proc. of NIPS, pages 3014–3022. 2012.
• Meshi et al. (2015a) Ofer Meshi, Mehrdad Mahdavi, and Alexander G. Schwing. Smooth and strong: MAP inference with linear convergence. In Proc. of NIPS, 2015a.
• Meshi et al. (2015b) Ofer Meshi, Nathan Srebro, and Tamir Hazan. Efficient training of structured SVMs via soft constraints. In Proc. of AISTATS, pages 699–707, 2015b.
• Michelot (1986) Christian Michelot. A finite algorithm for finding the projection of a point onto the canonical simplex of ℝ^n. Journal of Optimization Theory and Applications, 50(1):195–200, 1986.
• Muzellec et al. (2017) Boris Muzellec, Richard Nock, Giorgio Patrini, and Frank Nielsen. Tsallis regularized optimal transport and ecological inference. In AAAI, pages 2387–2393, 2017.
• Nesterov (2005) Yu Nesterov. Smooth minimization of non-smooth functions. Mathematical Programming, 103(1):127–152, 2005.
• Peyré and Cuturi (2017) Gabriel Peyré and Marco Cuturi. Computational Optimal Transport. 2017.
• Pitié et al. (2007) François Pitié, Anil C Kokaram, and Rozenn Dahyot. Automated colour grading using colour distribution transfer. Computer Vision and Image Understanding, 107(1):123–137, 2007.
• Rabin et al. (2014) Julien Rabin, Sira Ferradans, and Nicolas Papadakis. Adaptive color transfer with relaxed optimal transport. In Proc. of International Conference on Image Processing, pages 4852–4856, 2014.
• Rolet et al. (2016) Antoine Rolet, Marco Cuturi, and Gabriel Peyré. Fast dictionary learning with a smoothed wasserstein loss. In Proc. of AISTATS, pages 630–638, 2016.
• Romero (1990) David Romero. Easy transportation-like problems on k-dimensional arrays. Journal of Optimization Theory and Applications, 66:137–147, 1990.
• Schmitzer (2016) Bernhard Schmitzer. Stabilized sparse scaling algorithms for entropy regularized transport problems. arXiv preprint arXiv:1610.06519, 2016.
• Schrödinger (1931) Erwin Schrödinger. Über die Umkehrung der Naturgesetze. Phys. Math., 144:144–153, 1931.
• Shalev-Shwartz (2007) Shai Shalev-Shwartz. Online Learning: Theory, Algorithms, and Applications. PhD thesis, The Hebrew University of Jerusalem, 2007.
• Sinkhorn and Knopp (1967) Richard Sinkhorn and Paul Knopp. Concerning nonnegative matrices and doubly stochastic matrices. Pacific Journal of Mathematics, 21(2):343–348, 1967.
• Solomon et al. (2014) Justin Solomon, Raif Rustamov, Leonidas Guibas, and Adrian Butscher. Wasserstein propagation for semi-supervised learning. In Proc. of ICML, pages 306–314, 2014.
• Villani (2003) Cédric Villani. Topics in optimal transportation. Number 58. American Mathematical Soc., 2003.

## Appendix A Proofs

### a.1 Derivation of the smooth relaxed dual

Recall that

$$\mathrm{OT}_\Omega(a,b) = \min_{T \in \mathcal{U}(a,b)} \sum_{j=1}^n t_j^\top c_j + \Omega(t_j). \tag{29}$$

We now add Lagrange multipliers for the two equality constraints but keep the constraint $T \ge 0$ explicit:

$$\mathrm{OT}_\Omega(a,b) = \min_{T \ge 0}\, \max_{\alpha \in \mathbb{R}^m, \beta \in \mathbb{R}^n} \sum_{j=1}^n t_j^\top c_j + \Omega(t_j) + \alpha^\top(T\mathbf{1}_n - a) + \beta^\top(T^\top \mathbf{1}_m - b). \tag{30}$$

Since (29) is a convex optimization problem with only linear equality and inequality constraints, Slater’s conditions reduce to feasibility (Boyd and Vandenberghe, 2004, §5.2.3) and hence strong duality holds:

$$\begin{aligned}
\mathrm{OT}_\Omega(a,b) &= \max_{\alpha \in \mathbb{R}^m, \beta \in \mathbb{R}^n}\, \min_{T \ge 0}\, \sum_{j=1}^n t_j^\top c_j + \Omega(t_j) + \alpha^\top(T\mathbf{1}_n - a) + \beta^\top(T^\top \mathbf{1}_m - b) && (31)\\
&= \max_{\alpha \in \mathbb{R}^m, \beta \in \mathbb{R}^n}\, \sum_{j=1}^n \min_{t_j \ge 0}\, t_j^\top(c_j + \alpha + \beta_j \mathbf{1}_m) + \Omega(t_j) - \alpha^\top a - \beta^\top b && (32)\\
&= \max_{\alpha \in \mathbb{R}^m, \beta \in \mathbb{R}^n}\, -\sum_{j=1}^n \max_{t_j \ge 0}\, t_j^\top(-c_j - \alpha - \beta_j \mathbf{1}_m) - \Omega(t_j) - \alpha^\top a - \beta^\top b && (33)\\
&= \max_{\alpha \in \mathbb{R}^m, \beta \in \mathbb{R}^n}\, \alpha^\top a + \beta^\top b - \sum_{j=1}^n \max_{t_j \ge 0}\, t_j^\top(\alpha + \beta_j \mathbf{1}_m - c_j) - \Omega(t_j), && (34)
\end{aligned}$$

where the last equality follows from the change of variables $(\alpha, \beta) \leftarrow (-\alpha, -\beta)$.

Finally, plugging in the expression of $\delta_\Omega$ given in (10) yields the claimed result.
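The inner maximization in (34) admits a closed form for some regularizers. As an illustrative sanity check (ours, not part of the paper's implementation): with squared 2-norm regularization $\Omega(t) = \frac{\gamma}{2}\|t\|^2$, the maximizer of $t^\top x - \Omega(t)$ over $t \ge 0$ is $t^* = [x]_+/\gamma$, giving the value $\frac{1}{2\gamma}\|[x]_+\|^2$. The snippet below verifies this against a bound-constrained numerical solver:

```python
import numpy as np
from scipy.optimize import minimize

def delta(x, gamma):
    # Closed form of max_{t >= 0} t.x - (gamma/2)||t||^2 when
    # Omega(t) = (gamma/2)||t||^2: the maximizer is t* = max(x, 0)/gamma.
    return np.sum(np.maximum(x, 0.0) ** 2) / (2.0 * gamma)

rng = np.random.RandomState(0)
x, gamma = rng.randn(5), 0.5

# Cross-check against L-BFGS-B on the same bound-constrained problem.
res = minimize(lambda t: -(t @ x - 0.5 * gamma * t @ t),
               x0=np.ones_like(x), bounds=[(0.0, None)] * x.size)
assert np.isclose(delta(x, gamma), -res.fun, atol=1e-5)
```

The closed form is what makes gradient-based solvers practical for the smoothed dual: no inner iterative solve is needed.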

### a.2 Derivation of the convex conjugate

The convex conjugate of $\mathrm{OT}_\Omega$ with respect to its first argument is

$$\mathrm{OT}^*_\Omega(g,b) = \sup_{a \in \triangle^m} g^\top a - \mathrm{OT}_\Omega(a,b). \tag{35}$$

Following a similar argument as (Cuturi and Peyré, 2016, Theorem 2.4), we have

$$\mathrm{OT}^*_\Omega(g,b) = \max_{\substack{T \ge 0\\ T^\top \mathbf{1}_m = b}} \langle T, g\mathbf{1}_n^\top - C\rangle - \sum_{j=1}^n \Omega(t_j). \tag{36}$$

Notice that this is an easier optimization problem than (8), since there are equality constraints in one direction only. Cuturi and Peyré (2016) showed that this problem admits a closed form in the case of entropic regularization. Here, we show how to compute $\mathrm{OT}^*_\Omega$ for any strongly convex regularization.

The problem clearly decomposes over columns and we can rewrite it as

$$\begin{aligned}
\mathrm{OT}^*_\Omega(g,b) &= \sum_{j=1}^n \max_{\substack{t_j \ge 0\\ t_j^\top \mathbf{1}_m = b_j}} t_j^\top(g - c_j) - \Omega(t_j) && (37)\\
&= \sum_{j=1}^n b_j \max_{\tau_j \in \triangle^m} \tau_j^\top(g - c_j) - \frac{1}{b_j}\Omega(b_j \tau_j) && (38)\\
&= \sum_{j=1}^n b_j\, \mathrm{max}_{\Omega_j}(g - c_j), && (39)
\end{aligned}$$

where we defined $\Omega_j(\tau) := \Omega(b_j \tau)/b_j$ and where $\mathrm{max}_\Omega$ is defined in (13).
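For squared 2-norm regularization, $\mathrm{max}_\Omega$ reduces to a Euclidean projection onto the simplex: with $\Omega = \frac{\gamma}{2}\|\cdot\|^2$, maximizing $\tau^\top x - \frac{\gamma}{2}\|\tau\|^2$ over $\triangle^m$ amounts (by completing the square) to projecting $x/\gamma$ onto $\triangle^m$. A minimal sketch (ours), using the sort-based projection in the spirit of Michelot (1986) and Duchi et al. (2008):

```python
import numpy as np

def project_simplex(v):
    # Euclidean projection onto the probability simplex via sorting
    # (cf. Michelot 1986; Duchi et al. 2008).
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u * np.arange(1, v.size + 1) > css - 1.0)[0][-1]
    theta = (css[rho] - 1.0) / (rho + 1.0)
    return np.maximum(v - theta, 0.0)

def max_omega(x, gamma):
    # max_{tau in simplex} tau.x - (gamma/2)||tau||^2:
    # completing the square shows the maximizer is proj(x / gamma).
    tau = project_simplex(x / gamma)
    return tau @ x - 0.5 * gamma * tau @ tau, tau

val, tau = max_omega(np.array([0.3, -0.1, 0.8]), gamma=1.0)
assert np.isclose(tau.sum(), 1.0) and np.all(tau >= 0.0)
```

Each column of the conjugate in (39) therefore costs one $O(m \log m)$ projection.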

### a.3 Expression of the strongly-convex duals

Using a derivation similar to the one above, we obtain the duals of (19) and (20).

###### Proposition 3

Duals of (19) and (20)

$$\begin{aligned}
\mathrm{ROT}_\Phi(a,b) &= \max_{\alpha,\beta \in \mathcal{P}(C)} -\tfrac{1}{2}\Phi^*(-2\alpha, a) - \tfrac{1}{2}\Phi^*(-2\beta, b) && (40)\\
\widetilde{\mathrm{ROT}}_\Phi(a,b) &= \max_{\alpha,\beta \in \mathcal{P}(C)} -\Phi^*(-\alpha, a) + \beta^\top b && (41)\\
&= \max_{\alpha \in \mathbb{R}^m} -\Phi^*(-\alpha, a) - \sum_{j=1}^n b_j \max_{i \in [m]} (\alpha_i - c_{i,j}), && (42)
\end{aligned}$$

where $\Phi^*(\cdot, y)$ is the conjugate of $\Phi(\cdot, y)$ in its first argument.

The duals are strongly convex if $\Phi$ is smooth.
When $\Phi(x,y) = \frac{1}{2\gamma}\|x - y\|^2$, we have $\Phi^*(u,y) = u^\top y + \frac{\gamma}{2}\|u\|^2$. Plugging that expression in the above, we get

$$\mathrm{ROT}_\Phi(a,b) = \max_{\alpha,\beta \in \mathcal{P}(C)} \alpha^\top a + \beta^\top b - \gamma\left(\|\alpha\|^2 + \|\beta\|^2\right) \tag{43}$$

and

$$\begin{aligned}
\widetilde{\mathrm{ROT}}_\Phi(a,b) &= \max_{\alpha,\beta \in \mathcal{P}(C)} \alpha^\top a + \beta^\top b - \frac{\gamma}{2}\|\alpha\|^2 && (44)\\
&= \max_{\alpha \in \mathbb{R}^m} \alpha^\top a - \sum_{j=1}^n b_j \max_{i \in [m]} (\alpha_i - c_{i,j}) - \frac{\gamma}{2}\|\alpha\|^2. && (45)
\end{aligned}$$

This corresponds to the original dual and semi-dual with squared 2-norm regularization on the variables.
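The conjugate expression used in this derivation can be sanity-checked numerically. The snippet below (ours, not from the paper) assumes $\Phi(x,y) = \frac{1}{2\gamma}\|x-y\|^2$, the choice consistent with (43)–(45), and compares the closed form $\Phi^*(u,y) = u^\top y + \frac{\gamma}{2}\|u\|^2$ with an unconstrained solver:

```python
import numpy as np
from scipy.optimize import minimize

gamma = 0.1
rng = np.random.RandomState(1)
u, y = rng.randn(4), rng.rand(4)

# Claimed conjugate of Phi(x, y) = ||x - y||^2 / (2 gamma) in x.
closed_form = u @ y + 0.5 * gamma * (u @ u)

# Numerical sup_x u.x - Phi(x, y); the maximizer is x* = y + gamma * u.
res = minimize(lambda x: -(u @ x - np.sum((x - y) ** 2) / (2.0 * gamma)),
               x0=np.zeros_like(y))
assert np.isclose(closed_form, -res.fun, atol=1e-4)
```

The $\frac{\gamma}{2}\|u\|^2$ term in the conjugate is what produces the squared 2-norm penalties appearing in (43)–(45).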

### a.4 Proof of Theorem 1

Before proving the theorem, we introduce two lemmas, which bound the regularization value achieved by any transportation plan.

###### Lemma 2

Bounding the entropy of a transportation plan

Let $H(T) := -\sum_{i=1}^m \sum_{j=1}^n t_{i,j} \log t_{i,j}$ be the joint entropy.
Let $a \in \triangle^m$, $b \in \triangle^n$, and $T \in \mathcal{U}(a,b)$. Then,

$$\max\{H(a), H(b)\} \le H(T) \le H(a) + H(b). \tag{46}$$

Proof. See, for instance, (Cover and Thomas, 2006).

Together with $H(a) \le \log m$ and $H(b) \le \log n$, this provides lower and upper bounds for the entropy of a transportation plan. As noted in (Cuturi, 2013), the upper bound is tight since

$$\max_{T \in \mathcal{U}(a,b)} H(T) = H(ab^\top) = H(a) + H(b). \tag{47}$$
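Both extremes of (46) are easy to verify numerically. The sketch below (ours) uses uniform marginals, for which the independent coupling $ab^\top$ attains the upper bound and a permutation coupling attains the lower bound:

```python
import numpy as np

def H(p):
    # Shannon entropy with the convention 0 log 0 = 0.
    p = p[p > 0]
    return -np.sum(p * np.log(p))

n = 4
a = b = np.full(n, 1.0 / n)

# Independent coupling ab^T attains the upper bound H(a) + H(b).
T_indep = np.outer(a, b)
assert np.isclose(H(T_indep.ravel()), H(a) + H(b))

# A permutation coupling attains the lower bound max{H(a), H(b)}.
T_perm = np.eye(n) / n
assert np.isclose(H(T_perm.ravel()), max(H(a), H(b)))
```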
###### Lemma 3

Bounding the squared 2-norm of a transportation plan

Let $a \in \triangle^m$, $b \in \triangle^n$, and $T \in \mathcal{U}(a,b)$. Then,

$$\sum_{i=1}^m \sum_{j=1}^n \left(\frac{a_i}{n} + \frac{b_j}{m} - \frac{1}{mn}\right)^2 \le \|T\|^2 \le \min\{\|a\|^2, \|b\|^2\}. \tag{48}$$

Proof. The tightest lower bound is given by $\min_{T \in \mathcal{U}(a,b)} \|T\|^2$. An exact iterative algorithm was proposed in (Calvillo and Romero, 2016) to solve this problem. However, since we are interested in an explicit formula, we consider instead the lower bound $\min_{T\mathbf{1}_n = a,\, T^\top \mathbf{1}_m = b} \|T\|^2$ (i.e., we ignore the non-negativity constraint). It is known (Romero, 1990) that the minimum is achieved at $t_{i,j} = \frac{a_i}{n} + \frac{b_j}{m} - \frac{1}{mn}$, hence our lower bound. For the upper bound, we have

 ∥T∥2 =m∑i=1n∑j=1t2i,j (49) =m∑i=1n∑j=1(aiti,jai