Learning with a Wasserstein Loss

Charlie Frogner et al. (MIT) · June 17, 2015

Learning to predict multi-label outputs is challenging, but in many problems there is a natural metric on the outputs that can be used to improve predictions. In this paper we develop a loss function for multi-label learning, based on the Wasserstein distance. The Wasserstein distance provides a natural notion of dissimilarity for probability measures. Although optimizing with respect to the exact Wasserstein distance is costly, recent work has described a regularized approximation that is efficiently computed. We describe an efficient learning algorithm based on this regularization, as well as a novel extension of the Wasserstein distance from probability measures to unnormalized measures. We also describe a statistical learning bound for the loss. The Wasserstein loss can encourage smoothness of the predictions with respect to a chosen metric on the output space. We demonstrate this property on a real-data tag prediction problem, using the Yahoo Flickr Creative Commons dataset, outperforming a baseline that doesn't use the metric.


1 Introduction

We consider the problem of learning to predict a non-negative measure over a finite set. This problem includes many common machine learning scenarios. In multiclass classification, for example, one often predicts a vector of scores or probabilities for the classes. And in semantic segmentation [1], one can model the segmentation as being the support of a measure defined over the pixel locations. Many problems in which the output of the learning machine is both non-negative and multi-dimensional might be cast as predicting a measure.

We specifically focus on problems in which the output space has a natural metric or similarity structure that is known (or estimated) a priori. In practice, many learning problems have such structure. In the ImageNet Large Scale Visual Recognition Challenge (ILSVRC) [2], for example, the output dimensions correspond to 1000 object categories that have inherent semantic relationships, some of which are captured in the WordNet hierarchy that accompanies the categories. Similarly, in the keyword spotting task from the IARPA Babel speech recognition project, the outputs correspond to keywords that likewise have semantic relationships. In what follows, we will call the similarity structure on the label space the ground metric or semantic similarity.

Using the ground metric, we can measure prediction performance in a way that is sensitive to relationships between the different output dimensions. For example, confusing dogs with cats might be more severe an error than confusing breeds of dogs. A loss function that incorporates this metric might encourage the learning algorithm to favor predictions that are, if not completely accurate, at least semantically similar to the ground truth.

Figure 1: Semantically near-equivalent classes in ILSVRC

In this paper, we develop a loss function for multi-label learning that measures the Wasserstein distance between a prediction and the target label, with respect to a chosen metric on the output space. The Wasserstein distance is defined as the cost of the optimal transport plan for moving the mass in the predicted measure to match that in the target, and has been applied to a wide range of problems, including barycenter estimation [3], label propagation [4], and clustering [5]. To our knowledge, this paper represents the first use of the Wasserstein distance as a loss for supervised learning.

We briefly describe a case in which the Wasserstein loss improves learning performance. The setting is a multiclass classification problem in which label noise arises from confusion of semantically near-equivalent categories. Figure 1 shows such a case from the ILSVRC, in which the categories Siberian husky and Eskimo dog are nearly indistinguishable. We synthesize a toy version of this problem by identifying categories with points in the Euclidean plane and randomly switching the training labels to nearby classes. The Wasserstein loss yields predictions that are closer to the ground truth, robustly across all noise levels, as shown in Figure 2. The standard multiclass logistic loss is the baseline for comparison. Section E.1 in the Appendix describes the experiment in more detail.

[Figure 2 panels: Euclidean distance between prediction and ground truth plotted against grid size (left) and noise level (right), comparing the Divergence baseline and the Wasserstein loss.]

Figure 2: The Wasserstein loss encourages predictions that are similar to the ground truth, robustly to incorrect labeling of similar classes (see Appendix E.1). Shown is the Euclidean distance between prediction and ground truth vs. (left) the number of classes, averaged over different noise levels, and (right) the noise level, averaged over the number of classes. The baseline is the multiclass logistic loss.

The main contributions of this paper are as follows. We formulate the problem of learning with prior knowledge of the ground metric, and propose the Wasserstein loss as an alternative to traditional information divergence-based loss functions. Specifically, we focus on empirical risk minimization (ERM) with the Wasserstein loss, and describe an efficient learning algorithm based on entropic regularization of the optimal transport problem. We also describe a novel extension to unnormalized measures that is similarly efficient to compute. We then justify ERM with the Wasserstein loss by showing a statistical learning bound. Finally, we evaluate the proposed loss on both synthetic examples and a real-world image annotation problem, demonstrating the benefits of incorporating an output metric into the loss.

2 Related work

Decomposable loss functions like the KL divergence and per-coordinate $\ell_p$ distances are very popular for probabilistic [1] or vector-valued [6] predictions, as each component can be evaluated independently, often leading to simple and efficient algorithms. The idea of exploiting smoothness in the label space according to a prior metric has been explored in many different forms, including regularization [7] and post-processing with graphical models [8]. Optimal transport provides a natural distance for probability distributions over metric spaces. In [3, 9], optimal transport is used to formulate the Wasserstein barycenter as a probability distribution with minimum total Wasserstein distance to a set of given points on the probability simplex. [4] propagates histogram values on a graph by minimizing a Dirichlet energy induced by optimal transport. The Wasserstein distance is also used to formulate a metric for comparing clusters in [5], and has been applied to image retrieval [10], contour matching [11], and many other problems [12, 13]. However, to our knowledge, this is the first time it has been used as a loss function in a discriminative learning framework. The closest work to this paper is a theoretical study [14] of an estimator that minimizes the optimal transport cost between the empirical distribution and the estimated distribution, in the setting of statistical parameter estimation.

3 Learning with a Wasserstein loss

3.1 Problem setup and notation

We consider the problem of learning a map from a space $\mathcal{X}$ into the space $\mathcal{Y} = \mathbb{R}_+^K$ of measures over a finite set $\mathcal{K}$ of size $|\mathcal{K}| = K$. Assume $\mathcal{K}$ possesses a metric $d_{\mathcal{K}}(\cdot,\cdot)$, which is called the ground metric. $d_{\mathcal{K}}$ measures semantic similarity between dimensions of the output, which correspond to the elements of $\mathcal{K}$. We perform learning over a hypothesis space $\mathcal{H}$ of predictors $h_\theta \colon \mathcal{X} \to \mathcal{Y}$, parameterized by $\theta \in \Theta$. These might be linear logistic regression models, for example.

In the standard statistical learning setting, we get an i.i.d. sequence of training examples $S = \{(x_1, y_1), \ldots, (x_N, y_N)\}$, sampled from an unknown joint distribution $\mathbb{P}(x, y)$. Given a measure of performance (a.k.a. risk) $\mathcal{E}(\cdot, \cdot)$, the goal is to find the predictor $h_\theta \in \mathcal{H}$ that minimizes the expected risk $\mathbb{E}[\mathcal{E}(h_\theta(x), y)]$. Typically $\mathcal{E}$ is difficult to optimize directly and the joint distribution is unknown, so learning is performed via empirical risk minimization. Specifically, we solve

$$\min_{h_\theta \in \mathcal{H}} \; \frac{1}{N} \sum_{i=1}^{N} \ell(h_\theta(x_i), y_i) \qquad (1)$$

with a loss function $\ell(\cdot, \cdot)$ acting as a surrogate of $\mathcal{E}$.
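To make this setup concrete, here is a minimal sketch of ERM by stochastic gradient descent for a linear softmax model. The function name `erm_sgd` and the pluggable `loss_and_grad` interface (returning the loss value and its gradient with respect to the prediction) are illustrative conventions, not notation from the paper; any surrogate loss, including the Wasserstein loss developed below, could be supplied.

```python
# Minimal ERM sketch: a linear softmax model trained with a generic surrogate
# loss. `loss_and_grad(pred, target)` is an assumed interface, not the paper's.
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def erm_sgd(X, Y, loss_and_grad, lr=0.1, epochs=20, seed=0):
    """Minimize (1/N) sum_i loss(h_theta(x_i), y_i) by stochastic gradient descent."""
    rng = np.random.default_rng(seed)
    N, D = X.shape
    K = Y.shape[1]
    W = 0.01 * rng.standard_normal((D, K))
    for _ in range(epochs):
        for i in rng.permutation(N):
            p = softmax(X[i] @ W)            # prediction in the simplex
            _, g_p = loss_and_grad(p, Y[i])  # gradient w.r.t. the prediction
            # Chain rule through the softmax: g_z = J_softmax^T g_p
            g_z = p * (g_p - np.dot(g_p, p))
            W -= lr * np.outer(X[i], g_z)
    return W
```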

3.2 Optimal transport and the exact Wasserstein loss

Information divergence-based loss functions are widely used in learning with probability-valued outputs. Along with other popular measures like the Hellinger distance, these divergences treat the output dimensions independently, ignoring any metric structure on $\mathcal{K}$.

Given a cost function $c \colon \mathcal{K} \times \mathcal{K} \to \mathbb{R}$, the optimal transport distance [15] measures the cheapest way to transport the mass in probability measure $\mu_1$ to match that in $\mu_2$:

$$W_c(\mu_1, \mu_2) = \inf_{\gamma \in \Pi(\mu_1, \mu_2)} \int_{\mathcal{K} \times \mathcal{K}} c(\kappa_1, \kappa_2)\, \gamma(d\kappa_1, d\kappa_2) \qquad (2)$$

where $\Pi(\mu_1, \mu_2)$ is the set of joint probability measures on $\mathcal{K} \times \mathcal{K}$ having $\mu_1$ and $\mu_2$ as marginals. An important case is that in which the cost is given by a metric $d_{\mathcal{K}}(\cdot,\cdot)$ or its $p$-th power $d_{\mathcal{K}}^p(\cdot,\cdot)$ with $p \geq 1$. In this case, (2) is called a Wasserstein distance [16], also known as the earth mover's distance [10]. In this paper, we only work with discrete measures; in the case of probability measures, these are histograms in the simplex $\Delta^{\mathcal{K}}$. When the ground truth $y$ and the output of $h$ both lie in the simplex $\Delta^{\mathcal{K}}$, we can define a Wasserstein loss.

Definition 3.1 (Exact Wasserstein Loss).

For any $h_\theta \in \mathcal{H}$ and $x \in \mathcal{X}$, let $h_\theta(\kappa \mid x)$ be the predicted value at element $\kappa \in \mathcal{K}$, given input $x$. Let $y(\kappa)$ be the ground truth value for $\kappa$ given by the corresponding label $y$. Then we define the exact Wasserstein loss as

$$W_p^p\big(h_\theta(\cdot \mid x), y(\cdot)\big) = \inf_{T \in \Pi(h_\theta(x), y)} \langle T, M \rangle \qquad (3)$$

where $M \in \mathbb{R}_+^{K \times K}$ is the distance matrix with $M_{\kappa,\kappa'} = d_{\mathcal{K}}^p(\kappa, \kappa')$, and the set of valid transport plans is

$$\Pi(h_\theta(x), y) = \big\{ T \in \mathbb{R}_+^{K \times K} \colon T\mathbf{1} = h_\theta(x),\ T^\top \mathbf{1} = y \big\} \qquad (4)$$

where $\mathbf{1}$ is the all-one vector.

$W_p^p$ is the cost of the optimal plan for transporting the predicted mass distribution $h_\theta(x)$ to match the target distribution $y$. The penalty increases as more mass is transported over longer distances, according to the ground metric $M$.
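For reference, the exact loss (3) can be evaluated for small $K$ by solving the linear program directly. The sketch below uses `scipy.optimize.linprog`; it is only meant to make the definition concrete, not to reflect the efficient computation the paper develops in Section 4.

```python
# Sketch: the exact Wasserstein loss (3) as a linear program (small K only).
import numpy as np
from scipy.optimize import linprog

def exact_wasserstein(h, y, M):
    """min_T <T, M> subject to T 1 = h, T^T 1 = y, T >= 0 (h, y in the simplex)."""
    K = len(h)
    c = M.reshape(-1)                                   # vectorized cost, row-major
    A_row = np.kron(np.eye(K), np.ones((1, K)))         # sum_j T[i, j] = h[i]
    A_col = np.kron(np.ones((1, K)), np.eye(K))         # sum_i T[i, j] = y[j]
    A_eq = np.vstack([A_row, A_col])
    b_eq = np.concatenate([h, y])
    res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=(0, None), method="highs")
    return res.fun, res.x.reshape(K, K)                 # loss value and transport plan
```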

4 Efficient optimization via entropic regularization

Algorithm 1 Gradient of the Wasserstein loss
Given $h_\theta(x)$, $y$, $\lambda$, $\mathbf{K} = e^{-\lambda M - 1}$. (And $\gamma$, if $h_\theta(x)$, $y$ are unnormalized.)
$u \leftarrow \mathbf{1}$
while $u$ has not converged do
     $u \leftarrow h_\theta(x) \oslash \big(\mathbf{K}\,(y \oslash \mathbf{K}^\top u)\big)$ (normalized case; the unnormalized case uses the damped updates of Proposition 4.2)
end while
Return $\nabla_{h_\theta(x)} W_p^p = \frac{\log u}{\lambda} - \frac{(\log u)^\top \mathbf{1}}{\lambda K}\,\mathbf{1}$. If $h_\theta(x)$, $y$ are unnormalized, return instead $\gamma\big(\mathbf{1} - (T\mathbf{1}) \oslash h_\theta(x)\big)$, with $T = \mathrm{diag}(u)\,\mathbf{K}\,\mathrm{diag}(v)$.

To do learning, we optimize the empirical risk minimization functional (1) by gradient descent. Doing so requires evaluating a descent direction for the loss with respect to the predictions $h_\theta(x)$. Unfortunately, computing a subgradient of the exact Wasserstein loss (3) is quite costly, as follows.

The exact Wasserstein loss (3) is a linear program, and a subgradient of its solution can be computed using Lagrange duality. The dual LP of (3) is

$$DW_p^p\big(h_\theta(x), y\big) = \sup_{(\alpha, \beta) \in C_M} \alpha^\top h_\theta(x) + \beta^\top y, \qquad C_M = \big\{ (\alpha, \beta) \in \mathbb{R}^K \times \mathbb{R}^K \colon \alpha_\kappa + \beta_{\kappa'} \leq M_{\kappa,\kappa'},\ \forall \kappa, \kappa' \big\}. \qquad (5)$$

As (3) is a linear program, at an optimum the values of the dual and the primal are equal (see, e.g., [17]); hence the dual optimal $\alpha$ is a subgradient of the loss with respect to its first argument.

Computing $\alpha$ is costly, as it entails solving a linear program with $O(K^2)$ constraints, with $K$ being the dimension of the output space. This cost can be prohibitive when optimizing by gradient descent.

4.1 Entropic regularization of optimal transport

Cuturi [18] proposes a smoothed transport objective that enables efficient approximation of both the transport matrix in (3) and the subgradient of the loss. [18] introduces an entropic regularization term that results in a strictly convex problem:

$$W_p^{p,\lambda}\big(h_\theta(\cdot \mid x), y(\cdot)\big) = \inf_{T \in \Pi(h_\theta(x), y)} \langle T, M \rangle - \frac{1}{\lambda} H(T), \qquad H(T) = -\sum_{\kappa,\kappa'} T_{\kappa,\kappa'} \log T_{\kappa,\kappa'}. \qquad (6)$$

Importantly, the transport matrix that solves (6) is a diagonal scaling of a matrix $\mathbf{K} = e^{-\lambda M - 1}$:

$$T^* = \mathrm{diag}(u)\,\mathbf{K}\,\mathrm{diag}(v) \qquad (7)$$

for $u = e^{\lambda \alpha}$ and $v = e^{\lambda \beta}$, where $\alpha$ and $\beta$ are the Lagrange dual variables for (6).

Identifying such a matrix subject to equality constraints on the row and column sums is exactly a matrix balancing problem, which is well-studied in numerical linear algebra and for which efficient iterative algorithms exist [19]. [18] and [3] use the well-known Sinkhorn-Knopp algorithm.

4.2 Extending smoothed transport to the learning setting

When the output vectors $h_\theta(x)$ and $y$ lie in the simplex, (6) can be used directly in place of (3), as (6) approximates the exact Wasserstein distance closely for large enough $\lambda$ [18]. In this case, the gradient of the objective can be obtained from the optimal scaling vector $u$ as $\nabla_{h_\theta(x)} W_p^{p,\lambda} = \frac{\log u}{\lambda} - \frac{(\log u)^\top \mathbf{1}}{\lambda K}\,\mathbf{1}$. [footnote 1: Note that the gradient is only defined up to a constant shift: any upscaling of the vector $u$ can be paired with a corresponding downscaling of the vector $v$ (and vice versa) without altering the matrix $T^*$. The choice above ensures that the gradient is tangent to the simplex.] A Sinkhorn iteration for the gradient is given in Algorithm 1.
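The following sketch illustrates this Sinkhorn-type computation, assuming the smoothed objective (6) and the gradient expression above; the fixed iteration count, the numerical safeguard `eps`, and the function name are implementation choices rather than details from the paper.

```python
# Sketch: Sinkhorn scaling for the entropically smoothed transport (6) and the
# corresponding loss gradient with respect to the prediction h.
import numpy as np

def sinkhorn_loss_grad(h, y, M, lam=50.0, n_iter=200, eps=1e-12):
    """Return the transport cost of the regularized plan, the gradient w.r.t. h
    (shifted to be tangent to the simplex), and the plan itself."""
    Kmat = np.exp(-lam * M - 1.0)          # kernel from the first-order conditions
    u = np.ones_like(h)
    for _ in range(n_iter):
        v = y / (Kmat.T @ u + eps)
        u = h / (Kmat @ v + eps)
    T = u[:, None] * Kmat * v[None, :]     # optimal plan, diag(u) K diag(v)
    loss = np.sum(T * M)                   # transport cost of the regularized plan
    alpha = np.log(u + eps) / lam          # dual variable, defined up to a shift
    grad = alpha - alpha.mean()            # shift so the gradient sums to zero
    return loss, grad, T
```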

For many learning problems, however, a normalized output assumption is unnatural. In image segmentation, for example, the target shape is not naturally represented as a histogram. And even when the prediction and the ground truth are constrained to the simplex, the observed label can be subject to noise that violates the constraint.

There is more than one way to generalize optimal transport to unnormalized measures, and this is a subject of active study [20]. We will develop here a novel objective that deals effectively with the difference in total mass between $h_\theta(x)$ and $y$ while still being efficient to optimize.

(a) Convergence to smoothed transport. (b) Approximation of the exact Wasserstein distance. (c) Convergence of the alternating projections.
Figure 3: The relaxed transport problem (8) for unnormalized measures.

4.3 Relaxed transport

We propose a novel relaxation that extends smoothed transport to unnormalized measures. By replacing the equality constraints on the transport marginals in (6) with soft penalties with respect to KL divergence, we get an unconstrained approximate transport problem. The resulting objective is:

$$W_{\mathrm{KL}}^{\lambda,\gamma}\big(h_\theta(\cdot \mid x), y(\cdot)\big) = \min_{T \in \mathbb{R}_+^{K \times K}} \langle T, M \rangle - \frac{1}{\lambda} H(T) + \gamma\, \widetilde{\mathrm{KL}}\big(T\mathbf{1} \,\|\, h_\theta(x)\big) + \gamma\, \widetilde{\mathrm{KL}}\big(T^\top \mathbf{1} \,\|\, y\big) \qquad (8)$$

where $\widetilde{\mathrm{KL}}(w \,\|\, z) = w^\top \log(w \oslash z) - \mathbf{1}^\top w + \mathbf{1}^\top z$ is the generalized KL divergence between $w, z \in \mathbb{R}_+^K$. Here $\oslash$ represents element-wise division. As with the previous formulation, the optimal transport matrix with respect to (8) is a diagonal scaling of the matrix $\mathbf{K} = e^{-\lambda M - 1}$.

Proposition 4.1.

The transport matrix $T^*$ optimizing (8) satisfies $T^* = \mathrm{diag}(u)\,\mathbf{K}\,\mathrm{diag}(v)$, where $u = \big(h_\theta(x) \oslash T^*\mathbf{1}\big)^{\gamma\lambda}$, $v = \big(y \oslash (T^*)^\top\mathbf{1}\big)^{\gamma\lambda}$, and $\mathbf{K} = e^{-\lambda M - 1}$.

And the optimal transport matrix is a fixed point for a Sinkhorn-like iteration. [footnote 2: Note that, although the iteration suggested by Proposition 4.2 is observed empirically to converge (see Figure 3c, for example), we have not proven a guarantee that it will do so.]

Proposition 4.2.

$T^*$ optimizing (8) satisfies: i) $T^*\mathbf{1} = \big(h_\theta(x)\big)^{\frac{\gamma\lambda}{\gamma\lambda+1}} \odot \big(\mathbf{K} v\big)^{\frac{1}{\gamma\lambda+1}}$, and ii) $(T^*)^\top\mathbf{1} = y^{\frac{\gamma\lambda}{\gamma\lambda+1}} \odot \big(\mathbf{K}^\top u\big)^{\frac{1}{\gamma\lambda+1}}$, where $\odot$ represents element-wise multiplication and the powers are taken element-wise.

Unlike the previous formulation, (8) is unconstrained with respect to $h_\theta(x)$. The gradient is given by $\nabla_{h_\theta(x)} W_{\mathrm{KL}}^{\lambda,\gamma} = \gamma\big(\mathbf{1} - T^*\mathbf{1} \oslash h_\theta(x)\big)$. The iteration is given in Algorithm 1.
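A sketch of the unnormalized case follows, assuming the relaxed objective (8) as reconstructed above: the damped scaling updates are the fixed-point iteration consistent with Proposition 4.2, and the last line implements the gradient $\gamma(\mathbf{1} - T\mathbf{1} \oslash h_\theta(x))$. Treat it as an illustration of the scheme, with illustrative names and defaults, rather than a verbatim transcription of Algorithm 1.

```python
# Sketch: Sinkhorn-like iteration for relaxed transport between (possibly
# unnormalized) nonnegative vectors h and y, under the objective (8) as
# reconstructed in the text. The damping exponent comes from the KL penalties.
import numpy as np

def relaxed_sinkhorn(h, y, M, lam=50.0, gamma=1.0, n_iter=500, eps=1e-12):
    Kmat = np.exp(-lam * M - 1.0)
    rho = gamma * lam / (gamma * lam + 1.0)     # damping exponent
    u = np.ones_like(h)
    v = np.ones_like(y)
    for _ in range(n_iter):
        u = (h / (Kmat @ v + eps)) ** rho
        v = (y / (Kmat.T @ u + eps)) ** rho
    T = u[:, None] * Kmat * v[None, :]
    grad_h = gamma * (1.0 - (T @ np.ones_like(y)) / (h + eps))  # d/dh of the KL penalty
    return T, grad_h
```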

When restricted to normalized measures, the relaxed problem (8) approximates smoothed transport (6). Figure 3a shows, for normalized $h_\theta(x)$ and $y$, the relative distance between the values of (8) and (6). [footnote 3: In Figures 3a-c, $h_\theta(x)$ and $y$ are generated as described in [18], Section 5. In 3c, convergence is defined as in [18]. Shaded regions show the variability across trials.] As the marginal penalty weight $\gamma$ increases, (8) converges to (6).

(8) also retains two properties of smoothed transport (6). Figure 3b shows that, for normalized outputs, the relaxed loss converges to the unregularized Wasserstein distance as $\lambda$ and $\gamma$ increase. [footnote 4: The unregularized Wasserstein distance was computed using FastEMD [21].] And Figure 3c shows that convergence of the iterations in Proposition 4.2 is nearly independent of the dimension $K$ of the output space.

5 Statistical Properties of the Wasserstein loss

Let $S = \{(x_1, y_1), \ldots, (x_N, y_N)\}$ be i.i.d. samples and let $\hat{h}_S$ be the empirical risk minimizer

$$\hat{h}_S = \operatorname*{argmin}_{h \in \mathcal{H}} \; \frac{1}{N} \sum_{i=1}^{N} W_p^p\big(h(\cdot \mid x_i), y_i\big).$$

Further assume $\mathcal{H} = s \circ \mathcal{H}_o$ is the composition of a softmax $s$ and a base hypothesis space $\mathcal{H}_o$ of functions mapping into $\mathbb{R}^K$. The softmax layer outputs a prediction that lies in the simplex $\Delta^{\mathcal{K}}$.

Theorem 5.1.

For $p = 1$ and any $\delta > 0$, with probability at least $1 - \delta$, it holds that

$$\mathbb{E}\big[ W_1^1(\hat{h}_S(\cdot \mid x), y) \big] \;\leq\; \inf_{h \in \mathcal{H}} \mathbb{E}\big[ W_1^1(h(\cdot \mid x), y) \big] + 32 K C_M \mathfrak{R}_N(\mathcal{H}_o) + 2 C_M \sqrt{\frac{\log(1/\delta)}{2N}} \qquad (9)$$

with the constant $C_M = \max_{\kappa,\kappa'} M_{\kappa,\kappa'}$. $\mathfrak{R}_N(\mathcal{H}_o)$ is the Rademacher complexity [22] measuring the complexity of the hypothesis space $\mathcal{H}_o$.

The Rademacher complexity $\mathfrak{R}_N(\mathcal{H}_o)$ for commonly used models like neural networks and kernel machines [22] decays with the training set size $N$. This theorem guarantees that the expected Wasserstein loss of the empirical risk minimizer approaches the best achievable loss for $\mathcal{H}$.

As an important special case, minimizing the empirical risk with the Wasserstein loss is also good for multiclass classification. Let $y = e_\kappa$ be the "one-hot" encoded label vector for the groundtruth class $\kappa$.

Proposition 5.2.

In the multiclass classification setting, for $p = 1$ and any $\delta > 0$, with probability at least $1 - \delta$, it holds that

(10)

where the predictor is $\hat{\kappa}_S(x) = \operatorname*{argmax}_{\kappa} \hat{h}_S(\kappa \mid x)$, with $\hat{h}_S$ being the empirical risk minimizer.

Note that instead of the classification error $\mathbb{P}\big[\hat{\kappa}_S(x) \neq \kappa\big]$, we actually get a bound on the expected semantic distance between the prediction and the groundtruth.

6 Empirical study

6.1 Impact of the ground metric

In this section, we show that the Wasserstein loss encourages smoothness with respect to an artificial metric on the MNIST handwritten digit dataset. This is a multiclass classification problem with output dimensions corresponding to the 10 digits, and we apply a ground metric $d_p(a, b) = |a - b|^p$, where $a, b \in \{0, \ldots, 9\}$ and $p \in [0, \infty)$. This metric encourages the recognized digit to be numerically close to the true one. We train a model independently for each value of $p$ and plot the average predicted probabilities of the different digits on the test set in Figure 4.
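A sketch of this ground-metric construction is shown below; the function name is illustrative, and the rescaling into a bounded interval mirrors footnote 5.

```python
# Sketch: digit ground metric d(a, b) = |a - b|^p on {0, ..., 9}, rescaled so
# that all distances are bounded (cf. footnote 5). Intended for p > 0; small p
# approaches the 0-1 metric discussed in the text.
import numpy as np

def digit_ground_metric(p, n_classes=10):
    labels = np.arange(n_classes)
    M = np.abs(labels[:, None] - labels[None, :]).astype(float) ** p
    return M / M.max()   # keep the entries bounded for numerical stability
```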

(a) Posterior predictions for images of digit 0 (curves for digits 0-3). (b) Posterior predictions for images of digit 4 (curves for digits 2-6). [Axes: value of $p$ in the ground metric vs. posterior probability.]
Figure 4: MNIST example. Each curve shows the predicted probability for one digit, for models trained with different values of $p$ for the ground metric.

Note that as $p \to 0$, the metric approaches the 0-1 metric $d_0(a, b) = \mathbf{1}_{a \neq b}$, which treats all incorrect digits as being equally unfavorable. In this case, as can be seen in the figure, the predicted probability of the true digit goes to 1 while the probability for all other digits goes to 0. As $p$ increases, the predictions become more evenly distributed over the neighboring digits, converging to a uniform distribution as $p \to \infty$. [footnote 5: To avoid numerical issues, we scale down the ground metric such that all of the distance values lie within a fixed interval.]

6.2 Flickr tag prediction

We apply the Wasserstein loss to a real-world multi-label learning problem, using the recently released Yahoo/Flickr Creative Commons 100M dataset [23]. [footnote 6: The dataset used here is available at http://cbcl.mit.edu/wasserstein.] Our goal is tag prediction: we select 1000 descriptive tags along with two random sets of 10,000 images each, associated with these tags, for training and testing. We derive a distance metric between tags by using word2vec [24] to embed the tags as unit vectors, then taking their Euclidean distances. To extract image features we use MatConvNet [25]. Note that the set of tags is highly redundant, and often many semantically equivalent or similar tags can apply to an image. The images are also only partially tagged, as different users may prefer different tags. We therefore measure prediction performance by a top-K cost that compares the K tags with the highest predicted probability against the set of groundtruth tags. The standard AUC measure is also reported.
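The sketch below shows one way to build the tag ground metric from word embeddings and to score predictions. The embedding lookup `emb` is an assumed input (e.g., vectors loaded from a pre-trained word2vec model), and the miss-counting `top_k_cost` is a simple stand-in, since the paper's exact cost expression is not reproduced here.

```python
# Sketch: tag ground metric from unit-normalized embeddings, plus a simple
# stand-in top-K cost (fraction of proposed tags missing the groundtruth set).
import numpy as np

def tag_ground_metric(tags, emb):
    """Euclidean distances between unit-normalized tag embedding vectors."""
    V = np.stack([emb[t] / np.linalg.norm(emb[t]) for t in tags])
    sq = ((V[:, None, :] - V[None, :, :]) ** 2).sum(-1)
    return np.sqrt(sq)

def top_k_cost(pred_probs, groundtruth_sets, tags, K=3):
    """Average fraction of the top-K proposed tags not in the groundtruth."""
    costs = []
    for p, gt in zip(pred_probs, groundtruth_sets):
        topk = [tags[i] for i in np.argsort(-p)[:K]]
        costs.append(sum(t not in gt for t in topk) / K)
    return float(np.mean(costs))
```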

We find that a linear combination of the Wasserstein loss and the standard multiclass logistic loss yields the best prediction results. Specifically, we train a linear model by minimizing $W_p^p + \alpha\,\mathrm{KL}$ on the training set, where $\alpha$ controls the relative weight of the KL divergence term. Note that the KL loss taken alone is our baseline in these experiments. Figure 5a shows the top-K cost on the test set for the combined loss and the baseline loss. We additionally create a second dataset by removing redundant labels from the original dataset: this simulates the potentially more difficult case in which a single user tags each image, selecting one tag to apply from amongst each cluster of applicable, semantically similar tags. Figure 5b shows that performance for both algorithms decreases on the harder dataset, while the combined Wasserstein loss continues to outperform the baseline.
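A minimal sketch of this combined objective is given below, assuming a Wasserstein loss routine (such as the Sinkhorn sketch above) that returns a value and a gradient with respect to the prediction; the name `combined_loss_grad` and the `eps` smoothing are illustrative.

```python
# Sketch: combined training objective W + alpha * KL(y || p), with gradients
# w.r.t. the prediction p. `wasserstein_loss_grad(p, y, M)` is an assumed
# callable returning (value, gradient).
import numpy as np

def combined_loss_grad(p, y, M, alpha, wasserstein_loss_grad, eps=1e-12):
    w_loss, w_grad = wasserstein_loss_grad(p, y, M)
    kl_loss = np.sum(y * (np.log(y + eps) - np.log(p + eps)))
    kl_grad = -y / (p + eps)                 # d/dp of KL(y || p)
    return w_loss + alpha * kl_loss, w_grad + alpha * kl_grad
```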

(a) Original Flickr tags dataset. (b) Reduced-redundancy Flickr tags dataset. [Axes: K (# of proposed tags) vs. top-K cost; curves: the Divergence baseline and the Wasserstein combination with α = 0.5, 0.3, 0.1.]
Figure 5: Top-K cost comparison of the proposed loss (Wasserstein) and the baseline (Divergence).

In Figure 6, we show the effect on performance of varying the weight $\alpha$ on the KL loss term. We observe that the optimum of the top-K cost is achieved when the Wasserstein loss is weighted more heavily than at the optimum of the AUC. This is consistent with a semantic smoothing effect of the Wasserstein loss, which during training will favor mispredictions that are semantically similar to the ground truth, sometimes at the cost of lower AUC. [footnote 7: The Wasserstein loss can achieve a similar trade-off by choosing the metric parameter $p$, as discussed in Section 6.1. However, the relationship between $p$ and the smoothing behavior is complex, and it can be simpler to implement the trade-off by combining with the KL loss.] We finally show two selected images from the test set in Figure 7. These illustrate cases in which both algorithms make predictions that are semantically relevant, despite overlapping very little with the ground truth. The image on the left shows errors made by both algorithms. More examples can be found in the appendix.

(a) Original Flickr tags dataset. (b) Reduced-redundancy Flickr tags dataset. [Each panel shows the top-K cost for K = 1, ..., 4 and the AUC for the Wasserstein combination and the Divergence baseline, as the loss weighting varies.]
Figure 6: Trade-off between semantic smoothness and maximum likelihood.
(a) Flickr user tags: street, parade, dragon; our proposals: people, protest, parade; baseline proposals: music, car, band.
(b) Flickr user tags: water, boat, reflection, sunshine; our proposals: water, river, lake, summer; baseline proposals: river, water, club, nature.
Figure 7: Examples of images in the Flickr dataset. We show the groundtruth tags as well as the tags proposed by our algorithm and the baseline.

7 Conclusions and future work

In this paper we have described a loss function for learning to predict a non-negative measure over a finite set, based on the Wasserstein distance. Although optimizing with respect to the exact Wasserstein loss is computationally costly, an approximation based on entropic regularization is efficiently computed. We described a learning algorithm based on this regularization and we proposed a novel extension of the regularized loss to unnormalized measures that preserves its efficiency. We also described a statistical learning bound for the loss. The Wasserstein loss can encourage smoothness of the predictions with respect to a chosen metric on the output space, and we demonstrated this property on a real-data tag prediction problem, showing improved performance over a baseline that doesn’t incorporate the metric.

An interesting direction for future work may be to explore the connection between the Wasserstein loss and Markov random fields, as the latter are often used to encourage smoothness of predictions, via inference at prediction time.

References

  • [1] Jonathan Long, Evan Shelhamer, and Trevor Darrell. Fully convolutional networks for semantic segmentation. CVPR (to appear), 2015.
  • [2] Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, Alexander C. Berg, and Li Fei-Fei. ImageNet Large Scale Visual Recognition Challenge.

    International Journal of Computer Vision (IJCV)

    , 2015.
  • [3] Marco Cuturi and Arnaud Doucet. Fast Computation of Wasserstein Barycenters. ICML, 2014.
  • [4] Justin Solomon, Raif M Rustamov, Leonidas J Guibas, and Adrian Butscher.

    Wasserstein Propagation for Semi-Supervised Learning.

    In ICML, pages 306–314, 2014.
  • [5] Michael H Coen, M Hidayath Ansari, and Nathanael Fillmore. Comparing Clusterings in Space. ICML, pages 231–238, 2010.
  • [6] Mauricio A. Alvarez, Lorenzo Rosasco, and Neil D. Lawrence. Kernels for vector-valued functions: A review. Foundations and Trends in Machine Learning, 4(3):195–266, 2011.
  • [7] Leonid I Rudin, Stanley Osher, and Emad Fatemi. Nonlinear total variation based noise removal algorithms. Physica D: Nonlinear Phenomena, 60(1):259–268, 1992.
  • [8] Liang-Chieh Chen, George Papandreou, Iasonas Kokkinos, Kevin Murphy, and Alan L Yuille. Semantic image segmentation with deep convolutional nets and fully connected crfs. In ICLR, 2015.
  • [9] Marco Cuturi, Gabriel Peyré, and Antoine Rolet. A Smoothed Dual Approach for Variational Wasserstein Problems. arXiv.org, March 2015.
  • [10] Yossi Rubner, Carlo Tomasi, and Leonidas J Guibas. The earth mover’s distance as a metric for image retrieval. IJCV, 40(2):99–121, 2000.
  • [11] Kristen Grauman and Trevor Darrell. Fast contour matching using approximate earth mover’s distance. In CVPR, 2004.
  • [12] S Shirdhonkar and D W Jacobs. Approximate earth mover’s distance in linear time. In CVPR, 2008.
  • [13] Herbert Edelsbrunner and Dmitriy Morozov. Persistent homology: Theory and practice. In Proceedings of the European Congress of Mathematics, 2012.
  • [14] Federico Bassetti, Antonella Bodini, and Eugenio Regazzini. On minimum kantorovich distance estimators. Stat. Probab. Lett., 76(12):1298–1302, 1 July 2006.
  • [15] Cédric Villani. Optimal Transport: Old and New. Springer Berlin Heidelberg, 2008.
  • [16] Vladimir I Bogachev and Aleksandr V Kolesnikov. The Monge-Kantorovich problem: achievements, connections, and perspectives. Russian Math. Surveys, 67(5):785, 10 2012.
  • [17] Dimitris Bertsimas and John N. Tsitsiklis. Introduction to Linear Optimization. Athena Scientific, third printing edition, 1997.
  • [18] Marco Cuturi. Sinkhorn Distances: Lightspeed Computation of Optimal Transport. NIPS, 2013.
  • [19] Philip A. Knight and Daniel Ruiz. A fast algorithm for matrix balancing. IMA Journal of Numerical Analysis, 33(3), 2012.
  • [20] Lenaic Chizat, Gabriel Peyré, Bernhard Schmitzer, and François-Xavier Vialard. Unbalanced Optimal Transport: Geometry and Kantorovich Formulation. arXiv.org, August 2015.
  • [21] Ofir Pele and Michael Werman. Fast and robust Earth Mover’s Distances. ICCV, pages 460–467, 2009.
  • [22] Peter L Bartlett and Shahar Mendelson. Rademacher and gaussian complexities: Risk bounds and structural results. JMLR, 3:463–482, March 2003.
  • [23] Bart Thomee, David A. Shamma, Gerald Friedland, Benjamin Elizalde, Karl Ni, Douglas Poland, Damian Borth, and Li-Jia Li. The new data and new challenges in multimedia research. arXiv preprint arXiv:1503.01817, 2015.
  • [24] Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. Distributed representations of words and phrases and their compositionality. In NIPS, 2013.
  • [25] A. Vedaldi and K. Lenc. MatConvNet – Convolutional Neural Networks for MATLAB. CoRR, abs/1412.4564, 2014.
  • [26] M. Ledoux and M. Talagrand. Probability in Banach Spaces: Isoperimetry and Processes. Classics in Mathematics. Springer Berlin Heidelberg, 2011.
  • [27] Clark R. Givens and Rae Michael Shortt. A class of wasserstein metrics for probability distributions. Michigan Math. J., 31(2):231–240, 1984.

Appendix A Relaxed transport

Equation (8) gives the relaxed transport objective as

$$W_{\mathrm{KL}}^{\lambda,\gamma}\big(h_\theta(\cdot \mid x), y(\cdot)\big) = \min_{T \in \mathbb{R}_+^{K \times K}} \langle T, M \rangle - \frac{1}{\lambda} H(T) + \gamma\, \widetilde{\mathrm{KL}}\big(T\mathbf{1} \,\|\, h_\theta(x)\big) + \gamma\, \widetilde{\mathrm{KL}}\big(T^\top \mathbf{1} \,\|\, y\big)$$

with $\widetilde{\mathrm{KL}}(w \,\|\, z) = w^\top \log(w \oslash z) - \mathbf{1}^\top w + \mathbf{1}^\top z$.

Proof of Proposition 4.1.

The first order condition for optimizing (8) is

$$M_{\kappa,\kappa'} + \frac{1}{\lambda}\big(\log T_{\kappa,\kappa'} + 1\big) + \gamma \log \frac{(T\mathbf{1})_\kappa}{h_\theta(x)_\kappa} + \gamma \log \frac{(T^\top\mathbf{1})_{\kappa'}}{y_{\kappa'}} = 0,$$

so that

$$T_{\kappa,\kappa'} = \left( \frac{h_\theta(x)_\kappa}{(T\mathbf{1})_\kappa} \right)^{\gamma\lambda} e^{-\lambda M_{\kappa,\kappa'} - 1} \left( \frac{y_{\kappa'}}{(T^\top\mathbf{1})_{\kappa'}} \right)^{\gamma\lambda}.$$

Hence $T$ (if it exists) is a diagonal scaling of $\mathbf{K} = e^{-\lambda M - 1}$. ∎

Proof of Proposition 4.2.

Let $u = \big(h_\theta(x) \oslash T\mathbf{1}\big)^{\gamma\lambda}$ and $v = \big(y \oslash T^\top\mathbf{1}\big)^{\gamma\lambda}$, so $T = \mathrm{diag}(u)\,\mathbf{K}\,\mathrm{diag}(v)$. We have

$$T\mathbf{1} = \mathrm{diag}(u)\,\mathbf{K} v = u \odot (\mathbf{K} v),$$

where we substituted the expression for $T$. Re-writing $u$ in terms of $T\mathbf{1}$ and solving,

$$(T\mathbf{1})^{\gamma\lambda + 1} = \big(h_\theta(x)\big)^{\gamma\lambda} \odot (\mathbf{K} v), \qquad \text{so} \qquad T\mathbf{1} = \big(h_\theta(x)\big)^{\frac{\gamma\lambda}{\gamma\lambda+1}} \odot (\mathbf{K} v)^{\frac{1}{\gamma\lambda+1}}.$$

A symmetric argument shows that $T^\top\mathbf{1} = y^{\frac{\gamma\lambda}{\gamma\lambda+1}} \odot (\mathbf{K}^\top u)^{\frac{1}{\gamma\lambda+1}}$. ∎

Appendix B Statistical Learning Bounds

We establish the proof of Theorem 5.1 in this section. To simplify notation, for a sequence $S = \{(x_i, y_i)\}_{i=1}^N$ of i.i.d. training samples, we denote the empirical risk and the risk as

$$\hat{\mathcal{E}}_S[h] = \frac{1}{N} \sum_{i=1}^{N} W_p^p\big(h(\cdot \mid x_i), y_i\big), \qquad \mathcal{E}[h] = \mathbb{E}\big[ W_p^p(h(\cdot \mid x), y) \big]. \qquad (11)$$
Lemma B.1.

Let $\hat{h}_S$ and $h^*$ be the minimizers of the empirical risk $\hat{\mathcal{E}}_S$ and the expected risk $\mathcal{E}$, respectively. Then

$$\mathcal{E}[\hat{h}_S] \leq \mathcal{E}[h^*] + 2 \sup_{h \in \mathcal{H}} \big| \mathcal{E}[h] - \hat{\mathcal{E}}_S[h] \big|.$$

Proof.

By the optimality of $\hat{h}_S$ for $\hat{\mathcal{E}}_S$,

$$\mathcal{E}[\hat{h}_S] - \mathcal{E}[h^*] = \mathcal{E}[\hat{h}_S] - \hat{\mathcal{E}}_S[\hat{h}_S] + \hat{\mathcal{E}}_S[\hat{h}_S] - \hat{\mathcal{E}}_S[h^*] + \hat{\mathcal{E}}_S[h^*] - \mathcal{E}[h^*] \leq 2 \sup_{h \in \mathcal{H}} \big| \mathcal{E}[h] - \hat{\mathcal{E}}_S[h] \big|. \qquad \blacksquare$$

Therefore, to bound the risk for $\hat{h}_S$, we need to establish uniform concentration bounds for the Wasserstein loss. Towards that goal, we define a space of loss functions induced by the hypothesis space $\mathcal{H}$ as

$$\mathcal{L} = \big\{ \ell_h \colon (x, y) \mapsto W_p^p\big(h(\cdot \mid x), y\big) \;\big|\; h \in \mathcal{H} \big\}. \qquad (12)$$

The uniform concentration will depend on the "complexity" of $\mathcal{L}$, which is measured by the empirical Rademacher complexity defined below.

Definition B.2 (Rademacher Complexity [22]).

Let $\mathcal{G}$ be a family of mappings from $\mathcal{Z}$ to $\mathbb{R}$, and let $S = (z_1, \ldots, z_N)$ be a fixed sample from $\mathcal{Z}$. The empirical Rademacher complexity of $\mathcal{G}$ with respect to $S$ is defined as