Feature Robust Optimal Transport for High-dimensional Data

05/25/2020, by Mathis Petrovich, et al.

Optimal transport is a machine learning technique with applications including distribution comparison, feature selection, and generative adversarial networks. In this paper, we propose feature robust optimal transport (FROT) for high-dimensional data, which jointly solves the feature selection and OT problems. Specifically, we aim to select important feature sets and use them to compute the transportation plan. The FROT problem can be formulated as a min–max optimization problem or a convex minimization problem. We then propose a Frank–Wolfe-based optimization algorithm, whose sub-problem can be accurately solved using the Sinkhorn algorithm. An advantage of FROT is that important features can be determined analytically. Furthermore, we propose using the FROT algorithm for feature selection and for the layer selection problem in deep neural networks for semantic correspondence. Through synthetic and benchmark experiments, we demonstrate that the proposed method can determine important features. Additionally, we show that the FROT algorithm achieves state-of-the-art performance on real-world semantic correspondence datasets.


1 Introduction

Optimal transport (OT) is a widely used technique with applications in the machine learning, computer vision, and natural language processing communities. The applications include Wasserstein distance estimation

[26], domain adaptation [40], multi-task learning [17], barycenter estimation [5], semantic correspondence [21], feature matching [33], and photo album summarization [20].

The OT problem has been extensively studied in the computer vision community as the earth mover’s distance (EMD) [32]. However, the computational cost of EMD is cubic in the number of samples, which makes it expensive for large problems. Recently, the entropic regularized EMD problem was proposed, which can be solved by the Sinkhorn algorithm with a quadratic cost [6]. Owing to the development of the Sinkhorn algorithm, researchers have replaced the EMD computation with its regularized counterpart.

More recently, a robust variant of OT was proposed and used for divergence estimation [25]. In the robust OT framework, the transportation plan is computed on a discriminative subspace of the two data matrices $X$ and $Y$, where the subspace can be obtained by solving a dimensionality reduction problem. An advantage of the subspace robust approach is that it does not require prior information about the subspace. However, the computation of the subspace can be expensive if the dimensionality of the data is high. Given prior information such as feature groups, we can instead consider a computationally more efficient formulation.

One of the most common forms of prior information is a feature group. Group features are widely used in feature selection problems and have been extensively studied in the context of Group Lasso [41]. The key idea of Group Lasso is to pre-specify the group variables and select sets of group variables using the group norm (also known as the sum of $\ell_2$ norms). For example, if we use a pre-trained neural network as a feature extractor and compute OT on the extracted features, we require a careful selection of important layers to compute OT. Specifically, each layer output is regarded as a grouped input. Therefore, using feature groups as a prior is a natural setup and is important for applying OT to deep neural networks (DNNs).

(a) OT on clean data.
(b) OT on noisy data.
(c) FROT on noisy data.
Figure 1: Transportation plans between two synthetic distributions of $d$-dimensional vectors $\boldsymbol{x} = (\boldsymbol{x}_{\text{true}}, \boldsymbol{x}_{\text{noise}})$ and $\boldsymbol{y} = (\boldsymbol{y}_{\text{true}}, \boldsymbol{y}_{\text{noise}})$, where the two-dimensional vectors $\boldsymbol{x}_{\text{true}}$ and $\boldsymbol{y}_{\text{true}}$ are true features and $\boldsymbol{x}_{\text{noise}}$ and $\boldsymbol{y}_{\text{noise}}$ are noisy features. (a) OT between the clean distributions $\boldsymbol{x}_{\text{true}}$ and $\boldsymbol{y}_{\text{true}}$, shown as a reference. (b) OT between the full distributions $\boldsymbol{x}$ and $\boldsymbol{y}$. (c) FROT transportation plan between $\boldsymbol{x}$ and $\boldsymbol{y}$, where the true features and the noisy features are grouped separately.

This study proposes a feature selection variant of optimal transport for high-dimensional data that utilizes grouped feature prior information. Specifically, we propose the feature robust optimal transport (FROT) problem, in which we select distinct group feature sets instead of determining discriminative subspaces as proposed in [25]. We formulate the FROT problem as a min–max optimization problem and transform it into a convex optimization problem, which can be accurately solved by the Frank–Wolfe algorithm [10, 16]. The FROT sub-problem can be accurately solved by the Sinkhorn algorithm [6]. An advantage of FROT is that we can obtain a globally optimal solution owing to its convexity. Moreover, we can determine the significance of the features after solving the FROT problem without any additional cost; this aids in interpreting the features. Therefore, the FROT formulation is suited for feature selection and layer selection in DNNs. Through synthetic experiments, we first demonstrate that the proposed FROT can determine important groups (i.e., features) and is robust to noise dimensions (see Figure 1). We then use FROT for high-dimensional feature selection problems. Furthermore, we apply FROT to a semantic correspondence problem [21] and show that the proposed algorithm improves semantic correspondence.

Contribution:

  • We propose a feature robust optimal transport (FROT) problem and derive a simple and efficient Frank–Wolfe based algorithm. Furthermore, we propose a feature robust Wasserstein distance (FRWD).

  • We apply FROT to the high-dimensional feature selection problem and show that FROT is consistent with the Wasserstein distance based feature selection algorithm with less computational cost than the original algorithm.

  • We use FROT for the layer selection problem in semantic correspondence and show that the proposed algorithm outperforms existing baseline algorithms.

2 Background

In this section, we briefly introduce the OT problem.

Optimal transport (OT): We are given a set of independent and identically distributed (i.i.d.) samples $X = \{\boldsymbol{x}_i\}_{i=1}^{n}$ from a $d$-dimensional distribution $\mu$ and a set of i.i.d. samples $Y = \{\boldsymbol{y}_j\}_{j=1}^{m}$ from a $d$-dimensional distribution $\nu$. In the Kantorovich relaxation of OT, admissible couplings are defined by the set of transportation plans

$$U(\mu, \nu) = \left\{ \Pi \in \mathbb{R}_+^{n \times m} : \Pi \mathbf{1}_m = \boldsymbol{a}, \ \Pi^\top \mathbf{1}_n = \boldsymbol{b} \right\},$$

where $\Pi$ is called the transportation plan, $\mathbf{1}_n$ is the $n$-dimensional vector whose elements are ones, and $\boldsymbol{a} = (a_1, \ldots, a_n)^\top$ and $\boldsymbol{b} = (b_1, \ldots, b_m)^\top$ are the weights. The OT problem between two discrete measures $\mu = \sum_{i=1}^{n} a_i \delta_{\boldsymbol{x}_i}$ and $\nu = \sum_{j=1}^{m} b_j \delta_{\boldsymbol{y}_j}$ is to determine the optimal transportation plan of the following problem:

$$\min_{\Pi \in U(\mu,\nu)} \ \sum_{i=1}^{n} \sum_{j=1}^{m} \pi_{ij}\, c(\boldsymbol{x}_i, \boldsymbol{y}_j), \qquad (1)$$

where $c(\boldsymbol{x}, \boldsymbol{y})$ is a cost function. For example, the squared Euclidean distance is used, that is, $c(\boldsymbol{x}, \boldsymbol{y}) = \|\boldsymbol{x} - \boldsymbol{y}\|_2^2$. Solving the OT problem in Eq. (1) (also known as the earth mover’s distance) using linear programming requires a cubic computational cost in the number of samples, which is computationally expensive. To address this, the entropic regularized optimal transport is used [6]:

$$\min_{\Pi \in U(\mu,\nu)} \ \sum_{i=1}^{n} \sum_{j=1}^{m} \pi_{ij}\, c(\boldsymbol{x}_i, \boldsymbol{y}_j) + \epsilon H(\Pi),$$

where $\epsilon \geq 0$ is the regularization parameter and $H(\Pi) = \sum_{i,j} \pi_{ij} (\log \pi_{ij} - 1)$ is the entropic regularization. If $\epsilon = 0$, the regularized OT problem reduces to the EMD problem. Owing to entropic regularization, the entropic regularized OT problem can be accurately solved using the Sinkhorn iteration [6] with a quadratic computational cost per iteration (see Algorithm 1).
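As a concrete illustration of the Sinkhorn iteration, here is a minimal NumPy sketch of entropic regularized OT; the function name, convergence check, and default values are our own choices for illustration, not a reference implementation:

```python
import numpy as np

def sinkhorn(a, b, C, eps=0.1, max_iter=1000, tol=1e-9):
    """Entropic regularized OT via Sinkhorn iterations (minimal sketch).

    a: (n,) source weights, b: (m,) target weights, C: (n, m) cost matrix.
    Returns the (n, m) transportation plan.
    """
    K = np.exp(-C / eps)               # Gibbs kernel
    u = np.ones(len(a))
    v = np.ones(len(b))
    for _ in range(max_iter):
        u_prev = u
        u = a / (K @ v)                # row scaling update
        v = b / (K.T @ u)              # column scaling update
        if np.max(np.abs(u - u_prev)) < tol:
            break
    return u[:, None] * K * v[None, :]  # Pi = diag(u) K diag(v)
```

Each iteration only involves matrix–vector products with the kernel $K$, which is where the quadratic per-iteration cost comes from.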

Wasserstein distance: If the cost function is defined as $c(\boldsymbol{x}, \boldsymbol{y}) = d(\boldsymbol{x}, \boldsymbol{y})^p$ with a distance function $d(\cdot, \cdot)$ and $p \geq 1$, then we define the $p$-Wasserstein distance of two discrete measures $\mu$ and $\nu$ as

$$W_p(\mu, \nu) = \left( \min_{\Pi \in U(\mu,\nu)} \ \sum_{i=1}^{n} \sum_{j=1}^{m} \pi_{ij}\, d(\boldsymbol{x}_i, \boldsymbol{y}_j)^p \right)^{1/p}.$$

3 Proposed Method

This study proposes a feature robust optimal transport. We assume that the vectors are grouped as $\boldsymbol{x} = (\boldsymbol{x}^{(1)\top}, \ldots, \boldsymbol{x}^{(L)\top})^\top$ and $\boldsymbol{y} = (\boldsymbol{y}^{(1)\top}, \ldots, \boldsymbol{y}^{(L)\top})^\top$. Here, $\boldsymbol{x}^{(\ell)} \in \mathbb{R}^{d_\ell}$ and $\boldsymbol{y}^{(\ell)} \in \mathbb{R}^{d_\ell}$ are $d_\ell$-dimensional vectors, where $\sum_{\ell=1}^{L} d_\ell = d$. This setting is useful if we know the explicit group structure of the feature vectors a priori. In an application to $L$-layer neural networks, we consider $\boldsymbol{x}^{(\ell)}$ and $\boldsymbol{y}^{(\ell)}$ as outputs of the $\ell$th layer of the network. Specifically, for $L = d$ and $d_\ell = 1$ ($\ell = 1, \ldots, d$), we consider each feature independently.

1:  Input: $\boldsymbol{a}$, $\boldsymbol{b}$, $C$, $\epsilon$, $t_{\max}$
2:  Initialize $K = e^{-C/\epsilon}$, $\boldsymbol{u} = \mathbf{1}_n$, $\boldsymbol{v} = \mathbf{1}_m$, $t = 0$
3:  while $t \leq t_{\max}$ and not converge do
4:     $\boldsymbol{u} = \boldsymbol{a} / (K \boldsymbol{v})$
5:     $\boldsymbol{v} = \boldsymbol{b} / (K^\top \boldsymbol{u})$
6:     $t = t + 1$
7:  end while
8:  return $\Pi = \mathrm{diag}(\boldsymbol{u})\, K\, \mathrm{diag}(\boldsymbol{v})$
Algorithm 1 Sinkhorn algorithm.
1:  Input: $\{\boldsymbol{x}_i\}_{i=1}^{n}$, $\{\boldsymbol{y}_j\}_{j=1}^{m}$, $\eta$, $\epsilon$, and $T$.
2:  Initialize $\Pi^{(0)}$, compute $\{C_\ell\}_{\ell=1}^{L}$.
3:  for $t = 0, 1, \ldots, T-1$ do
4:     $\alpha_\ell = \exp\left(\frac{1}{\eta}\langle \Pi^{(t)}, C_\ell \rangle\right) \Big/ \sum_{\ell'=1}^{L} \exp\left(\frac{1}{\eta}\langle \Pi^{(t)}, C_{\ell'} \rangle\right)$
5:     $\widehat{\Pi} = \mathop{\mathrm{argmin}}_{\Pi \in U(\mu,\nu)} \langle \Pi, \sum_{\ell} \alpha_\ell C_\ell \rangle + \epsilon H(\Pi)$ (solved with Algorithm 1)
6:     $\Pi^{(t+1)} = (1 - \gamma) \Pi^{(t)} + \gamma \widehat{\Pi}$ with $\gamma = \frac{2}{2 + t}$.
7:  end for
8:  return $\Pi^{(T)}$
Algorithm 2 FROT with the Frank–Wolfe.

3.1 Feature Robust Optimal Transport (FROT)

The FROT formulation is given by

$$\mathrm{FROT}(\mu, \nu) = \min_{\Pi \in U(\mu,\nu)} \ \max_{\boldsymbol{\alpha} \in \Sigma_L} \ \sum_{\ell=1}^{L} \alpha_\ell \langle \Pi, C_\ell \rangle,$$

where $[C_\ell]_{ij} = c(\boldsymbol{x}_i^{(\ell)}, \boldsymbol{y}_j^{(\ell)})$ is the cost matrix computed from the $\ell$th group of features, $\langle \Pi, C_\ell \rangle = \sum_{i,j} \pi_{ij} [C_\ell]_{ij}$, and

$$\Sigma_L = \left\{ \boldsymbol{\alpha} \in \mathbb{R}_+^{L} : \boldsymbol{\alpha}^\top \mathbf{1}_L = 1 \right\}$$

is the probability simplex.

The underlying concept of FROT is to estimate the transportation plan using the distinct groups, i.e., those with large transport distances between $\{\boldsymbol{x}_i^{(\ell)}\}$ and $\{\boldsymbol{y}_j^{(\ell)}\}$. We note that determining the transportation plan from non-distinct groups is difficult, because the data samples in $\{\boldsymbol{x}_i^{(\ell)}\}$ and $\{\boldsymbol{y}_j^{(\ell)}\}$ overlap. In contrast, in the distinct groups, $\{\boldsymbol{x}_i^{(\ell)}\}$ and $\{\boldsymbol{y}_j^{(\ell)}\}$ are different, and this aids in determining an optimal transportation plan. This idea is intrinsically similar to the subspace robust Wasserstein distance [25], which estimates the transportation plan on a discriminative subspace. In contrast, our approach selects the important groups. Therefore, FROT can be regarded as a feature selection variant of the vanilla OT problem in Eq. (1), whereas the subspace robust approach is its dimensionality reduction counterpart.

FROT with Frank–Wolfe: An alternative approach to estimating FROT is to alternately estimate $\Pi$ and $\boldsymbol{\alpha}$. However, this can yield a local optimum owing to its non-convexity. Thus, we propose a convex optimization of FROT with the Frank–Wolfe algorithm. Specifically, we introduce entropic regularization for $\boldsymbol{\alpha}$ and rewrite FROT as a function of $\Pi$. Therefore, we solve the following problem for $\Pi$:

$$\min_{\Pi \in U(\mu,\nu)} \ \max_{\boldsymbol{\alpha} \in \Sigma_L} \ \sum_{\ell=1}^{L} \alpha_\ell \langle \Pi, C_\ell \rangle - \eta \sum_{\ell=1}^{L} \alpha_\ell \log \alpha_\ell,$$

where $\eta \geq 0$ is the regularization parameter and $\sum_{\ell} \alpha_\ell \log \alpha_\ell$ is the entropic regularization for $\boldsymbol{\alpha}$. An advantage of the entropic regularization is that the non-negativity constraint on $\boldsymbol{\alpha}$ is naturally satisfied and the entropic regularizer is a strongly convex function.

Proposition 1

The optimal solution of the optimization problem

$$\max_{\boldsymbol{\alpha} \in \Sigma_L} \ \sum_{\ell=1}^{L} \alpha_\ell \langle \Pi, C_\ell \rangle - \eta \sum_{\ell=1}^{L} \alpha_\ell \log \alpha_\ell,$$

with a fixed admissible transportation plan $\Pi \in U(\mu, \nu)$, is given by

$$\alpha_\ell^* = \frac{\exp\left(\frac{1}{\eta} \langle \Pi, C_\ell \rangle\right)}{\sum_{\ell'=1}^{L} \exp\left(\frac{1}{\eta} \langle \Pi, C_{\ell'} \rangle\right)}.$$

Using Proposition 1 together with the setting

$$G_\eta(\Pi) = \eta \log\left( \sum_{\ell=1}^{L} \exp\left(\frac{\langle \Pi, C_\ell \rangle}{\eta}\right) \right),$$

the global problem is equivalent to

$$\min_{\Pi \in U(\mu,\nu)} \ G_\eta(\Pi).$$

This function is the soft-maximum of the transportation costs in each group. The regularization parameter $\eta$ controls how "soft" the maximum is: if $\eta$ is small, $G_\eta(\Pi)$ is close to the maximum of the per-group transportation costs, whereas if $\eta$ is large, the function becomes smooth and all groups contribute.
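To illustrate the effect of $\eta$, the following tiny NumPy example evaluates the soft-maximum on two made-up per-group transport costs (the numeric values are purely illustrative):

```python
import numpy as np

def soft_max(phi, eta):
    # eta * log sum_l exp(phi_l / eta): a smooth upper bound on max(phi)
    return eta * np.log(np.sum(np.exp(phi / eta)))

phi = np.array([1.0, 3.0])        # hypothetical per-group transport costs
print(soft_max(phi, eta=0.1))     # ~3.0, close to the hard maximum
print(soft_max(phi, eta=10.0))    # ~8.98, far smoother: both groups contribute
```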

Proposition 2

$G_\eta(\Pi)$ is a convex function with respect to $\Pi$.

The derived optimization problem is convex. Therefore, we can determine a globally optimal solution. We employ the Frank–Wolfe algorithm [10, 16], in which we approximate $G_\eta(\Pi)$ by a linear function at the current iterate $\Pi^{(t)}$ and move towards the minimizer of this linear approximation within the convex set $U(\mu, \nu)$ (see Algorithm 2).

The derivative of the loss function $G_\eta(\Pi)$ at $\Pi = \Pi^{(t)}$ is given by

$$\nabla G_\eta(\Pi^{(t)}) = \sum_{\ell=1}^{L} \alpha_\ell^{(t)} C_\ell, \qquad \alpha_\ell^{(t)} = \frac{\exp\left(\frac{1}{\eta} \langle \Pi^{(t)}, C_\ell \rangle\right)}{\sum_{\ell'=1}^{L} \exp\left(\frac{1}{\eta} \langle \Pi^{(t)}, C_{\ell'} \rangle\right)}.$$

Then, we update the transportation plan by solving the EMD problem:

$$\widehat{\Pi} = \mathop{\mathrm{argmin}}_{\Pi \in U(\mu, \nu)} \ \langle \Pi, \nabla G_\eta(\Pi^{(t)}) \rangle,$$

and set $\Pi^{(t+1)} = (1 - \gamma) \Pi^{(t)} + \gamma \widehat{\Pi}$ with the step size $\gamma = 2/(2 + t)$. By the Frank–Wolfe algorithm, we can obtain the optimal solution. However, solving the EMD problem requires a cubic computational cost, which can be expensive if $n$ and $m$ are large. To address this, we can instead solve the entropic regularized OT sub-problem $\widehat{\Pi} = \mathop{\mathrm{argmin}}_{\Pi \in U(\mu,\nu)} \langle \Pi, \nabla G_\eta(\Pi^{(t)}) \rangle + \epsilon H(\Pi)$ with the Sinkhorn algorithm.
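For concreteness, the following NumPy sketch combines Proposition 1 (the closed-form $\boldsymbol{\alpha}$), the gradient above, and the Frank–Wolfe update, with an entropic regularized Sinkhorn solver for the linear sub-problem. It is a minimal illustrative reimplementation of Algorithm 2, not the authors' code; the function names and default values are assumptions.

```python
import numpy as np

def sinkhorn(a, b, C, eps, n_iter=200):
    # Entropic OT sub-problem solver (same scheme as Algorithm 1).
    K = np.exp(-C / eps)
    u, v = np.ones(len(a)), np.ones(len(b))
    for _ in range(n_iter):
        u = a / (K @ v)
        v = b / (K.T @ u)
    return u[:, None] * K * v[None, :]

def frot(a, b, costs, eta=1.0, eps=0.1, n_fw=20):
    """Frank-Wolfe sketch for FROT.

    costs: list of per-group cost matrices C_l, each of shape (n, m).
    Returns the transportation plan Pi and the group weights alpha.
    """
    Pi = np.outer(a, b)                                       # feasible initialization
    for t in range(n_fw):
        scores = np.array([np.sum(Pi * C) for C in costs])    # <Pi, C_l>
        w = np.exp((scores - scores.max()) / eta)             # numerically stable softmax
        alpha = w / w.sum()                                    # Proposition 1
        grad = sum(al * C for al, C in zip(alpha, costs))      # gradient of G_eta at Pi
        Pi_hat = sinkhorn(a, b, grad, eps)                     # regularized linear sub-problem
        gamma = 2.0 / (2.0 + t)                                # Frank-Wolfe step size
        Pi = (1 - gamma) * Pi + gamma * Pi_hat
    return Pi, alpha
```

The returned `alpha` is exactly the feature importance vector discussed in Section 3.2.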

We propose a $p$-feature robust Wasserstein distance ($p$-FRWD).

Proposition 3

For the distance function $d(\boldsymbol{x}, \boldsymbol{y})$,

$$\mathrm{FRWD}_p(\mu, \nu) = \left( \min_{\Pi \in U(\mu,\nu)} \ \max_{\boldsymbol{\alpha} \in \Sigma_L} \ \sum_{\ell=1}^{L} \alpha_\ell \sum_{i,j} \pi_{ij}\, d(\boldsymbol{x}_i^{(\ell)}, \boldsymbol{y}_j^{(\ell)})^p \right)^{1/p}$$

is a distance for $p \geq 1$.

3.2 Application 1: Feature Selection

We consider $\{\boldsymbol{x}_i\}_{i=1}^{n}$ and $\{\boldsymbol{y}_j\}_{j=1}^{m}$ as sets of samples from the two classes, respectively. An advantage of the FROT formulation is that we can determine the importance of each grouped feature. The optimal feature importance is given by

$$\alpha_\ell^* = \frac{\exp\left(\frac{1}{\eta} \langle \Pi^*, C_\ell \rangle\right)}{\sum_{\ell'=1}^{L} \exp\left(\frac{1}{\eta} \langle \Pi^*, C_{\ell'} \rangle\right)},$$

where $\Pi^*$ is the estimated transportation plan. Finally, we select the top-$k$ features by ranking $\alpha_\ell^*$. Note that $\boldsymbol{\alpha}^*$ approaches a one-hot vector for small $\eta$ and the uniform weight vector for large $\eta$.
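As a minimal sketch of this selection step (assuming the importance weights come from a FROT solver such as the Frank–Wolfe sketch above, and treating each feature as its own group; names and inputs are illustrative):

```python
import numpy as np

def per_feature_costs(X, Y):
    """Build one cost matrix per feature (each feature is its own group).

    X: (n, d) samples from class 1, Y: (m, d) samples from class 2.
    Returns a list of d matrices of squared differences.
    """
    return [(X[:, l][:, None] - Y[:, l][None, :]) ** 2 for l in range(X.shape[1])]

def select_top_k(alpha, k):
    # Rank features by their FROT importance weights and keep the k largest.
    return np.argsort(alpha)[::-1][:k]

# Hypothetical importance weights for L = 5 features.
alpha = np.array([0.02, 0.61, 0.05, 0.30, 0.02])
print(select_top_k(alpha, k=2))   # -> [1 3]
```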

3.3 Application 2: Semantic Correspondence

We apply our proposed FROT algorithm to semantic correspondence. Semantic correspondence is the problem of determining the matching of objects in two images. That is, given an input image pair with common objects, we formulate the semantic correspondence problem as estimating the transportation plan from the key points in the source image to those in the target image; this framework was proposed in [21]. In Figure 2, we show an overview of our proposed framework.

Cost matrix computation: In our framework, we employ a pre-trained convolutional neural network to extract a dense feature map from each convolutional layer. The dense feature map of the $\ell$th layer output of the $k$th image is given by

$$\boldsymbol{f}^{(k, \ell)} \in \mathbb{R}^{d_\ell \times h_k \times w_k},$$

where $w_k$ and $h_k$ are the width and height of the $k$th image, respectively, and $d_\ell$ is the dimension of the $\ell$th layer's feature map. Note that because the spatial size of the dense feature map differs for each layer, we resample the feature maps to the size of the first layer's feature map.

The $\ell$th layer's cost matrix for images $s$ and $t$ is given by

$$[C_\ell]_{ij} = \left\| \boldsymbol{f}_i^{(s, \ell)} - \boldsymbol{f}_j^{(t, \ell)} \right\|_2^2,$$

where $\boldsymbol{f}_i^{(s, \ell)}$ is the feature vector at the $i$th spatial location of the $\ell$th layer of image $s$. A potential problem with FROT is that the estimation depends significantly on the magnitude of the cost in each layer (i.e., group). Hence, normalizing each cost matrix is important. Therefore, we normalize each feature vector to unit length, i.e., $\bar{\boldsymbol{f}}_i^{(s,\ell)} = \boldsymbol{f}_i^{(s,\ell)} / \|\boldsymbol{f}_i^{(s,\ell)}\|_2$. Consequently, the cost matrix is given by $[C_\ell]_{ij} = \|\bar{\boldsymbol{f}}_i^{(s,\ell)} - \bar{\boldsymbol{f}}_j^{(t,\ell)}\|_2^2$. Other distance functions can also be used to define the cost.
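A minimal sketch of this per-layer cost construction is given below; the array shapes and the function name are assumptions for illustration, and the features are assumed to be already extracted and resampled to a common set of spatial locations:

```python
import numpy as np

def layer_cost_matrices(feats_src, feats_tgt, eps=1e-8):
    """Per-layer cost matrices for semantic correspondence.

    feats_src, feats_tgt: lists of (n_points, d_l) arrays, one per layer,
    aligned to the same set of spatial locations. Each feature vector is
    normalized to unit length before computing squared Euclidean costs.
    """
    costs = []
    for Fs, Ft in zip(feats_src, feats_tgt):
        Fs = Fs / (np.linalg.norm(Fs, axis=1, keepdims=True) + eps)
        Ft = Ft / (np.linalg.norm(Ft, axis=1, keepdims=True) + eps)
        # ||f_i - f_j||^2 = 2 - 2 <f_i, f_j> for unit-norm vectors
        costs.append(2.0 - 2.0 * (Fs @ Ft.T))
    return costs
```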

Computation of $\boldsymbol{a}$ and $\boldsymbol{b}$ with staircase re-weighting: For semantic correspondence, setting the weights $\boldsymbol{a}$ and $\boldsymbol{b}$ is important because semantic correspondence can be affected by background clutter. Therefore, we generate class activation maps (CAMs) [42] for the source and target images and use them to construct $\boldsymbol{a}$ and $\boldsymbol{b}$, respectively. For the CAM, we choose the class with the highest classification probability and normalize the map to the range $[0, 1]$.

Figure 2: Proposed semantic correspondence framework based on FROT.

4 Related Work

In this section, we review divergence measures and optimal transport.

Divergence measure and optimal transport: Divergence measures can be categorized into two families: $f$-divergences [1], including the Kullback–Leibler (KL) divergence [4] and the $\alpha$-divergence [28, 27], and integral probability metrics [24], such as the Wasserstein distance [35].

The KL divergence is a commonly used divergence. A naive approach for estimating the KL divergence between densities $p$ and $q$ is to estimate the probability densities separately using density estimators and then compute their ratio. However, density estimation is a difficult problem, and the resulting KL divergence estimate can be inaccurate. A more efficient approach is based on density ratio estimation, where we directly estimate the ratio $p/q$ without estimating the densities themselves [34]. For the Jensen–Shannon (JS) divergence [11], we can use relative density ratio estimation as an alternative to standard density ratio estimation [39]. For non-overlapping distributions, the KL divergence can be infinite. Moreover, in this case, neural network training with the KL and JS divergences can suffer from vanishing gradients.

To address the instability of the KL and JS divergences, distance-based approaches are promising. The maximum mean discrepancy (MMD) [13] is a kernel-based measure defined as the difference of the means of two distributions in a reproducing kernel Hilbert space (RKHS), which can be computed accurately without optimization. Another type of distance-based measure is the Wasserstein distance [26], which can be determined by solving the OT problem. An advantage of the Wasserstein distance is its robustness to noise; moreover, we can obtain the transportation plan, which is useful for many machine learning applications. To reduce the cost of computing the Wasserstein distance, the sliced Wasserstein distance is useful [18]. Recently, a tree variant of the Wasserstein distance was proposed [9, 19]; the sliced Wasserstein distance is a special case of this algorithm.

In addition to accelerating the computation, structured optimal transport incorporates structural information directly into OT problems [alvarez2018structured]. Specifically, they formulate a submodular optimal transport problem and solve it with a saddle-point mirror prox algorithm. More recently, more complex structural information, such as hierarchical structure, has been introduced into OT problems [alvarez2019unsupervised, yurochkin2019hierarchical]. These approaches successfully incorporate structural information into OT problems with respect to data samples. In contrast, FROT incorporates structural information over the features.

The work most closely related to FROT is a robust variant of the Wasserstein distance called the subspace robust Wasserstein distance [25]. The subspace robust Wasserstein distance method solves the OT problem in the most discriminative subspace, which can be determined by solving a dimensionality reduction problem. Owing to this subspace projection, it can successfully compute the Wasserstein distance from noisy data. FROT is a feature selection variant of the Wasserstein distance, whereas the subspace robust one is its dimensionality reduction counterpart.

OT applications: OT has received significant attention in several computer vision tasks. Applications include Wasserstein distance estimation [26], domain adaptation [40], multi-task learning [17], barycenter estimation [5], semantic correspondence [21], feature matching [33], photo album summarization [20], generative modeling [2, 3, 8, 36], and graph matching [37, 38]. Recently, OT was applied to the semantic correspondence problem and outperformed existing state-of-the-art semantic correspondence algorithms [21].

5 Experiments

In this section, we initially evaluate the FROT algorithm using synthetic datasets. Then, we demonstrate the performance using feature selection and semantic correspondence tasks.

5.1 Synthetic Data

We compare FROT with standard OT using synthetic datasets. In these experiments, we initially generate two-dimensional vectors $\boldsymbol{x}_{\text{true}}$ and $\boldsymbol{y}_{\text{true}}$ from two different distributions. Then, we concatenate noisy vectors $\boldsymbol{x}_{\text{noise}}$ and $\boldsymbol{y}_{\text{noise}}$, drawn from a common distribution, to $\boldsymbol{x}_{\text{true}}$ and $\boldsymbol{y}_{\text{true}}$, respectively, to give $\boldsymbol{x} = (\boldsymbol{x}_{\text{true}}, \boldsymbol{x}_{\text{noise}})$ and $\boldsymbol{y} = (\boldsymbol{y}_{\text{true}}, \boldsymbol{y}_{\text{noise}})$.

For FROT, we fix the regularization parameter $\eta$ and the number of iterations $T$ of the Frank–Wolfe algorithm, and the entropic regularization parameter $\epsilon$ is set to the same value for all methods. As a proof of concept, we set the true features as one group and the remaining noisy features as another group.
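A minimal sketch of this synthetic setup is given below; the sample sizes, noise dimensionality, and Gaussian parameters are placeholders, since the exact values are not recoverable from the text:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, d_noise = 50, 50, 8            # hypothetical sample sizes and noise dimension

# Two-dimensional "true" features drawn from two different Gaussians.
x_true = rng.normal(loc=[0.0, 0.0], scale=0.5, size=(n, 2))
y_true = rng.normal(loc=[3.0, 3.0], scale=0.5, size=(m, 2))

# Noise features drawn from a common Gaussian for both distributions.
x_noise = rng.normal(scale=1.0, size=(n, d_noise))
y_noise = rng.normal(scale=1.0, size=(m, d_noise))

X = np.hstack([x_true, x_noise])     # (n, 2 + d_noise)
Y = np.hstack([y_true, y_noise])     # (m, 2 + d_noise)

# Group structure used by FROT: true features vs. noisy features.
groups = [list(range(2)), list(range(2, 2 + d_noise))]
```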

Figure 1(a) shows the correspondence between $\boldsymbol{x}_{\text{true}}$ and $\boldsymbol{y}_{\text{true}}$ obtained with the vanilla OT algorithm. Figures 1(b) and 1(c) show the correspondences obtained by OT and FROT, respectively, between the noisy distributions $\boldsymbol{x}$ and $\boldsymbol{y}$. Although FROT identifies a good matching, OT fails to obtain a meaningful correspondence. We observed that the weight $\alpha_\ell$ corresponding to the true feature group is nearly one.

(a) Colon dataset.
(b) Leukemia dataset.
(c) Prostate_ge dataset.
(d) GLI_85 dataset.
Figure 3: Feature selection results. We report the test-set accuracy, averaged over 50 runs, of an SVM trained with the top-$k$ features selected by each method.
Data | # features | # samples | Wasserstein | Linear | MMD | FROT
Colon | 2000 | 62 | 21.38 (± 4.09) | 0.00 (± 0.00) | 1.36 (± 0.15) | 0.41 (± 0.07)
Leukemia | 7070 | 72 | 79.86 (± 16.95) | 0.01 (± 0.00) | 5.03 (± 0.79) | 1.13 (± 0.14)
Prostate_GE | 5966 | 102 | 61.05 (± 13.67) | 0.02 (± 0.00) | 6.01 (± 1.17) | 1.04 (± 0.11)
GLI_85 | 22283 | 85 | 426.24 (± 21.45) | 0.04 (± 0.00) | 23.6 (± 1.21) | 3.44 (± 0.36)
Table 1: Computational time comparison (seconds) for feature selection on biological datasets; standard deviations are shown in parentheses.

5.2 Feature selection

Here, we compare FROT with several baseline algorithms on feature selection problems. In this study, we employ high-dimensional, small-sample datasets with two-class classification tasks (see Table 1). All feature selection experiments were run on a Linux server with an Intel Xeon CPU E7-8890 v4 (2.20 GHz) and 2 TB of RAM.

In our experiments, we first randomly split the data into two sets (one for training and one for testing) and used the training set for feature selection and for building a classifier. Note that we standardized each feature using the training set. Then, we used the remaining set for testing. The trial was repeated 50 times, and we report the averaged classification accuracy. As baseline methods, we computed the Wasserstein distance, the maximum mean discrepancy (MMD) [12], and the linear correlation (https://scikit-learn.org/stable/modules/feature_selection.html) for each dimension and sorted the dimensions in descending order of score. Then, we selected the top-$k$ features as important features. For FROT, we computed the feature importance $\alpha_\ell$ and selected the features with the largest importance scores; the hyperparameters $\eta$ and $\epsilon$ were fixed across datasets. Then, we trained a two-class SVM (https://scikit-learn.org/stable/modules/generated/sklearn.svm.SVC.html) with the selected features.
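The following sketch shows one trial of this protocol for a generic per-feature scorer; the split ratio, SVM settings, and the scorer `score_fn` are placeholders, not the exact experimental configuration:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def one_trial(X, y, score_fn, k, seed=0):
    """One trial: split, score features on the training set, keep the top-k,
    train a 2-class SVM, and report test accuracy."""
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5,
                                              random_state=seed)
    scaler = StandardScaler().fit(X_tr)              # standardize with training stats
    X_tr, X_te = scaler.transform(X_tr), scaler.transform(X_te)
    scores = score_fn(X_tr, y_tr)                    # e.g. per-feature Wasserstein or MMD
    top_k = np.argsort(scores)[::-1][:k]
    clf = SVC().fit(X_tr[:, top_k], y_tr)
    return clf.score(X_te[:, top_k], y_te)
```

Averaging `one_trial` over repeated random seeds reproduces the kind of accuracy-versus-$k$ curves reported in Figure 3.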

Figure 3 shows the averaged classification accuracy relative to the number of selected features. From Figure 3, FROT is consistent with the Wasserstein distance based feature selection and outperforms the linear correlation method and MMD on two of the datasets. Table 1 shows the computational time (in seconds) of the methods. FROT is about two orders of magnitude faster than the Wasserstein distance baseline and is also faster than MMD. Note that although MMD is as fast as the proposed method, it cannot determine the correspondence between samples.

Methods aero bike bird boat bottle bus car cat chair cow dog horse moto person plant sheep train tv all
Authors’ original models:
CNNGeo [29] 21.3 15.1 34.6 12.8 31.2 26.3 24.0 30.6 11.6 24.3 20.4 12.2 19.7 15.6 14.3 9.6 28.5 28.8 18.1
A2Net [15] 20.8 17.1 37.4 13.9 33.6 29.4 26.5 34.9 12.0 26.5 22.5 13.3 21.3 20.0 16.9 11.5 28.9 31.6 20.1
WeakAlign [30] 23.4 17.0 41.6 14.6 37.6 28.1 26.6 32.6 12.6 27.9 23.0 13.6 21.3 22.2 17.9 10.9 31.5 34.8 21.1
NC-Net [31] 24.0 16.0 45.0 13.7 35.7 25.9 19.0 50.4 14.3 32.6 27.4 19.2 21.7 20.3 20.4 13.6 33.6 40.4 26.4
SPair-71k finetuned models:
CNNGeo [29] 23.4 16.7 40.2 14.3 36.4 27.7 26.0 32.7 12.7 27.4 22.8 13.7 20.9 21.0 17.5 10.2 30.8 34.1 20.6
A2Net [15] 22.6 18.5 42.0 16.4 37.9 30.8 26.5 35.6 13.3 29.6 24.3 16.0 21.6 22.8 20.5 13.5 31.4 36.5 22.3
WeakAlign [30] 22.2 17.6 41.9 15.1 38.1 27.4 27.2 31.8 12.8 26.8 22.6 14.2 20.0 22.2 17.9 10.4 32.2 35.1 20.9
NC-Net [31] 17.9 12.2 32.1 11.7 29.0 19.9 16.1 39.2 9.9 23.9 18.8 15.7 17.4 15.9 14.8 9.6 24.2 31.1 20.1
SPair-71k validation:
HPF [22] 25.2 18.9 52.1 15.7 38.0 22.8 19.1 52.9 17.9 33.0 32.8 20.6 24.4 27.9 21.1 15.9 31.5 35.6 28.2
OT-HPF [21] 32.6 18.9 62.5 20.7 42.0 26.1 20.4 61.4 19.7 41.3 41.7 29.8 29.6 31.8 25.0 23.5 44.7 37.0 33.9
Without SPair-71k validation:
OT 30.1 16.5 50.4 17.3 38.0 22.9 19.7 54.3 17.0 28.4 31.3 22.1 28.0 19.5 21.0 17.8 42.6 28.8 28.3
FROT () 35.0 20.9 56.3 23.4 40.7 27.2 21.9 62.0 17.5 38.8 36.2 27.9 28.0 30.4 26.9 23.1 49.7 38.4 33.7
FROT () 34.1 18.8 56.9 19.9 40.0 25.6 19.2 61.9 17.4 38.7 36.5 25.6 26.9 27.2 26.3 22.1 50.3 38.6 32.8
FROT () 33.4 19.4 56.6 20.0 39.6 26.1 19.1 62.4 17.9 38.0 36.5 26.0 27.5 26.5 25.5 21.6 49.7 38.9 32.7
FROT () 32.8 19.1 55.8 19.8 39.1 25.7 19.7 61.5 17.2 37.1 35.9 25.1 27.2 25.0 24.7 21.4 47.7 37.8 32.0
Table 2: Per-class PCK results on the SPair-71k dataset. All models use ResNet101 as the backbone.

5.3 Semantic correspondence

We evaluated our FROT algorithm on semantic correspondence. In this study, we used the SPair-71k dataset [23], which consists of image pairs with large variations in viewpoint and scale. For evaluation, we employed the percentage of correct key-points (PCK), which counts the number of accurately predicted key-points given a fixed threshold [23]. All semantic correspondence experiments were run on a Linux server with an NVIDIA P100 GPU.

For the proposed framework, we employed ResNet101 [14] pre-trained on ImageNet [7] for feature and activation map extraction. Note that we did not fine-tune the network. We compared the proposed method with several baselines [23]. In particular, HPF [22] and OT-HPF [21] are state-of-the-art methods for semantic correspondence. HPF and OT-HPF require a validation dataset to select important layers, whereas FROT does not. OT is a simple optimal transport based method that does not select layers.

Table 2 shows the per-class PCK results on the SPair-71k dataset. FROT outperforms most existing baselines, including HPF and OT. Moreover, FROT is consistent with OT-HPF [21], which requires a validation dataset to select important layers. In this experiment, a suitable choice of $\eta$ gives favorable performance. Figure 4(a) shows an example of the matched key-points obtained with the FROT algorithm, and Figure 4(b) shows the corresponding feature importance. The lower the importance value, the less the corresponding layer is used. An interesting finding is that the layer selected as most important in this case is the third layer from the last.

(a) FROT matching result.
(b) Feature importance of FROT.
Figure 4: One-to-one matching results of FROT and the feature importance of FROT.

6 Conclusion

In this paper, we proposed feature robust optimal transport (FROT) for high-dimensional data, which jointly solves the feature selection and OT problems. An advantage of FROT is that it is a convex optimization problem, and a globally optimal solution can be obtained with the Frank–Wolfe algorithm. We then used FROT for high-dimensional feature selection and semantic correspondence problems. Through extensive experiments, we demonstrated that the proposed algorithm is consistent with state-of-the-art algorithms in both feature selection and semantic correspondence.

References

  • [1] S. M. Ali and S. D. Silvey (1966) A general class of coefficients of divergence of one distribution from another. Journal of the Royal Statistical Society. Series B (Methodological), pp. 131–142. Cited by: §4.
  • [2] M. Arjovsky, S. Chintala, and L. Bottou (2017) Wasserstein generative adversarial networks. In ICML, Cited by: §4.
  • [3] C. Bunne, D. Alvarez-Melis, A. Krause, and S. Jegelka (2019) Learning generative models across incomparable spaces. In ICML, Cited by: §4.
  • [4] T. M. Cover and J. A. Thomas (2012) Elements of information theory. John Wiley & Sons. Cited by: §4.
  • [5] M. Cuturi and A. Doucet (2014) Fast computation of wasserstein barycenters. ICML. Cited by: §1, §4.
  • [6] M. Cuturi (2013) Sinkhorn distances: lightspeed computation of optimal transport. In NIPS, Cited by: §1, §1, §2.
  • [7] J. Deng, W. Dong, R. Socher, L. Li, K. Li, and L. Fei-Fei (2009) Imagenet: a large-scale hierarchical image database. In CVPR, Cited by: §5.3.
  • [8] I. Deshpande, Y. Hu, R. Sun, A. Pyrros, N. Siddiqui, S. Koyejo, Z. Zhao, D. Forsyth, and A. G. Schwing (2019) Max-sliced wasserstein distance and its use for gans. In CVPR, Cited by: §4.
  • [9] S. N. Evans and F. A. Matsen (2012) The phylogenetic kantorovich–rubinstein metric for environmental sequence samples. Journal of the Royal Statistical Society: Series B (Statistical Methodology) 74 (3), pp. 569–592. Cited by: §4.
  • [10] M. Frank and P. Wolfe (1956) An algorithm for quadratic programming. Naval research logistics quarterly 3 (1-2), pp. 95–110. Cited by: §1, §3.1.
  • [11] B. Fuglede and F. Topsoe (2004) Jensen-shannon divergence and hilbert space embedding. In ISIT, Cited by: §4.
  • [12] A. Gretton, K. Fukumizu, C. H. Teo, L. Song, B. Schölkopf, and A. J. Smola (2007) A kernel statistical test of independence. In NIPS, Cited by: §5.2.
  • [13] A. Gretton, K. M. Borgwardt, M. J. Rasch, B. Schölkopf, and A. Smola (2012) A kernel two-sample test. Journal of Machine Learning Research 13 (Mar), pp. 723–773. Cited by: §4.
  • [14] K. He, X. Zhang, S. Ren, and J. Sun (2016) Deep residual learning for image recognition. In CVPR, Cited by: §5.3.
  • [15] P. Hongsuck Seo, J. Lee, D. Jung, B. Han, and M. Cho (2018) Attentive semantic alignment with offset-aware correlation kernels. In ECCV, Cited by: Table 2.
  • [16] M. Jaggi (2013) Revisiting frank-wolfe: projection-free sparse convex optimization.. In ICML, Cited by: §1, §3.1.
  • [17] H. Janati, M. Cuturi, and A. Gramfort (2019) Wasserstein regularization for sparse multi-task regression. In AISTATS, Cited by: §1, §4.
  • [18] S. Kolouri, Y. Zou, and G. K. Rohde (2016) Sliced Wasserstein kernels for probability distributions. In CVPR, Cited by: §4.
  • [19] T. Le, M. Yamada, K. Fukumizu, and M. Cuturi (2019) Tree-sliced approximation of wasserstein distances. NeurIPS. Cited by: §4.
  • [20] Y. Liu, M. Yamada, Y. H. Tsai, T. Le, R. Salakhutdinov, and Y. Yang (2019) LSMI-sinkhorn: semi-supervised squared-loss mutual information estimation with optimal transport. arXiv preprint arXiv:1909.02373. Cited by: §1, §4.
  • [21] Y. Liu, L. Zhu, M. Yamada, and Y. Yang (2020) Semantic correspondence as an optimal transport problem. In CVPR, Cited by: §1, §1, §3.3, §4, §5.3, §5.3, Table 2.
  • [22] J. Min, J. Lee, J. Ponce, and M. Cho (2019) Hyperpixel flow: semantic correspondence with multi-layer neural features. In ICCV, Cited by: §5.3, Table 2.
  • [23] J. Min, J. Lee, J. Ponce, and M. Cho (2019) SPair-71k: a large-scale benchmark for semantic correspondence. arXiv preprint arXiv:1908.10543. Cited by: §5.3, §5.3.
  • [24] A. Müller (1997) Integral probability metrics and their generating classes of functions. Advances in Applied Probability 29 (2), pp. 429–443. Cited by: §4.
  • [25] F. Paty and M. Cuturi (2019) Subspace robust wasserstein distances. In ICML, Cited by: §1, §1, §3.1, §4.
  • [26] G. Peyré, M. Cuturi, et al. (2019) Computational optimal transport. Foundations and Trends® in Machine Learning 11 (5-6), pp. 355–607. Cited by: Triangle inequality, §1, §4, §4.
  • [27] B. Póczos and J. Schneider (2011) On the estimation of alpha-divergences. In AISTATS, Cited by: §4.
  • [28] A. Rényi et al. (1961) On measures of entropy and information. In Proceedings of the Fourth Berkeley Symposium on Mathematical Statistics and Probability, Volume 1: Contributions to the Theory of Statistics, Cited by: §4.
  • [29] I. Rocco, R. Arandjelovic, and J. Sivic (2017) Convolutional neural network architecture for geometric matching. In CVPR, Cited by: Table 2.
  • [30] I. Rocco, R. Arandjelović, and J. Sivic (2018) End-to-end weakly-supervised semantic alignment. In CVPR, Cited by: Table 2.
  • [31] I. Rocco, M. Cimpoi, R. Arandjelović, A. Torii, T. Pajdla, and J. Sivic (2018) Neighbourhood consensus networks. In NeurIPS, Cited by: Table 2.
  • [32] Y. Rubner, C. Tomasi, and L. J. Guibas (2000) The earth mover’s distance as a metric for image retrieval. International Journal of Computer Vision 40 (2), pp. 99–121. Cited by: §1.
  • [33] P. Sarlin, D. DeTone, T. Malisiewicz, and A. Rabinovich (2019) SuperGlue: learning feature matching with graph neural networks. arXiv preprint arXiv:1911.11763. Cited by: §1, §4.
  • [34] M. Sugiyama, S. Nakajima, H. Kashima, P. von Bünau, and M. Kawanabe (2008) Direct importance estimation with model selection and its application to covariate shift adaptation. In NIPS, Cited by: §4.
  • [35] C. Villani (2008) Optimal transport: old and new. Vol. 338, Springer Science & Business Media. Cited by: §4.
  • [36] J. Wu, Z. Huang, D. Acharya, W. Li, J. Thoma, D. P. Paudel, and L. V. Gool (2019) Sliced wasserstein generative models. In CVPR, Cited by: §4.
  • [37] H. Xu, D. Luo, and L. Carin (2019) Scalable gromov-wasserstein learning for graph partitioning and matching. arXiv preprint arXiv:1905.07645. Cited by: §4.
  • [38] H. Xu, D. Luo, H. Zha, and L. C. Duke (2019) Gromov-wasserstein learning for graph matching and node embedding. In ICML, Cited by: §4.
  • [39] M. Yamada, T. Suzuki, T. Kanamori, H. Hachiya, and M. Sugiyama (2013) Relative density-ratio estimation for robust distribution comparison. Neural computation 25 (5), pp. 1324–1370. Cited by: §4.
  • [40] Y. Yan, W. Li, H. Wu, H. Min, M. Tan, and Q. Wu (2018) Semi-supervised optimal transport for heterogeneous domain adaptation.. In IJCAI, Cited by: §1, §4.
  • [41] M. Yuan and Y. Lin (2006) Model selection and estimation in regression with grouped variables. Journal of the Royal Statistical Society: Series B (Statistical Methodology) 68 (1), pp. 49–67. Cited by: §1.
  • [42] B. Zhou, A. Khosla, A. Lapedriza, A. Oliva, and A. Torralba (2016) Learning deep features for discriminative localization. In CVPR, Cited by: §3.3.

Appendix

Proof of Proposition 1

We optimize the function with respect to $\boldsymbol{\alpha}$:

$$\max_{\boldsymbol{\alpha}} \ J(\boldsymbol{\alpha}) \quad \text{s.t.} \quad \boldsymbol{\alpha}^\top \mathbf{1}_L = 1,$$

where

$$J(\boldsymbol{\alpha}) = \sum_{\ell=1}^{L} \alpha_\ell \langle \Pi, C_\ell \rangle - \eta \sum_{\ell=1}^{L} \alpha_\ell \log \alpha_\ell. \qquad (2)$$

Because the entropic regularization $\sum_{\ell} \alpha_\ell \log \alpha_\ell$ is a strongly convex function and its negative counterpart is a strongly concave function, the maximization problem is a concave optimization problem.

We consider the following objective function with the Lagrange multiplier $\lambda$:

$$\widetilde{J}(\boldsymbol{\alpha}, \lambda) = \sum_{\ell=1}^{L} \alpha_\ell \langle \Pi, C_\ell \rangle - \eta \sum_{\ell=1}^{L} \alpha_\ell \log \alpha_\ell + \lambda \left( 1 - \sum_{\ell=1}^{L} \alpha_\ell \right).$$

Note that owing to the entropic regularization, the non-negativity constraint is automatically satisfied.

Taking the derivative with respect to $\alpha_\ell$, we have

$$\frac{\partial \widetilde{J}(\boldsymbol{\alpha}, \lambda)}{\partial \alpha_\ell} = \langle \Pi, C_\ell \rangle - \eta (\log \alpha_\ell + 1) - \lambda.$$

Thus, the optimal $\alpha_\ell$ has the form

$$\alpha_\ell = \exp\left( \frac{\langle \Pi, C_\ell \rangle - \lambda}{\eta} - 1 \right),$$

where $\lambda$ is determined so that $\boldsymbol{\alpha}$ satisfies the sum-to-one constraint. Hence, the optimal $\alpha_\ell$ is given by

$$\alpha_\ell^* = \frac{\exp\left(\frac{1}{\eta} \langle \Pi, C_\ell \rangle\right)}{\sum_{\ell'=1}^{L} \exp\left(\frac{1}{\eta} \langle \Pi, C_{\ell'} \rangle\right)}.$$

Substituting $\boldsymbol{\alpha}^*$ into Eq. (2) and using $\log \alpha_\ell^* = \frac{1}{\eta}\langle \Pi, C_\ell \rangle - \log \sum_{\ell'} \exp\left(\frac{1}{\eta}\langle \Pi, C_{\ell'} \rangle\right)$, we have

$$J(\boldsymbol{\alpha}^*) = \sum_{\ell=1}^{L} \alpha_\ell^* \langle \Pi, C_\ell \rangle - \sum_{\ell=1}^{L} \alpha_\ell^* \langle \Pi, C_\ell \rangle + \eta \log \sum_{\ell=1}^{L} \exp\left(\frac{1}{\eta}\langle \Pi, C_\ell \rangle\right) = \eta \log \sum_{\ell=1}^{L} \exp\left(\frac{1}{\eta}\langle \Pi, C_\ell \rangle\right).$$

Therefore, the final objective function is given by

$$G_\eta(\Pi) = \eta \log \left( \sum_{\ell=1}^{L} \exp\left(\frac{\langle \Pi, C_\ell \rangle}{\eta}\right) \right).$$

Proof of Proposition 2

Proof: For $\Pi = \lambda \Pi_1 + (1 - \lambda) \Pi_2$ with $\lambda \in [0, 1]$, we have

$$\sum_{\ell=1}^{L} \exp\left(\frac{\langle \lambda \Pi_1 + (1-\lambda)\Pi_2, C_\ell \rangle}{\eta}\right) = \sum_{\ell=1}^{L} \exp\left(\frac{\langle \Pi_1, C_\ell \rangle}{\eta}\right)^{\lambda} \exp\left(\frac{\langle \Pi_2, C_\ell \rangle}{\eta}\right)^{1-\lambda} \leq \left( \sum_{\ell=1}^{L} \exp\left(\frac{\langle \Pi_1, C_\ell \rangle}{\eta}\right) \right)^{\lambda} \left( \sum_{\ell=1}^{L} \exp\left(\frac{\langle \Pi_2, C_\ell \rangle}{\eta}\right) \right)^{1-\lambda}.$$

Here, we use Hölder's inequality with $p = 1/\lambda$, $q = 1/(1-\lambda)$, and $1/p + 1/q = 1$.

Applying the logarithm to both sides of the inequality and multiplying by $\eta$, we have

$$G_\eta(\lambda \Pi_1 + (1-\lambda)\Pi_2) \leq \lambda\, G_\eta(\Pi_1) + (1-\lambda)\, G_\eta(\Pi_2),$$

and hence $G_\eta(\Pi)$ is convex with respect to $\Pi$.

Proof of Proposition 3

For the distance function $d(\boldsymbol{x}, \boldsymbol{y})$, we prove that

$$\mathrm{FRWD}_p(\mu, \nu) = \left( \min_{\Pi \in U(\mu,\nu)} \ \max_{\boldsymbol{\alpha} \in \Sigma_L} \ \sum_{\ell=1}^{L} \alpha_\ell \sum_{i,j} \pi_{ij}\, d(\boldsymbol{x}_i^{(\ell)}, \boldsymbol{y}_j^{(\ell)})^p \right)^{1/p}$$

is a distance for $p \geq 1$.

It is clear that $\mathrm{FRWD}_p$ is symmetric and that $\mathrm{FRWD}_p(\mu, \nu) = 0$ if $\mu = \nu$.

Triangle inequality

Let $\mu = \sum_{i} a_i \delta_{\boldsymbol{x}_i}$, $\nu = \sum_{j} b_j \delta_{\boldsymbol{y}_j}$, and $\sigma = \sum_{k} c_k \delta_{\boldsymbol{z}_k}$; we prove that

$$\mathrm{FRWD}_p(\mu, \sigma) \leq \mathrm{FRWD}_p(\mu, \nu) + \mathrm{FRWD}_p(\nu, \sigma).$$

To simplify the notation in this proof, we define, for each group $\ell$, the distance "matrix" $D_\ell(X, Y)$ such that $d(\boldsymbol{x}_i^{(\ell)}, \boldsymbol{y}_j^{(\ell)})$ is the $i$th row and $j$th column element of $D_\ell(X, Y)$. Moreover, $D_\ell(X, Y)^{\odot p}$ denotes the "matrix" where each element of $D_\ell(X, Y)$ is raised to the power $p$, so that $\mathrm{FRWD}_p(\mu, \nu)^p = \min_{\Pi \in U(\mu,\nu)} \max_{\boldsymbol{\alpha} \in \Sigma_L} \sum_{\ell} \alpha_\ell \langle \Pi, D_\ell(X, Y)^{\odot p} \rangle$.

Consider the optimal transportation plan $P$ of $\mathrm{FRWD}_p(\mu, \nu)$ and the optimal transportation plan $Q$ of $\mathrm{FRWD}_p(\nu, \sigma)$. Similarly to the proof for the Wasserstein distance in [26], let $S = P\,\mathrm{diag}(1/\boldsymbol{b})\,Q$. We can show that $S \in U(\mu, \sigma)$, and hence

$$\mathrm{FRWD}_p(\mu, \sigma) \leq \left( \max_{\boldsymbol{\alpha} \in \Sigma_L} \sum_{\ell} \alpha_\ell \langle S, D_\ell(X, Z)^{\odot p} \rangle \right)^{1/p}.$$

Let $\boldsymbol{\alpha}^*$ attain this maximum. Because $s_{ik} = \sum_{j} p_{ij} q_{jk} / b_j$ and $d$ satisfies the triangle inequality, we have

$$\sum_{\ell} \alpha^*_\ell \langle S, D_\ell(X, Z)^{\odot p} \rangle = \sum_{\ell, i, j, k} \alpha^*_\ell \frac{p_{ij} q_{jk}}{b_j}\, d(\boldsymbol{x}_i^{(\ell)}, \boldsymbol{z}_k^{(\ell)})^p \leq \sum_{\ell, i, j, k} \alpha^*_\ell \frac{p_{ij} q_{jk}}{b_j} \left( d(\boldsymbol{x}_i^{(\ell)}, \boldsymbol{y}_j^{(\ell)}) + d(\boldsymbol{y}_j^{(\ell)}, \boldsymbol{z}_k^{(\ell)}) \right)^p.$$

By letting $A_{\ell i j k} = \left(\alpha^*_\ell\, p_{ij} q_{jk} / b_j\right)^{1/p} d(\boldsymbol{x}_i^{(\ell)}, \boldsymbol{y}_j^{(\ell)})$ and $B_{\ell i j k} = \left(\alpha^*_\ell\, p_{ij} q_{jk} / b_j\right)^{1/p} d(\boldsymbol{y}_j^{(\ell)}, \boldsymbol{z}_k^{(\ell)})$, the right-hand side of this inequality can be rewritten as $\|A + B\|_p^p$, and

$$\|A + B\|_p \leq \|A\|_p + \|B\|_p = \left( \sum_{\ell} \alpha^*_\ell \langle P, D_\ell(X, Y)^{\odot p} \rangle \right)^{1/p} + \left( \sum_{\ell} \alpha^*_\ell \langle Q, D_\ell(Y, Z)^{\odot p} \rangle \right)^{1/p}$$

by the Minkowski inequality, where we use $\sum_k q_{jk} = b_j$ and $\sum_i p_{ij} = b_j$ to marginalize out the glued index. Finally, since $\boldsymbol{\alpha}^* \in \Sigma_L$ and $P$, $Q$ are the optimal plans,

$$\left( \sum_{\ell} \alpha^*_\ell \langle P, D_\ell(X, Y)^{\odot p} \rangle \right)^{1/p} \leq \mathrm{FRWD}_p(\mu, \nu), \qquad \left( \sum_{\ell} \alpha^*_\ell \langle Q, D_\ell(Y, Z)^{\odot p} \rangle \right)^{1/p} \leq \mathrm{FRWD}_p(\nu, \sigma),$$

which completes the proof of the triangle inequality.