1 Introduction
The Wasserstein distance has its roots in optimal transport (OT) theory (Villani, 2008) and forms a metric between two probability measures. It has attracted abundant attention in data science and machine learning due to its convenient theoretical properties and its applications in many domains (Solomon et al., 2014; Frogner et al., 2015; Montavon et al., 2016; Kolouri et al., 2017; Courty et al., 2017; Peyré & Cuturi, 2018; Schmitz et al., 2018), especially in implicit generative modeling such as OT-based generative adversarial networks (GANs) and variational autoencoders (Arjovsky et al., 2017; Bousquet et al., 2017; Gulrajani et al., 2017; Tolstikhin et al., 2018). While OT brings new perspectives and principled ways to formalize problems, OT-based methods usually suffer from high computational complexity. The Wasserstein distance is often the computational bottleneck, and evaluating it between multidimensional measures is numerically intractable in general. This computational burden is a major limiting factor in the application of OT distances to large-scale data analysis. Recently, several numerical methods have been proposed to speed up the evaluation of the Wasserstein distance. For instance, entropic regularization techniques (Cuturi, 2013; Cuturi & Peyré, 2015; Solomon et al., 2015) provide a fast approximation to the Wasserstein distance by regularizing the original OT problem with an entropy term. The linear OT approach (Wang et al., 2013; Kolouri et al., 2016a) further simplifies this computation for a given dataset by linearly approximating pairwise distances with a functional defined on distances to a reference measure. Other notable contributions towards computational methods for OT include multi-scale and sparse approximation approaches (Oberman & Ruan, 2015; Schmitzer, 2016), and Newton-based schemes for semi-discrete OT (Lévy, 2015; Kitagawa et al., 2016).
There are some special favorable cases where solving the OT problem is easy and reasonably cheap. In particular, the Wasserstein distance between one-dimensional probability densities has a closed-form formula that can be efficiently approximated. This convenient property motivates the use of the sliced-Wasserstein distance (Bonneel et al., 2015), an alternative OT distance obtained by computing infinitely many linear projections of a high-dimensional distribution onto one-dimensional distributions and then averaging the Wasserstein distances between these one-dimensional representations. While it has similar theoretical properties (Bonnotte, 2013), the sliced-Wasserstein distance has significantly lower computational requirements than the classical Wasserstein distance. It has therefore recently attracted ample attention and has successfully been applied to a variety of practical tasks (Bonneel et al., 2015; Kolouri et al., 2016b; Carriere et al., 2017; Karras et al., 2017; Şimşekli et al., 2018; Deshpande et al., 2018; Kolouri et al., 2018, 2019).
As we will detail in the next sections, the linear projection process used in the sliced-Wasserstein distance is closely related to the Radon transform, which is widely used in tomography (Radon, 1917; Helgason, 2011). In other words, the sliced-Wasserstein distance is calculated via linear slicing of the probability distributions. However, the linear nature of these projections does not guarantee an efficient evaluation of the sliced-Wasserstein distance: in very high-dimensional settings, the data often lives on a thin manifold, and the number of randomly chosen linear projections required to capture the structure of the data distribution grows very quickly (Şimşekli et al., 2018). Reducing the number of required projections would thus result in a significant performance improvement in sliced-Wasserstein computations.

Contributions. In this paper, we address the aforementioned computational issues of the sliced-Wasserstein distance and, for the first time, extend linear slicing to nonlinear slicing of probability measures. Our main contributions are summarized as follows:

- Using the mathematics of the generalized Radon transform (Beylkin, 1984), we extend the definition of the sliced-Wasserstein distance to an entire class of distances, which we call the generalized sliced-Wasserstein (GSW) distance. We prove that replacing the linear projections with polynomial projections still yields a valid distance metric, and we identify general conditions under which the GSW distance is a distance function.

- We then show that, instead of using infinitely many projections as required by the GSW distance, we can still define a valid distance metric by using a single projection, as long as this projection gives the maximal distance in the projected space. We aptly call this distance the max-GSW distance. The max-GSW distance vastly reduces the computational cost induced by the projection operations; however, it comes with an additional cost, since it requires an optimization over the space of projectors.

- Due to their inherent nonlinearity, the GSW and max-GSW distances are expected to capture the complex structure of high-dimensional distributions using far fewer projections, which reduces the overall computational burden significantly. We verify this in our experiments, where we illustrate the superior performance of the proposed distances in both synthetic and real-data settings.
2 Background
We review in this section the preliminary concepts and formulations needed to develop our framework, namely the Wasserstein distance, the Radon transform, the sliced-Wasserstein distance and the maximum sliced-Wasserstein distance. In what follows, we denote by $P_p(\Omega)$ the set of Borel probability measures with finite $p$'th moment defined on a given metric space $(\Omega, d)$, and by $\mu \in P_p(X)$ and $\nu \in P_p(Y)$ probability measures defined on $X, Y \subseteq \Omega$ with corresponding probability density functions $I_\mu$ and $I_\nu$, i.e. $d\mu(x) = I_\mu(x)\,dx$ and $d\nu(y) = I_\nu(y)\,dy$.

2.1 Wasserstein Distance
The $p$-Wasserstein distance, $W_p$, between $\mu$ and $\nu$ is defined as the solution of the optimal mass transportation problem (Villani, 2008):

(1)  $W_p(\mu, \nu) = \left( \inf_{\gamma \in \Gamma(\mu,\nu)} \int_{X \times Y} c^p(x, y)\, d\gamma(x, y) \right)^{\frac{1}{p}}$

where $c(x, y)$ is the cost function, and $\Gamma(\mu, \nu)$ is the set of all transportation plans, i.e. joint measures $\gamma \in P(X \times Y)$ such that $\gamma(A \times Y) = \mu(A)$ and $\gamma(X \times B) = \nu(B)$ for any Borel subsets $A \subseteq X$ and $B \subseteq Y$.
Due to Brenier's theorem (Brenier, 1991), for absolutely continuous probability measures $\mu$ and $\nu$ (with respect to the Lebesgue measure), the $p$-Wasserstein distance can be equivalently obtained from

(2)  $W_p(\mu, \nu) = \left( \inf_{f \in MP(\mu,\nu)} \int_X c^p(x, f(x))\, d\mu(x) \right)^{\frac{1}{p}}$

where $MP(\mu, \nu) = \{ f : X \to Y \mid f_\#\mu = \nu \}$ and $f_\#\mu$ represents the pushforward of measure $\mu$, characterized as $f_\#\mu(B) = \mu(f^{-1}(B))$ for any Borel subset $B \subseteq Y$.
Note that in most engineering and computer science applications, $\Omega$ is a compact subset of $\mathbb{R}^d$ and $c(x, y) = \|x - y\|_2$ is the Euclidean distance. By abuse of notation, we will use $W_p(\mu, \nu)$ and $W_p(I_\mu, I_\nu)$ interchangeably.
One-dimensional distributions: The case of one-dimensional continuous probability measures is especially interesting, as the Wasserstein distance has a closed-form solution. More precisely, for one-dimensional probability measures, there exists a unique monotonically increasing transport map that pushes one measure to another. Let $F_\mu(x) = \mu((-\infty, x]) = \int_{-\infty}^{x} I_\mu(\tau)\, d\tau$ be the cumulative distribution function (CDF) of $I_\mu$ and define $F_\nu$ to be the CDF of $I_\nu$. The optimal transport map is then uniquely defined as $f(x) = F_\nu^{-1}(F_\mu(x))$ and, consequently, the Wasserstein distance has an analytical form given as follows:

(3)  $W_p(\mu, \nu) = \left( \int_0^1 \left| F_\mu^{-1}(t) - F_\nu^{-1}(t) \right|^p dt \right)^{\frac{1}{p}}$

where Eq. (3) results from the change of variable $t = F_\mu(x)$. Note that for empirical distributions, Eq. (3) is calculated by simply sorting the samples from the two distributions and computing the average cost between the sorted samples. This requires only $O(M)$ operations at best and $O(M \log M)$ at worst, where $M$ is the number of samples drawn from each distribution (see Kolouri et al. (2019) for more details). The closed-form solution of the Wasserstein distance for one-dimensional distributions is an attractive property that gives rise to the sliced-Wasserstein (SW) distance. Next, we review the Radon transform, which enables the definition of the SW distance. We also formulate an alternative OT distance called the maximum sliced-Wasserstein distance.
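As a concrete illustration of the sorting procedure described above, the following sketch (not code from the paper; the function name `wasserstein_1d` and the equal-sample-count assumption are ours) computes the empirical one-dimensional $p$-Wasserstein distance:

```python
import numpy as np

def wasserstein_1d(x, y, p=2):
    """Empirical p-Wasserstein distance between two 1-D sample sets of
    equal size, via Eq. (3): sort both samples, then average the
    pairwise transport costs between sorted samples."""
    x_sorted = np.sort(x)
    y_sorted = np.sort(y)
    return np.mean(np.abs(x_sorted - y_sorted) ** p) ** (1.0 / p)
```

Sorting dominates the cost, giving the $O(M \log M)$ worst case mentioned above.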
2.2 Radon Transform
The standard Radon transform, denoted by $\mathcal{R}$, maps a function $I \in L^1(\mathbb{R}^d)$, where $L^1(\mathbb{R}^d) = \{ I : \mathbb{R}^d \to \mathbb{R} \mid \int_{\mathbb{R}^d} |I(x)|\, dx < \infty \}$, to the infinite set of its integrals over the hyperplanes of $\mathbb{R}^d$ and is defined as

(4)  $\mathcal{R}I(t, \theta) = \int_{\mathbb{R}^d} I(x)\, \delta(t - \langle x, \theta \rangle)\, dx$

for $(t, \theta) \in \mathbb{R} \times \mathbb{S}^{d-1}$, where $\mathbb{S}^{d-1}$ stands for the $(d-1)$-dimensional unit sphere, $\delta(\cdot)$ for the one-dimensional Dirac delta function, and $\langle \cdot, \cdot \rangle$ for the Euclidean inner product. Note that $\mathcal{R} : L^1(\mathbb{R}^d) \to L^1(\mathbb{R} \times \mathbb{S}^{d-1})$. Each hyperplane can be written as:

(5)  $H(t, \theta) = \{ x \in \mathbb{R}^d \mid \langle x, \theta \rangle = t \}$

which alternatively can be interpreted as a level set of the function $g : \mathbb{R}^d \times \mathbb{S}^{d-1} \to \mathbb{R}$ defined as $g(x, \theta) = \langle x, \theta \rangle$. For a fixed $\theta$, the integrals over all hyperplanes orthogonal to $\theta$ define a continuous function $\mathcal{R}I(\cdot, \theta) : \mathbb{R} \to \mathbb{R}$, which is a projection (or a slice) of $I$.
The Radon transform is a linear bijection (Natterer, 1986; Helgason, 2011) and its inverse is defined as:

(6)  $I(x) = \int_{\mathbb{S}^{d-1}} \left( \mathcal{R}I(\cdot, \theta) * \eta(\cdot) \right)(\langle x, \theta \rangle)\, d\theta$

where $\eta(\cdot)$ is a one-dimensional high-pass filter with corresponding Fourier transform $\mathcal{F}\eta(\omega) \propto |\omega|^{d-1}$, which appears due to the Fourier slice theorem (Helgason, 2011), and '$*$' is the convolution operator. The above definition of the inverse Radon transform is also known as the filtered back-projection method, which is extensively used for image reconstruction in the biomedical imaging community. Intuitively, each one-dimensional projection (or slice) $\mathcal{R}I(\cdot, \theta)$ is first filtered via a high-pass filter and then smeared back into $\mathbb{R}^d$ along $H(\cdot, \theta)$ to approximate $I$. The summation of all smeared approximations then reconstructs $I$. Note that in practice, acquiring an infinite number of projections is not feasible, therefore the integration in the filtered back-projection formulation is replaced with a finite summation over projections (i.e., a Monte Carlo approximation).
Radon transform of empirical PDFs: The Radon transform of $I_\mu$ simply follows Equation (4), where $\mathcal{R}I_\mu(\cdot, \theta)$ is a one-dimensional marginal distribution of $I_\mu$. However, in most machine learning applications we do not have access to the distribution itself but to a set of samples drawn from $I_\mu$ and denoted by $\{x_m\}_{m=1}^{M}$. In such scenarios, kernel density estimation can be used to approximate $I_\mu$ from its samples: $I_\mu(x) \approx \frac{1}{M} \sum_{m=1}^{M} \phi(x - x_m)$, where $\phi$ is a density kernel such that $\int_{\mathbb{R}^d} \phi(x)\, dx = 1$ (e.g., a Gaussian kernel). The Radon transform of $I_\mu$ can then be approximated by $\mathcal{R}I_\mu(t, \theta) \approx \frac{1}{M} \sum_{m=1}^{M} \mathcal{R}\phi(t - \langle x_m, \theta \rangle, \theta)$.

Note that certain density kernels have an analytical Radon transform. For instance, for $\phi(x) = \delta(x)$, the Radon transform is $\mathcal{R}\phi(t, \theta) = \delta(t)$. Similarly, for isotropic Gaussian kernels $\phi(x) = \mathcal{N}(x; 0, \sigma^2 \mathbf{I}_d)$, the Radon transform is $\mathcal{R}\phi(t, \theta) = \mathcal{N}(t; 0, \sigma^2)$. Moreover, given the high-dimensional nature of the problem, estimating the density $I_\mu$ in $\mathbb{R}^d$ requires a large number of samples. However, the projections of $I_\mu$, $\mathcal{R}I_\mu(\cdot, \theta)$, are one-dimensional, therefore it may not be critical to have a large number of samples to estimate these one-dimensional densities.
2.3 Sliced-Wasserstein and Maximum Sliced-Wasserstein Distances
The idea behind the sliced-Wasserstein distance is to first obtain a family of one-dimensional representations of a higher-dimensional probability distribution through linear projections (via the Radon transform), and then calculate the distance between two input distributions as a functional of the Wasserstein distances between their one-dimensional representations (i.e., the one-dimensional marginal distributions). The sliced-Wasserstein distance between $I_\mu$ and $I_\nu$ is then formally defined as:

(7)  $SW_p(I_\mu, I_\nu) = \left( \int_{\mathbb{S}^{d-1}} W_p^p\left( \mathcal{R}I_\mu(\cdot, \theta), \mathcal{R}I_\nu(\cdot, \theta) \right) d\theta \right)^{\frac{1}{p}}$

This is indeed a distance function, as it satisfies positive-definiteness, symmetry and the triangle inequality (Bonnotte, 2013; Kolouri et al., 2016b).
The computation of the SW distance requires an integration over the unit sphere in $\mathbb{R}^d$. In practice, this integration is approximated with a simple Monte Carlo scheme that draws samples $\{\theta_l\}_{l=1}^{L}$ from the uniform distribution on $\mathbb{S}^{d-1}$ and replaces the integral with a finite-sample average:

(8)  $SW_p(I_\mu, I_\nu) \approx \left( \frac{1}{L} \sum_{l=1}^{L} W_p^p\left( \mathcal{R}I_\mu(\cdot, \theta_l), \mathcal{R}I_\nu(\cdot, \theta_l) \right) \right)^{\frac{1}{p}}$

The sliced-Wasserstein distance has important practical implications: provided that $\mathcal{R}I_\mu(\cdot, \theta)$ and $\mathcal{R}I_\nu(\cdot, \theta)$ can be computed for any sample $\theta_l$, the SW distance is obtained by solving several one-dimensional optimal transport problems, which have closed-form solutions. It is especially useful when one only has access to samples of a high-dimensional PDF and kernel density estimation is required: one-dimensional kernel density estimation of PDF slices is a much simpler task than the direct estimation of $I_\mu$ from its samples. The downside is that, as the dimensionality grows, one requires a larger number of projections to accurately approximate the integral in Eq. (7) with the finite sum in Eq. (8). In short, if a reasonably smooth two-dimensional distribution can be approximated using $L$ projections, then on the order of $L^{d-1}$ projections are required to approximate a similarly smooth $d$-dimensional distribution for $d > 2$.
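The Monte Carlo scheme of Eq. (8) combined with the sorted-sample closed form can be sketched as follows (an illustrative implementation under our own naming and an equal-sample-count assumption, not the paper's reference code):

```python
import numpy as np

def sliced_wasserstein(X, Y, num_projections=50, p=2, rng=None):
    """Monte Carlo estimate of SW_p (Eq. (8)) between two empirical
    d-dimensional distributions with equal sample counts: project onto
    random directions, then apply the sorted-sample closed form of the
    1-D Wasserstein distance to each slice."""
    rng = np.random.default_rng(rng)
    d = X.shape[1]
    # Uniform directions on the unit sphere S^{d-1}.
    theta = rng.normal(size=(num_projections, d))
    theta /= np.linalg.norm(theta, axis=1, keepdims=True)
    # Linear slices of both sample sets: shape (num_samples, num_projections).
    Xp, Yp = X @ theta.T, Y @ theta.T
    costs = np.abs(np.sort(Xp, axis=0) - np.sort(Yp, axis=0)) ** p
    return np.mean(costs) ** (1.0 / p)
```

Each column of `Xp` is one slice; sorting columns independently solves all $L$ one-dimensional transport problems at once.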
To further clarify this, let $I_\mu = \mathcal{N}(x_0, \mathbf{I}_d)$ and $I_\nu = \mathcal{N}(x_1, \mathbf{I}_d)$, $x_0, x_1 \in \mathbb{R}^d$, be two multivariate Gaussian densities with the identity matrix as the covariance matrix. Their projected representations are one-dimensional Gaussian distributions of the form $\mathcal{R}I_\mu(\cdot, \theta) = \mathcal{N}(\langle x_0, \theta \rangle, 1)$ and $\mathcal{R}I_\nu(\cdot, \theta) = \mathcal{N}(\langle x_1, \theta \rangle, 1)$. It is therefore clear that $W_p(\mathcal{R}I_\mu(\cdot, \theta), \mathcal{R}I_\nu(\cdot, \theta))$ achieves its maximum value when $\theta$ is parallel to $x_0 - x_1$ and is zero for $\theta$'s that are orthogonal to $x_0 - x_1$. On the other hand, we know that vectors randomly picked from the unit sphere are increasingly likely to be nearly orthogonal in high dimension. More rigorously, for a fixed unit vector $u$ and $\theta$ drawn uniformly from $\mathbb{S}^{d-1}$, the concentration inequality $P(|\langle \theta, u \rangle| \geq \epsilon) \leq 2 e^{-d\epsilon^2/2}$ holds, which implies that for a high dimension $d$, the majority of sampled $\theta$'s would be nearly orthogonal to $x_0 - x_1$ and therefore $W_p(\mathcal{R}I_\mu(\cdot, \theta), \mathcal{R}I_\nu(\cdot, \theta)) \approx 0$ with high probability. To remedy this issue, one can avoid uniform sampling of the unit sphere and instead pick samples $\theta_l$ that contain discriminant information between $I_\mu$ and $I_\nu$. This idea was for instance used in Deshpande et al. (2018), where the authors first calculate a linear discriminant subspace and then measure the empirical SW distance by setting the $\theta_l$'s to be the discriminant components of that subspace.
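The near-orthogonality phenomenon is easy to verify numerically. The following quick Monte Carlo check (our own illustration, not from the paper) shows that the average alignment $|\langle \theta, u \rangle|$ of random sphere directions with a fixed unit vector shrinks roughly like $1/\sqrt{d}$:

```python
import numpy as np

def mean_abs_alignment(d, num_samples=20000, rng=0):
    """Average |<theta, u>| between the fixed unit vector u = e_1 and
    directions theta drawn uniformly from the unit sphere S^{d-1}
    (normalized Gaussian vectors are uniform on the sphere)."""
    gen = np.random.default_rng(rng)
    theta = gen.normal(size=(num_samples, d))
    theta /= np.linalg.norm(theta, axis=1, keepdims=True)
    # With u = e_1, the inner product <theta, u> is just theta_1.
    return np.mean(np.abs(theta[:, 0]))
```

For $d = 3$ the average is $1/2$ exactly, while for $d = 300$ it drops by an order of magnitude, consistent with the concentration bound above.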
A similarly flavored but less heuristic approach is to use the maximum sliced-Wasserstein (max-SW) distance, which is an alternative OT metric defined as:

(9)  $\text{max-}SW_p(I_\mu, I_\nu) = \max_{\theta \in \mathbb{S}^{d-1}} W_p\left( \mathcal{R}I_\mu(\cdot, \theta), \mathcal{R}I_\nu(\cdot, \theta) \right)$

Given that $W_p$ is a distance, it is easy to show that max-$SW_p$ is also a distance: we will prove in Section 3.2 that the metric axioms hold for the maximum generalized sliced-Wasserstein distance, which contains the max-SW distance as a special case.
3 Generalized Sliced-Wasserstein Distances
We propose in this paper to extend the definition of the sliced-Wasserstein distance to formulate a new optimal transport metric, which we call the generalized sliced-Wasserstein (GSW) distance. The GSW distance is obtained using the same procedure as for the SW distance, except that here, the one-dimensional representations are acquired through nonlinear projections. In this section, we first review the generalized Radon transform, which is used to project the high-dimensional distributions, and we then formally define the class of GSW distances. We also extend the concept of the max-SW distance to the class of maximum generalized sliced-Wasserstein (max-GSW) distances.
3.1 Generalized Radon Transform
The generalized Radon transform (GRT) extends the original idea of the classical Radon transform introduced by Radon (1917) from integration over hyperplanes of $\mathbb{R}^d$ to integration over hypersurfaces, i.e. $(d-1)$-dimensional manifolds (Beylkin, 1984; Denisyuk, 1994; Ehrenpreis, 2003; Gel'fand et al., 1969; Kuchment, 2006; Homan & Zhou, 2017). The GRT has various applications, including thermoacoustic tomography, where the hypersurfaces are spheres, and electrical impedance tomography, which requires integration over hyperbolic surfaces.
To formally define the GRT, we introduce a function $g(x, \theta)$ defined on $\mathbb{R}^d \times \Omega_\theta$, with $\Omega_\theta \subseteq \mathbb{R}^n \setminus \{0\}$. We say that $g$ is a defining function when it satisfies the four conditions below:

H 1. $g$ is a real-valued $C^\infty$ function on $\mathbb{R}^d \times \Omega_\theta$;

H 2. $g$ is homogeneous of degree one in $\theta$, i.e., $g(x, \lambda\theta) = \lambda g(x, \theta)$ for all $\lambda \in \mathbb{R}$;

H 3. $g$ is non-degenerate in the sense that $\frac{\partial g}{\partial x}(x, \theta) \neq 0$ for all $(x, \theta)$;

H 4. The mixed Hessian of $g$ is strictly positive, i.e., $\det\left( \frac{\partial^2 g}{\partial x_i \partial \theta_j} \right) > 0$.

Then, the GRT of $I$ is the integration of $I$ over the hypersurfaces characterized by the level sets of $g$, i.e. $\{ x \in \mathbb{R}^d \mid g(x, \theta) = t \}$.
Let $g$ be a defining function. The generalized Radon transform of $I$, denoted by $\mathcal{G}I$, is then formally defined as:

(10)  $\mathcal{G}I(t, \theta) = \int_{\mathbb{R}^d} I(x)\, \delta(t - g(x, \theta))\, dx$

Note that the standard Radon transform is a special case of the GRT with $g(x, \theta) = \langle x, \theta \rangle$. Figure 1 illustrates the slicing process for standard and generalized Radon transforms with the Half Moons dataset as input.
3.2 Generalized Sliced-Wasserstein and Maximum Generalized Sliced-Wasserstein Distances
Following the definition of the SW distance in Equation (7), we define the generalized sliced-Wasserstein distance using the generalized Radon transform as:

(11)  $GSW_p(I_\mu, I_\nu) = \left( \int_{\Omega_\theta} W_p^p\left( \mathcal{G}I_\mu(\cdot, \theta), \mathcal{G}I_\nu(\cdot, \theta) \right) d\theta \right)^{\frac{1}{p}}$

where $\Omega_\theta$ is a compact set of feasible parameters for $g(\cdot, \theta)$ (e.g., $\Omega_\theta = \mathbb{S}^{d-1}$ for $g(x, \theta) = \langle x, \theta \rangle$). The GSW distance can also suffer from the projection-complexity issue described in Section 2.3; that is why we formulate the maximum generalized sliced-Wasserstein distance, which generalizes the max-SW distance defined in (9):

(12)  $\text{max-}GSW_p(I_\mu, I_\nu) = \max_{\theta \in \Omega_\theta} W_p\left( \mathcal{G}I_\mu(\cdot, \theta), \mathcal{G}I_\nu(\cdot, \theta) \right)$
Proposition 1.
The generalized sliced-Wasserstein distance $GSW_p$ and the maximum generalized sliced-Wasserstein distance $\text{max-}GSW_p$ are, indeed, distances over $P_p(\Omega)$ if and only if the generalized Radon transform $\mathcal{G}$ is injective.
Proof.
The non-negativity and symmetry are direct consequences of the fact that the Wasserstein distance is a metric (Villani, 2008): see the supplementary material.
We prove the triangle inequality for $GSW_p$ and $\text{max-}GSW_p$. Let $\mu_1$, $\mu_2$ and $\mu_3$ be probability measures in $P_p(\Omega)$. Since the Wasserstein distance satisfies the triangle inequality, we have, for all $\theta \in \Omega_\theta$,

$W_p\left( \mathcal{G}I_{\mu_1}(\cdot, \theta), \mathcal{G}I_{\mu_3}(\cdot, \theta) \right) \leq W_p\left( \mathcal{G}I_{\mu_1}(\cdot, \theta), \mathcal{G}I_{\mu_2}(\cdot, \theta) \right) + W_p\left( \mathcal{G}I_{\mu_2}(\cdot, \theta), \mathcal{G}I_{\mu_3}(\cdot, \theta) \right)$

Therefore, we can write:

(13)  $GSW_p(I_{\mu_1}, I_{\mu_3}) \leq \left( \int_{\Omega_\theta} \left[ W_p\left( \mathcal{G}I_{\mu_1}(\cdot, \theta), \mathcal{G}I_{\mu_2}(\cdot, \theta) \right) + W_p\left( \mathcal{G}I_{\mu_2}(\cdot, \theta), \mathcal{G}I_{\mu_3}(\cdot, \theta) \right) \right]^p d\theta \right)^{\frac{1}{p}} \leq GSW_p(I_{\mu_1}, I_{\mu_2}) + GSW_p(I_{\mu_2}, I_{\mu_3})$

where the last inequality in (13) follows from the application of the Minkowski inequality in $L^p(\Omega_\theta)$. We conclude that $GSW_p$ satisfies the triangle inequality.
Let $\theta^* = \arg\max_{\theta \in \Omega_\theta} W_p\left( \mathcal{G}I_{\mu_1}(\cdot, \theta), \mathcal{G}I_{\mu_3}(\cdot, \theta) \right)$; then,

$\text{max-}GSW_p(I_{\mu_1}, I_{\mu_3}) = W_p\left( \mathcal{G}I_{\mu_1}(\cdot, \theta^*), \mathcal{G}I_{\mu_3}(\cdot, \theta^*) \right) \leq W_p\left( \mathcal{G}I_{\mu_1}(\cdot, \theta^*), \mathcal{G}I_{\mu_2}(\cdot, \theta^*) \right) + W_p\left( \mathcal{G}I_{\mu_2}(\cdot, \theta^*), \mathcal{G}I_{\mu_3}(\cdot, \theta^*) \right) \leq \text{max-}GSW_p(I_{\mu_1}, I_{\mu_2}) + \text{max-}GSW_p(I_{\mu_2}, I_{\mu_3})$

So $\text{max-}GSW_p$ also satisfies the triangle inequality.
Since $W_p(I, I) = 0$ for any density $I$, we have $GSW_p(I_\mu, I_\mu) = 0$ and $\text{max-}GSW_p(I_\mu, I_\mu) = 0$. Now, $GSW_p(I_\mu, I_\nu) = 0$ or $\text{max-}GSW_p(I_\mu, I_\nu) = 0$ is equivalent to $\mathcal{G}I_\mu(\cdot, \theta) = \mathcal{G}I_\nu(\cdot, \theta)$ for almost all $\theta \in \Omega_\theta$. Therefore, $GSW_p$ and $\text{max-}GSW_p$ are distances if and only if $\mathcal{G}I_\mu = \mathcal{G}I_\nu$ implies $I_\mu = I_\nu$, i.e. if and only if the GRT is injective. ∎
Remark 1.
If the chosen generalized Radon transform is not injective, then we can only say that the GSW and max-GSW distances are pseudo-metrics: they still satisfy non-negativity, symmetry, the triangle inequality, and $GSW_p(I_\mu, I_\mu) = 0$ and $\text{max-}GSW_p(I_\mu, I_\mu) = 0$.
3.3 Injectivity of the Generalized Radon Transform
We have shown that the injectivity of the GRT is crucial for the GSW and max-GSW distances to be, indeed, distances between probability measures. Here, we enumerate some of the known defining functions that lead to injective GRTs.
The investigation of the sufficient and necessary conditions for the injectivity of GRTs is a long-standing topic (Beylkin, 1984; Homan & Zhou, 2017; Uhlmann, 2003; Ehrenpreis, 2003). The circular defining function, $g(x, \theta) = \|x - r\theta\|_2$ with $\theta \in \mathbb{S}^{d-1}$ and $r \in \mathbb{R}^+$, was shown to provide an injective GRT (Kuchment, 2006). More interestingly, homogeneous polynomials with an odd degree also yield an injective GRT (Rouviere, 2015), i.e. $g(x, \theta) = \sum_{|\alpha| = m} \theta_\alpha x^\alpha$, where we use the multi-index notation $\alpha = (\alpha_1, \ldots, \alpha_d) \in \mathbb{N}^d$, $|\alpha| = \sum_{i=1}^{d} \alpha_i$, and $x^\alpha = \prod_{i=1}^{d} x_i^{\alpha_i}$. Here, the summation iterates over all possible multi-indices $\alpha$ such that $|\alpha| = m$, where $m$ denotes the degree of the polynomial and $\theta_\alpha \in \mathbb{R}$. The parameter set for homogeneous polynomials is then set to $\Omega_\theta = \mathbb{S}^{n_\alpha - 1}$, where $n_\alpha$ is the number of such multi-indices. We can observe that choosing $m = 1$ reduces to the linear case $g(x, \theta) = \langle x, \theta \rangle$, since the set of multi-indices with $|\alpha| = 1$ becomes $\{(1, 0, \ldots, 0), (0, 1, \ldots, 0), \ldots, (0, \ldots, 0, 1)\}$ and contains $d$ elements.
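The homogeneous-polynomial slices above can be sketched in code by enumerating the multi-indices $\alpha$ with $|\alpha| = m$ and evaluating the monomials $x^\alpha$ (the helper names `homogeneous_poly_features` and `poly_project` are ours, for illustration only):

```python
import itertools
import numpy as np

def homogeneous_poly_features(X, degree):
    """Map d-dimensional samples to the monomial features x^alpha for all
    multi-indices alpha with |alpha| = degree."""
    d = X.shape[1]
    # All multi-indices alpha = (alpha_1, ..., alpha_d) with sum = degree.
    alphas = [a for a in itertools.product(range(degree + 1), repeat=d)
              if sum(a) == degree]
    # Column k holds prod_i x_i^{alpha_k[i]} for every sample.
    return np.stack([np.prod(X ** np.array(a), axis=1) for a in alphas],
                    axis=1)  # shape (M, n_alpha)

def poly_project(X, theta, degree):
    """Generalized slice g(x, theta) = sum_{|alpha|=degree} theta_alpha x^alpha,
    with theta a vector of length n_alpha (a point of S^{n_alpha - 1})."""
    return homogeneous_poly_features(X, degree) @ theta
```

For `degree=1` the feature map returns the coordinates themselves (up to ordering), so the slice reduces to the linear inner product, matching the remark above.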
4 Numerical Implementation
In this section, we briefly review the numerical methods used to compute the GSW and max-GSW distances.
4.1 Generalized Radon Transforms of Empirical PDFs
In most machine learning applications, we do not have access to the distribution $I_\mu$ but to a set of samples $\{x_m\}_{m=1}^{M}$ drawn from $I_\mu$, for which the empirical density is $I_\mu(x) \approx \frac{1}{M} \sum_{m=1}^{M} \delta(x - x_m)$. The GRT of the empirical density is then given by $\mathcal{G}I_\mu(t, \theta) \approx \frac{1}{M} \sum_{m=1}^{M} \delta(t - g(x_m, \theta))$. Moreover, for high-dimensional problems, estimating $I_\mu$ in $\mathbb{R}^d$ requires a large number of samples. However, the projections of $I_\mu$, $\mathcal{G}I_\mu(\cdot, \theta)$, are one-dimensional, and it may not be critical to have a large number of samples to estimate these one-dimensional densities.
4.2 Numerical Implementation of GSW Distances
Let $\{x_m\}_{m=1}^{M}$ and $\{y_m\}_{m=1}^{M}$ be samples respectively drawn from $I_\mu$ and $I_\nu$, and let $g$ be a defining function. Following the work of Kolouri et al. (2019), the Wasserstein distance between the one-dimensional distributions $\mathcal{G}I_\mu(\cdot, \theta)$ and $\mathcal{G}I_\nu(\cdot, \theta)$ can be calculated by sorting their samples and computing the distance between the sorted samples. In other words, the GSW distance between $I_\mu$ and $I_\nu$ can be approximated from their samples as follows:

$GSW_p(I_\mu, I_\nu) \approx \left( \frac{1}{LM} \sum_{l=1}^{L} \sum_{m=1}^{M} \left| g(x_{i_l[m]}, \theta_l) - g(y_{j_l[m]}, \theta_l) \right|^p \right)^{\frac{1}{p}}$

where $i_l[m]$ and $j_l[m]$ are the indices of the sorted $\{g(x_m, \theta_l)\}_{m=1}^{M}$ and $\{g(y_m, \theta_l)\}_{m=1}^{M}$, respectively. The procedure to approximate the GSW distance is summarized in Algorithm 1.
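A minimal sketch of this sort-and-average procedure, with the defining function passed in as a callable (our own illustrative code with an equal-sample-count assumption, not the paper's Algorithm 1 verbatim):

```python
import numpy as np

def gsw_distance(X, Y, g, thetas, p=2):
    """Approximate GSW_p from samples: slice both sample sets with the
    defining function g, sort each slice, and average the costs between
    sorted projections. `g(X, theta)` must return the 1-D projections
    g(x_m, theta) for all samples at once."""
    total = 0.0
    for theta in thetas:
        xs = np.sort(g(X, theta))
        ys = np.sort(g(Y, theta))
        total += np.mean(np.abs(xs - ys) ** p)
    return (total / len(thetas)) ** (1.0 / p)
```

With `g = lambda X, th: X @ th` and `thetas` sampled uniformly from the unit sphere, this recovers the Monte Carlo SW estimate of Eq. (8); a polynomial or circular `g` yields the corresponding GSW estimates.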
4.3 Numerical Implementation of max-GSW Distances
To compute the max-GSW distance, we perform an EM-like optimization scheme: (a) for a fixed $\theta$, $\{g(x_m, \theta)\}_{m=1}^{M}$ and $\{g(y_m, \theta)\}_{m=1}^{M}$ are sorted to compute $W_p(\mathcal{G}I_\mu(\cdot, \theta), \mathcal{G}I_\nu(\cdot, \theta))$; (b) $\theta$ is updated with a projected gradient ascent step:

$\theta \leftarrow \Pi_{\Omega_\theta}\left( \text{ADAM}\left( \nabla_\theta W_p\left( \mathcal{G}I_\mu(\cdot, \theta), \mathcal{G}I_\nu(\cdot, \theta) \right), \theta \right) \right)$

where ADAM refers to the ADAM optimizer (Kingma & Ba, 2014) and $\Pi_{\Omega_\theta}$ is the operator projecting $\theta$ onto $\Omega_\theta$. For instance, when $\Omega_\theta = \mathbb{S}^{d-1}$, $\Pi_{\Omega_\theta}(\theta) = \theta / \|\theta\|_2$.
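For the linear (max-SW) special case, this scheme can be sketched as follows, using plain gradient steps in place of ADAM for brevity (an illustrative sketch under our naming, assuming equal sample counts; the gradient treats the sort indices as fixed at each step):

```python
import numpy as np

def max_sw_linear(X, Y, p=2, steps=200, lr=0.1, rng=None):
    """Approximate max-SW_p by projected gradient ascent over theta on the
    unit sphere: sort the linear slices, ascend the gradient of the
    sorted-sample 1-D Wasserstein cost, then renormalize theta."""
    gen = np.random.default_rng(rng)
    d = X.shape[1]
    theta = gen.normal(size=d)
    theta /= np.linalg.norm(theta)
    for _ in range(steps):
        xp, yp = X @ theta, Y @ theta
        i, j = np.argsort(xp), np.argsort(yp)
        diff = xp[i] - yp[j]  # per-rank differences of sorted slices
        # d/d theta of mean |diff|^p, holding the sorted matching fixed.
        grad = (p * (np.sign(diff) * np.abs(diff) ** (p - 1))[:, None]
                * (X[i] - Y[j])).mean(axis=0)
        theta = theta + lr * grad          # ascent step (maximization)
        theta /= np.linalg.norm(theta)     # project back onto S^{d-1}
    xs, ys = np.sort(X @ theta), np.sort(Y @ theta)
    return np.mean(np.abs(xs - ys) ** p) ** (1.0 / p), theta
```

Replacing `X @ theta` with a generalized slice `g(X, theta)` (and differentiating through `g`) gives the corresponding max-GSW scheme.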
Remark 2.
Here, we find the optimal $\theta$ by optimizing the actual max-GSW objective, as opposed to the heuristic approaches proposed in Deshpande et al. (2018) and Kolouri et al. (2019), where a pseudo-optimal slice is found via perceptrons or penalized linear discriminant analysis (Wang et al., 2011).

Finally, once convergence to $\theta^*$ is reached, the max-GSW distance is approximated with:

$\text{max-}GSW_p(I_\mu, I_\nu) \approx \left( \frac{1}{M} \sum_{m=1}^{M} \left| g(x_{i[m]}, \theta^*) - g(y_{j[m]}, \theta^*) \right|^p \right)^{\frac{1}{p}}$

where $i[m]$ and $j[m]$ are the indices of the sorted $\{g(x_m, \theta^*)\}_{m=1}^{M}$ and $\{g(y_m, \theta^*)\}_{m=1}^{M}$.
The whole procedure is summarized in Algorithm 2.
5 Experiments
5.1 Generalized Sliced-Wasserstein Flows
Our first experiment demonstrates the effects of the choice of the GSW distance in its purest form by considering the following flow problem: $\min_{\mu} GSW_p(I_\mu, I_\nu)$, where $\nu$ is a target distribution and $\mu$ is the source distribution, which is initialized to be the normal distribution. The optimization is then solved iteratively via gradient descent updates on the samples representing $\mu$. We used 5 well-known distributions as the target, namely the 25-Gaussians, 8-Gaussians, Swiss Roll, Half Moons and Circle distributions. We compared linear (i.e., the SW distance), circular, homogeneous polynomial of degree 3, and homogeneous polynomial of degree 5 defining functions. We used the exact same optimization scheme for all methods, with the same number of random projections, and measured the 2-Wasserstein distance between $\mu$ and $\nu$ at each iteration of the optimization (by solving a linear program at each step). We repeated each experiment 100 times and report the mean and standard deviation of the 2-Wasserstein distance for all five target datasets in Figure 2. While the choice of the defining function is data-dependent, one can see that the homogeneous polynomial of degree 3 is among the top two performers for all datasets. For clarity purposes, we chose not to report the max-GSW results for the same experiment in Figure 2; these results are included in the supplementary material.
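One iteration of such a flow can be sketched as follows for the linear (SW) case, moving each source particle along the negative gradient of the sliced 2-Wasserstein cost (our own illustrative code under stated simplifications, not the paper's experimental setup; a generalized flow would replace the inner products with a defining function $g$):

```python
import numpy as np

def sw_flow_step(X, Y, num_projections=50, lr=0.5, rng=None):
    """One sliced-Wasserstein flow iteration: for each random direction,
    the sorted matching gives the 1-D optimal transport plan, and each
    particle x is pushed by -2*(x_proj - matched_y_proj)*theta. The 1/M
    factor of the mean cost is absorbed into the step size lr."""
    gen = np.random.default_rng(rng)
    L, d = num_projections, X.shape[1]
    theta = gen.normal(size=(L, d))
    theta /= np.linalg.norm(theta, axis=1, keepdims=True)
    grad = np.zeros_like(X)
    for th in theta:
        xp, yp = X @ th, Y @ th
        i, j = np.argsort(xp), np.argsort(yp)
        # Scatter per-rank differences back to the original particle order.
        diff = np.empty(len(X))
        diff[i] = xp[i] - yp[j]
        grad += 2.0 * diff[:, None] * th[None, :]
    return X - lr * grad / L
```

Iterating this step drives the source particles toward the target distribution, which is the mechanism the 2-Wasserstein curves in Figure 2 monitor.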
5.2 Generative Modeling via Auto-Encoders
We now demonstrate the application of the GSW and max-GSW distances in generative modeling. We specifically use the recently proposed Sliced-Wasserstein Auto-Encoder (SWAE) (Kolouri et al., 2019) framework, which penalizes the distribution of the encoded data in the latent space of the auto-encoder so that it follows a samplable prior distribution $q$. More precisely, let $\{x_m\}_{m=1}^{M}$ be i.i.d. samples from the data distribution, and let $\phi$ and $\psi$ be the parametric encoder and decoder (e.g., CNNs) with parameters $\phi$ and $\psi$, respectively. Then SWAE's objective function (Kolouri et al., 2019) is defined as:

(14)  $\min_{\phi, \psi}\; \frac{1}{M} \sum_{m=1}^{M} \left\| x_m - \psi(\phi(x_m)) \right\| + \lambda\, SW_p(p_\phi, q)$

where $p_\phi$ denotes the distribution of the encoded samples $\phi(x_m)$ and $\lambda$ is the regularizer coefficient for matching the encoded distribution to $q$. Here, we substitute the SW distance in Equation (14) with the GSW and max-GSW distances. Specifically, we encode the MNIST dataset (LeCun et al., 1998) into the encoder's latent space and enforce the distribution of the embedded data to follow a specific prior distribution, e.g. the Swiss Roll distribution as shown in Figure 3, while we simultaneously enforce the encoded features to be decodable to the original input images.
We ran the optimization in Equation (14) with GSW distances, which we denote as GSWAE, with linear, circular, and homogeneous polynomial (degree 3) defining functions. At each iteration, we measured the 2-Wasserstein distance between the embedded distribution $p_\phi$ and the prior distribution $q$, and also between the input distribution and the distribution of the reconstructed samples. Each experiment was repeated several times, and the average 2-Wasserstein distances are reported in Figure 4. The middle row in Figure 4 shows samples from $q$ and $p_\phi$, and the last row shows decoded random samples $\psi(z)$ for $z \sim q$. Similar to the previous experiment, we see that the GSWAE with a polynomial defining function captures the nonlinear geometry of the input samples better.
We also compared the performance of GSWAE and max-GSWAE with those of SWAE and WAE-GAN (Tolstikhin et al., 2018). In particular, we used the improved Wasserstein-GAN (Gulrajani et al., 2017), which is among the state-of-the-art adversarial training methods, in the embedding space of the Wasserstein auto-encoder (Tolstikhin et al., 2018); the adversary was chosen to be a multi-layer perceptron. As in the previous experiments, we measured the 2-Wasserstein distance between the input and output distributions as well as between the latent and prior distributions. Each experiment was repeated several times, and the average 2-Wasserstein distances are reported in Figure 5. It can be seen that, while WAE-GAN provides a better matching of distributions in the latent space, the results of the max-GSWAE distances are on par with WAE-GAN. In addition, by comparing the distance between the input and output distributions of the auto-encoder, it seems that max-GSWAE provides a better objective function for training such networks.
6 Conclusion
We introduced a new family of optimal transport metrics for probability measures that generalizes the sliced-Wasserstein distance: while the latter is based on linear slicing of distributions, we propose to perform nonlinear slicing. We provided theoretical conditions under which the generalized sliced-Wasserstein distance is, indeed, a distance function, and we empirically demonstrated the superior performance of the GSW and max-GSW distances over the classical sliced-Wasserstein distance in various generative modeling applications. As future work, we plan to study the connection between adversarial training and max-GSW distances by showing that the defining function for GRTs can be approximated with neural networks.
7 Acknowledgement
This work was partially supported by the United States Air Force and DARPA under Contract No. FA875018C0103. Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the United States Air Force and DARPA.
References
 Arjovsky et al. (2017) Arjovsky, M., Chintala, S., and Bottou, L. Wasserstein GAN. arXiv preprint arXiv:1701.07875, 2017.
 Beylkin (1984) Beylkin, G. The inversion problem and applications of the generalized Radon transform. Communications on pure and applied mathematics, 37(5):579–599, 1984.
 Bonneel et al. (2015) Bonneel, N., Rabin, J., Peyré, G., and Pfister, H. Sliced and Radon Wasserstein barycenters of measures. Journal of Mathematical Imaging and Vision, 51(1):22–45, 2015.
 Bonnotte (2013) Bonnotte, N. Unidimensional and evolution methods for optimal transportation. PhD thesis, Université Paris 11, France, 2013.
 Bousquet et al. (2017) Bousquet, O., Gelly, S., Tolstikhin, I., Simon-Gabriel, C.-J., and Schoelkopf, B. From optimal transport to generative modeling: the VEGAN cookbook. arXiv preprint arXiv:1705.07642, 2017.
 Brenier (1991) Brenier, Y. Polar factorization and monotone rearrangement of vector-valued functions. Communications on pure and applied mathematics, 44(4):375–417, 1991.
 Carriere et al. (2017) Carriere, M., Cuturi, M., and Oudot, S. Sliced Wasserstein kernel for persistence diagrams. In ICML 2017, the Thirty-fourth International Conference on Machine Learning, pp. 1–10, 2017.
 Courty et al. (2017) Courty, N., Flamary, R., Tuia, D., and Rakotomamonjy, A. Optimal transport for domain adaptation. IEEE transactions on pattern analysis and machine intelligence, 39(9):1853–1865, 2017.
 Cuturi (2013) Cuturi, M. Sinkhorn distances: Lightspeed computation of optimal transport. In Advances in Neural Information Processing Systems, pp. 2292–2300, 2013.
 Cuturi & Peyré (2015) Cuturi, M. and Peyré, G. A smoothed dual approach for variational Wasserstein problems. SIAM Journal on Imaging Sciences, December 2015. URL https://hal.archives-ouvertes.fr/hal-01188954.
 Denisyuk (1994) Denisyuk, A. Inversion of the generalized Radon transform. Translations of the American Mathematical SocietySeries 2, 162:19–32, 1994.

 Deshpande et al. (2018) Deshpande, I., Zhang, Z., and Schwing, A. Generative modeling using the sliced Wasserstein distance. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 3483–3491, 2018.
 Ehrenpreis (2003) Ehrenpreis, L. The universality of the Radon transform. Oxford University Press on Demand, 2003.
 Frogner et al. (2015) Frogner, C., Zhang, C., Mobahi, H., Araya, M., and Poggio, T. A. Learning with a Wasserstein loss. In Advances in Neural Information Processing Systems, pp. 2053–2061, 2015.
 Gel’fand et al. (1969) Gel’fand, I. M., Graev, M. I., and Shapiro, Z. Y. Differential forms and integral geometry. Functional Analysis and its Applications, 3(2):101–114, 1969.
 Gulrajani et al. (2017) Gulrajani, I., Ahmed, F., Arjovsky, M., Dumoulin, V., and Courville, A. C. Improved training of Wasserstein GANs. In Advances in Neural Information Processing Systems, pp. 5767–5777, 2017.
 Helgason (2011) Helgason, S. The Radon transform on Rn. In Integral Geometry and Radon Transforms, pp. 1–62. Springer, 2011.
 Homan & Zhou (2017) Homan, A. and Zhou, H. Injectivity and stability for a generic class of generalized Radon transforms. The Journal of Geometric Analysis, 27(2):1515–1529, 2017.
 Karras et al. (2017) Karras, T., Aila, T., Laine, S., and Lehtinen, J. Progressive growing of GANs for improved quality, stability, and variation. arXiv preprint arXiv:1710.10196, 2017.
 Kingma & Ba (2014) Kingma, D. P. and Ba, J. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
 Kitagawa et al. (2016) Kitagawa, J., Mérigot, Q., and Thibert, B. Convergence of a Newton algorithm for semidiscrete optimal transport. arXiv preprint arXiv:1603.05579, 2016.
 Kolouri et al. (2016a) Kolouri, S., Tosun, A. B., Ozolek, J. A., and Rohde, G. K. A continuous linear optimal transport approach for pattern analysis in image datasets. Pattern recognition, 51:453–462, 2016a.
 Kolouri et al. (2016b) Kolouri, S., Zou, Y., and Rohde, G. K. Sliced-Wasserstein kernels for probability distributions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4876–4884, 2016b.
 Kolouri et al. (2017) Kolouri, S., Park, S. R., Thorpe, M., Slepcev, D., and Rohde, G. K. Optimal mass transport: Signal processing and machinelearning applications. IEEE Signal Processing Magazine, 34(4):43–59, 2017.

 Kolouri et al. (2018) Kolouri, S., Rohde, G. K., and Hoffmann, H. Sliced Wasserstein distance for learning Gaussian mixture models. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2018.
 Kolouri et al. (2019) Kolouri, S., Pope, P. E., Martin, C. E., and Rohde, G. K. Sliced Wasserstein autoencoders. In International Conference on Learning Representations, 2019. URL https://openreview.net/forum?id=H1xaJn05FQ.
 Kuchment (2006) Kuchment, P. Generalized transforms of Radon type and their applications. In Proceedings of Symposia in Applied Mathematics, volume 63, pp. 67, 2006.
 LeCun et al. (1998) LeCun, Y., Bottou, L., Bengio, Y., and Haffner, P. Gradientbased learning applied to document recognition. Proceedings of the IEEE, 86(11):2278–2324, 1998.
 Lévy (2015) Lévy, B. A numerical algorithm for semi-discrete optimal transport in 3D. ESAIM Math. Model. Numer. Anal., 49(6):1693–1715, 2015. ISSN 0764-583X. doi: 10.1051/m2an/2015055. URL http://dx.doi.org/10.1051/m2an/2015055.

 Montavon et al. (2016) Montavon, G., Müller, K.-R., and Cuturi, M. Wasserstein training of restricted Boltzmann machines. In Advances in Neural Information Processing Systems, pp. 3718–3726, 2016.
 Natterer (1986) Natterer, F. The mathematics of computerized tomography, volume 32. SIAM, 1986.
 Oberman & Ruan (2015) Oberman, A. M. and Ruan, Y. An efficient linear programming method for optimal transportation. arXiv preprint arXiv:1509.03668, 2015.

 Paszke et al. (2017) Paszke, A., Gross, S., Chintala, S., Chanan, G., Yang, E., DeVito, Z., Lin, Z., Desmaison, A., Antiga, L., and Lerer, A. Automatic differentiation in PyTorch. In NIPS-W, 2017.
 Peyré & Cuturi (2018) Peyré, G. and Cuturi, M. Computational optimal transport. arXiv preprint arXiv:1803.00567, 2018.
Radon (1917) Radon, J. Über die Bestimmung von Funktionen durch ihre Integralwerte längs gewisser Mannigfaltigkeiten. Berichte Sächsische Akademie der Wissenschaften, Math.-Phys. Klasse, 69:262–277, 1917.
Rouvière (2015) Rouvière, F. Nonlinear Radon and Fourier Transforms. https://math.unice.fr/~frou/recherche/Nonlinear%20RadonW.pdf, 2015.
Schmitz et al. (2018) Schmitz, M. A., Heitz, M., Bonneel, N., Ngolè, F., Coeurjolly, D., Cuturi, M., Peyré, G., and Starck, J.-L. Wasserstein dictionary learning: Optimal transport-based unsupervised nonlinear dictionary learning. SIAM Journal on Imaging Sciences, 11(1):643–678, 2018.
Schmitzer (2016) Schmitzer, B. A sparse multiscale algorithm for dense optimal transport. Journal of Mathematical Imaging and Vision, 56(2):238–259, Oct 2016. ISSN 1573-7683. doi: 10.1007/s10851-016-0653-9. URL https://doi.org/10.1007/s10851-016-0653-9.
Şimşekli et al. (2018) Şimşekli, U., Liutkus, A., Majewski, S., and Durmus, A. Sliced-Wasserstein flows: Nonparametric generative modeling via optimal transport and diffusions. arXiv preprint arXiv:1806.08141, 2018.

Solomon et al. (2014) Solomon, J., Rustamov, R., Guibas, L., and Butscher, A. Wasserstein propagation for semi-supervised learning. In International Conference on Machine Learning, pp. 306–314, 2014.
Solomon et al. (2015) Solomon, J., De Goes, F., Peyré, G., Cuturi, M., Butscher, A., Nguyen, A., Du, T., and Guibas, L. Convolutional Wasserstein distances: Efficient optimal transportation on geometric domains. ACM Transactions on Graphics (TOG), 34(4):66, 2015.
 Tolstikhin et al. (2018) Tolstikhin, I., Bousquet, O., Gelly, S., and Schoelkopf, B. Wasserstein autoencoders. In International Conference on Learning Representations, 2018. URL https://openreview.net/forum?id=HkL7n10b.
 Uhlmann (2003) Uhlmann, G. Inside out: inverse problems and applications, volume 47. Cambridge University Press, 2003.
 Villani (2008) Villani, C. Optimal transport: old and new, volume 338. Springer Science & Business Media, 2008.
 Wang et al. (2011) Wang, W., Mo, Y., Ozolek, J. A., and Rohde, G. K. Penalized Fisher discriminant analysis and its application to imagebased morphometry. Pattern recognition letters, 32(15):2128–2135, 2011.
 Wang et al. (2013) Wang, W., Slepčev, D., Basu, S., Ozolek, J. A., and Rohde, G. K. A linear optimal transportation framework for quantifying and visualizing variations in sets of images. International journal of computer vision, 101(2):254–269, 2013.
8 Supplementary material
9 Non-negativity and Symmetry of the GSW and max-GSW Distances
We prove that the GSW and max-GSW distances satisfy non-negativity and symmetry, using the fact that the Wasserstein distance is known to be a proper distance function (Villani, 2008). Let $\mu$ and $\nu$ be in $P_p(\Omega)$.
9.1 Non-negativity
We use the non-negativity of the Wasserstein distance, i.e. $W_p(\mu, \nu) \geq 0$ for any $\mu, \nu$ in $P_p(\Omega)$, to show that the GSW and max-GSW distances are non-negative as well:
$$\mathrm{GSW}_p(\mu, \nu) = \left( \int_{\Omega_\theta} W_p^p\big(g_{\theta\#}\mu, g_{\theta\#}\nu\big)\, d\theta \right)^{1/p} \geq 0, \qquad \max\text{-}\mathrm{GSW}_p(\mu, \nu) = \max_{\theta \in \Omega_\theta} W_p\big(g_{\theta\#}\mu, g_{\theta\#}\nu\big) \geq 0,$$
where $g_{\theta\#}\mu$ denotes the pushforward of $\mu$ by the slicing map $g_\theta = g(\cdot, \theta)$.
9.2 Symmetry
Since the Wasserstein distance is symmetric, we have $W_p(\mu, \nu) = W_p(\nu, \mu)$. In particular, we can write for all $\theta \in \Omega_\theta$: $W_p\big(g_{\theta\#}\mu, g_{\theta\#}\nu\big) = W_p\big(g_{\theta\#}\nu, g_{\theta\#}\mu\big)$. Hence,
$$\mathrm{GSW}_p(\mu, \nu) = \left( \int_{\Omega_\theta} W_p^p\big(g_{\theta\#}\mu, g_{\theta\#}\nu\big)\, d\theta \right)^{1/p} = \left( \int_{\Omega_\theta} W_p^p\big(g_{\theta\#}\nu, g_{\theta\#}\mu\big)\, d\theta \right)^{1/p} = \mathrm{GSW}_p(\nu, \mu) \tag{15}$$
and,
$$\max\text{-}\mathrm{GSW}_p(\mu, \nu) = \max_{\theta \in \Omega_\theta} W_p\big(g_{\theta\#}\mu, g_{\theta\#}\nu\big) = \max_{\theta \in \Omega_\theta} W_p\big(g_{\theta\#}\nu, g_{\theta\#}\mu\big) = \max\text{-}\mathrm{GSW}_p(\nu, \mu). \tag{16}$$
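Both properties can be observed numerically on empirical measures. The sketch below specializes the defining function $g_\theta$ to linear projections $g_\theta(x) = \langle \theta, x \rangle$ (which recovers the sliced-Wasserstein distance) and uses the closed-form 1-D Wasserstein distance via sorting; the function names, sample sizes, and number of projections are illustrative choices, not values from the paper.

```python
import numpy as np

def wasserstein_1d(u, v, p=2):
    """Closed-form p-Wasserstein distance between two 1-D empirical
    measures with equal sample counts: sort both samples and compare."""
    u, v = np.sort(u), np.sort(v)
    return np.mean(np.abs(u - v) ** p) ** (1.0 / p)

def gsw_linear(X, Y, n_projections=10, p=2, seed=0):
    """Monte Carlo estimate of the GSW distance with linear slices:
    average W_p^p over random unit directions theta, then take the 1/p root."""
    rng = np.random.default_rng(seed)
    total = 0.0
    for _ in range(n_projections):
        theta = rng.standard_normal(X.shape[1])
        theta /= np.linalg.norm(theta)  # uniform direction on the sphere
        total += wasserstein_1d(X @ theta, Y @ theta, p) ** p
    return (total / n_projections) ** (1.0 / p)

rng = np.random.default_rng(1)
X = rng.standard_normal((256, 2))
Y = rng.standard_normal((256, 2)) + 3.0   # shifted cloud

d_xy = gsw_linear(X, Y)
d_yx = gsw_linear(Y, X)
assert d_xy >= 0.0                  # non-negativity
assert abs(d_xy - d_yx) < 1e-12     # symmetry (same projections, same seed)
assert gsw_linear(X, X) < 1e-12     # zero distance to itself
```

With a fixed seed the same projection directions are drawn for both orderings of the arguments, so symmetry holds exactly, mirroring the pointwise identity used in the proof above.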
10 Additional Experimental Results
We include the results of maximum generalized sliced-Wasserstein (max-GSW) flows on the five datasets used in the main paper, to accompany Figure 4 of our main paper: see Figure 6. In the majority of cases, the max-GSW distances improve on the performance of GSW. It should be noted that the GSW distances are calculated from 10 random projections, whereas the max-GSW distances use only a single (optimized) projection by definition.
11 Implementation Details
The PyTorch (Paszke et al., 2017) implementation of our paper will be available at https://github.com/…/GSW/. Here we clarify some of the implementation details used in our paper. First, the number of 'critic iterations' for the adversarial training, and the number of projection-maximization steps for the max-GSW distances, were set to be equal to . For all optimizations, we used the Adam optimizer (Kingma & Ba, 2014) with learning rate and PyTorch's default momentum parameters.
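Since the projection-maximization step is only described at a high level here, the following is a minimal sketch of what it computes in the linear-slice case. It uses plain gradient ascent rather than the Adam-based optimization described above, and all names, step counts, and the toy data are our own illustrative choices. The gradient freezes the sorting permutations at each step, which is valid almost everywhere since the optimal 1-D coupling is locally constant in theta.

```python
import numpy as np

def max_gsw_linear(X, Y, n_steps=200, lr=0.1, seed=0):
    """Sketch of max-GSW projection maximization with linear slices:
    gradient ascent on the squared 2-Wasserstein distance between the
    1-D projections, renormalizing theta onto the unit sphere each step."""
    rng = np.random.default_rng(seed)
    theta = rng.standard_normal(X.shape[1])
    theta /= np.linalg.norm(theta)
    for _ in range(n_steps):
        # Sorting the projections gives the optimal 1-D coupling.
        i = np.argsort(X @ theta)
        j = np.argsort(Y @ theta)
        diff = X[i] @ theta - Y[j] @ theta
        # Gradient of mean(diff**2) w.r.t. theta, with the coupling frozen.
        grad = 2.0 * (diff @ (X[i] - Y[j])) / len(diff)
        theta += lr * grad
        theta /= np.linalg.norm(theta)  # project back onto the unit sphere
    dist = np.sqrt(np.mean((np.sort(X @ theta) - np.sort(Y @ theta)) ** 2))
    return dist, theta

rng = np.random.default_rng(1)
X = rng.standard_normal((128, 2))
Y = X + np.array([4.0, 0.0])  # shift along the first coordinate axis

dist, theta = max_gsw_linear(X, Y)
# The maximizing direction aligns with the shift, so dist approaches 4.
```

On this toy example the ascent recovers the shift direction (theta converges to plus or minus the first basis vector), illustrating why a single well-chosen projection can replace many random ones.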
We used convolutional filters in both encoder and decoder architectures. Encoder architecture:
Decoder architecture:
The WGAN in WAE-GAN uses an adversary network. Adversary's architecture: