1 Introduction
The importance of graphs has recently increased in various tasks across application domains, such as molecule modeling, brain connectivity analysis, or social network inference. Even if this development is partially fostered by powerful mathematical tools for modeling structural data, important questions remain largely open. In particular, it is still challenging to align, classify, predict or cluster graphs, since the notion of similarity between graphs is not straightforward. In many cases (e.g., dynamically changing graphs, multilayer graphs, etc.), even a consistent enumeration of the vertices cannot be trivially chosen for all graphs under study.
When two graphs are not aligned a priori, graph matching must be performed prior to any comparison, leading to the challenging problem of estimating an unknown assignment between their vertices. Since this problem is NP-hard, there exist several relaxations that can be solved by minimizing a suitable distance between graphs under the quadratic assignment model, such as the $\ell_2$ norm between the graph adjacency matrices (Yu et al., 2018), or the Gromov-Wasserstein distance (Xu et al., 2019). However, these approaches may yield solutions that are unable to capture the importance of edges with respect to the overall structure of the graph. An alternative that seems more appropriate for graph comparison is based on the Wasserstein distance between the graph signal distributions (Petric Maretic et al., 2019), but it is currently limited to graphs of the same size.

In this paper, we consider the challenging alignment problem for graphs of different sizes. In particular, we build on (Petric Maretic et al., 2019) and formulate graph matching as a one-to-many soft-assignment problem, where we use the Wasserstein distance to measure the goodness of graph alignment in a structurally meaningful way. To deal with the non-convexity of the problem, we propose a stochastic formulation based on a novel Dykstra operator that implicitly ensures the solution is a one-to-many soft-assignment matrix. This allows us to devise an efficient algorithm based on stochastic gradient descent, which naturally integrates Bayesian exploration in the optimization process, so as to help find better local minima. We illustrate the benefits of our new graph comparison framework in representative tasks such as graph alignment and graph classification on synthetic and real datasets. Our results show that the Wasserstein distance combined with the one-to-many graph assignment outperforms both the Gromov-Wasserstein and the Euclidean distance in these tasks, suggesting that our approach outputs a structurally meaningful distance to efficiently align and compare graphs. These are important elements in graph analysis, comparison, or graph signal prediction tasks.
The paper is structured as follows. Section 3 presents the graph alignment problem with optimal transport, as well as the formulation of the one-to-many assignment problem. Section 4 introduces our new Dykstra operator and proposes an algorithm for solving the resulting optimization problem via stochastic gradient descent. In Section 5, the performance of the proposed approach is assessed on synthetic and real data, and compared to different state-of-the-art methods. Finally, Section 6 concludes the paper.
2 Related work
Numerous methods have been developed for graph alignment, whose goal is to match the vertices of two graphs such that the similarity of the resulting aligned graphs is maximized. This problem is typically formulated under the quadratic assignment model (Yan et al., 2016; Jiang et al., 2017), which is generally thought to be essential for obtaining a good matching, despite being NP-hard. The main body of research in graph matching is thus focused on devising more accurate and/or faster algorithms to solve this problem approximately (Neuhaus et al., 2006).
In order to deal with the NP-hardness of graph alignment, spectral clustering based approaches (Caelli and Kosinov, 2004; Srinivasan et al., 2007) relax permutation matrices into semi-orthogonal ones, at the price of a suboptimal matching accuracy. Alternatively, semidefinite programming can be used to relax the permutation matrices into semidefinite ones (Schellewald and Schnörr, 2005). Spectral properties have also been used to inspect graphs and define different classes of graphs for which convex relaxations are tight (Aflalo et al., 2015; Fiori and Sapiro, 2015; Dym et al., 2017). Based on the fact that the space of doubly stochastic matrices is the convex hull of the set of permutation matrices, the graph matching problem was relaxed into a non-convex quadratic problem (Cho et al., 2010; Zhou and Torre, 2016). A related approach was recently proposed to approximate discrete graph matching in the continuous domain by using non-separable functions (Yu et al., 2018). Along similar lines, a Gumbel-Sinkhorn network was proposed to infer permutations from data (Mena et al., 2018; Emami and Ranka, 2018) and align graphs with the Sinkhorn operator (Sinkhorn, 1964) to predict a soft permutation matrix.

Closer to our framework, some recent works studied the graph alignment problem from an optimal transport perspective. Flamary et al. (Flamary et al., 2014) proposed a method to compute an optimal transportation plan by controlling the displacement of vertex pairs. Gu et al. (Gu et al., 2015)
defined a spectral distance by assigning a probability measure to the nodes via the spectrum representation of each graph, and by using Wasserstein distances between probability measures. This approach, however, does not take into account the full graph structure in the alignment problem. Later, Nikolentzos et al. (Nikolentzos et al., 2017) proposed instead to use the Wasserstein distance for matching graph embeddings represented as bags of vectors.
Another line of work looked at more specific graphs. Mémoli (Mémoli, 2011) investigated the Gromov-Wasserstein distance for object matching, Peyré et al. (Peyré et al., 2016) proposed an efficient algorithm to compute the Gromov-Wasserstein distance and the barycenter of pairwise dissimilarity matrices, and (Xu et al., 2019) devised a scalable version of the Gromov-Wasserstein distance for graph matching and classification. More recently, Vayer et al. (Vayer et al., 2018) built on this work to propose a distance for graphs and the signals living on them, which is a combination of the Gromov-Wasserstein distance of graph distance matrices and the Wasserstein distance of graph signals. However, while the above methods solve the alignment problem using optimal transport, the simple distances between aligned graphs do not take into account the global graph structure, and the methods do not consider the transportation of signals between graphs.
3 Problem Formulation
Despite recent advances in the analysis of graph data, it remains challenging to define a meaningful distance between graphs. Moreover, a major difficulty with graph representations is the lack of node alignment, which is necessary for direct quantitative comparisons between graphs. We propose to use the Wasserstein distance to compare graphs (Petric Maretic et al., 2019), since it has been shown to take into account global structural differences between graphs. Then, we formulate graph alignment as the problem of finding the assignment matrix that minimizes the distance between graphs of different sizes.
3.1 Preliminaries
Optimal transport
Let $\mu$ and $\nu$ be two arbitrary probability measures on the spaces $\mathcal{X}$ and $\mathcal{Y}$. The Wasserstein distance,^1 arising from the Monge and Kantorovich optimal transport problem, can be defined as finding a map $T \colon \mathcal{X} \to \mathcal{Y}$ that minimizes

$$\mathcal{W}_2(\mu, \nu)^2 = \min_{T \,:\, T\#\mu = \nu} \int_{\mathcal{X}} \| x - T(x) \|^2 \, d\mu(x), \qquad (1)$$

where $T\#\mu = \nu$ means that $T$ pushes forward the mass from $\mu$ to $\nu$. Intuitively, $T$ can be seen as a function that preserves positivity and total mass, i.e., moving an entire probability mass on $\mathcal{X}$ to an entire probability mass on $\mathcal{Y}$. Equation (1) can be seen as the minimal cost needed to transport one probability measure to another with respect to the quadratic cost $\|x - T(x)\|^2$.

^1 The Wasserstein distance is also referred to as the Kantorovich-Monge-Rubinstein distance.
The Wasserstein distance between Gaussian distributions $\nu_1 = \mathcal{N}(m_1, \Sigma_1)$ and $\nu_2 = \mathcal{N}(m_2, \Sigma_2)$ has an explicit expression in terms of their mean vectors and covariance matrices. With $m_1 = m_2 = 0$, the above distance can be written as (Takatsu and others, 2011)

$$\mathcal{W}_2(\nu_1, \nu_2)^2 = \operatorname{tr}\!\left( \Sigma_1 + \Sigma_2 - 2 \left( \Sigma_2^{1/2} \Sigma_1 \Sigma_2^{1/2} \right)^{1/2} \right), \qquad (2)$$

and the optimal map $T$ that takes $\nu_1$ to $\nu_2$ is

$$T(x) = \Sigma_2^{1/2} \left( \Sigma_2^{1/2} \Sigma_1 \Sigma_2^{1/2} \right)^{-1/2} \Sigma_2^{1/2} \, x. \qquad (3)$$
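As a concrete illustration, the closed-form distance (2) and map (3) can be computed in a few lines of NumPy. This is our own minimal sketch, not code from the paper; the helper names (`psd_sqrt`, `w2_gaussian`, `optimal_map`) are illustrative, and we assume zero-mean, positive-definite covariances.

```python
import numpy as np

def psd_sqrt(S):
    """Matrix square root of a symmetric PSD matrix via eigendecomposition."""
    w, V = np.linalg.eigh(S)
    w = np.clip(w, 0.0, None)          # guard against tiny negative eigenvalues
    return (V * np.sqrt(w)) @ V.T

def w2_gaussian(S1, S2):
    """2-Wasserstein distance between N(0, S1) and N(0, S2), as in Eq. (2)."""
    R2 = psd_sqrt(S2)
    cross = psd_sqrt(R2 @ S1 @ R2)
    d2 = np.trace(S1) + np.trace(S2) - 2.0 * np.trace(cross)
    return np.sqrt(max(d2, 0.0))       # clip numerical noise below zero

def optimal_map(S1, S2):
    """Monge map T of Eq. (3), with T # N(0, S1) = N(0, S2)."""
    R2 = psd_sqrt(S2)
    mid = psd_sqrt(R2 @ S1 @ R2)
    return R2 @ np.linalg.inv(mid) @ R2
```

For positive-definite `S1` and `S2`, one can check that `T @ S1 @ T.T` recovers `S2`, i.e., the map indeed pushes the first Gaussian onto the second.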
Smooth graph signals
Let $G = (V, E, W)$ be a graph defined on a set $V$ of $n$ vertices, with (non-negative) similarity edge weights. We denote by $W$ the weighted adjacency matrix of $G$, and by $D$ the diagonal matrix of vertex degrees, with $D_{ii} = \sum_j W_{ij}$ for all $i \in V$. The Laplacian matrix of $G$ is thus defined as $L = D - W$.
We further assume that each vertex of the graph is associated with a scalar feature, forming a graph signal. We denote this graph signal as a vector $x \in \mathbb{R}^n$. Following (Rue and Held, 2005), we interpret graphs as key elements that drive the probability distributions of signals, and thus we consider that a graph signal follows a normal distribution with zero mean and covariance matrix $L^\dagger$, i.e.,

$$x \sim \mathcal{N}\big(0, L^\dagger\big), \qquad (4)$$

where $\dagger$ denotes the pseudo-inverse operator. The above formulation means that the graph signal varies slowly between strongly connected nodes (Dong et al., 2016). This assumption is verified for most common graph and network datasets. It is further used in many graph inference algorithms that implicitly represent a graph through its smooth signals (Dempster, 1972; Friedman et al., 2008; Dong et al., 2018). Furthermore, the smoothness assumption is used as a regularization in many graph applications, such as robust principal component analysis (Shahid et al., 2015) and label propagation (Zhu et al., 2003).

3.2 One-to-many assignment problem
Assume that we are given two graphs $G_1$ and $G_2$ with the same number of nodes, and that we have knowledge of the one-to-one mapping between their vertices.

Following (Petric Maretic et al., 2019), instead of comparing graphs directly, we look at their signal distributions, which are governed by the graphs. Specifically, we measure the dissimilarity between two aligned graphs $G_1$ and $G_2$ through the Wasserstein distance of the respective distributions $\nu_1 = \mathcal{N}(0, L_1^\dagger)$ and $\nu_2 = \mathcal{N}(0, L_2^\dagger)$, which can be calculated explicitly as

$$\mathcal{W}_2(\nu_1, \nu_2)^2 = \operatorname{tr}\!\left( L_1^\dagger + L_2^\dagger - 2 \left( L_2^{\dagger/2} L_1^\dagger L_2^{\dagger/2} \right)^{1/2} \right). \qquad (5)$$

The advantage of this distance over more traditional graph distances (e.g., the graph edit distance) is that it takes into account the importance of an edge to the graph structure. This allows it to better capture topological features in the distance metric. Another advantage is that the Wasserstein distance comes with a transport map that allows us to transfer signals from one graph to the other. Hence, the mapping of signals over graphs yields

$$T(x) = L_1^{\dagger/2} \left( L_1^{\dagger/2} L_2^\dagger L_1^{\dagger/2} \right)^{-1/2} L_1^{\dagger/2} \, x, \qquad (6)$$

which represents the signal $x$, originally living on graph $G_2$, adapted to the structure of graph $G_1$.
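To make the graph distance of Equation (5) concrete, the following sketch (our own minimal NumPy code, with two illustrative 4-node graphs of equal size) computes the distance through the pseudo-inverses of the graph Laplacians.

```python
import numpy as np

def laplacian(W):
    """Combinatorial Laplacian L = D - W of a weighted adjacency matrix."""
    return np.diag(W.sum(axis=1)) - W

def psd_sqrt(S):
    """Matrix square root of a symmetric PSD matrix."""
    w, V = np.linalg.eigh(S)
    return (V * np.sqrt(np.clip(w, 0.0, None))) @ V.T

def graph_w2(L1, L2):
    """Eq. (5): Wasserstein distance between N(0, L1^+) and N(0, L2^+)."""
    S1, S2 = np.linalg.pinv(L1), np.linalg.pinv(L2)
    R2 = psd_sqrt(S2)
    d2 = np.trace(S1) + np.trace(S2) - 2.0 * np.trace(psd_sqrt(R2 @ S1 @ R2))
    return np.sqrt(max(d2, 0.0))

# Two 4-node graphs: a ring, and the same ring with one extra chord.
W1 = np.array([[0, 1, 0, 1], [1, 0, 1, 0], [0, 1, 0, 1], [1, 0, 1, 0.0]])
W2_ = W1.copy(); W2_[0, 2] = W2_[2, 0] = 1.0
d = graph_w2(laplacian(W1), laplacian(W2_))   # strictly positive
```

The distance vanishes for identical graphs and grows with structural differences; the transport map of Equation (6) would additionally let us move signals between the two graphs.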
The above Wasserstein distance requires the two graphs to be of the same size. However, we want to compare graphs of different sizes as well, which represents a common setting in practice. Throughout the rest of this work, we will consider two graphs $G_1$ and $G_2$ with $n_1$ and $n_2$ vertices, respectively, and we arbitrarily pick $G_1$ as the graph with the smaller number of nodes.

We now compare graphs of different sizes by looking for the one-to-many assignment between their vertices, similarly to (Zaslavskiy et al., 2010). This is illustrated in the toy example of Figure 1, where every vertex of the smaller graph $G_1$ is assigned to one or more vertices in the larger graph $G_2$, and every vertex of $G_2$ is assigned to exactly one vertex in $G_1$. Let $k$ be the maximum number of nodes in $G_2$ matched to a single node in $G_1$. Such a one-to-many assignment can be described by a matrix $P$ belonging to the constraint set

$$\mathcal{P} = \Big\{ P \in \{0,1\}^{n_1 \times n_2} \;\Big|\; 1 \le \textstyle\sum_{j} P_{ij} \le k \;\;\forall i, \;\; \textstyle\sum_{i} P_{ij} = 1 \;\;\forall j \Big\}. \qquad (7)$$

In words, the matrix $P$ only takes values zero or one, which corresponds to a hard assignment. Moreover, the sum of each matrix row has to be between $1$ and $k$, ensuring that every vertex of $G_1$ is matched to at least one and at most $k$ vertices of $G_2$. Finally, the sum of each matrix column has to be exactly one, so that every vertex of $G_2$ is matched to exactly one vertex of $G_1$. To ensure that $\mathcal{P}$ is a non-empty constraint set, we require that

$$n_1 \le n_2 \le k \, n_1. \qquad (8)$$
Given the true assignment matrix $P$, the larger graph $G_2$ can be aligned to the smaller graph $G_1$ by transforming its Laplacian matrix as $P L_2 P^\top$ (Zaslavskiy et al., 2010), yielding an associated distribution of signals:

$$\nu_{2 \to 1} = \mathcal{N}\big(0, (P L_2 P^\top)^\dagger\big). \qquad (9)$$

The graph alignment with the one-to-many assignment solution thus naturally leads to the use of Equation (5) for evaluating the distance^2 between graphs that originally have different sizes.

^2 It is not a distance in the theoretical sense. For brevity, we will use the term “distance” with an abuse of terminology.
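The constraints of Equation (7) and the alignment operation of Equation (9) can be made concrete with a small sketch. This is our own illustrative NumPy code (the 2- and 3-node graphs and the assignment below are toy examples, not from the paper).

```python
import numpy as np

def is_one_to_many(P, k):
    """Check the hard one-to-many assignment constraints of Eq. (7)."""
    binary = np.all((P == 0) | (P == 1))
    row_sums = P.sum(axis=1)
    rows_ok = np.all((row_sums >= 1) & (row_sums <= k))
    cols_ok = np.all(P.sum(axis=0) == 1)
    return bool(binary and rows_ok and cols_ok)

# n1 = 2 nodes in the small graph, n2 = 3 in the large one, k = 2:
# node 0 of G1 absorbs nodes 0 and 1 of G2, node 1 of G1 absorbs node 2.
P = np.array([[1, 1, 0],
              [0, 0, 1.0]])

# Align the larger graph's Laplacian to the smaller graph, as in Eq. (9).
W2_ = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0.0]])   # path graph on 3 nodes
L2 = np.diag(W2_.sum(axis=1)) - W2_
L_aligned = P @ L2 @ P.T          # 2 x 2 matrix with zero row sums
```

Note that `P @ L2 @ P.T` keeps the Laplacian property of zero row sums, since the column sums of `P` all equal one.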
Of course, the true assignment matrix is often unknown beforehand. We are thus interested in estimating the best alignment, or equivalently in finding the assignment matrix that minimizes the distance between the two graphs $G_1$ and $G_2$, leading to the optimization problem

$$\min_{P \in \mathcal{P}} \; \mathcal{W}_2\big( \nu_1, \, \nu_{2 \to 1} \big)^2. \qquad (10)$$

The main difficulty in solving Problem (10) arises from the constraint set defined in (7), since it leads to a discrete optimization problem with a factorial number of feasible solutions. To circumvent this issue, we propose a relaxation of the one-to-many assignment problem in the next section.
4 Optimization algorithm
To deal with the non-convexity of the alignment problem in Equation (10), we rely on two main ideas. Firstly, we relax the binary constraint into the unit interval, so that $P$ becomes a soft-assignment matrix belonging to the set

$$\mathcal{D} = \Big\{ P \in [0,1]^{n_1 \times n_2} \;\Big|\; 1 \le \textstyle\sum_{j} P_{ij} \le k \;\;\forall i, \;\; \textstyle\sum_{i} P_{ij} = 1 \;\;\forall j \Big\}. \qquad (11)$$

Secondly, we enforce the relaxed constraints implicitly using the Dykstra operator

$$P = \mathcal{D}_\tau(M), \qquad M \in \mathbb{R}^{n_1 \times n_2}, \qquad (12)$$

which transforms a rectangular matrix into a soft-assignment matrix, as explained in Section 4.1. This operator can be injected into the cost function to remove all the constraints, thus yielding the new unconstrained optimization problem

$$\min_{M \in \mathbb{R}^{n_1 \times n_2}} f(M), \quad \text{with} \quad f(M) = \mathcal{W}_2\Big( \nu_1, \; \mathcal{N}\big(0, (\mathcal{D}_\tau(M)\, L_2\, \mathcal{D}_\tau(M)^\top)^\dagger \big) \Big)^2. \qquad (13)$$
Problem (13) is highly non-convex, which may cause gradient descent to converge towards a poor local minimum. As we will see in Section 4.2, using the Dykstra operator will allow us to devise a stochastic formulation that can be efficiently solved with a variant of gradient descent integrating Bayesian exploration in the optimization process, possibly helping the algorithm to find better local minima.
4.1 Dykstra operator
Given a rectangular matrix $M \in \mathbb{R}^{n_1 \times n_2}$ and a small constant $\tau > 0$, the Dykstra operator normalizes the rows and columns of $e^{M/\tau}$ to obtain a one-to-many soft-assignment matrix, where a node in the smaller graph is matched to one or more (but at most $k$) nodes in the larger graph. It is defined as

$$\mathcal{D}_\tau(M) = \operatorname*{argmax}_{P \in \mathcal{D}} \; \langle P, M \rangle_F + \tau H(P), \quad \text{with} \quad H(P) = -\sum_{i,j} P_{ij} \big( \log P_{ij} - 1 \big). \qquad (14)$$
This operator can be efficiently computed by the Dykstra algorithm (Dykstra, 1983) with Bregman projections (Bauschke and Lewis, 2000). Indeed, Problem (14) can be written as a Kullback-Leibler (KL) projection (Benamou et al., 2015)

$$\mathcal{D}_\tau(M) = \operatorname*{argmin}_{P \in \mathcal{D}_1 \cap \mathcal{D}_2} \; \mathrm{KL}\big( P \,\big\|\, e^{M/\tau} \big), \qquad (15)$$

with

$$\mathcal{D}_1 = \Big\{ P \in \mathbb{R}_+^{n_1 \times n_2} \;\Big|\; 1 \le \textstyle\sum_j P_{ij} \le k \;\;\forall i \Big\}, \qquad \mathcal{D}_2 = \Big\{ P \in \mathbb{R}_+^{n_1 \times n_2} \;\Big|\; \textstyle\sum_i P_{ij} = 1 \;\;\forall j \Big\}. \qquad (16)$$

The Dykstra algorithm starts by initializing

$$P^{(0)} = e^{M/\tau}, \qquad Q^{(-1)} = Q^{(0)} = \mathbf{1}_{n_1 \times n_2}, \qquad (17)$$

and then iterates, for every $t = 1, 2, \dots$,

$$P^{(t)} = \mathcal{P}^{\mathrm{KL}}_{\mathcal{D}_{[t]}} \big( P^{(t-1)} \odot Q^{(t-2)} \big), \qquad (18)$$

$$Q^{(t)} = Q^{(t-2)} \odot P^{(t-1)} \oslash P^{(t)}, \qquad (19)$$

where $[t]$ alternates between the two sets ($[t] = 1$ for odd $t$ and $[t] = 2$ for even $t$), and all operations are meant entrywise.^3 The KL projections are defined, for every $P \in \mathbb{R}_+^{n_1 \times n_2}$, as follows:

$$\mathcal{P}^{\mathrm{KL}}_{\mathcal{D}_1}(P) = \mathrm{Diag}\!\left( \frac{\min\{\max\{P \mathbf{1}, \mathbf{1}\}, k \mathbf{1}\}}{P \mathbf{1}} \right) P, \qquad (20)$$

$$\mathcal{P}^{\mathrm{KL}}_{\mathcal{D}_2}(P) = P \, \mathrm{Diag}\!\left( \frac{\mathbf{1}}{P^\top \mathbf{1}} \right). \qquad (21)$$

^3 $\odot$ and $\oslash$ denote the entrywise (Hadamard) product and division of matrices.
In the limit $\tau \to 0^+$, the operator yields a one-to-many assignment matrix. It is also differentiable (Luise et al., 2018), and can thus be used in a cost function optimized by gradient descent, as we will see in Section 4.2.
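The iterations (17)-(21) can be sketched in a few lines of NumPy. This is our own minimal implementation of Dykstra's algorithm with KL projections, not the paper's code; the function name, iteration count and tolerance-free stopping rule are illustrative choices.

```python
import numpy as np

def dykstra_operator(M, k, tau=0.1, n_iter=200):
    """One-to-many soft assignment via Dykstra's algorithm with KL
    (Bregman) projections, cf. Eqs. (15)-(21). Alternates between
    column normalization and row-sum clipping to [1, k]."""
    P = np.exp(M / tau)                          # Eq. (17)
    # Q corrections, one per constraint set (they alternate with period 2).
    Q = [np.ones_like(P), np.ones_like(P)]
    for t in range(n_iter):
        Y = P * Q[t % 2]                          # P^(t-1) * Q^(t-2)
        if t % 2 == 0:
            # KL projection onto column sums = 1, Eq. (21)
            P_new = Y / Y.sum(axis=0, keepdims=True)
        else:
            # KL projection onto row sums in [1, k], Eq. (20)
            r = Y.sum(axis=1, keepdims=True)
            P_new = Y * (np.clip(r, 1.0, k) / r)
        Q[t % 2] = Y / P_new                      # Eq. (19)
        P = P_new
    return P
```

For a feasible size pair ($n_1 \le n_2 \le k n_1$), the output has column sums close to one and row sums within $[1, k]$, approaching a hard assignment as $\tau$ shrinks.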
4.1.1 Connection to Sinkhorn
In the special case where the two graphs have the same size $n_1 = n_2 = n$, the condition in (8) allows us to set $k = 1$, and $\mathcal{D}$ thus reduces to the space of doubly stochastic matrices. The Dykstra operator then reverts to a Sinkhorn operator (Sinkhorn, 1964; Cuturi, 2013; Genevay et al., 2018; Mena et al., 2018; Petric Maretic et al., 2019). Given a square matrix $M \in \mathbb{R}^{n \times n}$ and a small constant $\tau > 0$, the Sinkhorn operator normalizes the rows and columns of $e^{M/\tau}$ so as to obtain a doubly stochastic matrix. Formally, it is defined as

$$\mathcal{S}_\tau(M) = \operatorname*{argmin}_{P \in \mathcal{B}} \; \mathrm{KL}\big( P \,\big\|\, e^{M/\tau} \big), \qquad (22)$$

where $\mathcal{B}$ is the set of doubly stochastic matrices

$$\mathcal{B} = \Big\{ P \in [0,1]^{n \times n} \;\Big|\; P \mathbf{1} = \mathbf{1}, \; P^\top \mathbf{1} = \mathbf{1} \Big\}. \qquad (23)$$
It is well known that the above operator can be computed by iteratively normalizing rows and columns, starting from $P^{(0)} = e^{M/\tau}$:

$$P^{(t+1)} = \mathcal{T}_c\big( \mathcal{T}_r( P^{(t)} ) \big), \quad \text{with} \quad \mathcal{T}_r(P) = \mathrm{Diag}(P \mathbf{1})^{-1} P, \quad \mathcal{T}_c(P) = P \, \mathrm{Diag}(P^\top \mathbf{1})^{-1}. \qquad (24)$$

In the limit $\tau \to 0^+$, the operator yields a permutation matrix (Mena et al., 2018). It is also differentiable (Luise et al., 2018), and can thus be used in a cost function optimized by gradient descent, as we will see in Section 4.2.
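The Sinkhorn iterations of Equation (24) can be sketched as follows; this is a minimal NumPy version under our own naming, using a fixed iteration count rather than a convergence test.

```python
import numpy as np

def sinkhorn_operator(M, tau=0.5, n_iter=300):
    """Approximate doubly stochastic projection of exp(M / tau):
    alternately normalize rows and columns, cf. Eq. (24)."""
    P = np.exp(M / tau)
    for _ in range(n_iter):
        P = P / P.sum(axis=1, keepdims=True)   # row normalization
        P = P / P.sum(axis=0, keepdims=True)   # column normalization
    return P
```

As `tau` decreases, the output concentrates towards a permutation matrix, which is the behavior exploited by Gumbel-Sinkhorn style networks.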
4.2 Stochastic formulation
With the help of the Dykstra operator, the cost function in Problem (13) becomes differentiable, and can thus be optimized by gradient descent. However, the non-convex nature of the problem may cause gradient descent to converge towards a poor local minimum. Instead of directly solving Problem (13), we propose to optimize the expectation of the cost w.r.t. the parameters $\theta$ of some distribution $q_\theta$, yielding

$$\min_{\theta} \; \mathbb{E}_{M \sim q_\theta} \big[ f(M) \big]. \qquad (25)$$
The optimization of the expectation w.r.t. the parameters $\theta$ aims at shaping the distribution $q_\theta$ so as to put all its mass on a minimizer of the original cost function, thus integrating Bayesian exploration into the optimization process and possibly helping the algorithm to find better local minima.
A standard choice for $q_\theta$ in continuous optimization is the multivariate normal distribution, leading to $q_\theta = \mathcal{N}(\mu, \sigma^2)$ with $\mu$ and $\sigma$ being $n_1 \times n_2$ matrices. We leverage the reparameterization trick (Kingma and Welling, 2014; Figurnov et al., 2018), which boils down to setting

$$M = \mu + \sigma \odot E, \qquad E \sim \mathcal{N}(0, I). \qquad (26)$$

The problem of Equation (25) can thus be reformulated as

$$\min_{\mu, \sigma} \; \mathbb{E}_{E \sim \mathcal{N}(0, I)} \big[ f(\mu + \sigma \odot E) \big], \qquad (27)$$

where $\mathcal{N}(0, I)$ denotes the multivariate normal distribution with zero mean and unit variance. The advantage of this reformulation is that the gradient of the above stochastic function can be approximated by sampling $E_1, \dots, E_S$ from the parameterless distribution $\mathcal{N}(0, I)$, yielding

$$\nabla_{\mu, \sigma} \, \mathbb{E}_{E} \big[ f(\mu + \sigma \odot E) \big] \approx \frac{1}{S} \sum_{s=1}^{S} \nabla_{\mu, \sigma} \, f(\mu + \sigma \odot E_s). \qquad (28)$$

The problem can thus be solved by stochastic gradient descent (Khan et al., 2017). Our approach is summarized in Algorithm 1.
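To illustrate the reparameterization trick of Equations (26)-(28) in isolation, the toy sketch below replaces the alignment cost $f$ with a simple quadratic whose gradient is known analytically; `target`, the learning rate, the sample size `S`, and the floor on `sigma` are all our own illustrative choices, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
target = np.array([[1.0, -2.0], [0.5, 3.0]])

def f(M):                         # toy stand-in for the alignment cost f(M)
    return 0.5 * np.sum((M - target) ** 2)

def grad_f(M):
    return M - target

mu = np.zeros((2, 2))             # mean of q_theta
sigma = np.ones((2, 2))           # std of q_theta
lr, S = 0.1, 8                    # step size and number of samples

for _ in range(300):
    g_mu = np.zeros_like(mu)
    g_sigma = np.zeros_like(sigma)
    for _ in range(S):
        E = rng.normal(size=mu.shape)
        M = mu + sigma * E        # reparameterization, Eq. (26)
        g = grad_f(M)
        g_mu += g / S             # dM/dmu = 1
        g_sigma += (g * E) / S    # chain rule: dM/dsigma = E
    mu -= lr * g_mu
    sigma = np.maximum(sigma - lr * g_sigma, 1e-3)  # keep std positive
```

As training proceeds, `mu` converges to the minimizer and `sigma` shrinks, i.e., the distribution collapses onto a minimizer, mirroring the exploration-then-exploitation behavior described above.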
Under mild assumptions, the algorithm converges almost surely to a critical point, which is not guaranteed to be the global minimum, as the problem is non-convex. The per-iteration computational complexity of a naive implementation is dominated by the matrix square-root operation, but faster options exist to approximate this operation (Lin and Maji, 2017). Moreover, the computation of pseudo-inverses can be avoided by adding a small diagonal shift to the Laplacian matrices and directly computing the inverse matrices, which is orders of magnitude faster.
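The diagonal-shift trick mentioned above can be sketched as follows. For a connected graph, $(L + \epsilon I)^{-1}$ agrees with $L^\dagger$ (up to $O(\epsilon)$) on signals orthogonal to the constant vector, while avoiding the cost of a full pseudo-inverse; the 3-node graph and the value of $\epsilon$ below are our own illustrative choices.

```python
import numpy as np

# A small connected graph: node 0 linked to nodes 1 and 2.
W = np.array([[0, 1, 1], [1, 0, 0], [1, 0, 0.0]])
L = np.diag(W.sum(axis=1)) - W

eps = 1e-6
inv_shift = np.linalg.inv(L + eps * np.eye(3))   # fast, well-conditioned
pinv = np.linalg.pinv(L)                          # exact pseudo-inverse

# On a zero-mean signal (orthogonal to the constant null space of L),
# both operators act nearly identically.
x = np.array([1.0, -0.5, -0.5])
```

The discrepancy between the two operators is confined to the constant direction, which is exactly the null space of the Laplacian and carries no signal variation.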
5 Experiments
We now analyse the performance of our new algorithm in two parts. Firstly, we assess the performance achieved by our approach for graph alignment and community detection in structured graphs, testing the preservation of both local and global graph properties. We investigate the influence of the distance metric on alignment recovery and compare against methods using different definitions of graph distance for graph alignment. Secondly, we extend our analysis to graph classification, where we compare our approach with several state-of-the-art methods.
Prior to running the experiments, we determined the algorithmic parameters $\tau$ (in the Dykstra operator) and the step size (in SGD) with a grid search, while the sampling size $S$ was fixed empirically; these values were then kept fixed across all experiments. We set the maximal number of Dykstra iterations to 20, and we run stochastic gradient descent for 1000 iterations. As our algorithm seems robust to different initialisations, we used random initialization in all our experiments. The algorithm was implemented in PyTorch with the AMSGrad method (Reddi et al., 2018).

5.1 Graph alignment and community detection
In this section, we test our proposed approach for graph alignment and recovery of communities in structured graphs. Namely, apart from the direct comparison of the two graph matrices, we evaluate the preservation of global properties by comparing the clustering of nodes into communities. We consider two experimental settings. In the first one (Figure 2), we generate a stochastic block model graph with 24 nodes and 4 communities. The second graph is a noisy version of the first, constructed by randomly collapsing edges, i.e., merging two connected nodes into one, until a target percentage of nodes is merged. We then generate a random permutation to change the order of the nodes in the second graph.

In the second experimental setting (Figure 3), the first graph is again generated as a stochastic block model with four communities. Six further graphs are created as random instances of stochastic block model graphs with the same number of communities, but with a different number of vertices and edges. Apart from the number of communities, there is no direct connection between these graphs and the first one.
We investigate the influence of the distance metric on alignment recovery. We compare three different methods for graph alignment, namely the proposed method based on the Wasserstein distance between graphs, the proposed stochastic algorithm with the Euclidean distance, and the state-of-the-art Gromov-Wasserstein distance (Peyré et al., 2016) for graphs (GW), using the Euclidean distance between shortest path matrices, as proposed in (Vayer et al., 2018). We repeat each experiment 50 times, after adjusting parameters for all compared methods, and show the results in Figures 2 and 3.
We now evaluate the structure recovery of the community-based models through spectral clustering. Namely, after alignment estimation, we cluster the nodes in both graphs. A good alignment should detect and preserve communities, keeping the nodes in the same clusters, close to their original neighbours, even when the exact neighbours are not recovered. We evaluate the quality of community recovery with the normalized mutual information (NMI) between the clusters in the original graph and the recovered clusters. We further evaluate the alignment quality by checking the difference between the two graphs in terms of the Euclidean norm. While it is not the best possible distance measure for graphs, it is used here as a complementary measure to the NMI, since it does not take any special structural information into account. It can also be seen as an unbiased metric to compare the two methods performing best in terms of NMI.
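For reference, the NMI score used above can be computed from scratch in a few lines; this is our own sketch of the standard definition (in practice a library implementation, e.g. from scikit-learn, would typically be used).

```python
from collections import Counter
from math import log

def nmi(a, b):
    """Normalized mutual information between two label assignments,
    using the geometric-mean normalization."""
    n = len(a)
    pa, pb = Counter(a), Counter(b)
    pab = Counter(zip(a, b))
    # Mutual information between the two partitions.
    mi = sum(c / n * log((c / n) / ((pa[x] / n) * (pb[y] / n)))
             for (x, y), c in pab.items())
    # Entropies of each partition.
    ha = -sum(c / n * log(c / n) for c in pa.values())
    hb = -sum(c / n * log(c / n) for c in pb.values())
    return mi / max((ha * hb) ** 0.5, 1e-12)
```

The score equals 1 when the two clusterings are identical up to a relabeling of clusters, and 0 when they are independent.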
As shown in Figure 2, the proposed approach manages to capture the structural information and outperforms the methods based on different distance metrics, especially under large perturbations. In Figure 3, we observe an increase in performance in terms of NMI for both variants of the proposed algorithm. The emergence of this phenomenon despite the growing size difference between compared graphs suggests that our assignment matrix has the ability to fuse nodes into meaningful groups, forming well-defined clusters.
5.2 Graph classification
We now tackle the task of graph classification on two different datasets: PTC (Kriege et al., 2016) and IMDB-B (Yanardag and Vishwanathan, 2015). We randomly sample 100 graphs from each dataset. The graphs have a different number of nodes and edges. We use our method to align graphs and compute graph distances, and eventually use a simple non-parametric 1-NN classification algorithm to classify graphs. We compare the classification performance with methods where the same 1-NN classifier is used with different state-of-the-art methods for graph alignment: GW (Peyré et al., 2016; Vayer et al., 2018), GA (Gold and Rangarajan, 1996), IPFP (Leordeanu et al., 2009), RRWM (Cho et al., 2010), NetLSD (Tsitsulin et al., 2018), and the proposed stochastic algorithm with the Euclidean distance instead of the Wasserstein distance in Eq. (25). We present the accuracy scores in Table 1, where the classification with the proposed method clearly outperforms the other methods in terms of general accuracy. Furthermore, we analyse the performance of the proposed method, GW, and the Euclidean variant on several examples from the two datasets.
Dataset   GA     IPFP   RRWM   GW     NetLSD  Euclidean  Proposed
IMDB-B    56.72  55.22  61.19  54.54  53.73   54.54      63.63
PTC       50.75  52.24  49.25  56.71  52.23   47.76      61.19
PTC dataset
The PTC dataset contains the molecular structures of the NTP dataset. Figure 4 presents a set of graph examples from two different classes (0 and 1). In the first example (first row), our method outperforms both the Euclidean and the Gromov-Wasserstein variants in separating the two classes. The distinguishing feature between the two classes is the number of nodes forming the ring, which is captured by our method thanks to the soft assignment applied to the larger graph.
The second example shows in a very intuitive way how our method and GW are able to capture structural similarities in graphs, even when those largely vary in size. This is especially clear when comparing graphs that are almost twice as large yet structurally very similar, with an easy-to-imagine assignment of one node in the smaller graph to several nodes in the larger graph. However, it is not always as simple to understand the similarities. The third row shows an example in which all three methods fail to find structural similarities with graphs in the same class.
IMDB-B dataset
The IMDB-B dataset contains two classes: comedy and science-fiction movies, with several examples shown in Figure 5. The striking difference between examples 2 and 3 shows that, while taking into account the global graph structure can be crucial in distinguishing some samples (second row), it remains a challenging dataset with very similar graphs often belonging to different classes (third row). This possibly explains the low accuracy across all examined methods. However, example 1 shows the high flexibility of the assignment matrix proposed in our algorithm, where the one-to-many assignment is able to detect that a graph is very close to a graph with two communities, even if it technically has three. This combination of putting emphasis on structural information, while allowing for flexibility, might be the reason why our method still manages to outperform the other investigated methods.
6 Conclusion
In this paper, we have proposed a new method to align graphs of different sizes. Equipped with an optimal-transport-based approach to compute the distance between the smooth signal distributions associated with each graph, we have formulated a new one-to-many alignment problem to find a soft-assignment matrix that minimizes the “mass” transportation from a fixed distribution to a permuted and partially merged distribution. The resulting non-convex optimization problem is solved efficiently with a novel stochastic gradient descent algorithm. It allows us to align and compare graphs, and it outputs a structurally meaningful distance. We have demonstrated the performance of the proposed method in the context of graph alignment and graph classification. Our results show that the proposed algorithm outperforms state-of-the-art alignment methods for structured graphs.
References
 On convex relaxation of graph isomorphism. Proceedings of the National Academy of Sciences 112 (10), pp. 2942–2947. Cited by: §2.
 Dykstra's algorithm with Bregman projections: a convergence proof. Optimization 48 (4), pp. 409–427. Cited by: §4.1.
 Iterative Bregman projections for regularized transportation problems. SIAM Journal on Scientific Computing 37 (2), pp. A1111–A1138. Cited by: §4.1.

 An eigenspace projection clustering method for inexact graph matching. IEEE Transactions on Pattern Analysis and Machine Intelligence 26 (4), pp. 515–519. Cited by: §2.
 Reweighted random walks for graph matching. In European Conference on Computer Vision, pp. 492–505. Cited by: §2, §5.2.
 Sinkhorn distances: lightspeed computation of optimal transport. In Advances in Neural Information Processing Systems, C. J. C. Burges, L. Bottou, M. Welling, Z. Ghahramani, and K. Q. Weinberger (Eds.), pp. 2292–2300. Cited by: §4.1.1.
 Covariance selection. Biometrics, pp. 157–175. Cited by: §3.1.
 Learning laplacian matrix in smooth graph signal representations. IEEE Transactions on Signal Processing 64 (23), pp. 6160–6173. Cited by: §3.1.
 Learning graphs from data: a signal representation perspective. Preprint arXiv:1806.00848. Cited by: §3.1.
 An algorithm for restricted least squares regression. Journal of the American Statistical Association 78 (384), pp. 837–842. Cited by: §4.1.
 DS++: a flexible, scalable and provably tight relaxation for matching problems. arXiv preprint arXiv:1705.06148. Cited by: §2.
 Learning permutations with sinkhorn policy gradient. Preprint arXiv:1805.07010. Cited by: §2.
 Implicit reparameterization gradients. In Advances in Neural Information Processing Systems 31, S. Bengio, H. Wallach, H. Larochelle, K. Grauman, N. CesaBianchi, and R. Garnett (Eds.), pp. 441–452. Cited by: §4.2.
 On spectral properties for graph matching and graph isomorphism problems. Information and Inference: A Journal of the IMA 4 (1), pp. 63–76. Cited by: §2.
 Optimal transport with Laplacian regularization. In NIPS 2014, Workshop on Optimal Transport and Machine Learning, Montréal, Canada. Cited by: §2.
 Sparse inverse covariance estimation with the graphical lasso. Biostatistics 9 (3), pp. 432–441. Cited by: §3.1.

 Learning generative models with Sinkhorn divergences. In Proceedings of the Twenty-First International Conference on Artificial Intelligence and Statistics, A. Storkey and F. Perez-Cruz (Eds.), Proceedings of Machine Learning Research, Vol. 84, Playa Blanca, Lanzarote, Canary Islands, pp. 1608–1617. Cited by: §4.1.1.
 A graduated assignment algorithm for graph matching. IEEE Transactions on Pattern Analysis and Machine Intelligence 18 (4), pp. 377–388. Cited by: §5.2.
 Spectral distances on graphs. Discrete Applied Mathematics 190–191, pp. 56–74. Cited by: §2.
 Graph matching via multiplicative update algorithm. In Advances in Neural Information Processing Systems, I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett (Eds.), pp. 3187–3195. Cited by: §2.
 Variational adaptivenewton method for explorative learning. Preprint arXiv:1711.05560. Cited by: §4.2.
 Auto-encoding variational Bayes. Preprint arXiv:1312.6114. Cited by: §4.2.
 On valid optimal assignment kernels and applications to graph classification. In Advances in Neural Information Processing Systems 29, D. D. Lee, M. Sugiyama, U. V. Luxburg, I. Guyon, and R. Garnett (Eds.), pp. 1623–1631. Cited by: §5.2.
 An integer projected fixed point method for graph matching and map inference. In Advances in Neural Information Processing Systems 22, Y. Bengio, D. Schuurmans, J. D. Lafferty, C. K. I. Williams, and A. Culotta (Eds.), pp. 1114–1122. Cited by: §5.2.
 Improved bilinear pooling with CNNs.. In British Machine Vision Conference, London, UK. Cited by: §4.2.
 Differential properties of sinkhorn approximation for learning with wasserstein distance. In Advances in Neural Information Processing Systems, S. Bengio, H. Wallach, H. Larochelle, K. Grauman, N. CesaBianchi, and R. Garnett (Eds.), pp. 5859–5870. Cited by: §4.1.1, §4.1.
 Gromov–Wasserstein distances and the metric approach to object matching. Foundations of Computational Mathematics 11 (4), pp. 417–487. Cited by: §2.
 Learning latent permutations with Gumbel-Sinkhorn networks. In International Conference on Learning Representations. Cited by: §2, §4.1.1.
 Fast suboptimal algorithms for the computation of graph edit distance.. In SSPR/SPR, D. Yeung, J. T. Kwok, A. L. N. Fred, F. Roli, and D. de Ridder (Eds.), Lecture Notes in Computer Science, Vol. 4109, pp. 163–172. Cited by: §2.
 Matching node embeddings for graph similarity. In ThirtyFirst AAAI Conference on Artificial Intelligence, Cited by: §2.
 GOT: an optimal transport framework for graph comparison. In Advances in Neural Information Processing Systems 32, H. Wallach, H. Larochelle, A. Beygelzimer, F. d’AlchéBuc, E. Fox, and R. Garnett (Eds.), pp. 13876–13887. Cited by: §1, §1, §3.2, §3, §4.1.1.
 Gromov–Wasserstein averaging of kernel and distance matrices. In International Conference on Machine Learning, M. F. Balcan and K. Q. Weinberger (Eds.), Proceedings of Machine Learning Research, Vol. 48, New York, New York, USA, pp. 2664–2672. Cited by: §2, §5.1, §5.2.
 On the convergence of adam and beyond. In International Conference on Learning Representations, External Links: Link Cited by: §5.
 Gaussian Markov random fields: theory and applications. Chapman and Hall/CRC. Cited by: §3.1.

 Probabilistic subgraph matching based on convex relaxation. In Energy Minimization Methods in Computer Vision and Pattern Recognition, A. Rangarajan, B. Vemuri, and A. L. Yuille (Eds.), Berlin, Heidelberg, pp. 171–186. Cited by: §2.
 Robust principal component analysis on graphs. In Proceedings of the IEEE International Conference on Computer Vision, pp. 2812–2820. Cited by: §3.1.
 A relationship between arbitrary positive matrices and doubly stochastic matrices. The Annals of Mathematical Statistics 35 (2), pp. 876–879. Cited by: §2, §4.1.1.
 Balanced graph matching. In Advances in Neural Information Processing Systems, B. Schölkopf, J. C. Platt, and T. Hoffman (Eds.), pp. 313–320. Cited by: §2.
 Wasserstein geometry of gaussian measures. Osaka Journal of Mathematics 48 (4), pp. 1005–1026. Cited by: §3.1.
 NetLSD: hearing the shape of a graph. In Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pp. 2347–2356. Cited by: §5.2.
 Optimal transport for structured data. Preprint arXiv:1805.09114. Cited by: §2, §5.1, §5.2.
 Scalable Gromov–Wasserstein learning for graph partitioning and matching. In Advances in Neural Information Processing Systems 32, H. Wallach, H. Larochelle, A. Beygelzimer, F. d’Alché-Buc, E. Fox, and R. Garnett (Eds.), pp. 3046–3056. Cited by: §1, §2.
 A short survey of recent advances in graph matching. In International Conference on Multimedia Retrieval, New York, NY, USA, pp. 167–174. Cited by: §2.
 Deep graph kernels. In Proceedings of the 21th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD ’15, New York, NY, USA, pp. 1365–1374. Cited by: §5.2.
 Generalizing graph matching beyond quadratic assignment model. In Advances in Neural Information Processing Systems, S. Bengio, H. Wallach, H. Larochelle, K. Grauman, N. CesaBianchi, and R. Garnett (Eds.), pp. 853–863. Cited by: §1, §2.
 Manytomany graph matching: a continuous relaxation approach. In Joint European Conference on Machine Learning and Knowledge Discovery in Databases, pp. 515–530. Cited by: §3.2, §3.2.
 Factorized graph matching. IEEE Transactions on Pattern Analysis and Machine Intelligence 38 (9), pp. 1774–1789. Cited by: §2.
 Semi-supervised learning using Gaussian fields and harmonic functions. In International Conference on Machine Learning, pp. 912–919. Cited by: §3.1.