In many computer vision problems, data (e.g., images, meshes, point clouds, etc.) is piped through complex processing chains in order to extract information that can be used to address high-level inference tasks, such as recognition, detection or segmentation. The extracted information might be in the form of low-level appearance descriptors, e.g., SIFT, or of higher-level nature, e.g., activations at specific layers of deep convolutional networks. In recognition problems, for instance, it is then customary to feed the consolidated data to a discriminant classifier such as the popular support vector machine (SVM), a kernel-based learning technique.
While there has been substantial progress on extracting and encoding discriminative information, only recently have people started looking into the topological structure of the data as an additional source of information. With the emergence of topological data analysis (TDA), computational tools for efficiently identifying topological structure have become readily available. Since then, several authors have demonstrated that TDA can capture characteristics of the data that other methods often fail to provide, cf. [28, 20].
Along these lines, persistent homology has become a particularly popular tool for TDA, since it captures the birth and death times of topological features, e.g., connected components, holes, etc., at multiple scales. This information is summarized by the persistence diagram, a multiset of points in the plane. The key feature of persistent homology is its stability: small changes in the input data lead to small changes in the Wasserstein distance of the associated persistence diagrams. Considering the discrete nature of topological information, the existence of such a well-behaved summary is perhaps surprising.
Note that persistence diagrams together with the Wasserstein distance only form a metric space. Thus it is not possible to directly employ persistent homology in the large class of machine learning techniques that require a Hilbert space structure, like SVM or PCA. This obstacle is typically circumvented by defining a kernel function on the domain containing the data, which in turn defines a Hilbert space structure implicitly. While the Wasserstein distance itself does not naturally lead to a valid kernel (see Appendix A), we show that it is possible to define a kernel for persistence diagrams that is stable w.r.t. the 1-Wasserstein distance. This is the main contribution of this paper.
Contribution. We propose a (positive definite) multi-scale kernel for persistence diagrams (see Fig. 1). This kernel is defined via an L²-valued feature map, based on ideas from scale space theory. We show that our feature map is Lipschitz continuous with respect to the 1-Wasserstein distance, thereby maintaining the stability property of persistent homology. The scale parameter of our kernel controls its robustness to noise and can be tuned to the data. We investigate, in detail, the theoretical properties of the kernel, and demonstrate its applicability on shape classification/retrieval and texture recognition benchmarks.
2 Related work
Methods that leverage topological information for computer vision or medical imaging methods can roughly be grouped into two categories. In the first category, we identify previous work that directly utilizes topological information to address a specific problem, such as topology-guided segmentation. In the second category, we identify approaches that indirectly use topological information. That is, information about topological features is used as input to some machine-learning algorithm.
As a representative of the first category, Skraba et al. adapt the idea of persistence-based clustering in a segmentation method for surface meshes of 3D shapes, driven by the topological information in the persistence diagram. Gao et al. use persistence information to restore so-called handles, i.e., topological cycles, in already existing segmentations of the left ventricle, extracted from computed tomography images. In a different segmentation setup, Chen et al. propose to directly incorporate topological constraints into random-field-based segmentation models.
In the second category of approaches, Chung et al. and Pachauri et al. investigate the problem of analyzing cortical thickness measurements on 3D surface meshes of the human cortex in order to study developmental and neurological disorders. In contrast to the approaches of the first category, persistence information is not used directly, but rather as a descriptor that is fed to a discriminant classifier in order to distinguish between normal control patients and patients with Alzheimer’s disease/autism. Yet, the step of training the classifier with topological information is typically done in a rather ad hoc manner. In these works, for instance, the persistence diagram is first rasterized on a regular grid, then a kernel-density estimate is computed, and eventually the vectorized discrete probability density function is used as a feature vector to train an SVM with standard kernels. It is, however, unclear how the resulting kernel-induced distance behaves with respect to existing metrics (e.g., bottleneck or Wasserstein distance) and how properties such as stability are affected. An approach that directly uses well-established distances between persistence diagrams for recognition was recently proposed by Li et al. Besides the bottleneck and Wasserstein distances, the authors employ persistence landscapes and the corresponding distance in their experiments. Their results expose the complementary nature of persistence information when combined with traditional bag-of-feature approaches. While our empirical study in Sec. 5.2 is inspired by this work, we primarily focus on the development of the kernel; the combination with other methods is straightforward.
In order to enable the use of persistence information in machine learning setups, Adcock et al.  propose to compare persistence diagrams using a feature vector motivated by algebraic geometry and invariant theory. The features are defined using algebraic functions of the birth and death values in the persistence diagram.
From a conceptual point of view, Bubenik’s concept of persistence landscapes  is probably the closest to ours, being another kind of feature map for persistence diagrams. While persistence landscapes were not explicitly designed for use in machine learning algorithms, we will draw the connection to our work in Sec. 5.1 and show that they in fact admit the definition of a valid positive definite kernel. Moreover, both persistence landscapes as well as our approach represent computationally attractive alternatives to the bottleneck or Wasserstein distance, which both require the solution of a matching problem.
First, we review some fundamental notions and results from persistent homology that will be relevant for our work.
Persistence diagrams are a concise description of the topological changes occurring in a growing sequence of shapes, called a filtration. In particular, during the growth of a shape, holes of different dimension (i.e., gaps between components, tunnels, voids, etc.) may appear and disappear. Intuitively, a k-dimensional hole, born at time b and filled at time d, gives rise to a point (b, d) in the k-th persistence diagram. A persistence diagram is thus a multiset of points in ℝ². Formally, the persistence diagram is defined using a standard concept from algebraic topology called homology; see the literature on computational topology for details.
Note that not every hole has to disappear in a filtration. Such holes give rise to essential features and are naturally represented by points of the form (b, ∞) in the diagram. Essential features therefore capture the topology of the final shape in the filtration. In the present work, we do not consider these features as part of the persistence diagram. Moreover, all persistence diagrams will be assumed to be finite, as is usually the case for persistence diagrams coming from data.
Filtrations from functions.
A standard way of obtaining a filtration is to consider the sublevel sets f⁻¹((−∞, t]) of a function f : Ω → ℝ defined on some domain Ω, for t ∈ ℝ. It is easy to see that the sublevel sets indeed form a filtration parametrized by t. We denote the resulting persistence diagram by D(f); see Fig. 2 for an illustration.
As an example, consider a grayscale image, where Ω is the rectangular domain of the image and f is the grayscale value at any point of the domain (i.e., at a particular pixel). A sublevel set would thus consist of all pixels of Ω with value up to a certain threshold t. Another example would be a piecewise linear function on a triangular mesh, such as the popular heat kernel signature. Yet another commonly used filtration arises from a point cloud P embedded in ℝ^d, by considering the distance function to P on ℝ^d. The sublevel sets of this function are unions of balls around the points of P. Computationally, they are usually replaced by equivalent constructions called alpha shapes.
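To make the sublevel set filtration concrete, the following minimal sketch (not from the paper; the function name and the restriction to a 1D "image", i.e., a line graph, are our simplifications) computes the 0-dimensional persistence pairs of a sampled function with a union-find sweep: local minima give birth to components, merges kill the younger one (elder rule).

```python
import math

def persistence0_sublevel(values):
    """0-dimensional persistence pairs (birth, death) for the sublevel
    set filtration of a function sampled at the vertices of a line graph
    (e.g. one row of a grayscale image). A local minimum gives birth to
    a connected component; when two components merge, the younger one
    dies (elder rule). The surviving component is the essential feature
    and is reported with death = infinity."""
    n = len(values)
    order = sorted(range(n), key=lambda i: values[i])
    parent = list(range(n))      # union-find over already-activated vertices
    birth = {}                   # root -> birth value of its component
    active = [False] * n
    pairs = []

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    for i in order:              # sweep the threshold t upwards
        active[i] = True
        birth[i] = values[i]     # vertex i starts its own component
        for j in (i - 1, i + 1):
            if 0 <= j < n and active[j]:
                ri, rj = find(i), find(j)
                if ri == rj:
                    continue
                if birth[ri] > birth[rj]:
                    ri, rj = rj, ri          # rj is now the younger root
                if birth[rj] < values[i]:    # skip zero-persistence pairs
                    pairs.append((birth[rj], values[i]))
                parent[rj] = ri              # merge younger into older
    pairs.append((birth[find(order[0])], math.inf))  # essential feature
    return pairs
```

For the sampled values [0, 2, 1, 3], the local minimum at value 1 is born and merges at 2, yielding the pair (1, 2), while the global minimum survives as the essential feature (0, ∞).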
A crucial aspect of the persistence diagram of a function f is its stability with respect to perturbations of f. In fact, only stability guarantees that one can infer information about the function from its persistence diagram in the presence of noise.
Formally, we consider f ↦ D(f) as a map of metric spaces and define stability as Lipschitz continuity of this map. This requires choices of metrics both on the set of functions and the set of persistence diagrams. For the functions, the L∞ metric (supremum norm) is commonly used.
There is a natural metric associated to persistence diagrams, called the bottleneck distance. Loosely speaking, the distance of two diagrams is expressed by minimizing the largest distance of any two corresponding points, over all bijections between the two diagrams. Formally, let F and G be two persistence diagrams, each augmented by adding each point on the diagonal with countably infinite multiplicity. The bottleneck distance is
  d_B(F, G) = inf_γ sup_{q ∈ F} ‖q − γ(q)‖_∞,
where γ ranges over all bijections from the individual points of F to the individual points of G. As shown by Cohen-Steiner et al., persistence diagrams are stable with respect to the bottleneck distance.
The bottleneck distance embeds into a more general class of distances, called Wasserstein distances. For any positive real number p, the p-Wasserstein distance is
  d_{W,p}(F, G) = inf_γ ( Σ_{q ∈ F} ‖q − γ(q)‖_∞^p )^{1/p},
where again γ ranges over all bijections from the individual elements of F to the individual elements of G. Note that taking the limit p → ∞ yields the bottleneck distance, and we therefore define d_{W,∞} = d_B. We have the following result bounding the p-Wasserstein distance in terms of the L∞ distance:
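For small diagrams, both distances can be evaluated by brute force over all bijections. The following sketch (our naming; exponential in the diagram size, so for illustration only) augments each diagram with the diagonal projections of the other's points, so that unmatched points can be matched to the diagonal:

```python
from itertools import permutations
import math

def _proj(q):
    """Closest point on the diagonal to q = (b, d)."""
    m = (q[0] + q[1]) / 2.0
    return (m, m)

def wasserstein(D1, D2, p=1.0):
    """Brute-force p-Wasserstein distance (L-infinity ground metric)
    between two small persistence diagrams; p = math.inf gives the
    bottleneck distance. Each diagram is augmented with the diagonal
    projections of the other diagram's points."""
    A = list(D1) + [_proj(q) for q in D2]
    B = list(D2) + [_proj(q) for q in D1]
    best = math.inf
    for perm in permutations(range(len(B))):
        costs = [max(abs(a[0] - B[i][0]), abs(a[1] - B[i][1]))
                 for a, i in zip(A, perm)]
        if p == math.inf:
            c = max(costs)               # bottleneck: worst matched pair
        else:
            c = sum(x ** p for x in costs) ** (1.0 / p)
        best = min(best, c)
    return best
```

For F = {(0, 2)} and G = {(0, 3)}, matching the two points directly costs 1 in the L∞ ground metric, while sending both to the diagonal costs 1.5; hence d_B = 1 and d_{W,1} = 1 + 0.5 = 1.5 (the 0.5 comes from the matched diagonal projections).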
Theorem 1 (Cohen-Steiner et al.).
Assume that X is a compact triangulable metric space such that for every 1-Lipschitz function h on X and for some k ≥ 1, the degree-k total persistence of h is bounded above by some constant C_X. Let f, g be two L-Lipschitz piecewise linear functions on X. Then for all p ≥ k,
  d_{W,p}(D(f), D(g)) ≤ C^{1/p} · ‖f − g‖_∞^{1 − k/p},
where C is a constant depending only on C_X, L and k.
We note that, strictly speaking, this is not a stability result in the sense of Lipschitz continuity, since it only establishes Hölder continuity. Moreover, it only gives a constant upper bound for the Wasserstein distance when p = k.
Given a set X, a function k : X × X → ℝ is a kernel if there exists a Hilbert space H, called the feature space, and a map Φ : X → H, called the feature map, such that k(x, y) = ⟨Φ(x), Φ(y)⟩ for all x, y ∈ X. Equivalently, k is a kernel if it is symmetric and positive definite. Kernels allow machine learning algorithms that operate on a Hilbert space to be applied in more general settings, such as strings, graphs, or, in our case, persistence diagrams.
A kernel k induces a pseudometric d_k(x, y) = (k(x, x) + k(y, y) − 2k(x, y))^{1/2} on X, which is the distance ‖Φ(x) − Φ(y)‖ in the feature space. We call the kernel stable w.r.t. a metric d on X if there is a constant C > 0 such that d_k(x, y) ≤ C · d(x, y) for all x, y ∈ X. Note that this is equivalent to Lipschitz continuity of the feature map.
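The induced pseudometric is computed from three kernel evaluations; a minimal sketch (our naming, illustrated with a Gaussian kernel on the real line as a stand-in for a kernel on diagrams):

```python
import math

def kernel_distance(k, x, y):
    """Pseudometric induced by a kernel k: the distance between the
    images of x and y in the feature space,
    d_k(x, y) = sqrt(k(x, x) + k(y, y) - 2 k(x, y)).
    The max(..., 0) guards against tiny negative values from rounding."""
    return math.sqrt(max(k(x, x) + k(y, y) - 2.0 * k(x, y), 0.0))

# Toy example: the Gaussian kernel on the real line.
def gauss(x, y):
    return math.exp(-(x - y) ** 2)
```

For instance, kernel_distance(gauss, 0.0, 1.0) equals sqrt(2 − 2e⁻¹), the feature-space distance of the two Gaussian bumps.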
The stability of a kernel is particularly useful for classification problems: assume that there exists a hyperplane separating two classes of data points in the feature space with margin m. If the data points are perturbed by at most ε w.r.t. the metric d, then their feature vectors move by at most Cε, and the hyperplane still separates the two classes with a correspondingly reduced margin.
4 The persistence scale-space kernel
We propose a stable multi-scale kernel k_σ for the set of persistence diagrams D. This kernel will be defined via a feature map Φ_σ : D → L²(Ω), with Ω = {(x, y) ∈ ℝ² : y ≥ x} denoting the closed half-plane above the diagonal.
To motivate the definition of Φ_σ, we point out that the set of persistence diagrams, i.e., multisets of points in ℝ², does not possess a Hilbert space structure per se. However, a persistence diagram D can be uniquely represented as a sum of Dirac delta distributions (a Dirac delta distribution is a functional that evaluates a given smooth function at a point), one for each point in D. Since Dirac deltas are functionals in a suitable Hilbert space of distributions [18, Chapter 7], we can embed the set of persistence diagrams into a Hilbert space by adopting this point of view.
Unfortunately, the induced metric on D does not take into account the distance of the points to the diagonal, and therefore cannot be robust against perturbations of the diagrams. Motivated by scale-space theory, we address this issue by using the sum of Dirac deltas as an initial condition for a heat diffusion problem with a Dirichlet boundary condition on the diagonal. The solution of this partial differential equation is an L² function for any chosen scale parameter σ > 0. In the following paragraphs, we will
(i) define the persistence scale space kernel k_σ,
(ii) derive a simple formula for evaluating k_σ, and
(iii) prove stability of k_σ w.r.t. the 1-Wasserstein distance.
Let Ω denote the space above the diagonal as before, and let δ_p denote a Dirac delta centered at the point p. For a given persistence diagram D, we now consider the solution u : Ω × ℝ≥0 → ℝ of the partial differential equation
  ∂_t u = Δ_x u   in Ω × (0, ∞),
  u = 0   on ∂Ω × [0, ∞),
  u = Σ_{p∈D} δ_p   on Ω × {0}.   (6)
(Since the initial condition (6) is not an L² function, this equation is to be understood in the sense of distributions. For a rigorous treatment of existence and uniqueness of the solution, see [18, Chapter 7].)
The feature map Φ_σ at scale σ > 0 of a persistence diagram D is now defined as Φ_σ(D) = u|_{t=σ}. This map yields the persistence scale space kernel k_σ on D × D as
  k_σ(F, G) = ⟨Φ_σ(F), Φ_σ(G)⟩_{L²(Ω)}.   (7)
Note that Φ_σ(D) = 0 for some σ > 0 implies that u = 0 on Ω × {σ}, which means that D has to be the empty diagram. From the linearity of the solution operator it now follows that Φ_σ is an injective map.
The solution of the partial differential equation can be obtained by extending the domain from Ω to ℝ² and replacing (6) with
  u = Σ_{p∈D} (δ_p − δ_{p̄})   on ℝ² × {0},   (8)
where p̄ = (b, a) denotes p = (a, b) mirrored at the diagonal. It can be shown that restricting the solution of this extended problem to Ω yields a solution of the original equation. It is given by convolving the initial condition (8) with a Gaussian kernel:
  u(x, t) = 1/(4πt) Σ_{p∈D} [ exp(−‖x − p‖²/(4t)) − exp(−‖x − p̄‖²/(4t)) ].   (9)
Using this closed-form solution for Φ_σ, we can derive a simple expression for evaluating the kernel explicitly:
  k_σ(F, G) = 1/(8πσ) Σ_{p∈F, q∈G} [ exp(−‖p − q‖²/(8σ)) − exp(−‖p − q̄‖²/(8σ)) ].   (10)
We refer to Appendix C for the elementary derivation of (10) and for a visualization (see Appendix B) of the solution (9). Note that the kernel can be computed in O(|F| · |G|) time, where |F| and |G| denote the cardinalities of the multisets F and G, respectively.
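The closed form (10) is a double sum over the two diagrams and is straightforward to implement; a minimal sketch (function names ours), together with the induced feature-space distance:

```python
import math

def pssk(F, G, sigma):
    """Closed-form persistence scale space kernel, cf. Eq. (10):
    k_sigma(F, G) = 1/(8*pi*sigma) * sum over p in F, q in G of
      exp(-||p - q||^2 / (8 sigma)) - exp(-||p - qbar||^2 / (8 sigma)),
    where qbar = (q2, q1) is q mirrored at the diagonal.
    Runs in O(|F| * |G|) time."""
    total = 0.0
    for (p1, p2) in F:
        for (q1, q2) in G:
            d2 = (p1 - q1) ** 2 + (p2 - q2) ** 2
            d2m = (p1 - q2) ** 2 + (p2 - q1) ** 2  # distance to mirrored q
            total += (math.exp(-d2 / (8.0 * sigma))
                      - math.exp(-d2m / (8.0 * sigma)))
    return total / (8.0 * math.pi * sigma)

def pssk_distance(F, G, sigma):
    """Persistence scale space distance induced by the kernel."""
    return math.sqrt(max(pssk(F, F, sigma) + pssk(G, G, sigma)
                         - 2.0 * pssk(F, G, sigma), 0.0))
```

Note that a point on the diagonal contributes nothing (its direct and mirrored terms cancel), in line with the Dirichlet boundary condition, and that the empty diagram yields kernel value 0.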
Theorem 2. The kernel k_σ is 1-Wasserstein stable.
To prove the 1-Wasserstein stability of k_σ, we show Lipschitz continuity of the feature map Φ_σ:
  ‖Φ_σ(F) − Φ_σ(G)‖_{L²(Ω)} ≤ C_σ · d_{W,1}(F, G),   (11)
where F and G denote persistence diagrams that may have been augmented with points on the diagonal, and C_σ is a constant proportional to 1/σ. Note that augmenting diagrams with points on the diagonal does not change the values of Φ_σ, as can be seen from (9). Since the unaugmented persistence diagrams are assumed to be finite, some matching γ between the augmented diagrams achieves the infimum in the definition of the Wasserstein distance d_{W,1}(F, G). The Minkowski inequality and an elementary estimate for the Gaussian finally yield (11).
We refer to the left-hand side of (11) as the persistence scale space distance between F and G. Note that the right-hand side of (11) decreases as σ increases. Adjusting σ accordingly allows one to counteract the influence of noise in the input data, which causes an increase in d_{W,1}. We will see in Sec. 5.3 that tuning σ to the data can be beneficial for the overall performance of machine learning methods.
A natural question arising from Theorem 2 is whether our stability result extends to d_{W,p} for p > 1. To answer this question, we first note that our kernel is additive: we call a kernel k on persistence diagrams additive if k(E ∪ F, G) = k(E, G) + k(F, G) for all diagrams E, F, G. By choosing E = F = ∅, we see that if k is additive then k(∅, G) = 0 for all G. We further say that a kernel is trivial if k(F, G) = 0 for all F, G. The next theorem establishes that Theorem 2 is sharp in the sense that no non-trivial additive kernel can be stable w.r.t. the p-Wasserstein distance when p > 1.
Theorem 3. A non-trivial additive kernel on persistence diagrams is not stable w.r.t. d_{W,p} for any p > 1.
By the non-triviality of k, it can be shown that there exists a diagram F such that k(F, F) > 0. We prove the claim by comparing the rates of growth of d_k and d_{W,p} w.r.t. the number of copies of F. Let nF denote the n-fold union of F with itself. By additivity, k(nF, ∅) = 0, and hence
  d_k(nF, ∅)² = k(nF, nF) = n² · k(F, F),
so d_k(nF, ∅) grows linearly in n. On the other hand,
  d_{W,p}(nF, ∅) = n^{1/p} · d_{W,p}(F, ∅).
Hence, d_k(nF, ∅) cannot be bounded by C · d_{W,p}(nF, ∅) with a constant C if p > 1. ∎
To evaluate the kernel k_σ proposed in Sec. 4, we investigate conceptual differences from persistence landscapes in Sec. 5.1, and then consider its performance in the context of shape classification/retrieval and texture recognition in Sec. 5.2.
5.1 Comparison to persistence landscapes
Bubenik introduced persistence landscapes, a representation of persistence diagrams as functions in a Banach space. This construction was mainly intended for statistical computations, enabled by the vector space structure. Choosing the exponent 2, we can use the resulting Hilbert space structure to construct a kernel k_L analogously to (7). For the purpose of this work, we refer to this kernel as the persistence landscape kernel and denote by Φ_L the corresponding feature map. The kernel-induced distance is denoted by d_L. Bubenik shows stability w.r.t. a weighted version of the Wasserstein distance, which can be summarized as:
Theorem 4 (Bubenik).
For any two persistence diagrams F and G, d_L(F, G) is bounded above by a matching cost in which the distance of each matched pair of points is weighted by a term involving pers(q), the persistence of the point q, and where the infimum is taken over all bijections γ from F to G.
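For comparison, the landscape feature map itself is easy to sketch: the k-th landscape function of a diagram at a parameter t is the k-th largest of the "tent" functions, one per diagram point. The following minimal version (our naming; a crude Riemann-sum discretization of the exact L² inner product k_L) illustrates both the feature map and the induced kernel:

```python
def landscape(D, k, t):
    """k-th persistence landscape function (k = 1, 2, ...) of the
    diagram D, evaluated at t: the k-th largest value among the tent
    functions max(min(t - b, d - t), 0), one tent per point (b, d)."""
    tents = sorted((max(min(t - b, d - t), 0.0) for (b, d) in D),
                   reverse=True)
    return tents[k - 1] if k <= len(tents) else 0.0

def landscape_kernel(D1, D2, grid, kmax=3):
    """Discretized L2 inner product of the landscape feature maps on a
    uniform grid of t values (a crude Riemann sum; the exact kernel
    integrates over all t and sums over all k)."""
    dt = grid[1] - grid[0]
    return dt * sum(landscape(D1, k, t) * landscape(D2, k, t)
                    for k in range(1, kmax + 1) for t in grid)
```

For a single point (0, 2), the first landscape is the tent peaking at value 1 over t = 1, and all higher landscapes vanish; the tent height grows with the persistence of the point, which is the source of the persistence weighting discussed below.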
For the first experiment, let F_t and G_t be two diagrams with one point each, parametrized by t. The two points move away from the diagonal with increasing t, while maintaining the same Euclidean distance to each other. Consequently, d_{W,1} and the persistence scale space distance asymptotically approach a constant as t increases. In contrast, d_L grows unboundedly with t. This means that d_L emphasizes points of high persistence in the diagrams, as reflected by the persistence weighting in Theorem 4.
In the second experiment, we compare persistence diagrams from data samples of two fictive classes A and B, illustrated in Fig. 3. We first consider d_L. As we have seen in the previous experiment, d_L will be dominated by variations in the points of high persistence. Similarly, the bottleneck distance d_{W,∞} will also be dominated by these points as long as their variation is sufficiently large. Hence, instances of classes A and B would be inseparable in a nearest-neighbor setup. In contrast, d_{W,1} and the persistence scale space distance do not over-emphasize points of high persistence and thus allow one to distinguish classes A and B.
5.2 Empirical results
We report results on two vision tasks where persistent homology has already been shown to provide valuable discriminative information: shape classification/retrieval and texture image classification. The purpose of the experiments is not to outperform the state-of-the-art on these problems – which would be rather challenging by exclusively using topological information – but to demonstrate the advantages of k_σ and its induced distance over the persistence landscape kernel k_L and its induced distance d_L.
For shape classification/retrieval, we use the SHREC 2014 benchmark, see Fig. 4. It consists of both synthetic and real shapes, given as 3D meshes. The synthetic part of the data contains meshes of 15 human models (five males, five females, five children) in different poses; the real part contains meshes of 40 human subjects (male and female) in different poses. We use the meshes at full resolution, i.e., without any mesh decimation. For classification, the objective is to distinguish between the different human models, i.e., a 15-class problem for SHREC 2014 (synthetic) and a 40-class problem for SHREC 2014 (real).
For texture recognition, we use the Outex_TC_00000 benchmark, with downsampled versions of the original images. The benchmark provides 100 predefined training/testing splits, and each of the 24 classes is equally represented by 10 images during training and testing.
For shape classification/retrieval, we compute the classic heat kernel signature (HKS) over a range of ten time parameters t_i of increasing value. For each specific choice of t_i, we obtain a piecewise linear function on the surface mesh of each object. As discussed in Sec. 3, we then compute the persistence diagrams of the induced sublevel set filtrations in dimensions 0 and 1.
For texture classification, we compute CLBP descriptors. Results are reported for the rotation-invariant versions of the CLBP-Single (CLBP-S) and the CLBP-Magnitude (CLBP-M) operator with a fixed number of neighbors and fixed radius. Both operators produce a scalar-valued response image, which can be interpreted as a weighted cubical cell complex; its lower star filtration is used to compute persistence diagrams.
For both types of input data, the persistence diagrams are obtained using Dipha, which can directly handle meshes and images. A standard soft-margin C-SVM classifier, as implemented in Libsvm, is used for classification. The cost factor C is tuned using ten-fold cross-validation on the training data. For the kernel k_σ, this cross-validation further includes the kernel scale σ.
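The setup above uses Libsvm directly; as an illustrative sketch (not the paper's code), the same pattern can be reproduced with scikit-learn's SVC, which wraps Libsvm and accepts a precomputed Gram matrix, so any kernel on persistence diagrams plugs in without vectorizing the diagrams:

```python
import numpy as np
from sklearn.svm import SVC

def gram(kernel, X, Y=None):
    """Gram matrix of a kernel on arbitrary objects (e.g. persistence
    diagrams); rows index X, columns index Y (test-vs-train matrices)."""
    Y = X if Y is None else Y
    return np.array([[kernel(x, y) for y in Y] for x in X])

# Toy stand-in kernel on scalars; for persistence diagrams, substitute
# an implementation of k_sigma from Eq. (10).
def k(x, y):
    return float(np.exp(-(x - y) ** 2))

X_train, y_train = [0.0, 0.1, 5.0, 5.1], [0, 0, 1, 1]
X_test = [0.2, 4.9]

clf = SVC(C=1.0, kernel="precomputed")
clf.fit(gram(k, X_train), y_train)            # train-vs-train Gram matrix
pred = clf.predict(gram(k, X_test, X_train))  # test-vs-train Gram matrix
```

Cross-validating C (and, for k_σ, the scale σ) then amounts to recomputing the Gram matrix per candidate σ and scoring each fold.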
5.2.1 Shape classification
Tables 1 and 2 list the classification results for k_σ and k_L on SHREC 2014. All results are averaged over ten cross-validation runs using random 70/30 training/testing splits with a roughly equal class distribution. We report results for one homology dimension only; features of the other dimension lead to comparable performance.
On both real and synthetic data, we observe that k_σ leads to consistent improvements over k_L. For some choices of t_i, the gains are considerable, while in other cases the improvements are relatively small. This can be explained by the fact that varying the HKS time t_i essentially varies the smoothness of the input data. The scale σ in k_σ allows us to compensate, at the classification stage, for unfavorable smoothness settings to a certain extent, see Sec. 4. In contrast, k_L does not have this capability and essentially relies on suitably preprocessed input data. For some choices of t_i, k_L does in fact lead to classification accuracies close to those of k_σ. However, when using k_L, we have to carefully adjust the HKS time parameter, corresponding to changes in the input data. This is undesirable in most situations, since HKS computation for meshes with a large number of vertices can be quite time-consuming, and sometimes we might not even have access to the meshes directly. The improved classification rates for k_σ indicate that using the additional degree of freedom σ is in fact beneficial for performance.
5.2.2 Shape retrieval
In addition to the classification experiments, we report on shape retrieval performance using standard evaluation measures (see [27, 24]). This allows us to assess the behavior of the kernel-induced distances, i.e., the persistence scale space distance and d_L.
For brevity, only the nearest-neighbor performance is listed in Table 3 (for a listing of all measures, see Appendix D). Using each shape as a query shape once, nearest-neighbor performance measures how often the top-ranked shape in the retrieval result belongs to the same class as the query. To study the effect of tuning the scale σ, an additional column lists the maximum nearest-neighbor performance that can be achieved over a range of scales.
As we can see, the results are similar to those of the classification experiment. However, at a few specific settings of the HKS time t, d_L performs on par with, or better than, the persistence scale space distance. As noted in Sec. 5.2.1, this can be explained by the changes in the smoothness of the input data induced by different HKS times t. Another observation is that the nearest-neighbor performance of d_L is quite unstable around the top result with respect to t. For example, it drops from 91% to 53.3% and 76.7% at the neighboring settings of t on SHREC 2014 (synthetic), and from 70% to 45.2% and 43.5% on SHREC 2014 (real). In contrast, the persistence scale space distance exhibits stable performance around the optimal t.
To put these results into context with existing works in shape retrieval, Table 3 also lists the top three entries (out of 22) of the SHREC 2014 contest on the same benchmark. On both real and synthetic data, our approach ranks among the top five entries. This indicates that topological persistence alone is a rich source of discriminative information for this particular problem. In addition, since we only assess one HKS time parameter at a time, performance could potentially be improved by more elaborate fusion strategies.
5.3 Texture recognition
For texture recognition, all results are averaged over the 100 training/testing splits of the Outex_TC_00000 benchmark. Table 4 lists the performance of an SVM classifier using k_σ and k_L for 0-dimensional features (i.e., connected components). Higher-dimensional features were not informative for this problem. For comparison, Table 4 also lists the performance of an SVM trained on normalized histograms of CLBP-S/M responses, using a standard histogram kernel.
First, from Table 4, it is evident that k_σ performs better than k_L by a large margin, with gains of up to 11% in accuracy. Second, it is also apparent that, for this problem, topological information alone is not competitive with SVMs using simple orderless operator-response histograms. However, previously reported results show that a combination of persistence information (using persistence landscapes) with conventional bag-of-feature representations leads to state-of-the-art performance. While this indicates the complementary nature of topological features, it also suggests that kernel combinations (e.g., via multiple-kernel learning) could lead to even greater gains by including the proposed kernel k_σ.
To assess the stability of the (customary) cross-validation strategy for selecting a specific σ, Fig. 5 illustrates classification performance as a function of σ. Given the smoothness of the performance curve, it seems unlikely that parameter selection via cross-validation will be sensitive to a specific discretization of the search range for σ.
Finally, we remark that tuning the smoothness of the input data has the same drawbacks here as in the shape classification experiments. While, in principle, we could smooth the textures, smooth the CLBP response images, or even tweak the radius of the CLBP operators, all these strategies would require changes at the beginning of the processing pipeline. In contrast, adjusting the scale σ in k_σ is done at the end of the pipeline, during classifier training.
We have shown, both theoretically and empirically, that the proposed kernel k_σ exhibits good behavior for tasks like shape classification or texture recognition using an SVM. Moreover, the ability to tune its scale parameter σ has proven beneficial in practice.
One possible direction for future work would be to address computational bottlenecks in order to enable application in large-scale scenarios. This could include leveraging additivity and stability in order to approximate the value of the kernel within given error bounds, in particular by reducing the number of distinct points in the summation of (10).
While the 1-Wasserstein distance is well established and has proven useful in applications, we hope to improve the understanding of stability for persistence diagrams w.r.t. the Wasserstein distance beyond the previous estimates. Such a result would extend the stability of our kernel from persistence diagrams to the underlying data, leading to a full stability proof for topological machine learning.
In summary, our method enables the use of topological information in all kernel-based machine learning methods. It will therefore be interesting to see which other application areas will profit from topological machine learning.
-  A. Adcock, E. Carlsson, and G. Carlsson. The Ring of Algebraic Functions on Persistence Bar Codes. arXiv, available at http://arxiv.org/abs/1304.0530, 2013.
-  R. Bapat and T. Raghavan. Nonnegative Matrices and Applications. Cambridge University Press, 1997.
-  U. Bauer, M. Kerber, and J. Reininghaus. Distributed computation of persistent homology. In ALENEX, 2014.
-  C. Berg, J.-P. Reus-Christensen, and P. Ressel. Harmonic Analysis on Semi-Groups – Theory of Positive Definite and Related Functions. Springer, 1984.
-  P. Bubenik. Statistical topological data analysis using persistence landscapes. arXiv, available at http://arxiv.org/abs/1207.6437, 2012.
-  G. Carlsson. Topology and data. Bull. Amer. Math. Soc., 46:255–308, 2009.
-  C.-C. Chang and C.-J. Lin. LIBSVM: A library for support vector machines. ACM TIST, 2(3):1–27, 2011.
-  F. Chazal, L. Guibas, S. Oudot, and P. Skraba. Persistence-based clustering in Riemannian manifolds. In SoSG, 2011.
-  C. Chen, D. Freedman, and C. Lampert. Enforcing topological constraints in random field image segmentation. In CVPR, 2013.
-  M. Chung, P. Bubenik, and P. Kim. Persistence diagrams of cortical surface data. In IPMI, 2009.
-  D. Cohen-Steiner, H. Edelsbrunner, and J. Harer. Stability of persistence diagrams. Discrete Comp. Geom., 37(1):103–120, 2007.
-  D. Cohen-Steiner, H. Edelsbrunner, J. Harer, and Y. Mileyko. Lipschitz functions have -stable persistence. Found. Comput. Math., 10(2):127–139, 2010.
-  H. Edelsbrunner and J. Harer. Computational Topology. An Introduction. AMS, 2010.
-  M. Gao, C. Chen, S. Zhang, Z. Qian, D. Metaxas, and L. Axel. Segmenting the papillary muscles and the trabeculae from high resolution cardiac CT through restoration of topological handles. In IPMI, 2013.
-  M. Gönen and E. Alpaydin. Multiple kernel learning algorithms. J. Mach. Learn. Res., 12:2211–2268, 2011.
-  Z. Guo, L. Zhang, and D. Zhang. A completed modeling of local binary pattern operator for texture classification. IEEE TIP, 19(6):1657–1663, 2010.
-  T. Iijima. Basic theory on normalization of a pattern (in case of typical one-dimensional pattern). Bulletin of Electrical Laboratory, 26:368–388, 1962.
-  R. J. Iório Jr. and V. de Magalhães Iório. Fourier Analysis and Partial Differential Equations. Cambridge Stud. Adv. Math., 2001.
-  A. Krizhevsky, I. Sutskever, and G. Hinton. ImageNet classification with deep convolutional neural networks. In NIPS, 2012.
-  C. Li, M. Ovsjanikov, and F. Chazal. Persistence-based structural recognition. In CVPR, 2014.
-  D. Lowe. Distinctive image features from scale-invariant keypoints. IJCV, 60(2):91–110, 2004.
-  T. Ojala, T. Mäenpää, M. Pietikäinen, J. Viertola, J. Kyllönen, and S. Huovinen. Outex – new framework for empirical evaluation of texture analysis algorithms. In ICPR, 2002.
-  D. Pachauri, C. Hinrichs, M. Chung, S. Johnson, and V. Singh. Topology-based kernels with application to inference problems in Alzheimer’s disease. IEEE TMI, 30(10):1760–1770, 2011.
-  D. Pickup et al. SHREC ’14 track: Shape retrieval of non-rigid 3D human models. In Proceedings of the 7th Eurographics Workshop on 3D Object Retrieval, EG 3DOR’14. Eurographics Association, 2014.
-  B. Schölkopf. The kernel-trick for distances. In NIPS, 2001.
-  B. Schölkopf and A. J. Smola. Learning with Kernels: Support Vector Machines, Regularization, Optimization, and Beyond. MIT Press, Cambridge, MA, USA, 2001.
-  P. Shilane, P. Min, M. Kazhdan, and T. Funkhouser. The Princeton shape benchmark. In Shape Modeling International, 2004.
-  P. Skraba, M. Ovsjanikov, F. Chazal, and L. Guibas. Persistence-based segmentation of deformable shapes. In CVPR Workshop on Non-Rigid Shape Analysis and Deformable Image Alignment, 2010.
-  J. Sun, M. Ovsjanikov, and L. Guibas. A concise and provably informative multi-scale signature based on heat diffusion. In SGP, 2009.
-  H. Wagner, C. Chen, and E. Vuçini. Efficient computation of persistent homology for cubical data. In Topological Methods in Data Analysis and Visualization II, Mathematics and Visualization, pages 91–106. Springer Berlin Heidelberg, 2012.
Appendix A Indefiniteness of the Wasserstein distance
It is tempting to try to employ the Wasserstein distance for constructing a kernel on persistence diagrams. For instance, in Euclidean space, the negative squared distance −‖x − y‖² is conditionally positive definite and can be used within SVMs. Hence, the question arises whether the Wasserstein distance can be used in a similar way.
In the following, we demonstrate (via counterexamples) that the Wasserstein distance – for different choices of p – does not give rise to (conditionally) positive definite kernels. Thus, it cannot be directly employed in kernel-based learning techniques.
First, we briefly repeat some definitions to establish the terminology; this is done to avoid potential confusion, w.r.t. references [4, 2, 26], about what is referred to as (conditional) positive/negative definiteness in the context of kernel functions.
A symmetric matrix $A \in \mathbb{R}^{n \times n}$ is called positive definite (p.d.) if $x^\top A x \geq 0$ for all $x \in \mathbb{R}^n$. A symmetric matrix $A$ is called negative definite (n.d.) if $x^\top A x \leq 0$ for all $x \in \mathbb{R}^n$.
Note that in the literature on linear algebra, the notion of definiteness as introduced above is typically known as semidefiniteness. For the sake of brevity, the prefix “semi” is typically dropped in the kernel literature.
A symmetric matrix $A \in \mathbb{R}^{n \times n}$ is called conditionally positive definite (c.p.d.) if $x^\top A x \geq 0$ for all $x \in \mathbb{R}^n$ s.t. $\sum_{i=1}^{n} x_i = 0$. A symmetric matrix $A$ is called conditionally negative definite (c.n.d.) if $x^\top A x \leq 0$ for all $x \in \mathbb{R}^n$ s.t. $\sum_{i=1}^{n} x_i = 0$.
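As a quick numerical illustration of conditional positive definiteness (our example, not part of the paper): for points $x_1, \dots, x_n$ in Euclidean space, the matrix with entries $-\|x_i - x_j\|_2$ is known to be c.p.d., i.e., its quadratic form is nonnegative on vectors whose entries sum to zero. A minimal sketch, assuming NumPy is available:

```python
import numpy as np

# Illustration (ours): the matrix A with entries A_ij = -||x_i - x_j||_2
# is conditionally positive definite, so c^T A c >= 0 whenever sum(c) == 0.
rng = np.random.default_rng(0)
X = rng.standard_normal((10, 3))                      # 10 random points in R^3
A = -np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)

for _ in range(200):
    c = rng.standard_normal(10)
    c -= c.mean()                                     # enforce sum(c) == 0
    assert c @ A @ c >= -1e-9                         # c.p.d. quadratic form
print("quadratic form nonnegative on all sampled sum-zero vectors")
```

Note that $A$ itself is not p.d. (its diagonal is zero while off-diagonal entries are negative), which is exactly why the weaker conditional notion is the relevant one here.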
Given a set $X$, a function $k: X \times X \to \mathbb{R}$ is a positive definite kernel if there exists a Hilbert space $\mathcal{H}$ and a map $\Phi: X \to \mathcal{H}$ such that $k(x, y) = \langle \Phi(x), \Phi(y) \rangle_{\mathcal{H}}$.
Typically, a positive definite kernel is simply called a kernel. Roughly speaking, the utility of p.d. kernels comes from the fact that they enable the “kernel-trick”, i.e., the use of algorithms that can be formulated in terms of dot products in an implicit feature space $\mathcal{H}$. However, as shown by Schölkopf, this “kernel-trick” also works for distances, leading to the larger class of c.p.d. kernels (see Definition 5), which can be used in kernel-based algorithms that are translation-invariant (e.g., SVMs or kernel PCA).
A function $k: X \times X \to \mathbb{R}$ is a (conditionally) positive (negative, resp.) definite kernel if and only if $k$ is symmetric and, for every finite subset $\{x_1, \dots, x_n\} \subseteq X$, the Gram matrix $\left(k(x_i, x_j)\right)_{i,j}$ is (conditionally) positive (negative, resp.) definite.
To demonstrate that a function is not c.p.d. or c.n.d., resp., we can look at the eigenvalues of the corresponding Gram matrices. In fact, it is known that a matrix is p.d. if and only if all its eigenvalues are nonnegative. The following lemmas give similar, but weaker, results for (nonnegative) c.n.d. matrices, which will be useful to us.
Lemma 5 (cf. Lemma 4.1.4).
If $A$ is a c.n.d. matrix, then $A$ has at most one positive eigenvalue.
Corollary 1 (cf. Corollary 4.1.5).
Let $A$ be a nonnegative, nonzero matrix that is c.n.d. Then $A$ has exactly one positive eigenvalue.
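Corollary 1 can be observed numerically. As an illustration (ours, not from the paper): the squared Euclidean distance matrix of a generic point set is nonnegative, nonzero, and known to be c.n.d., so it should exhibit exactly one positive eigenvalue:

```python
import numpy as np

# Illustration (ours): the squared Euclidean distance matrix D2 with entries
# ||x_i - x_j||^2 is nonnegative and c.n.d.; by Corollary 1 it therefore has
# exactly one positive eigenvalue.
rng = np.random.default_rng(1)
X = rng.standard_normal((8, 2))
D2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)

eigvals = np.linalg.eigvalsh(D2)              # real eigenvalues (symmetric matrix)
n_pos = int(np.sum(eigvals > 1e-9))
print(n_pos)                                  # exactly one positive eigenvalue
```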
The following theorem establishes a relation between c.n.d. and p.d. kernels.
Theorem 6 (cf. Chapter 2, §2, Theorem 2.2).
Let $X$ be a nonempty set and let $k: X \times X \to \mathbb{R}$ be symmetric. Then $k$ is a conditionally negative definite kernel if and only if $e^{-t k}$ is a positive definite kernel for all $t > 0$.
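A classical instance of Theorem 6 (our illustration): the Euclidean distance $\|x - y\|_2$ is a c.n.d. kernel, so $e^{-t \|x - y\|_2}$ (the Laplacian kernel) is p.d. for every $t > 0$; numerically, its Gram matrices have no negative eigenvalues beyond rounding error:

```python
import numpy as np

# Illustration (ours): since ||x - y||_2 is c.n.d., Theorem 6 guarantees that
# exp(-t ||x - y||_2) is p.d. for every t > 0. We check the eigenvalues of a
# few Gram matrices built from random points.
rng = np.random.default_rng(2)
X = rng.standard_normal((12, 4))
D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)

for t in (0.1, 1.0, 10.0):
    K = np.exp(-t * D)
    assert np.linalg.eigvalsh(K).min() >= -1e-9   # p.d. up to rounding
print("Gram matrices of exp(-t*d) are positive (semi-)definite")
```

The counterexamples below exploit the converse direction: once $d_{W,p}$ is shown not to be c.n.d., the exponentiated kernel cannot be p.d.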
In the code (test_negative_type_simple.m, available at https://gist.github.com/rkwitt/4c1e235d702718a492d3; the file options_cvpr15.mat can be found at http://www.rkwitt.org/media/files/options_cvpr15.mat), we generate simple examples for which the Gram matrix $\left(d_{W,p}(F_i, F_j)\right)_{i,j}$ – for various choices of $p$ – has at least two positive and two negative eigenvalues. Thus, it is neither (c.)n.d. nor (c.)p.d. according to Corollary 1. Consequently, the function $e^{-t\,d_{W,p}}$ is not p.d. either, by virtue of Theorem 6. To run the Matlab code, simply execute test_negative_type_simple.
This will generate a short summary of the eigenvalue computations for a selection of values for $p$, including $p = \infty$ (bottleneck distance).
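For readers without Matlab, the same kind of eigenvalue check can be sketched in Python. The diagrams below are our own toy examples (not the counterexamples shipped in options_cvpr15.mat), and the Wasserstein computation follows the standard diagonal-augmentation construction with the $\ell_\infty$ ground distance:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def wasserstein(F, G, p=1.0):
    """p-Wasserstein distance between diagrams F, G (arrays of (birth, death));
    points may also be matched to their projections onto the diagonal."""
    n, m = len(F), len(G)
    projF = (F[:, 1] - F[:, 0]) / 2.0          # l_inf distance to the diagonal
    projG = (G[:, 1] - G[:, 0]) / 2.0
    C = np.zeros((n + m, n + m))
    C[:n, :m] = np.max(np.abs(F[:, None, :] - G[None, :, :]), axis=-1) ** p
    C[:n, m:] = projF[:, None] ** p            # match a point of F to the diagonal
    C[n:, :m] = projG[None, :] ** p            # match a point of G to the diagonal
    rows, cols = linear_sum_assignment(C)      # optimal matching
    return C[rows, cols].sum() ** (1.0 / p)

diagrams = [np.array([[0.0, 1.0]]), np.array([[0.0, 2.0]]),
            np.array([[1.0, 3.0], [0.0, 0.5]]), np.array([[2.0, 4.0]])]
D = np.array([[wasserstein(F, G) for G in diagrams] for F in diagrams])
print(np.linalg.eigvalsh(D))                   # inspect the eigenvalue signs
```

Whether this particular Gram matrix exhibits the indefinite sign pattern depends on the chosen diagrams; the Matlab script referenced above constructs examples where it provably does.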
Remark. While our simple counterexamples suggest that typical kernel constructions using $d_{W,p}$ for different choices of $p$ (including $p = \infty$) do not lead to (c.)p.d. kernels, a formal assessment of this question remains open.
Appendix B Plots of the feature map $\Phi_\sigma$
Given a persistence diagram $D$, we consider the solution $u: \Omega \times \mathbb{R}_{\geq 0} \to \mathbb{R}$ of the following partial differential equation:
$$\Delta_x u = \partial_t u \ \text{ in } \Omega \times \mathbb{R}_{>0}, \qquad u = 0 \ \text{ on } \partial\Omega \times \mathbb{R}_{\geq 0}, \qquad u = \sum_{p \in D} \delta_p \ \text{ on } \Omega \times \{0\},$$
where $\Omega = \{(x_1, x_2) \in \mathbb{R}^2 : x_2 \geq x_1\}$ and $\delta_p$ denotes a Dirac delta centered at $p$.
To solve the partial differential equation, we extend the domain from $\Omega$ to $\mathbb{R}^2$ and consider for each $p = (a, b) \in D$ a Dirac delta $\delta_p$ and a negative Dirac delta $-\delta_{\bar{p}}$ at the mirrored point $\bar{p} = (b, a)$, as illustrated in Fig. 6 (left). By convolving with a Gaussian kernel, see Fig. 6 (right), we obtain a solution for the following partial differential equation:
$$\Delta_x u = \partial_t u \ \text{ in } \mathbb{R}^2 \times \mathbb{R}_{>0}, \qquad u = \sum_{p \in D} \delta_p - \delta_{\bar{p}} \ \text{ on } \mathbb{R}^2 \times \{0\}.$$
Restricting the solution to $\Omega$, we then obtain the following solution
$$u(x, t) = \frac{1}{4\pi t} \sum_{p \in D} e^{-\frac{\|x - p\|^2}{4t}} - e^{-\frac{\|x - \bar{p}\|^2}{4t}}$$
for the original partial differential equation and $t > 0$. This yields the feature map $\Phi_\sigma: \mathcal{D} \to L_2(\Omega)$:
$$\Phi_\sigma(D) = u|_{t=\sigma}.$$
In Fig. 7, we illustrate the effect of an increasing scale $\sigma$ on the feature map $\Phi_\sigma$. Note that in the right plot the influence of the low-persistence point close to the diagonal essentially vanishes. This effect is due to the Dirichlet boundary condition and is responsible for the stability of our persistence scale-space kernel $k_\sigma$.
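The restricted solution can also be evaluated directly, e.g., to reproduce such plots on a grid. The following sketch (our code, with an arbitrary example diagram) implements $\Phi_\sigma(D)$ pointwise and illustrates that it vanishes on the diagonal, in accordance with the Dirichlet boundary condition:

```python
import numpy as np

def feature_map(D, x, sigma):
    """Evaluate Phi_sigma(D) at the rows of x: Gaussians centered at the
    diagram points minus Gaussians centered at the mirrored points (b, a)."""
    Dbar = D[:, ::-1]                                     # mirror across the diagonal
    sq = lambda P: np.sum((x[:, None, :] - P[None, :, :]) ** 2, axis=-1)
    g = np.exp(-sq(D) / (4 * sigma)) - np.exp(-sq(Dbar) / (4 * sigma))
    return g.sum(axis=1) / (4 * np.pi * sigma)

D = np.array([[0.2, 1.5], [0.4, 0.5]])                    # example diagram (ours)
diag = np.array([[t, t] for t in np.linspace(-1, 2, 7)])  # points on the diagonal
print(feature_map(D, diag, sigma=0.5))                    # ~0 on the diagonal
```

Since $\|x - p\| = \|x - \bar{p}\|$ for any $x$ on the diagonal, the two Gaussian terms cancel there exactly.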
Appendix C Closed-form solution for $k_\sigma$
For two persistence diagrams $F$ and $G$, the persistence scale-space kernel is defined as $k_\sigma(F, G) = \langle \Phi_\sigma(F), \Phi_\sigma(G) \rangle_{L_2(\Omega)}$, which is
$$k_\sigma(F, G) = \int_\Omega u_F(x, \sigma)\, u_G(x, \sigma)\, dx,$$
where $u_F$ and $u_G$ denote the solutions from Appendix B associated with $F$ and $G$, respectively.
By extending its domain from $\Omega$ to $\mathbb{R}^2$, we see that $u(\bar{x}, t) = -u(x, t)$ for all $x \in \mathbb{R}^2$. Hence, $u_F(\bar{x}, \sigma)\, u_G(\bar{x}, \sigma) = u_F(x, \sigma)\, u_G(x, \sigma)$ for all $x \in \mathbb{R}^2$, and we obtain
$$k_\sigma(F, G) = \frac{1}{2} \int_{\mathbb{R}^2} u_F(x, \sigma)\, u_G(x, \sigma)\, dx.$$
We calculate the integrals as follows:
$$\int_{\mathbb{R}^2} e^{-\frac{\|x - p\|^2}{4\sigma}}\, e^{-\frac{\|x - q\|^2}{4\sigma}}\, dx = e^{-\frac{\|p - q\|^2}{8\sigma}} \int_{\mathbb{R}^2} e^{-\frac{\|x\|^2}{2\sigma}}\, dx = 2\pi\sigma\, e^{-\frac{\|p - q\|^2}{8\sigma}}.$$
In the first step, we applied a coordinate transform that moves the midpoint $\frac{p+q}{2}$ to the origin. In the second step, we performed a rotation such that $q$ lands on the positive $x_1$-axis at distance $\frac{\|p - q\|}{2}$ to the origin and we applied Fubini’s theorem. We finally obtain the closed-form expression for the kernel as:
$$k_\sigma(F, G) = \frac{1}{8\pi\sigma} \sum_{p \in F,\, q \in G} e^{-\frac{\|p - q\|^2}{8\sigma}} - e^{-\frac{\|p - \bar{q}\|^2}{8\sigma}}.$$
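The closed-form expression is straightforward to implement. A minimal sketch (ours), representing each diagram as an array of (birth, death) pairs:

```python
import numpy as np

def pss_kernel(F, G, sigma):
    """Closed-form persistence scale-space kernel k_sigma(F, G)."""
    Gbar = G[:, ::-1]                                     # mirrored points (b, a)
    sq = lambda P, Q: np.sum((P[:, None, :] - Q[None, :, :]) ** 2, axis=-1)
    k = np.exp(-sq(F, G) / (8 * sigma)) - np.exp(-sq(F, Gbar) / (8 * sigma))
    return k.sum() / (8 * np.pi * sigma)

F = np.array([[0.0, 1.0]])                                # example diagrams (ours)
G = np.array([[0.0, 1.0], [0.5, 2.0]])
print(pss_kernel(F, G, sigma=1.0), pss_kernel(G, F, sigma=1.0))  # symmetric
```

Since $\|p - \bar{q}\| = \|\bar{p} - q\|$, the expression is symmetric in $F$ and $G$, as required of a kernel.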
Appendix D Additional retrieval results on SHREC 2014