# On the Metric Distortion of Embedding Persistence Diagrams into separable Hilbert spaces

Persistence diagrams are important descriptors in Topological Data Analysis. Due to the nonlinearity of the space of persistence diagrams equipped with their diagram distances, most of the recent attempts at using persistence diagrams in machine learning have been done through kernel methods, i.e., embeddings of persistence diagrams into Reproducing Kernel Hilbert Spaces, in which all computations can be performed easily. Since persistence diagrams enjoy theoretical stability guarantees for the diagram distances, the metric properties of the feature map, i.e., the relationship between the Hilbert distance and the diagram distances, are of central interest for understanding if the persistence diagram guarantees carry over to the embedding. In this article, we study the possibility of embedding persistence diagrams into separable Hilbert spaces, with bi-Lipschitz maps. In particular, we show that for several stable embeddings into infinite-dimensional Hilbert spaces defined in the literature, any lower bound must depend on the cardinalities of the persistence diagrams, and that when the Hilbert space is finite dimensional, finding a bi-Lipschitz embedding is impossible, even when restricting the persistence diagrams to have bounded cardinalities.


## 1 Introduction

The increase of available data in both academia and industry has been exponential over the past few decades, making data analysis ubiquitous in many different fields of science. Machine learning has proved to be one of the most prominent fields of data science, leading to astounding results in various applications, such as image and signal processing. Topological Data Analysis (TDA) [CAR09] is one specific field of machine learning, which focuses on complex rather than big data. The general assumption of TDA is that data is actually sampled from geometric or low-dimensional domains, whose geometric features are relevant to the analysis. These geometric features are usually encoded in a mathematical object called the persistence diagram, which is roughly a set of points in the plane, each point representing a topological feature whose size is encoded in the coordinates of the point. Persistence diagrams have been shown to bring information complementary to other traditional descriptors in many different applications, often leading to substantial improvements in the results. This is also due to the so-called stability properties of persistence diagrams, which state that persistence diagrams computed on similar data are also very close in the diagram distances [CEH07, BL15, CdG+16].

Unfortunately, the use of persistence diagrams in machine learning methods is not straightforward, since many algorithms expect data to be Euclidean vectors, while persistence diagrams are sets of points with possibly different cardinalities. Moreover, the diagram distances used to compare persistence diagrams are computed with optimal matchings, and are thus quite different from Euclidean metrics. The usual way to cope with such difficult data is to use kernel methods. A kernel is a symmetric function on the data whose evaluation on a pair of data points equals the scalar product of the images of these points under a feature map into a Hilbert space, called the Reproducing Kernel Hilbert Space of the kernel. Many algorithms, such as PCA and SVM, can be kernelized, allowing one to handle non-Euclidean data as soon as a kernel or a feature map is available.

Hence, the question of defining a feature map into a Hilbert space has been intensively studied in the past few years, and, as of today, various methods are available, mapping into either finite or infinite dimensional Hilbert spaces [BUB15, COO15, RHB+15, KFH16, AEK+17, CCO17, HKN+17]. Since persistence diagrams are known to enjoy stability properties, it is natural to ask for the same guarantee for their embeddings. Indeed, all feature maps defined in the literature satisfy a stability property stating that the Hilbert distance between the images of two persistence diagrams is upper bounded by their diagram distance. A more difficult question is whether a lower bound also holds. Even though one attempt has already been made to show such a lower bound for the so-called Sliced Wasserstein distance in [CCO17], the question remains open in general.

#### Contributions.

In this article, we tackle the general question of defining bi-Lipschitz embeddings of persistence diagrams into separable Hilbert spaces. More precisely, we show that:

• For several stable feature maps defined in the literature, if such a bi-Lipschitz embedding exists, then the lower bound goes to 0 or the upper bound goes to +∞ as the number of points and their coordinates increase in the persistence diagrams (Theorem 3.5 and Proposition 3.9).

• Such a bi-Lipschitz embedding does not exist if the Hilbert space is finite dimensional (Theorem 4.4).

Finally, we also provide experimental evidence of this behavior by computing the metric distortions of various feature maps for persistence diagrams with increasing cardinalities.

#### Related work.

Feature maps for persistence diagrams can be classified into two different classes, depending on whether the corresponding Hilbert space is finite or infinite dimensional.

In the infinite dimensional case, the first attempt was the one proposed in [BUB15], in which persistence diagrams are turned into functions, called Landscapes, by computing the homological rank functions given by the persistence diagram points. Another common way to define a feature map is to see the points of the persistence diagrams as centers of Gaussians with a fixed bandwidth, weighted by the distance of the point to the diagonal. This is the approach originally advocated in [RHB+15], and later generalized in [KFH18], leading to the so-called Persistence Scale Space and Persistence Weighted Gaussian feature maps. Another possibility is to define a Gaussian-like feature map by using the Sliced Wasserstein distance between persistence diagrams, which is conditionally negative definite. This implicit feature map, called the Sliced Wasserstein map, was defined in [CCO17].

In the finite dimensional case, many different possibilities are available. One may consider evaluating a family of tropical polynomials on the persistence diagram [KAL18], taking the sorted vector of the pairwise distances between the persistence diagram points [COO15], or computing the coefficients of a complex polynomial whose roots are given by the persistence diagram points [DF15]. Another line of work was proposed in [AEK+17] by discretizing the Persistence Scale Space feature map: the plane is discretized into a fixed grid, and a value is computed for each pixel by integrating Gaussian functions centered on the persistence diagram points. Finally, persistence diagrams have been incorporated into deep learning frameworks in [HKN+17], in which Gaussian functions (whose means and variances are optimized by the neural network during training) are integrated against persistence diagrams seen as discrete measures.

## 2 Background

### 2.1 Persistence Diagrams

Persistent homology is a technique of TDA coming from algebraic topology that allows the user to compute and encode topological information of datasets in a compact descriptor called the persistence diagram. Given a dataset $X$, often given in the form of a point cloud in $\mathbb{R}^d$, and a continuous and real-valued function $f : X \to \mathbb{R}$, the persistence diagram of $f$ can be computed under mild conditions (the function has to be tame, see [CdG+16] for more details), and consists in a finite set of points with multiplicities in the upper-diagonal half-plane $\Omega = \{(x, y) \in \mathbb{R}^2 : y > x\}$. This set of points is computed from the family of sublevel sets of $f$, that is the sets of the form $f^{-1}((-\infty, \alpha])$, for some $\alpha \in \mathbb{R}$. More precisely, persistence diagrams encode the different topological events that occur as $\alpha$ increases from $-\infty$ to $+\infty$. Such topological events include creation and merging of connected components and cycles in every dimension; see Figure 1. Intuitively, persistent homology records, for each topological feature that appears in the family of sublevel sets, the value $\alpha_b$ at which the feature appears, called the birth value, and the value $\alpha_d$ at which it gets merged or filled in, called the death value. These values are then used as coordinates for a corresponding point $(\alpha_b, \alpha_d)$ in the persistence diagram. Note that several features may have the same birth and death values, so points in the persistence diagram have multiplicities. Moreover, since $\alpha_b < \alpha_d$, these points are always located above the diagonal $\Delta = \{(x, x) : x \in \mathbb{R}\}$. A general intuition about persistence diagrams is that the distance of a point to $\Delta$ is a direct measure of its relevance: if a point is close to $\Delta$, it means that the corresponding cycle got filled in right after its appearance, thus suggesting that it is likely due to noise in the dataset. On the contrary, points that are far away from $\Delta$ represent cycles with a significant life span, and are more likely to be relevant for the analysis. We refer the interested reader to [EH10, OUD15] for more details about persistent homology.

#### Notation.

Let $\mathcal{D}$ be the space of persistence diagrams with a countable number of points. More formally, $\mathcal{D}$ can be equivalently defined as a space of functions $m : \Omega \to \mathbb{N}$, where each $p \in \Omega$ with $m(p) \ge 1$ is a point of the corresponding persistence diagram with multiplicity $m(p)$. Let $\mathcal{D}^N$ be the space of persistence diagrams with at most $N$ points, i.e., $\mathcal{D}^N = \{\mathrm{Dg} \in \mathcal{D} : \mathrm{card}(\mathrm{Dg}) \le N\}$. Let $\mathcal{D}_L$ be the space of persistence diagrams included in $[-L, L]^2$, i.e., $\mathcal{D}_L = \{\mathrm{Dg} \in \mathcal{D} : \|p\|_\infty \le L \text{ for all } p \in \mathrm{Dg}\}$. Finally, let $\mathcal{D}^N_L$ be the space of persistence diagrams with at most $N$ points included in $[-L, L]^2$, i.e., $\mathcal{D}^N_L = \mathcal{D}^N \cap \mathcal{D}_L$. Obviously, we have the following sequences of (strict) inclusions: $\mathcal{D}^N_L \subset \mathcal{D}^N \subset \mathcal{D}$ and $\mathcal{D}^N_L \subset \mathcal{D}_L \subset \mathcal{D}$.

#### Diagram distances.

Persistence diagrams can be efficiently compared using the diagram distances, a family of distances parametrized by an integer $p \ge 1$ that rely on the computation of partial matchings. Recall that two persistence diagrams $\mathrm{Dg}_1$ and $\mathrm{Dg}_2$ may have different numbers of points. A partial matching between $\mathrm{Dg}_1$ and $\mathrm{Dg}_2$ is a subset $\Gamma$ of $\mathrm{Dg}_1 \times \mathrm{Dg}_2$ in which each point of $\mathrm{Dg}_1$ and each point of $\mathrm{Dg}_2$ appear at most once. It comes along with $\Gamma_1$ (resp. $\Gamma_2$), which is the set of points of $\mathrm{Dg}_1$ (resp. $\mathrm{Dg}_2$) that are not matched to a point of $\mathrm{Dg}_2$ (resp. $\mathrm{Dg}_1$) by $\Gamma$. The $p$-cost of $\Gamma$ is given as:

$$c_p(\Gamma) = \sum_{(p,q)\in\Gamma} \|p - q\|_\infty^p + \sum_{p\in\Gamma_1} \|p - \Delta\|_\infty^p + \sum_{q\in\Gamma_2} \|q - \Delta\|_\infty^p.$$

The $p$-diagram distance is then defined as the cost of the best partial matching:

###### Definition 2.1.

Given two persistence diagrams $\mathrm{Dg}_1$ and $\mathrm{Dg}_2$, the $p$-diagram distance $d_p$ is defined as:

$$d_p(\mathrm{Dg}_1, \mathrm{Dg}_2) = \inf_\Gamma\ \sqrt[p]{c_p(\Gamma)}.$$

Note that in the literature, these distances are often called the Wasserstein distances between persistence diagrams. Here, we follow the denomination of [CCO17]. In particular, taking maxima instead of sums in the definition of the cost,

$$c_\infty(\Gamma) = \max\Big\{\max_{(p,q)\in\Gamma} \|p - q\|_\infty,\ \max_{p\in\Gamma_1} \|p - \Delta\|_\infty,\ \max_{q\in\Gamma_2} \|q - \Delta\|_\infty\Big\}$$

allows one to add one more distance to the family: the bottleneck distance $d_\infty$.
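For intuition, the $p$-diagram distance between finite diagrams can be computed exactly by solving an assignment problem on diagrams augmented with "diagonal slots". The following Python sketch is our own illustration (not code from the paper), using the standard reduction with the $\ell_\infty$ ground metric and the fact that a point $(b, d)$ is at $\ell_\infty$-distance $(d - b)/2$ from the diagonal:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def diagram_distance(dg1, dg2, p=1):
    """p-diagram distance between two finite diagrams (arrays of (b, d) rows).

    Each diagram is augmented with one diagonal slot per point of the other
    diagram, so that unmatched points can be sent to the diagonal; the optimal
    partial matching then becomes a balanced assignment problem.
    """
    dg1 = np.asarray(dg1, dtype=float).reshape(-1, 2)
    dg2 = np.asarray(dg2, dtype=float).reshape(-1, 2)
    n1, n2 = len(dg1), len(dg2)
    cost = np.zeros((n1 + n2, n2 + n1))
    # point-to-point costs: l_inf distance, raised to the power p
    cost[:n1, :n2] = np.max(np.abs(dg1[:, None] - dg2[None, :]), axis=2) ** p
    # point-to-diagonal costs: ||p - Delta||_inf = (d - b) / 2
    cost[:n1, n2:] = (((dg1[:, 1] - dg1[:, 0]) / 2) ** p)[:, None]
    cost[n1:, :n2] = (((dg2[:, 1] - dg2[:, 0]) / 2) ** p)[None, :]
    # diagonal-to-diagonal matches are free (cost already 0)
    rows, cols = linear_sum_assignment(cost)
    return cost[rows, cols].sum() ** (1.0 / p)
```

For instance, `diagram_distance([[0, 2]], [[0, 4]])` returns 2: matching the two points directly (cost 2) beats sending both to the diagonal (cost 1 + 2 = 3).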

#### Stability.

A useful property of persistence diagrams is stability. Indeed, it is well known that persistence diagrams computed from close functions are themselves close in the bottleneck distance:

###### Theorem 2.2 ([Ceh07, CdG+16]).

Given two tame functions $f, g : X \to \mathbb{R}$, one has the following inequality:

$$d_\infty(\mathrm{Dg}(f), \mathrm{Dg}(g)) \le \|f - g\|_\infty. \tag{1}$$

In other words, the map $f \mapsto \mathrm{Dg}(f)$ is 1-Lipschitz. Note that stability results exist for the other diagram distances as well, but they are weaker than the above Lipschitz condition and require more assumptions; see [OUD15].

### 2.2 Bi-Lipschitz embeddings.

The main question that we address in this article is that of preserving the metric properties of persistence diagrams when embedding them into Hilbert spaces. For instance, one may require the images of persistence diagrams under a feature map into a Hilbert space to be stable as well. A natural question is then whether a lower bound also holds, i.e., whether the feature map is a bi-Lipschitz embedding between the space of persistence diagrams and the Hilbert space.

###### Definition 2.3.

Let $(X, d_X)$ and $(Y, d_Y)$ be two metric spaces. A bi-Lipschitz embedding between $X$ and $Y$ is a map $f : X \to Y$ such that there exist constants $0 < A \le B$ with:

$$A\, d_X(x, x') \le d_Y(f(x), f(x')) \le B\, d_X(x, x')$$

for any $x, x' \in X$. The metrics $d_X$ and $d_Y$ are then called strongly equivalent, and the constants $A$ and $B$ are called the lower and upper metric distortion bounds respectively. If $A = B = 1$, $f$ is called an isometric embedding.

Note that this definition is equivalent to the commonly used one in which a single constant $C \ge 1$ appears, with $A = 1/C$ and $B = C$.

###### Remark 2.4.

Finding an isometric embedding of persistence diagrams into a Hilbert space is impossible since geodesics are unique in a Hilbert space while this is not the case for persistence diagrams, as shown in the proof of Proposition 2.4 in [TMM+14].

###### Remark 2.5.

For feature maps $\Phi$ that are bounded, i.e., such that there exists a constant $C > 0$ for which $\|\Phi(\mathrm{Dg})\| \le C$ for all $\mathrm{Dg}$, it is obviously impossible to find a bi-Lipschitz embedding. This applies for instance to the Sliced Wasserstein (SW) feature map [CCO17], which is defined implicitly from a Gaussian-like function. However, note that if the SW feature map is restricted to a set of persistence diagrams which are close to each other with respect to the SW distance, then the distance in the Hilbert space corresponding to the SW feature map is actually equivalent to the square root of the SW distance. Hence, we added the square root of the SW distance to our experiments in Section 5.

## 3 Mapping into separable Hilbert spaces

In our first main result, we use separability to determine whether a bi-Lipschitz embedding can exist between the space of persistence diagrams and a Hilbert space.

###### Definition 3.1.

A metric space is called separable if it has a dense countable subset.

For instance, the following three Hilbert spaces (equipped with their canonical metrics) are separable: $\mathbb{R}^d$, $\ell^2(\mathbb{N})$, and $L^2(X)$ where $X$ is separable. The two following results describe well-known properties of separable spaces.

###### Proposition 3.2.

Any subspace of a separable metric space is separable as well.

###### Proposition 3.3.

Let $(X, d_X)$ and $(Y, d_Y)$ be two metric spaces, and assume there is a bi-Lipschitz embedding $f : X \to Y$, with Lipschitz constants $A$ and $B$. Then $X$ is separable if and only if $f(X)$ is separable.

The following lemma shows that, for a feature map which is bi-Lipschitz when restricted to each $\mathcal{D}^N_L$, the limits of the corresponding constants can actually be used to study the general metric distortion on $\mathcal{D}$.

###### Lemma 3.4.

Let $p \ge 1$ and let $d$ be a metric on persistence diagrams such that $d$ is continuous with respect to $d_p$ on $\mathcal{D}$. Let

$$\mathcal{R}^N_L = \left\{ \frac{d_p(\mathrm{Dg}, \mathrm{Dg}')}{d(\mathrm{Dg}, \mathrm{Dg}')} : \mathrm{Dg} \neq \mathrm{Dg}' \in \mathcal{D}^N_L \right\}, \qquad A^N_L = \inf \mathcal{R}^N_L \quad\text{and}\quad B^N_L = \sup \mathcal{R}^N_L.$$

Since $A^N_L$ is nonincreasing with respect to $N$ and $L$, we define:

$$A^N = \liminf_{L\to\infty} A^N_L, \qquad A_L = \liminf_{N\to\infty} A^N_L, \qquad A = \liminf_{N,L\to\infty} A^N_L.$$

We define $B^N$, $B_L$ and $B$ similarly, since $B^N_L$ is nondecreasing with respect to $N$ and $L$:

$$B^N = \limsup_{L\to\infty} B^N_L, \qquad B_L = \limsup_{N\to\infty} B^N_L, \qquad B = \limsup_{N,L\to\infty} B^N_L.$$

Then the following inequalities hold:

$$A_L\, d(\mathrm{Dg}, \mathrm{Dg}') \le d_p(\mathrm{Dg}, \mathrm{Dg}') \le B_L\, d(\mathrm{Dg}, \mathrm{Dg}') \quad \text{for all } \mathrm{Dg}, \mathrm{Dg}' \in \mathcal{D}_L,$$
$$A^N d(\mathrm{Dg}, \mathrm{Dg}') \le d_p(\mathrm{Dg}, \mathrm{Dg}') \le B^N d(\mathrm{Dg}, \mathrm{Dg}') \quad \text{for all } \mathrm{Dg}, \mathrm{Dg}' \in \mathcal{D}^N,$$
$$A\, d(\mathrm{Dg}, \mathrm{Dg}') \le d_p(\mathrm{Dg}, \mathrm{Dg}') \le B\, d(\mathrm{Dg}, \mathrm{Dg}') \quad \text{for all } \mathrm{Dg}, \mathrm{Dg}' \in \mathcal{D}.$$

Note that $A_L$, $A^N$ and $A$ may be equal to $0$, and $B_L$, $B^N$ and $B$ may be equal to $+\infty$, so it does not necessarily hold that $d$ and $d_p$ are strongly equivalent on $\mathcal{D}_L$, $\mathcal{D}^N$ or $\mathcal{D}$.

###### Proof.

We only prove the last inequality, since the proof extends verbatim to the other two. Pick any two persistence diagrams $\mathrm{Dg}, \mathrm{Dg}' \in \mathcal{D}$. Let $\Gamma$ be an optimal partial matching achieving $d_p(\mathrm{Dg}, \mathrm{Dg}')$, and enumerate the points of $\mathrm{Dg}$ and $\mathrm{Dg}'$ as $(p_n)_n$ and $(q_n)_n$, where each $p_n$ (resp. $q_n$) is either matched by $\Gamma$ or belongs to $\Gamma_1$ (resp. $\Gamma_2$). Given $n \ge 0$, we define two sequences of persistence diagrams $(\mathrm{Dg}_n)_n$ and $(\mathrm{Dg}'_n)_n$ recursively with $\mathrm{Dg}_0 = \mathrm{Dg}'_0 = \emptyset$ and:

$$\mathrm{Dg}_{n+1} = \begin{cases}\mathrm{Dg}_n & \text{if } p_{n+1}\in\pi_\Delta(\mathrm{Dg}'),\\ \mathrm{Dg}_n\cup\{p_{n+1}\} & \text{otherwise,}\end{cases} \qquad \mathrm{Dg}'_{n+1} = \begin{cases}\mathrm{Dg}'_n & \text{if } q_{n+1}\in\pi_\Delta(\mathrm{Dg}),\\ \mathrm{Dg}'_n\cup\{q_{n+1}\} & \text{otherwise.}\end{cases}$$

Let us define

$$l_n = \max\big\{\max\{\|p\|_\infty : p\in\mathrm{Dg}_n\},\ \max\{\|q\|_\infty : q\in\mathrm{Dg}'_n\}\big\}, \qquad s_n = \max\big\{\mathrm{card}(\mathrm{Dg}_n),\ \mathrm{card}(\mathrm{Dg}'_n)\big\}.$$

Note that both $(l_n)_n$ and $(s_n)_n$ are nondecreasing. We have $\mathrm{Dg}_n, \mathrm{Dg}'_n \in \mathcal{D}^{s_n}_{l_n}$ and thus:

$$A^{s_n}_{l_n}\, d(\mathrm{Dg}_n, \mathrm{Dg}'_n) \le d_p(\mathrm{Dg}_n, \mathrm{Dg}'_n) \le B^{s_n}_{l_n}\, d(\mathrm{Dg}_n, \mathrm{Dg}'_n). \tag{2}$$

Assuming that $d_p(\mathrm{Dg}_n, \mathrm{Dg}) \to 0$ and $d_p(\mathrm{Dg}'_n, \mathrm{Dg}') \to 0$ (note that this is always true if $\mathrm{Dg}$ and $\mathrm{Dg}'$ are finite; even though it is not clear whether this assumption also holds in the general case, it is satisfied for the spaces of persistence diagrams defined in our subsequent results Lemma 3.6 and Proposition 3.9), it follows that $d(\mathrm{Dg}_n, \mathrm{Dg}'_n) \to d(\mathrm{Dg}, \mathrm{Dg}')$ by continuity of $d$. We finally obtain the desired inequality by letting $n \to \infty$ in (2). ∎

A corollary of the previous results is that even if a feature map taking values in a separable Hilbert space might be bi-Lipschitz when restricted to $\mathcal{D}^N_L$, the corresponding bounds have to go to $0$ or $+\infty$ as soon as the domain of the feature map is not separable.

###### Theorem 3.5.

Let $\Phi$ be a feature map defined on a non-separable subspace $\mathcal{D}_\Phi$ of persistence diagrams containing every $\mathcal{D}^N_L$, i.e., $\mathcal{D}^N_L \subseteq \mathcal{D}_\Phi$ for each $N \in \mathbb{N}$ and $L > 0$. Assume $\Phi$ takes values in a separable Hilbert space $\mathcal{H}$, and that $\Phi$ is bi-Lipschitz on each $\mathcal{D}^N_L$ with constants $A^N_L \le B^N_L$. Then either $A^N_L \to 0$ or $B^N_L \to +\infty$ when $N, L \to \infty$.

Many feature maps defined in the literature, such as the Persistence Weighted Gaussian feature map [KFH18] or the Landscape feature map [BUB15], actually take values in the separable function space $L^2(\mathbb{R}^2)$. Hence, to illustrate how Theorem 3.5 applies to these feature maps, we now provide two results. In the first one, we define a set $\mathcal{S}$ which is not separable with respect to $d_1$, and in the second one, we show that $\mathcal{S}$ is actually included in the domain of these feature maps.

###### Lemma 3.6.

Consider the sequence of points $p_i = (i, i + 1/i)$, $i \ge 1$, and define the set $\mathcal{S} = \{\mathrm{Dg}_u : u \in \{0,1\}^\mathbb{N}\}$, where $\{0,1\}^\mathbb{N}$ is the set of sequences with values in $\{0,1\}$, with: $\mathrm{Dg}_u = \{p_i : u_i = 1\}$. Then $(\mathcal{S}, d_1)$ is not separable.

###### Proof.

First note that since the sequences can have infinite support, the sets $\{0,1\}^\mathbb{N}$ and $\mathcal{S}$ are not countable.

Let $\sim$ be the equivalence relation on $\mathcal{S}$ defined with:

$$\mathrm{Dg}_u \sim \mathrm{Dg}_v \iff \mathrm{card}\big(\mathrm{supp}(u)\,\triangle\,\mathrm{supp}(v)\big) < +\infty,$$

where $\triangle$ denotes the symmetric difference of sets. Since the set of sequences with finite support is countable, it follows that each equivalence class is countable as well. In particular, this means that the set of equivalence classes is uncountable, since otherwise $\mathcal{S}$ would be countable as a countable union of countable equivalence classes.

We now prove the result by contradiction. Assume that $\mathcal{S}$ is separable, and let $\mathcal{S}'$ be the corresponding dense countable subset of $\mathcal{S}$. Let $\epsilon > 0$. Then for each $u \in \{0,1\}^\mathbb{N}$, there is at least one sequence $u'$ such that $\mathrm{Dg}_{u'} \in \mathcal{S}'$ and $d_1(\mathrm{Dg}_u, \mathrm{Dg}_{u'}) \le \epsilon$. We now claim that every such $\mathrm{Dg}_{u'}$ satisfies $\mathrm{Dg}_{u'} \sim \mathrm{Dg}_u$. Indeed, assume $\mathrm{Dg}_{u'} \not\sim \mathrm{Dg}_u$ and let $I = \mathrm{supp}(u)\,\triangle\,\mathrm{supp}(u')$. Then, since $\mathrm{card}(I) = +\infty$, we would have

$$d_1(\mathrm{Dg}_u, \mathrm{Dg}_{u'}) = \sum_{i\in I} \frac{1}{2i} = +\infty > \epsilon,$$

which is not possible. Hence, this means that $\mathrm{Dg}_{u'} \sim \mathrm{Dg}_u$. However, we showed that the set of equivalence classes is uncountable, meaning that $\mathcal{S}'$ is uncountable as well, which leads to a contradiction since $\mathcal{S}'$ is countable by assumption. ∎

We now show that the Persistence Weighted Gaussian and the Landscape feature maps are well-defined on the set $\mathcal{S}$. Let us first formally define these feature maps.

###### Definition 3.7.

Given $p = (b, d) \in \Omega$, let $\Lambda_p$ be the triangular function defined with $\Lambda_p(t) = \min(t - b, d - t)$ if $t \in [b, d]$ and $0$ otherwise. Then, given a persistence diagram $\mathrm{Dg}$, let $\lambda_k(t) = \mathrm{kmax}\{\Lambda_p(t) : p \in \mathrm{Dg}\}$, where kmax denotes the $k$-th largest element. The Landscape feature map is defined as:

$$\Phi_L : \mathrm{Dg} \mapsto \bar\lambda, \quad \text{where } \bar\lambda(x, y) = \begin{cases}\lambda_{\lceil x\rceil}(y) & x \ge 0,\\ 0 & \text{otherwise.}\end{cases}$$
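To make this definition concrete, here is a short sketch (our own illustration, not code from the paper) that evaluates the landscape functions $\lambda_k$ on a grid:

```python
import numpy as np

def landscape(diag, k, ts):
    """Evaluate the k-th landscape function lambda_k on a grid ts (k >= 1).

    Each diagram point (b, d) contributes the triangular function
    Lambda(t) = min(t - b, d - t) on [b, d] (and 0 elsewhere); lambda_k(t)
    is the k-th largest of these values at t.
    """
    diag = np.asarray(diag, dtype=float).reshape(-1, 2)
    ts = np.asarray(ts, dtype=float)
    b, d = diag[:, :1], diag[:, 1:]
    tris = np.clip(np.minimum(ts[None, :] - b, d - ts[None, :]), 0.0, None)
    if k > len(diag):
        return np.zeros_like(ts)
    return np.sort(tris, axis=0)[-k]  # k-th largest value at each t
```

For a single point $(0, 2)$, $\lambda_1$ is the triangle peaking at height 1 over $t = 1$, and $\lambda_k = 0$ for every $k \ge 2$.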
###### Definition 3.8.

Let $\omega : \Omega \to \mathbb{R}_+$ be a weight function and $\sigma > 0$. The Persistence Weighted Gaussian feature map is defined as:

$$\Phi^\omega_{\mathrm{PWG}} : \mathrm{Dg} \mapsto \sum_{p\in\mathrm{Dg}} \omega(p)\, e^{-\frac{\|\cdot\, - p\|_2^2}{2\sigma^2}}.$$
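This map can likewise be sketched directly by evaluating the weighted sum of Gaussians at a finite set of sample points (our own illustration; the squared-persistence default weight below is just one convenient choice, not necessarily the weight used elsewhere in the paper):

```python
import numpy as np

def pwg(diag, xs, sigma=1.0, weight=lambda b, d: (d - b) ** 2):
    """Persistence Weighted Gaussian feature map, evaluated at points xs.

    Each diagram point p = (b, d) contributes
    weight(b, d) * exp(-||x - p||_2^2 / (2 sigma^2)) at every sample point x.
    """
    diag = np.asarray(diag, dtype=float).reshape(-1, 2)
    xs = np.asarray(xs, dtype=float).reshape(-1, 2)
    w = weight(diag[:, 0], diag[:, 1])  # one weight per diagram point
    sq = np.sum((xs[:, None, :] - diag[None, :, :]) ** 2, axis=2)
    return (w[None, :] * np.exp(-sq / (2 * sigma ** 2))).sum(axis=1)
```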
###### Proposition 3.9.

Let $\omega$ be the weight function $\omega : (b, d) \mapsto (d - b)^2$. Let $\mathcal{S}$ be the set of persistence diagrams defined in Lemma 3.6. Then:

$$\mathcal{S} \subset \mathcal{D}_{\Phi^\omega_{\mathrm{PWG}}} \quad\text{and}\quad \mathcal{S} \subset \mathcal{D}_{\Phi_L}.$$
###### Proof.

Given $u \in \{0,1\}^\mathbb{N}$, let $(u_n)_n$ be the sequence defined with $(u_n)_i = u_i$ if $i \le n$ and $0$ otherwise. To show the desired result, it suffices to show that $(\Phi^\omega_{\mathrm{PWG}}(\mathrm{Dg}_{u_n}))_n$ and $(\Phi_L(\mathrm{Dg}_{u_n}))_n$ are Cauchy sequences in $L^2(\mathbb{R}^2)$. Let $q > p \ge 1$, and let us study $\|\Phi(\mathrm{Dg}_{u_q}) - \Phi(\mathrm{Dg}_{u_p})\|_{L^2(\mathbb{R}^2)}$ for each feature map.

• Case $\Phi^\omega_{\mathrm{PWG}}$. We have the following inequalities:

$$\begin{aligned} \big\|\Phi^\omega_{\mathrm{PWG}}(\mathrm{Dg}_{u_q}) - \Phi^\omega_{\mathrm{PWG}}(\mathrm{Dg}_{u_p})\big\|^2_{L^2(\mathbb{R}^2)} &= \int_{\mathbb{R}^2} \Bigg(\sum_{k=p+1}^{q} \frac{u_k}{k^2}\, e^{-\frac{\|x - p_k\|_2^2}{2\sigma^2}}\Bigg)^{\!2} dx = \sum_{k=p+1}^{q}\sum_{l=p+1}^{q} \frac{u_k u_l}{k^2 l^2} \int_{\mathbb{R}^2} e^{-\frac{\|x - p_k\|_2^2 + \|x - p_l\|_2^2}{2\sigma^2}}\, dx \\ &= \pi\sigma^2 \sum_{k=p+1}^{q}\sum_{l=p+1}^{q} \frac{u_k u_l}{k^2 l^2}\, e^{-\frac{\|p_k - p_l\|_2^2}{4\sigma^2}} \qquad \text{(cf. Appendix C in [RHB+14] for a proof of this equality)} \\ &\le \pi\sigma^2 \Bigg(\sum_{k=p+1}^{q} \frac{1}{k^2}\Bigg)\Bigg(\sum_{l=p+1}^{q} \frac{1}{l^2}\Bigg). \end{aligned}$$

The result simply follows from the fact that $\sum_k 1/k^2$ is convergent, hence Cauchy.

• Case $\Phi_L$. Since all triangular functions $\Lambda_{p_k}$, as defined in Definition 3.7, have disjoint supports, it follows that the only non-zero landscape function is $\lambda_1 = \sum_k u_k\,\phi_k$, where $\phi_k = \Lambda_{p_k}$ is the triangular function defined with $\phi_k(t) = \min(t - k,\, k + 1/k - t)$ if $t \in [k, k + 1/k]$ and $0$ otherwise. See Figure 2.

Hence, we have the following inequalities:

$$\begin{aligned} \big\|\Phi_L(\mathrm{Dg}_{u_q}) - \Phi_L(\mathrm{Dg}_{u_p})\big\|^2_{L^2(\mathbb{R}^2)} &= \int_{\mathbb{R}} \Bigg(\sum_{k=p+1}^{q} u_k\,\phi_k(x)\Bigg)^{\!2} dx = \sum_{k=p+1}^{q}\sum_{l=p+1}^{q} u_k u_l \int_{\mathbb{R}} \phi_k(x)\,\phi_l(x)\, dx \\ &= \sum_{k=p+1}^{q} u_k \int_{\mathbb{R}} \phi_k(x)^2\, dx \le \sum_{k=p+1}^{q} \int_{\mathbb{R}} \phi_k(x)\, dx = \sum_{k=p+1}^{q} \frac{1}{4k^2}. \end{aligned}$$

Again, the result follows from the fact that $\sum_k 1/(4k^2)$ is convergent, hence Cauchy. ∎

Proposition 3.9 shows that Theorem 3.5 applies (with $d = d_1$ as the metric between persistence diagrams) to the Persistence Weighted Gaussian feature map with weight function $\omega : (b, d) \mapsto (d - b)^2$ (actually, with any weight function that is equivalent to it as $d - b$ goes to $0$) and to the Landscape feature map. In particular, any lower bound $A^N_L$ for these maps has to go to $0$ as $N, L \to \infty$, since a uniform upper bound exists for these maps due to their stability properties; see Corollary 15 in [BUB15] and Proposition 3.4 in [KFH18].

## 4 Mapping into finite-dimensional Hilbert spaces

In our second main result, we show that more can be said about feature maps into $\mathbb{R}^d$ (equipped with the Euclidean metric), using the so-called Assouad dimension. This involves all of the vectorization methods for persistence diagrams described in the related work.

The following definition and example are taken from paragraph 10.13 of [HEI01].

###### Definition 4.1.

Let $(X, d_X)$ be a metric space. Given a subset $S \subseteq X$ and $r > 0$, let $N_r(S)$ be the least number of open balls of radius less than or equal to $r$ that can cover $S$. The Assouad dimension of $(X, d_X)$ is:

$$\dim_A(X, d_X) = \inf\big\{\alpha > 0 : \exists\, C > 0 \text{ s.t. } \sup_{x\in X} N_{\beta r}(B(x, r)) \le C\beta^{-\alpha},\ \forall\, r > 0,\ \beta \in (0, 1]\big\}.$$

Intuitively, the Assouad dimension measures the number of open balls needed to cover an open ball of larger radius. For example, the Assouad dimension of $(\mathbb{R}^d, \|\cdot\|_2)$ is $d$. Moreover, the Assouad dimension is preserved by bi-Lipschitz embeddings.

###### Proposition 4.2 (Lemma 9.6 in [Rob10]).

Let $(X, d_X)$ and $(Y, d_Y)$ be metric spaces with a bi-Lipschitz embedding $f : X \to Y$. Then $\dim_A(X, d_X) \le \dim_A(Y, d_Y)$.

#### Non-embeddability.

We now show that $\mathcal{D}^N_L$ cannot be embedded into $\mathbb{R}^d$ with bi-Lipschitz embeddings. The proof of this fact is a consequence of the following lemma:

###### Lemma 4.3.

Let $p \ge 1$, $N \ge 1$ and $L > 0$. Then $\dim_A(\mathcal{D}^N_L, d_p) = +\infty$.

###### Proof.

Let $B_p(\mathrm{Dg}, r)$ denote the open ball of center $\mathrm{Dg}$ and radius $r$ in $(\mathcal{D}^N_L, d_p)$. We want to show that, for any $\alpha > 0$ and $C > 0$, it is possible to find a persistence diagram $\mathrm{Dg}$, a radius $r$ and a factor $\beta \in (0, 1]$ such that the number of open balls of radius at most $\beta r$ needed to cover $B_p(\mathrm{Dg}, r)$ is strictly larger than $C\beta^{-\alpha}$. To this end, we pick arbitrary $\alpha > 0$ and $C > 0$. The idea of the proof is to define $\mathrm{Dg}$ as the empty diagram, and to derive a lower bound on the number of balls with radius $\beta r$ needed to cover $B_p(\mathrm{Dg}, r)$ by considering persistence diagrams with one point, evenly distributed on a line parallel to the diagonal at $\ell_\infty$-distance $r/2$ from it, such that the distance between two consecutive points is $r$ in the $\ell_\infty$-distance. Indeed, the pairwise distance between any two such persistence diagrams is sufficiently large so that they must belong to different balls. Then we can control the number of persistence diagrams, and thus the number of balls, by taking $r$ sufficiently small.

More formally, let $\beta = 2^{1/p}/4$, let $k = \lceil C\beta^{-\alpha}\rceil + 1$, and let $r = L/(k+1)$. We want to show that at least $k$ balls are needed in the cover, meaning that $N_{\beta r}(B_p(\emptyset, r)) \ge k > C\beta^{-\alpha}$. Consider a cover of $B_p(\emptyset, r)$ with open balls of radius less than $\beta r$ centered on a family $\{\mathrm{Dg}_i\}_i$ as follows:

$$B_p(\emptyset, r) \subseteq \bigcup_i B_p(\mathrm{Dg}_i, \beta r). \tag{3}$$

We now define particular persistence diagrams which all lie in different elements of the cover (3). For any $1 \le j \le k$, we let $\mathrm{Dg}'_j$ denote the persistence diagram containing only the point $(jr, jr + r)$. It is clear that each $\mathrm{Dg}'_j$ is in $\mathcal{D}^N_L$. See Figure 3.

Moreover, since $d_p(\emptyset, \mathrm{Dg}'_j) = r/2 < r$, it also follows that $\mathrm{Dg}'_j \in B_p(\emptyset, r)$.

Hence, according to (3), for each $j$ there exists an integer $i_j$ such that $\mathrm{Dg}'_j \in B_p(\mathrm{Dg}_{i_j}, \beta r)$. Finally, note that $j \neq j'$ implies $i_j \neq i_{j'}$. Indeed, assuming that there are $j \neq j'$ such that $i_j = i_{j'}$, and since the distance between $\mathrm{Dg}'_j$ and $\mathrm{Dg}'_{j'}$ is always obtained by matching their points to the diagonal, we reach a contradiction with the following application of the triangle inequality:

$$d_p(\mathrm{Dg}'_j, \mathrm{Dg}'_{j'}) = 2^{1/p}\,\frac{r}{2} \le d_p(\mathrm{Dg}'_j, \mathrm{Dg}_{i_j}) + d_p(\mathrm{Dg}_{i_j}, \mathrm{Dg}_{i_{j'}}) + d_p(\mathrm{Dg}_{i_{j'}}, \mathrm{Dg}'_{j'}) < 2\beta r = 2^{1/p}\,\frac{r}{2}.$$

This observation shows that there are at least $k > C\beta^{-\alpha}$ different open balls in the cover (3), which concludes the proof. ∎
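The construction in this proof is easy to check numerically. The sketch below (our own illustration, for $p = 1$ and $r = 1$) builds arbitrarily many single-point diagrams that all lie in the ball $B_1(\emptyset, r)$ while remaining pairwise at distance exactly $r$, so no covering bound of the form $C\beta^{-\alpha}$ can hold:

```python
import numpy as np

def d1_single(p, q):
    """1-diagram distance between single-point diagrams {p} and {q}:
    either match p to q directly (l_inf cost), or send both to the diagonal."""
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    direct = np.max(np.abs(p - q))
    via_diagonal = (p[1] - p[0]) / 2 + (q[1] - q[0]) / 2
    return min(direct, via_diagonal)

r, k = 1.0, 50
# k single-point diagrams on a line parallel to the diagonal, at l_inf-distance
# r/2 from it, with consecutive points at l_inf-distance r from each other
diagrams = [(j * r, j * r + r) for j in range(k)]
# each diagram is within r/2 < r of the empty diagram ...
assert all((d - b) / 2 < r for (b, d) in diagrams)
# ... yet any two of them are at distance exactly r (via the diagonal)
pairwise = [d1_single(diagrams[i], diagrams[j])
            for i in range(k) for j in range(i + 1, k)]
assert min(pairwise) == r and max(pairwise) == r
```

Since `k` can be made arbitrarily large (by shrinking `r` to stay inside the coordinate bound $L$), the covering number of a single ball is unbounded, which is exactly the statement of the lemma.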

The following theorem is then a simple consequence of Lemma 4.3 and Proposition 4.2:

###### Theorem 4.4.

Let $N \ge 1$ and $L > 0$. Then, for any $p \ge 1$ and $d \in \mathbb{N}$, there is no bi-Lipschitz embedding between $(\mathcal{D}^N_L, d_p)$ and $(\mathbb{R}^d, \|\cdot\|_2)$.

Interestingly, the integers $N$ and $d$ are independent in Theorem 4.4: even if one restricts to persistence diagrams with only one point, it is still impossible to find a bi-Lipschitz embedding into $\mathbb{R}^d$, whatever $d$ is.

## 5 Experiments

In this section, we illustrate our main results by computing the lower metric distortion bounds for the main stable feature maps in the literature. We use persistence diagrams with increasing numbers of points to experimentally observe the convergence of this bound to $0$, as described in Theorem 3.5. More precisely, for each cardinality in a range going from 10 to 1000, we generate 100 persistence diagrams by uniformly sampling points in the unit upper half-square $\{(x, y) \in [0, 1]^2 : y > x\}$. See Figure 4 for an illustration.

Then, we consider the following feature maps:

• the Persistence Weighted Gaussian with unit bandwidth (PWG) [KFH18],

• the Persistence Scale Space with unit bandwidth (PSS) [RHB+15],

• the Landscape (LS) [BUB15],

• the Persistence Image with fixed resolution and unit bandwidth (IM) [AEK+17],

• the Topological Vector with 10 dimensions (TV) [COO15].

Since most of these feature maps enjoy stability properties with respect to the first diagram distance , we compute the ratios between the metrics in the Hilbert spaces corresponding to these feature maps and . Moreover, we also look at the ratio induced by the square root of the Sliced Wasserstein distance (SW) [CCO17], as suggested by Remark 2.5. All feature maps were computed with the sklearn-tda library, which uses Hera [KMN17] as backend to compute the first diagram distances between pairs of persistence diagrams. These ratios are then displayed as boxplots in Figure 5.
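A miniature version of this experiment can be reproduced without the sklearn-tda pipeline. The sketch below is our own simplification: it uses a Persistence-Image-style vectorization (standing in for the feature maps above, with illustrative resolution and bandwidth choices) and tracks the largest observed ratio $d_1 / \|\Phi(\mathrm{Dg}) - \Phi(\mathrm{Dg}')\|_2$, which grows when the lower distortion bound degrades:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(0)

def random_diagram(n):
    # n points sampled uniformly in the unit upper half-square
    b = rng.uniform(0.0, 1.0, n)
    return np.column_stack([b, rng.uniform(b, 1.0)])

def d1(dg1, dg2):
    # exact 1-diagram distance via the augmented assignment problem
    n1, n2 = len(dg1), len(dg2)
    cost = np.zeros((n1 + n2, n2 + n1))
    cost[:n1, :n2] = np.max(np.abs(dg1[:, None] - dg2[None, :]), axis=2)
    cost[:n1, n2:] = ((dg1[:, 1] - dg1[:, 0]) / 2)[:, None]
    cost[n1:, :n2] = ((dg2[:, 1] - dg2[:, 0]) / 2)[None, :]
    rows, cols = linear_sum_assignment(cost)
    return cost[rows, cols].sum()

def image_vector(dg, res=10, sigma=1.0):
    # persistence-image-style vector: persistence-weighted Gaussians on a grid
    g = np.linspace(0.0, 1.0, res)
    grid = np.column_stack([a.ravel() for a in np.meshgrid(g, g)])
    w = dg[:, 1] - dg[:, 0]
    sq = np.sum((grid[:, None] - dg[None, :]) ** 2, axis=2)
    return (w[None, :] * np.exp(-sq / (2 * sigma ** 2))).sum(axis=1)

for n in (10, 50, 200):
    ratios = [d1(a, b) / np.linalg.norm(image_vector(a) - image_vector(b))
              for a, b in ((random_diagram(n), random_diagram(n))
                           for _ in range(20))]
    print(n, round(max(ratios), 3))
```

The exact growth rate depends on the vectorization; the paper's Figure 5 reports this behavior for the actual feature maps listed above.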

It is clear from Figure 5 that the extreme values of these ratios (the upper tail of the ratio distributions) increase with the cardinality of the persistence diagrams, as expected from Theorem 3.5. This is especially interesting in the case of the Sliced Wasserstein distance: the lower bound proved in [CCO17] degrades with the number of points in the diagrams, and whether it is tight, i.e., whether a lower bound oblivious to the number of points could be derived, is still an open question. Figure 5 suggests empirically that it cannot. It is also interesting to note that the divergence speed of these ratios differs from one feature map to another. More precisely, the metric distortion bounds seem to increase linearly with the cardinalities for the TV and LS feature maps and for the Sliced Wasserstein distance, while they increase at a much lower speed for the other feature maps.

## 6 Conclusion

In this article, we provided two important theoretical results about the embedding of persistence diagrams into separable Hilbert spaces, which is a common technique in TDA to feed machine learning algorithms with persistence diagrams. Indeed, most of the recent attempts have defined feature maps for persistence diagrams into Hilbert spaces, showed that these maps were stable with respect to the first diagram distance, and left open the question of whether a lower bound holds as well. In this work, we proved that such a bi-Lipschitz embedding never exists if the Hilbert space is finite dimensional, and that the lower bound has to go to zero with the number of points for most other feature maps in the literature. We also provided experiments that confirm this result, by showing a clear increase of the metric distortion with the number of points for persistence diagrams generated uniformly in the unit upper half-square.

## References

• [AEK+17] H. Adams, T. Emerson, M. Kirby, R. Neville, C. Peterson, P. Shipman, S. Chepushtanova, E. Hanson, F. Motta, and L. Ziegelmeier (2017) Persistence Images: A Stable Vector Representation of Persistent Homology. Journal of Machine Learning Research 18 (8), pp. 1–35. Cited by: §1, §1, 4th item.
• [BL15] U. Bauer and M. Lesnick (2015) Induced matchings and the algebraic stability of persistence barcodes. Journal of Computational Geometry 6 (2), pp. 162–191. Cited by: §1.
• [BUB15] P. Bubenik (2015) Statistical Topological Data Analysis using Persistence Landscapes. Journal of Machine Learning Research 16, pp. 77–102. Cited by: §1, §1, §3, §3, 3rd item.
• [CAR09] G. Carlsson (2009) Topology and data. Bulletin of the American Mathematical Society 46, pp. 255–308. Cited by: §1.
• [CCO17] M. Carrière, M. Cuturi, and S. Oudot (2017) Sliced Wasserstein Kernel for Persistence Diagrams. In Proceedings of the 34th International Conference on Machine Learning, Cited by: §1, §1, §2.1, Remark 2.5, §5, §5.
• [COO15] M. Carrière, S. Oudot, and M. Ovsjanikov (2015) Stable Topological Signatures for Points on 3D Shapes. Computer Graphics Forum 34. Cited by: §1, §1, 5th item.
• [CdG+16] F. Chazal, V. de Silva, M. Glisse, and S. Oudot (2016) The Structure and Stability of Persistence Modules. Springer. Cited by: §1, §2.1, Theorem 2.2.
• [CEH07] D. Cohen-Steiner, H. Edelsbrunner, and J. Harer (2007) Stability of Persistence Diagrams. Discrete and Computational Geometry 37 (1), pp. 103–120. Cited by: §1, Theorem 2.2.
• [DF15] B. Di Fabio and M. Ferri (2015) Comparing persistence diagrams through complex vectors. In Image Analysis and Processing — ICIAP 2015, pp. 294–305. Cited by: §1.
• [EH10] H. Edelsbrunner and J. Harer (2010) Computational Topology: an introduction. AMS Bookstore. Cited by: §2.1.
• [HEI01] J. Heinonen (2001) Lectures on Analysis on Metric Spaces. Springer. Cited by: §4.
• [HKN+17] C. Hofer, R. Kwitt, M. Niethammer, and A. Uhl (2017) Deep Learning with Topological Signatures. In Advances in Neural Information Processing Systems 30, pp. 1633–1643. Cited by: §1, §1.
• [KAL18] S. Kališnik (2018) Tropical coordinates on the space of persistence barcodes. Foundations of Computational Mathematics. Cited by: §1.
• [KMN17] M. Kerber, D. Morozov, and A. Nigmetov (2017-09) Geometry helps to compare persistence diagrams. Journal of Experimental Algorithmics 22, pp. 1.4:1–1.4:20. Cited by: §5.
• [KFH16] G. Kusano, K. Fukumizu, and Y. Hiraoka (2016) Persistence Weighted Gaussian Kernel for Topological Data Analysis. In Proceedings of the 33rd International Conference on Machine Learning, pp. 2004–2013. Cited by: §1.
• [KFH18] G. Kusano, K. Fukumizu, and Y. Hiraoka (2018) Kernel method for persistence diagrams via kernel embedding and weight factor. Journal of Machine Learning Research 18 (189), pp. 1–41. Cited by: §1, §3, §3, 1st item.
• [OUD15] S. Oudot (2015) Persistence Theory: From Quiver Representations to Data Analysis. Mathematical Surveys and Monographs, American Mathematical Society. Cited by: §2.1, §2.1.
• [RHB+14] J. Reininghaus, S. Huber, U. Bauer, and R. Kwitt (2014) A Stable Multi-Scale Kernel for Topological Machine Learning. CoRR abs/1412.6821. Cited by: 1st item, 1st item.
• [RHB+15] J. Reininghaus, S. Huber, U. Bauer, and R. Kwitt (2015) A Stable Multi-Scale Kernel for Topological Machine Learning. In IEEE Conference on Computer Vision and Pattern Recognition. Cited by: §1, §1, 2nd item.
• [ROB10] J. C. Robinson (2010) Dimensions, Embeddings, and Attractors. Cambridge Tracts in Mathematics, Vol. 186, Cambridge University Press. Cited by: Proposition 4.2.
• [TMM+14] K. Turner, Y. Mileyko, S. Mukherjee, and J. Harer (2014) Fréchet Means for Distributions of Persistence Diagrams. Discrete and Computational Geometry 52 (1), pp. 44–70. Cited by: Remark 2.4.