1 Introduction
[Figure 1: panels (a)-(d)]
Regularization plays a crucial role in supervised learning. A successfully regularized model strikes a balance between a perfect description of the training data and the ability to generalize to unseen data. A common intuition for the design of regularizers is the Occam's razor principle: a regularizer enforces a certain simplicity of the model in order to avoid overfitting. Classic regularization techniques include functional norms such as the ℓ1 norm (Krishnapuram et al., 2005), the ℓ2 (Tikhonov) norm (Ng, 2004), and RKHS norms (Schölkopf and Smola, 2002). Such norms produce a model with relatively less flexibility that is thus less likely to overfit.
A particularly interesting category of methods is inspired by geometry. These methods design new penalty terms to enforce a geometric simplicity of the classifier. Some methods stipulate that similar data should have similar scores according to the classifier, and enforce the smoothness of the classifier function (Belkin et al., 2006; Zhou and Schölkopf, 2005; Bai et al., 2016). Others directly pursue a simple geometry of the classifier boundary, i.e., the submanifold separating different classes (Cai and Sowmya, 2007; Varshney and Willsky, 2010; Lin et al., 2012, 2015). These geometry-based regularizers are intuitive and have been shown to be useful in many supervised and semi-supervised learning settings. However, regularizing the total smoothness of the classifier (or that of the classification boundary) is not always flexible enough to balance the tug of war between overfitting and overall accuracy. The key issue is that these measurements are usually structure-agnostic. For example, in Figure 1, a classifier may either overfit (as in (b)) or become too smooth and lose overall accuracy (as in (c)).
In this paper, we propose a new direction to regularize the "simplicity" of a classifier: instead of using geometry such as total curvature/smoothness, we directly enforce the "simplicity" of the classification boundary by regularizing its topological complexity. (Here, we take a similar functional view as Bai et al. (2016) and consider the classifier boundary as the zero-valued level set of the classifier function; see Figure 2 for an example.) Furthermore, our measurement of topological complexity incorporates the importance of topological structures, e.g., connected components and handles, in a meaningful manner, and provides direct control over spurious topological structures. This new structural simplicity can be combined with other regularizing terms (say, geometry-based ones or functional norms) to train a better classifier. See Figure 1(a) for an example, where the classifier computed with topological regularization achieves a better balance between overfitting and classification accuracy.
To design a good topological regularizer, there are two key challenges. First, we want to measure and incorporate the significance of different topological structures. For example, in Figure 2(a), we observe three connected components in the classification boundary (red). The "importance" of the two smaller components (loops) is different despite their similar geometry. The component on the left exists only due to a few training data points and is thus much less robust to noise than the one on the right. Leveraging several recent developments in the field of computational topology (Edelsbrunner et al., 2000; Bendich et al., 2010, 2013), we quantify such "robustness" of each topological structure and define our topological penalty as the sum of the squared robustness over all topological structures of the classification boundary.
A bigger challenge is to compute the gradient of the proposed topological penalty function. In particular, the penalty function crucially depends on the locations and values of critical points (e.g., extrema and saddles) of the classifier function. But there are no closed-form solutions for these critical points. To address this issue, we propose to discretize the domain and use a piecewise linear approximation of the classifier function as a surrogate function. We prove in Section 3
that by restricting to such a surrogate function, the topological penalty is differentiable almost everywhere. We propose an efficient algorithm to compute the gradient and optimize the topological penalty. We apply the new regularizer to a kernel logistic regression model and show in Section 4 how it outperforms other methods on various synthetic and real-world datasets.
In summary, our contributions are as follows:


We propose the novel view of regularizing the topological complexity of a classifier, and present the first work on developing such a topological penalty function;

We propose a method to compute the gradient of our topological penalty. By restricting to a surrogate piecewise linear approximation of the classifier function, we prove the gradient exists almost everywhere and is tractable;

We instantiate our topological regularizer on a kernel classifier. We provide experimental evidence of the effectiveness of the proposed method on several synthetic and real-world datasets.
Our novel topological penalty function and the novel gradient computation method open the door for a seamless coupling of topological information and learning methods. For computational efficiency, in this paper, we focus on the simplest type of topological structures, i.e., connected components. The framework can be extended to more sophisticated topological structures, e.g., handles, voids, etc.
Related work. The topological summary called the persistence diagram/barcode (defined in the supplemental material) can capture the global structural information of the data in a multi-scale manner. It has been used in unsupervised learning, e.g., clustering (Chazal et al., 2013; Ni et al., 2017). In the supervised setting, topological information has been used as a powerful feature. The major challenge is that the metric between the topological summaries of different data is not standard Euclidean. Adams et al. (2017) proposed to directly vectorize such information. Bubenik (2015) proposed to map the topological summary into a Banach space so that statistical reasoning can be carried out (Chazal et al., 2014). To fully leverage the topological information, various kernels (Reininghaus et al., 2015; Kwitt et al., 2015; Kusano et al., 2016; Carrière et al., 2017; Zhu et al., 2016) have been proposed to approximate their distance. Hofer et al. (2017) proposed to use the topological information as input for a deep convolutional neural network. Perhaps the closest to ours are (Varshney and Ramamurthy, 2015; Ramamurthy et al., 2018), which compute the topological information of the classification boundary. All these methods use topological information as an observation/feature of the data. To the best of our knowledge, our method is the first to leverage topological information as a prior for training the classifier.
In the context of computer vision, topological information has been incorporated as constraints in discrete optimization. Connectivity constraints can be used to improve image segmentation quality, especially when the objects of interest have elongated shapes. However, in general, topological constraints, although intuitive, are highly complex and too expensive to be fully enforced in the optimization procedure (Vicente et al., 2008; Nowozin and Lampert, 2009). One has to resort to various approximation schemes (Zeng et al., 2008; Chen et al., 2011; Stühmer et al., 2013; Oswald et al., 2014).

2 Level Set, Topology, and Robustness
To illustrate the main ideas and concepts, we first focus on the binary classification problem (for multi-label classification, we use multiple one-vs-all binary classifiers; see Section 3) with a d-dimensional feature space X. W.l.o.g. (without loss of generality), we assume X is a d-dimensional hypercube, and thus is compact and simply connected. A classifier function is a smooth scalar function f : X → R, and the prediction for any training/testing data point x is sign(f(x)). We are interested in describing the topology and geometry of the classification boundary of f, i.e., the boundary between the positive and negative classification regions. Formally, the boundary is the level set of f at value zero, i.e., the set of all points x with f(x) = 0.
W.l.o.g., we assume the boundary is a (d−1)-dimensional manifold, possibly with multiple connected components (the degenerate case happens if the zero level set passes through critical points, e.g., saddles, minima, or maxima). In Figure 2(a), the red curves represent the boundary, which is a one-dimensional manifold consisting of three connected components (one U-shaped open curve and two closed loops). Note that level sets have been used extensively in image segmentation tasks (Osher and Fedkiw, 2006; Szeliski, 2010).
[Figure 2: panels (a)-(d)]
For ease of exposition, we only focus on the simplest type of topological structures, i.e., connected components. For the rest of the paper, unless specifically noted, we will use “connected components” and “topological structures” interchangeably. Classification boundaries of higher dimension may have other types of topological structures, e.g., handles, voids, etc. See Figure 2(c) for the boundary of a 3D classifier. Our method can be extended to these structures, as the mathematical underpinning is well understood (Edelsbrunner and Harer, 2010).
Robustness. Our goal is to use the topological regularizer to simplify the topology of a classifier boundary. To achieve this, we need a way to rank the significance of different topological structures. The measure should be based on the underlying classifier function. To illustrate the intuition, recall the example in Figure 2(a). To rank the three connected components of the classifier boundary, simply inspecting the geometry is insufficient. The two loops have similar size. However, the left loop is less stable, as it is caused by only a few training samples (two positive samples inside the loop and two negative samples between the loop and the U-shaped curve). Instead, the difference between the two loops can be observed by studying the graph of the function (Figure 2(b)). Compared to the right loop, the basin inside the left loop is shallower and the valley nearby is closer to the sea level (zero).
We propose to measure the significance of a component of interest, C, as the minimal amount of perturbation the underlying classifier f needs in order to "shrug off" C from the zero-valued level set. Formally, we define the robustness for each connected component of the boundary as follows.
Definition 1 (Robustness).
The robustness of C is ρ(C) = min ||f − h||∞, minimized over all perturbed functions h such that C is not a connected component of the zero-valued level set of h. The distance between f and its perturbed version h is measured via the L∞ norm, i.e., ||f − h||∞ = max_{x ∈ X} |f(x) − h(x)|.
In the example of Figure 2, there are two options to perturb f so that the left loop can be eliminated:
Option 1. Remove the left loop completely by increasing the function values of all points within it to ε, where ε is an infinitesimally small positive value. For the new function f1, the zero-valued level set consists only of the U-shaped curve and the right loop. See Figure 3(a) and (b) for a zoomed-in view of f1 and its graph. In this case, the expense is simply ε plus the depth of the basin, i.e., the absolute function value |f(m)| of the local minimum m inside the left loop (left yellow marker in Figure 2(b)). The actual expense is |f(m)| + ε.
Option 2. Merge the left loop with the U-shaped curve by finding a path connecting them and lowering the function values along the path to −ε. There are many paths achieving this goal. But note that all paths connecting them have to go at least as high as the nearby saddle point s (left green marker in Figure 2(b)). Therefore, we choose a path passing through the saddle point and having the saddle as its highest point. By changing the function values along the path to −ε, we get the new function f2. In the zero-valued level set of f2, the left loop is merged with the U-shaped curve via a "pipe". See Figure 3(c) and (d) for a zoomed-in view of f2 and its graph. In this case, the expense is ε plus the height of the highest point on the path, namely, the function value f(s) of the saddle point; the actual expense is f(s) + ε.
To minimize the cost of removing the left loop, we choose the cheaper of the two options, here the second one. The corresponding expense gives us the robustness of this left component, f(s) + ε. In practice, and for the rest of the paper, we drop ε for convenience. Note that the robustness of the right loop is much higher, as the values of its associated critical points are further away from zero. In fact, its robustness equals the smaller of the two absolute critical values associated with it.
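To make the two options concrete, here is a minimal illustrative sketch (not the paper's code; all numbers are made up) of how the robustness of a component follows from its two associated critical values:

```python
# Illustrative sketch (not the paper's code): the robustness of a boundary
# component is the cheaper of the two repair options -- filling the basin
# (cost |f(minimum)|) or lowering a path over the saddle (cost |f(saddle)|).

def robustness(f_min, f_saddle):
    """Robustness of a component given the values of its associated
    minimum and saddle critical points (epsilon dropped)."""
    return min(abs(f_min), abs(f_saddle))

# Made-up critical values mimicking Figure 2: the left loop has a shallow
# basin and a low nearby saddle; the right loop has a deep basin and a
# high saddle, hence a much larger robustness.
left_loop = robustness(f_min=-0.10, f_saddle=0.05)
right_loop = robustness(f_min=-0.90, f_saddle=0.80)
```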
[Figure 3]
In this example, we observe that the robustness of a component crucially depends on the function values of two critical points, a minimum m and a saddle point s. This is not a coincidence: the pairing (m, s) is in fact a so-called persistence pairing computed based on the theory of persistent homology (Edelsbrunner et al., 2002; Carlsson et al., 2009; Bendich et al., 2013). We skip the description here and refer the reader to the supplemental material for details. We provide the following theorem, whose proof can be found in the supplemental material.
Theorem 2.1.
Let f be a Morse function defined on a d-dimensional hypercube X. Then there is a set P of pairings of critical points of f such that there is a one-to-one correspondence between the set of connected components of the boundary and the pairings in P. In particular, under this bijection, suppose a component C corresponds to a pair of critical points (c1, c2). Then its robustness is ρ(C) = min(|f(c1)|, |f(c2)|).
Furthermore, P can be computed via the so-called 0-dimensional persistent homology induced by the sublevel set filtrations w.r.t. the function f and w.r.t. the function −f, respectively.
The second part of the theorem outlines an efficient algorithm for computing robustness. Below we present the details of the algorithm, and discuss its complexity.
Algorithm. According to the theorem, we need to compute 0-dimensional persistent homology for both f and −f in order to collect all the necessary pairings of critical points. We first describe how to compute the portion of pairings in P coming from the 0-dimensional persistent homology w.r.t. f, denoted P_f. A symmetric procedure is then performed for −f to compute P_{−f}. For computation purposes, we need a discretization of the domain, and the classifier function is evaluated at the vertices of this discretization. For a low-dimensional feature space, e.g., 2D, we take a uniform sampling of the domain and evaluate the function at all samples. A grid graph G is built using these samples as vertices, and we evaluate the function value of f at all vertices of G. See Figure 4(a).
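The uniform 2-D sampling might look like the following sketch (resolution and names are our own, not from the paper; `clf` stands in for any scalar classifier function):

```python
# Hedged sketch: discretize the unit square into a uniform grid and
# evaluate the classifier at every grid vertex.
import numpy as np

def grid_vertices(resolution):
    """Vertices of a uniform resolution x resolution grid on [0, 1]^2."""
    xs = np.linspace(0.0, 1.0, resolution)
    gx, gy = np.meshgrid(xs, xs)
    return np.stack([gx.ravel(), gy.ravel()], axis=1)   # (res*res, 2)

def evaluate_on_grid(clf, resolution):
    """Evaluate a scalar function at all grid vertices."""
    V = grid_vertices(resolution)
    return V, np.array([clf(p) for p in V])             # vertex values of f

# Tiny example with a linear toy classifier (illustrative only).
V, vals = evaluate_on_grid(lambda p: p[0] - p[1], resolution=3)
```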
Next, we build a merging tree as well as a collection of pairings P_f as follows. We sort all vertices according to their function values, and add the vertices one by one according to this order (from small to large). At any moment, we maintain the spanning forest of all the vertices swept so far. Furthermore, each tree in the spanning forest is represented by the global minimum within it. When two trees T1 and T2 (associated with global minima m1 and m2, respectively) merge upon the processing of a node v, the resulting tree is represented by the lower of the two minima m1 and m2, say m1, and we add the pairing (m2, v) to P_f. Intuitively, the tree T2 is created when we sweep past m2, and is "killed" at v (merged into an "older" tree created at m1 with f(m1) < f(m2)). After all vertices are added, the merging tree is constructed. See Figure 4(b), where the height of the tree nodes corresponds to their function values. The entire process can be done by a standard union-find data structure, with a slight modification to maintain the minimum of each set (tree), in O((n + m) α(n)) time once the vertices are sorted (Edelsbrunner and Harer, 2010), where n and m are the numbers of vertices and edges of the grid graph G and α(·) is the inverse Ackermann function.
This merging tree and the set of pairings encode all the information we need. If we cut the tree at level zero, the remaining subtrees correspond one-to-one to the different connected components of the boundary: each pair in P_f whose minimum lies below level zero and whose merging vertex lies above it corresponds to one component of the separation boundary, since the tree containing it was created below zero but merged above zero. In Figure 4(b), the green, yellow, and turquoise subtrees correspond to the three connected components of the boundary of the function in Figure 4(a). Notice that both the green and the turquoise trees/components have their corresponding pairings in P_f. However, the yellow subtree/component does not have a pairing in P_f, as it is never merged into another tree. The pairing for the yellow component pairs the global minimum with the global maximum.
We perform the same procedure for the function −f and collect P_{−f}. Finally (via the proof of Theorem 2.1 in the supplemental material), the set of critical pairs corresponding to the set of components of the boundary is P = P_f ∪ P_{−f}. The complexity of the algorithm is O(n log n + (n + m) α(n)): the first term comes from sorting all vertices of G, and the second term is due to the merge-tree building using union-find.
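A minimal sketch of this merging-tree construction (our own illustrative implementation, not the paper's code; it uses union-find with path halving and minimum tracking, and skips trivial zero-persistence pairs):

```python
# Illustrative implementation of the 0-dim persistence pairing sketched
# above. Vertices are swept in increasing order of function value; each
# tree is represented by its global minimum; when two trees merge at a
# vertex v, the younger minimum is paired with v.

def persistence_pairs(values, edges):
    """values: dict vertex -> f(vertex); edges: iterable of (u, v) pairs.
    Returns the pairings (birth minimum, merging vertex), i.e., P_f."""
    adj = {v: [] for v in values}
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)

    parent = {v: v for v in values}
    rep_min = {v: v for v in values}        # minimum representing each tree

    def find(v):                            # union-find with path halving
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v

    pairs, added = [], set()
    for v in sorted(values, key=lambda u: values[u]):   # sweep from below
        added.add(v)
        for u in adj[v]:
            if u not in added:
                continue
            ru, rv = find(u), find(v)
            if ru == rv:
                continue
            if values[rep_min[ru]] > values[rep_min[rv]]:
                ru, rv = rv, ru             # rv now holds the younger minimum
            if rep_min[rv] != v:            # skip trivial (born-at-v) pairs
                pairs.append((rep_min[rv], v))
            parent[rv] = ru                 # merge younger tree into older
    return pairs

# Tiny example: a path a - b - c whose values form a double well.
pairs = persistence_pairs({'a': 0.0, 'b': 2.0, 'c': 1.0},
                          [('a', 'b'), ('b', 'c')])
```

Note that the global minimum never dies and stays unpaired here (cf. the yellow subtree above); the symmetric run on −f would yield P_{−f}.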
The grid-graph discretization is only feasible for a low-dimensional feature space. For high dimensions, we compute a discretization of the feature space using a k-nearest-neighbor (KNN) graph: the nodes of this graph are the training data points. Thus the extracted critical points can only be training data points. We then perform the same procedure as described above to compute P using this graph. Our choice is justified by experimental evidence that the KNN graph provides a sufficient approximation for the topological computation in the context of clustering (Chazal et al., 2013; Ni et al., 2017).

3 Topological Penalty and Gradient
Based on the robustness measure, we introduce our topological penalty below. To use it in a learning context, a crucial step is to derive the gradient. However, the mapping from input data to persistence pairings (P in Theorem 2.1) is highly nonlinear, without an explicit analytical representation. Hence it is not clear how to compute the gradient of a topological penalty function in its original form. Our key insight is that if we approximate the classifier function by a piecewise linear function, then we can derive gradients for the penalty function and perform gradient-descent optimization. Our topological penalty is implemented on a kernel logistic regression classifier, and we also show how to extend it to multi-label settings.
Given a data set {(xi, yi)} and a classifier f parameterized by β, we define the objective function to optimize as the weighted sum of the per-data loss and our topological penalty:

L(β) = Σi ℓ(yi, f(xi; β)) + λ T(β),    (3.1)

in which λ is the weight of the topological penalty T(β), and ℓ is a standard per-data loss, e.g., the cross-entropy loss, quadratic loss, or hinge loss.
Our topological penalty, T(β), aims to eliminate connected components of the classifier boundary. In the example of Figure 2(a), it may help eliminate both the left and the right loops, while leaving the U-shaped curve alone, as it is the most robust one. Recall that each topological structure C of the classification boundary is associated with two critical points c1 and c2, and its robustness is ρ(C) = min(|f(c1)|, |f(c2)|). We define the topological penalty in Equation (3.1) as the sum of squared robustness, formally,

T(β) = Σ_{C ∈ S} ρ(C)²,

where S is the set of all connected components of the boundary except the most robust one. In Figure 2(a), S consists of only the left and the right loops. We do not include the most robust component, as there should be at least one component left in the classifier boundary.
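Given the critical pairings, the penalty itself is a short computation; the sketch below is our own, with made-up critical values, and drops the most robust component as described:

```python
# Illustrative sketch: sum of squared robustness over all boundary
# components except the most robust one (which is kept in the boundary).

def topo_penalty(pairs, f):
    """pairs: one (c1, c2) critical pairing per boundary component;
    f: dict critical point -> function value."""
    rob = [min(abs(f[c1]), abs(f[c2])) for c1, c2 in pairs]
    rob.remove(max(rob))                    # spare the most robust component
    return sum(r * r for r in rob)

# Made-up critical values mimicking Figure 2(a): the U-shaped curve
# (m3, s3) is the most robust and is excluded; only the loops are penalized.
f_vals = {'m1': -0.1, 's1': 0.05, 'm2': -0.9, 's2': 0.8,
          'm3': -2.0, 's3': 1.5}
T = topo_penalty([('m1', 's1'), ('m2', 's2'), ('m3', 's3')], f_vals)
```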
Gradient. A crucial yet challenging task is to compute the gradient of such a topological penalty. In fact, there has not previously been any gradient computation for topology-inspired measurements. A major challenge is the lack of a closed-form solution for the critical points of any nontrivial function. Previous results show that even a simple mixture of isotropic Gaussians can have exponentially many critical points (Edelsbrunner et al., 2013; Carreira-Perpiñán and Williams, 2003).
In this paper, we propose a solution that circumvents the direct computation of critical points in the continuous domain. The key idea is to use a piecewise linear approximation of the classifier function. Recall that we discretize the feature space into a grid graph G and evaluate classifier function values only at a finite set of locations/points. Now consider the piecewise linear function f̂ which agrees with f at all sample points of G but linearly interpolates along edges. We show that, restricted to such piecewise linear functions, the gradient of T is indeed computable.

Theorem 3.1.
Using the piecewise linear approximation f̂, the topological penalty T is differentiable almost everywhere.
Proof.
For the piecewise linear approximation f̂, all critical points have to come from the set V of vertices of the discretization. Their pairings and correspondence to the connected components can be directly computed using the algorithm in Section 2.
We first assume f̂ has distinct function values, and distinct absolute function values, at all points of V. Let δ be a lower bound on the pairwise differences between the absolute function values of elements of V, as well as on the absolute function values of all vertices. To prove our theorem, we show that there exists a small neighborhood of the function f̂ so that, for any function in this neighborhood, the critical points and their pairings remain unchanged. To see this, we note that any function in such a neighborhood of f̂ is also a piecewise linear function realized on the same graph G. We define the neighborhood to be an open ball of radius δ/2 in the L∞ norm, formally, B = {ĝ : ||f̂ − ĝ||∞ < δ/2}. For any ĝ ∈ B and any u, v ∈ V, we have the following three facts:

Fact 1. f̂(u) < f̂(v) if and only if ĝ(u) < ĝ(v).
Fact 2. f̂(u) < 0 if and only if ĝ(u) < 0, and f̂(u) > 0 if and only if ĝ(u) > 0.
Fact 3. |f̂(u)| < |f̂(v)| if and only if |ĝ(u)| < |ĝ(v)|.
The first two facts guarantee that the ordering of the elements of V induced by their function values is the same for f̂ and ĝ. Consequently, the filtrations of all elements of V induced by f̂ and ĝ are the same. By the definition of persistent homology, the persistence pairs (of critical points) are identical for f̂ and ĝ. In other words, the pair (c1, c2) associated with each connected component C is the same for both f̂ and ĝ.
Furthermore, the third fact guarantees that, for each pair, |f̂(c1)| < |f̂(c2)| if and only if |ĝ(c1)| < |ĝ(c2)|. Hence if, for f̂, c1 is the critical point that accounts for the robustness, i.e., ρ(C) = |f̂(c1)|, then c1 also accounts for ρ(C) for the function ĝ. Denote this critical point by c(C). Thus T = Σ_C f̂(c(C))², in which the critical point c(C) remains constant for any ĝ within the small neighborhood of f̂.
With constant c(C)'s, and knowing that f̂ and f agree at all elements of V, the gradient is straightforward:

∇β T = Σ_C 2 f(c(C); β) ∇β f(c(C); β).

Note that this gradient is intractable without the surrogate piecewise linear function f̂; for the original classifier f, the critical point c(C) changes with β in a complex manner.
Finally, we note that it is possible for elements of V to have the same function values or the same absolute function values. In such cases, c(C) may not be uniquely defined and the gradient does not exist. However, these events constitute a measure-zero subspace of functions and do not happen generically. In other words, T is a piecewise smooth loss function over the space of all piecewise linear functions, and it is differentiable almost everywhere. ∎

Intuition of the gradient. During the optimization process, we take the opposite direction of the gradient, i.e., −∇β T. For each component C, taking this direction essentially pushes the function value of the critical point c(C) closer to zero. In the example of Figure 2, for the left loop, when c(C) is the saddle point (left green marker), gradient descent pushes its function value closer to zero, effectively dragging the path down as in Figure 3(c) and (d). If instead c(C) is the minimum, gradient descent increases the function value of the minimum, effectively filling the basin as in Figure 3(a) and (b).
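For a score that is linear in the parameters, the penalty gradient with fixed critical points reduces to a single matrix product. This is our own simplified sketch under that assumption (a sigmoid-based model, as in the kernel machine that follows, adds one extra chain-rule factor):

```python
# Sketch of the penalty gradient for fixed critical points c(C): with
# T = sum_C f(c(C); beta)^2 and f(x; beta) = phi(x) @ beta linear in beta,
# the chain rule gives  dT/dbeta = sum_C 2 f(c(C)) * phi(c(C)).
import numpy as np

def topo_grad(crit_feats, beta):
    """crit_feats: (m, d) feature rows phi(c(C)) of the m penalized
    critical points. Returns dT/dbeta as a (d,) vector."""
    vals = crit_feats @ beta            # f(c(C)) for each component
    return 2.0 * crit_feats.T @ vals    # sum_C 2 f(c(C)) grad_beta f(c(C))

# Tiny made-up example with two critical points in a 2-D parameter space.
g = topo_grad(np.array([[1.0, 0.0], [0.0, 2.0]]), np.array([0.5, 0.25]))
```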
Instantiating on kernel machines. In principle, our topological penalty can be incorporated into any classifier. Here, we combine it with a kernel logistic regression classifier to demonstrate its advantage. We first present the details for a binary classifier and then extend it to multi-label settings. For convenience, we abuse notation, dropping β and writing simply f.
In kernel logistic regression, the prediction function is f(x) = σ(βᵀκ(x)), where σ is the sigmoid function and the n-dimensional feature κ(x) consists of the Gaussian kernel distances between x and the training data. The per-data loss is the standard cross-entropy loss, and its gradient can be found in a standard textbook (Bishop, 2006).
Next we derive the gradient for the topological penalty. First, we need to modify the classifier slightly. Notice that the range of f is between zero and one, and the prediction is positive if f(x) > 1/2. To fit our setting, in which the zero-valued level set is the classification boundary, we use the new function g(x) = f(x) − 1/2 as the input for the topological penalty. The gradient then follows from the chain rule: since ∇β g(x) = f(x)(1 − f(x)) κ(x), we have ∇β T = Σ_C 2 g(c(C)) f(c(C)) (1 − f(c(C))) κ(c(C)).
Our overall algorithm repeatedly computes the gradient of the objective function (the gradient of the cross-entropy loss plus the gradient of the topological penalty) and updates the parameters accordingly until convergence. At each iteration, to compute the gradient of the topological penalty, we compute the critical points c(C) of all connected components using the algorithm introduced in Section 2.
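The overall loop can be sketched as follows. This is our own hedged sketch: `compute_critical_points` and `topo_grad` stand in for the persistence-based procedures described above, and the kernel matrix plays the role of the Gaussian-kernel features:

```python
# Hedged sketch of the training loop: at each step, recompute the critical
# points on the piecewise linear surrogate, then take a gradient step on
# cross-entropy + lam * topological penalty.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train(K, y, lam, lr, steps, compute_critical_points, topo_grad):
    """K: (n, n) kernel matrix; y in {0, 1}^n."""
    beta = np.zeros(K.shape[0])
    for _ in range(steps):
        grad_ce = K.T @ (sigmoid(K @ beta) - y)       # cross-entropy grad
        crit = compute_critical_points(beta)          # re-paired each step
        beta -= lr * (grad_ce + lam * topo_grad(crit, beta))
    return beta

# Smoke test with the topological term switched off (lam = 0).
beta = train(np.eye(2), np.array([0.0, 1.0]), lam=0.0, lr=0.5, steps=200,
             compute_critical_points=lambda b: None,
             topo_grad=lambda c, b: 0.0)
```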
Multi-label settings. For multi-label classification with K classes, we use the multinomial logistic regression classifier with parameters β1, …, βK. The per-data loss is again the standard cross-entropy loss. For the topological penalty, we create K different scalar functions g1, …, gK, one per class in a one-vs-all fashion; a point x is classified with the label whose score is largest. The zero-valued level set of gk is the classification boundary between label k and all others. Summing the total robustness penalty over all K functions gives us the multi-label topological penalty. We omit the derivation of the gradients due to space constraints. The computation is similar to the binary setting, except that at each iteration we need to compute the persistence pairs for all K functions.
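The multi-label penalty is then simply a sum over the one-vs-all boundaries; an illustrative sketch (our names and numbers, not the paper's):

```python
# Illustrative sketch: sum the per-boundary topological penalty over the
# K one-vs-all score functions g_1, ..., g_K.

def multilabel_penalty(robustness_per_class):
    """robustness_per_class: for each class k, the list of robustness
    values of the penalized components of g_k's zero level set (most
    robust one already removed). Returns the total sum-of-squares."""
    return sum(r * r for rob in robustness_per_class for r in rob)

# Made-up robustness values for a 3-class problem; the third class has a
# single boundary component, so nothing is penalized for it.
T = multilabel_penalty([[0.1, 0.2], [0.3], []])
```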
4 Experiments
We test our method (TopoReg) on multiple synthetic datasets and real-world datasets. The weight of the topological penalty and the Gaussian kernel width are tuned via cross-validation. Computing topological information requires a discretization of the domain. For 2D data, we normalize the data to fit the unit square and discretize the square into a grid with 300 × 300 vertices. For high-dimensional data, we use the KNN graph described in Section 2.

Synthetic
KNN  LG  SVM  EE  DGR  KLR  TopoReg  
Blob2 (500,5)  7.61  8.20  7.61  8.41  7.41  7.80  7.20 
Moons (500,2)  20.62  20.00  19.80  19.00  19.01  18.83  18.63 
Moons (1000,2,Noise 0%)  19.30  19.59  19.89  17.90  19.20  17.80  17.60 
Moons (1000,2,Noise 5%)  21.60  19.29  19.59  22.00  22.30  19.00  19.00 
Moons (1000,2,Noise 10%)  21.10  19.19  19.89  24.40  26.30  20.00  19.70 
Moons (1000,2,Noise 20%)  23.00  19.79  19.40  30.60  30.20  19.50  19.40 
AVERAGE  18.87  17.68  17.70  20.39  20.74  21.63  16.92 
UCI  
KNN  LG  SVM  EE  DGR  KLR  TopoReg  
SPECT (267,22)  17.57  17.20  18.68  16.38  23.92  18.31  17.54 
Congress (435,16)  5.04  4.13  4.59  4.59  4.80  4.12  4.58 
Molec. (106,57)  24.54  19.10  19.79  17.25  16.32  19.10  12.62 
Cancer (286,9)  29.36  28.65  28.64  28.68  31.42  29.00  28.31 
Vertebral (310,6)  15.47  15.46  23.23  17.15  13.56  12.56  12.24 
Energy (768,8)  0.78  0.65  0.65  0.91  0.78  0.52  0.52 
AVERAGE  15.46  14.20  15.93  14.16  15.13  13.94  11.80 
Biomedicine  
KNN  LG  SVM  EE  DGR  KLR  TopoReg  
KIRC (243,166)  30.12  28.87  32.56  31.38  35.50  31.38  26.81 
fMRI (1092,19)  46.70  74.91  74.08  82.51  31.32  34.07  33.24 
Table 1: The mean and standard deviation of the error rates of different methods.
Baselines.
We compare our method with several baselines: the k-nearest-neighbor classifier (KNN), logistic regression (LG), the Support Vector Machine (SVM), and Kernel Logistic Regression (KLR) with functional norms (ℓ1 and ℓ2) as regularizers. We also compare with two state-of-the-art methods based on geometric regularizers: the Euler's Elastica classifier (EE) (Lin et al., 2015) and the Classifier with Differential Geometric Regularization (DGR) (Bai et al., 2016). All relevant hyperparameters are tuned using cross-validation.
For every dataset and each method, we randomly divide the data into 6 folds. We then use each of the 6 folds as the testing set, while performing 5-fold cross-validation on the remaining 5 folds to find the best hyperparameters. Once the best hyperparameters are found, we train on all 5 folds and test on the testing set.
Data. In order to thoroughly evaluate the behavior of our model, especially in the large-noise regime, we created synthetic data with various noise levels. Besides feature-space noise, we also inject different levels of label noise, i.e., we randomly perturb the labels of 0%, 5%, 10%, and 20% of the training data. We also evaluate our method on real-world data. We use several UCI datasets with various sizes and dimensions to test our method (Lichman, 2013). In addition, we use two biomedical datasets. The first is the kidney renal clear cell carcinoma (KIRC) dataset (Yuan et al., 2014), extracted from the Cancer Genome Atlas project (TCGA) (Sharpless et al., 2018). The features of this dataset are protein expressions measured on the MD Anderson Reverse Phase Protein Array Core platform (RPPA). The second dataset consists of task-evoked functional MRI images, with 19 dimensions (corresponding to activities at 19 brain ROIs) and 6 labels (corresponding to 6 different tasks) (Ni et al., 2018).
The results are reported in Table 1. We also report the average performance over each category (AVERAGE). The two numbers next to each dataset name are the data size n and the dimension d, respectively. The average running time of our method over all datasets is 2.08 seconds.
Discussions. Our method generally outperforms the existing methods on the datasets in Table 1. More importantly, our method consistently provides the best or close-to-best performance among all approaches tested. (For example, while EE performs well on some datasets, its performance can be significantly worse than the best on others.)
On synthetic data, we found that TopoReg has a bigger advantage on relatively noisy data. This is expected: our method provides a novel way to topologically simplify the global structure of the model without having to sacrifice too much of the model's flexibility. Meanwhile, to cope with large noise, the other baseline methods have to enforce an overly strong global regularization in a structure-agnostic manner. We also observe that TopoReg remains relatively stable when the label noise is large, while the other geometric regularizers are much more sensitive to label noise. See Figure 5 for a comparison. We suspect this is because the other geometric regularizers are more sensitive to initialization and tend to get stuck in bad local optima.
References

Adams et al. (2017)
Adams, H., Emerson, T., Kirby, M., Neville, R., Peterson, C., Shipman, P.,
Chepushtanova, S., Hanson, E., Motta, F., and Ziegelmeier, L. (2017).
Persistence images: A stable vector representation of persistent
homology.
The Journal of Machine Learning Research
, 18(1):218–252.  Bai et al. (2016) Bai, Q., Rosenberg, S., Wu, Z., and Sclaroff, S. (2016). Differential geometric regularization for supervised learning of classifiers. In International Conference on Machine Learning, pages 1879–1888.
 Belkin et al. (2006) Belkin, M., Niyogi, P., and Sindhwani, V. (2006). Manifold regularization: A geometric framework for learning from labeled and unlabeled examples. Journal of Machine Learning Research, 7:2399–2434.
 Bendich et al. (2010) Bendich, P., Edelsbrunner, H., and Kerber, M. (2010). Computing robustness and persistence for images. IEEE transactions on visualization and computer graphics, 16(6):1251–1260.
 Bendich et al. (2013) Bendich, P., Edelsbrunner, H., Morozov, D., Patel, A., et al. (2013). Homology and robustness of level and interlevel sets. Homology, Homotopy and Applications, 15(1):51–72.
 Bishop (2006) Bishop, C. M. (2006). Pattern Recognition and Machine Learning, volume 4. springer New York.
 Bubenik (2015) Bubenik, P. (2015). Statistical topological data analysis using persistence landscapes. The Journal of Machine Learning Research, 16(1):77–102.
 Cai and Sowmya (2007) Cai, X. and Sowmya, A. (2007). Level learning set: A novel classifier based on active contour models. In Proc. European Conf. on Machine Learning (ECML), pages 79–90.
 Carlsson and de Silva (2010) Carlsson, G. and de Silva, V. (2010). Zigzag persistence. Foundations of Computational Mathematics, 10(4):367–405.
 Carlsson et al. (2009) Carlsson, G., de Silva, V., and Morozov, D. (2009). Zigzag persistent homology and real-valued functions. In Proc. 25th Annu. ACM Sympos. Comput. Geom., pages 247–256.
 Carreira-Perpiñán and Williams (2003) Carreira-Perpiñán, M. Á. and Williams, C. K. (2003). On the number of modes of a Gaussian mixture. In International Conference on Scale-Space Theories in Computer Vision, pages 625–640. Springer.
 Carrière et al. (2017) Carrière, M., Cuturi, M., and Oudot, S. (2017). Sliced wasserstein kernel for persistence diagrams. In International Conference on Machine Learning, pages 664–673.

Chazal et al. (2014) Chazal, F., Glisse, M., Labruère, C., and Michel, B. (2014). Convergence rates for persistence diagram estimation in topological data analysis. In International Conference on Machine Learning (ICML), pages 163–171.
 Chazal et al. (2013) Chazal, F., Guibas, L. J., Oudot, S. Y., and Skraba, P. (2013). Persistence-based clustering in Riemannian manifolds. Journal of the ACM (JACM), 60(6):41.
 Chen et al. (2011) Chen, C., Freedman, D., and Lampert, C. H. (2011). Enforcing topological constraints in random field image segmentation. In Computer Vision and Pattern Recognition (CVPR), 2011 IEEE Conference on, pages 2089–2096. IEEE.
 Cohen-Steiner et al. (2009) Cohen-Steiner, D., Edelsbrunner, H., and Harer, J. (2009). Extending persistence using Poincaré and Lefschetz duality. Foundations of Computational Mathematics, 9(1):79–103.
 Edelsbrunner et al. (2013) Edelsbrunner, H., Fasy, B. T., and Rote, G. (2013). Add isotropic gaussian kernels at own risk: More and more resilient modes in higher dimensions. Discrete & Computational Geometry, 49(4):797–822.
 Edelsbrunner and Harer (2010) Edelsbrunner, H. and Harer, J. (2010). Computational Topology: an Introduction. AMS.
 Edelsbrunner et al. (2000) Edelsbrunner, H., Letscher, D., and Zomorodian, A. (2000). Topological persistence and simplification. In Foundations of Computer Science, 2000. Proceedings. 41st Annual Symposium on, pages 454–463. IEEE.
 Edelsbrunner et al. (2002) Edelsbrunner, H., Letscher, D., and Zomorodian, A. (2002). Topological persistence and simplification. Discrete Comput. Geom., 28:511–533.
 Hofer et al. (2017) Hofer, C., Kwitt, R., Niethammer, M., and Uhl, A. (2017). Deep learning with topological signatures. In Advances in Neural Information Processing Systems, pages 1633–1643.
 Krishnapuram et al. (2005) Krishnapuram, B., Carin, L., Figueiredo, M., and Hartemink, A. (2005). Learning sparse Bayesian classifiers: multiclass formulation, fast algorithms, and generalization bounds. IEEE Trans. Pattern Anal. Mach. Intell., 32.
 Kusano et al. (2016) Kusano, G., Hiraoka, Y., and Fukumizu, K. (2016). Persistence weighted gaussian kernel for topological data analysis. In International Conference on Machine Learning, pages 2004–2013.
 Kwitt et al. (2015) Kwitt, R., Huber, S., Niethammer, M., Lin, W., and Bauer, U. (2015). Statistical topological data analysis - a kernel perspective. In Advances in Neural Information Processing Systems, pages 3070–3078.
 Lichman (2013) Lichman, M. (2013). UCI machine learning repository.
 Lin et al. (2015) Lin, T., Xue, H., Wang, L., Huang, B., and Zha, H. (2015). Supervised learning via euler’s elastica models. Journal of Machine Learning Research, 16:3637–3686.
 Lin et al. (2012) Lin, T., Xue, H., Wang, L., and Zha, H. (2012). Total variation and Euler’s elastica for supervised learning. Proc. International Conf. on Machine Learning (ICML).
 Ng (2004) Ng, A. Y. (2004). Feature selection, L1 vs. L2 regularization, and rotational invariance. In Proceedings of the Twenty-First International Conference on Machine Learning, page 78. ACM.
 Ni et al. (2017) Ni, X., Quadrianto, N., Wang, Y., and Chen, C. (2017). Composing tree graphical models with persistent homology features for clustering mixed-type data. In International Conference on Machine Learning, pages 2622–2631.
 Ni et al. (2018) Ni, X., Yan, Z., Wu, T., Fan, J., and Chen, C. (2018). A region-of-interest-reweight 3D convolutional neural network for the analytics of brain information processing. In International Conference on Medical Image Computing and Computer-Assisted Intervention, pages 302–310. Springer.
 Nowozin and Lampert (2009) Nowozin, S. and Lampert, C. H. (2009). Global connectivity potentials for random field models. In Computer Vision and Pattern Recognition, 2009. CVPR 2009. IEEE Conference on, pages 818–825. IEEE.
 Osher and Fedkiw (2006) Osher, S. and Fedkiw, R. (2006). Level set methods and dynamic implicit surfaces, volume 153. Springer Science & Business Media.
 Oswald et al. (2014) Oswald, M. R., Stühmer, J., and Cremers, D. (2014). Generalized connectivity constraints for spatiotemporal 3d reconstruction. In European Conference on Computer Vision, pages 32–46. Springer.
 Ramamurthy et al. (2018) Ramamurthy, K., Varshney, K. R., and Mody, K. (2018). Topological data analysis of decision boundaries with application to model selection.
 Reininghaus et al. (2015) Reininghaus, J., Huber, S., Bauer, U., and Kwitt, R. (2015). A stable multiscale kernel for topological machine learning. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 4741–4748.
 Schölkopf and Smola (2002) Schölkopf, B. and Smola, A. J. (2002). Learning with kernels: support vector machines, regularization, optimization, and beyond. MIT press.
 Sharpless et al. (2018) Sharpless, N. E., et al. (2018). TCGA: The Cancer Genome Atlas. Accessed: 10/01/2018.
 Stühmer et al. (2013) Stühmer, J., Schröder, P., and Cremers, D. (2013). Tree shape priors with connectivity constraints using convex relaxation on general graphs. In ICCV, volume 13, pages 1–8.
 Szeliski (2010) Szeliski, R. (2010). Computer vision: algorithms and applications. Springer Science & Business Media.
 Varshney and Willsky (2010) Varshney, K. and Willsky, A. (2010). Classification using geometric level sets. Journal of Machine Learning Research, 11:491–516.
 Varshney and Ramamurthy (2015) Varshney, K. R. and Ramamurthy, K. N. (2015). Persistent topology of decision boundaries. In Acoustics, Speech and Signal Processing (ICASSP), 2015 IEEE International Conference on, pages 3931–3935. IEEE.
 Vicente et al. (2008) Vicente, S., Kolmogorov, V., and Rother, C. (2008). Graph cut based image segmentation with connectivity priors. In Computer vision and pattern recognition, 2008. CVPR 2008. IEEE conference on, pages 1–8. IEEE.
 Yuan et al. (2014) Yuan, Y., Van Allen, E. M., Omberg, L., Wagle, N., AminMansour, A., Sokolov, A., Byers, L. A., Xu, Y., Hess, K. R., Diao, L., et al. (2014). Assessing the clinical utility of cancer genomic and proteomic data across tumor types. Nature biotechnology, 32(7):644.
 Zeng et al. (2008) Zeng, Y., Samaras, D., Chen, W., and Peng, Q. (2008). Topology cuts: A novel min-cut/max-flow algorithm for topology preserving segmentation in n-d images. Computer Vision and Image Understanding, 112(1):81–90.
 Zhou and Schölkopf (2005) Zhou, D. and Schölkopf, B. (2005). Regularization on discrete spaces. In Pattern Recognition, pages 361–368. Springer.
 Zhu et al. (2016) Zhu, X., Vartanian, A., Bansal, M., Nguyen, D., and Brandl, L. (2016). Stochastic multiresolution persistent homology kernel. In IJCAI, pages 2449–2457.
 Zomorodian and Carlsson (2005) Zomorodian, A. and Carlsson, G. (2005). Computing persistent homology. Discrete & Computational Geometry, 33(2):249–274.
Appendix A Background: Persistent Homology
Persistent homology (Edelsbrunner et al., 2002; Zomorodian and Carlsson, 2005; Carlsson and de Silva, 2010; Carlsson et al., 2009) is a fundamental recent development in the field of computational topology, underlying many topological data analysis methods. Below, we provide an intuitive description to help explain its role in measuring the robustness of topological features in the zero level set (the separation boundary) of a classifier function $f$.
Suppose we are given a space $X$ and a continuous function $f$ defined on it. To characterize $X$ and $f$, imagine we sweep the domain in increasing function values. This gives rise to the following growing sequence of sublevel sets:
$$X_{t_1} \subseteq X_{t_2} \subseteq \cdots \subseteq X_{t_m} = X, \qquad t_1 \le t_2 \le \cdots \le t_m,$$
where $X_t = f^{-1}((-\infty, t])$ is the sublevel set of $f$ at value $t$. We call this the sublevel set filtration of $X$ w.r.t. $f$, which intuitively inspects $X$ from the point of view of the function $f$. During the sweeping process, sometimes new topological features (homology classes), say a new component or a handle, are created; sometimes an existing one is killed, say a component disappears or merges into another one, or a void is filled. It turns out that these changes can only happen when we sweep through a critical point of the function $f$. Persistent homology tracks these topological changes and pairs up critical points into a collection of persistence pairings $\{(p_i, q_i)\}$. Each pair $(p_i, q_i)$ consists of the critical points at which a certain topological feature is created and killed. Their function values $f(p_i)$ and $f(q_i)$ are referred to as the birth time and death time of this feature. The corresponding collection of (birth time, death time) pairs is called the persistence diagram, formally $\mathrm{Dg}(f) = \{(f(p_i), f(q_i))\}$. For each persistence pairing $(p_i, q_i)$, its persistence is defined to be $\mathrm{pers}(p_i, q_i) = |f(q_i) - f(p_i)|$, which measures the lifetime (and thus the importance) of the corresponding topological feature w.r.t. $f$. A simple 1D example is given in Figure 6.
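To make the sweeping process concrete, the following is a minimal, self-contained Python sketch (our illustration, not part of the paper's implementation) of 0-dimensional sublevel-set persistence for a function sampled on a 1D grid: a union-find sweep in increasing function value, where each local minimum creates a component and each merge kills the younger component per the elder rule.

```python
import math

def sublevel_persistence_0d(values):
    """0-dimensional sublevel-set persistence of a function sampled
    at consecutive points of a 1D grid.

    Sweep sample points in increasing function value: a local minimum
    creates (births) a connected component; a merge of two components
    kills the one with the higher birth value (elder rule).
    Returns (birth, death) pairs; the component of the global minimum
    never dies (death = inf).
    """
    n = len(values)
    parent = [None] * n  # None = point not yet added to any component

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path compression
            i = parent[i]
        return i

    pairs = []
    # Process points from lowest to highest function value.
    for i in sorted(range(n), key=lambda i: values[i]):
        parent[i] = i  # new singleton component born at values[i]
        for j in (i - 1, i + 1):  # merge with already-added neighbors
            if 0 <= j < n and parent[j] is not None:
                ri, rj = find(i), find(j)
                if ri == rj:
                    continue
                # elder rule: the root with the higher birth value dies
                younger, older = (ri, rj) if values[ri] > values[rj] else (rj, ri)
                if values[younger] < values[i]:  # skip zero-persistence pairs
                    pairs.append((values[younger], values[i]))
                parent[younger] = older
    # essential class: the component born at the global minimum
    pairs.append((min(values), math.inf))
    return pairs
```

For the samples `[0.0, 2.0, 1.0, 3.0, 0.5, 4.0]`, the sketch pairs the local minima 1.0 and 0.5 with the merge values 2.0 and 3.0, while the component of the global minimum 0.0 persists forever.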
The above description is the traditional persistence (induced by the sublevel set filtration of $f$) introduced in Edelsbrunner et al. (2002), which we refer to as ordinary persistence in what follows. To capture all the topological features in the level sets (instead of sublevel sets) at all different threshold values, we use an extension of the aforementioned sublevel set persistence, called the level set zigzag persistence (Carlsson et al., 2009). Intuitively, we sweep the domain in increasing function values, but now track topological features of the level sets instead of the sublevel sets. The resulting set of persistence pairings and persistence diagram have analogous meanings: each pair of critical points corresponds to the creation and killing of some homological feature (e.g., connected components if we look at 0th-dimensional homological features) in the level set, and the corresponding pair of function values gives the birth / death times of this feature.
Sketch of proof for Theorem 2.1. Now for a classifier function $f$, given the level set zigzag persistence diagram and its corresponding set of persistence pairings w.r.t. $f$, we collect the pairings $(p, q)$ with $f(p) < 0 < f(q)$. Intuitively, each such pair corresponds to a 0D homological feature (connected component) that first appeared in a level set with value $f(p)$ below the zero level set $f^{-1}(0)$, persists through the zero level set, and dies only in the level set w.r.t. value $f(q)$. Thus each such pair corresponds to a distinct connected component in the zero level set $f^{-1}(0)$. Hence, intuitively, this set of persistence pairings maps bijectively to the set of connected components in the separation boundary, as claimed in Theorem 2.1. (One can argue this more formally by considering the 0th-dimensional level set zigzag persistence module and its decomposition into interval modules (Carlsson and de Silva, 2010): the rank of the 0th homology group of a specific level set, say $f^{-1}(t)$, equals the number of interval modules whose span covers $t$.) To remove such a component from the zero level set $f^{-1}(0)$, we either need to move down from $0$ to $f(p)$, or move up from $0$ to $f(q)$. The robustness of this component is thus $\min(-f(p), f(q)) = \min(|f(p)|, |f(q)|)$.
In general, the level set zigzag persistence is costly to compute (Carlsson et al., 2009), with running time depending on the total complexity $n$ of the discretized representation of the domain $X$. However, first, we only need the 0th-dimensional level set zigzag persistence. Furthermore, our domain $X$ is a hypercube (thus simply connected). Using Theorem 2 of Bendich et al. (2013), combined with the EP Symmetry Corollary of Carlsson et al. (2009), one can then show the following:
Let $\mathrm{Dg}_0(f)$ and $\mathrm{Dg}_0(-f)$ denote the ordinary 0-dimensional persistence diagrams w.r.t. the sublevel set filtrations of $f$ and of $-f$, respectively, and let $P_0(f)$ and $P_0(-f)$ denote their corresponding sets of persistence pairings. Given a persistence pair $(p, q)$, we say that the range of this pair covers $0$ if $f(p) < 0 < f(q)$. Set $\widetilde{P}_0(f) \subseteq P_0(f)$ and $\widetilde{P}_0(-f) \subseteq P_0(-f)$ to be the subsets of pairings whose range covers $0$ (w.r.t. $f$ and $-f$, respectively). (For example, in Figure 6(b), points in the red box correspond to $\widetilde{P}_0(f)$.) With this correspondence, each 0D topological feature (connected component) of the zero level set has a pairing of critical points, and the corresponding (birth time, death time) pair belongs to one of these diagrams. Let $\mathcal{Z}_0(f)$ denote the set of 0th-dimensional level set zigzag pairings whose range covers $0$. Then by Theorem 2 of Bendich et al. (2013), combined with the EP Symmetry Corollary of Carlsson et al. (2009), we have that
$$\mathcal{Z}_0(f) = \widetilde{P}_0(f) \cup \widetilde{P}_0(-f) \cup E_0(f),$$
where $E_0(f)$ consists of certain persistence pairs (whose range covers $0$) from the 0th and 1st dimensional extended subdiagrams induced by the extended persistent homology (Cohen-Steiner et al., 2009). However, since the domain $X$ is simply connected, its first homology group is trivial; hence there is no point in the 1st extended subdiagram. As $X$ is connected, there is only one point $(f_{\min}, f_{\max})$ in the 0th extended subdiagram, where $f_{\min}$ and $f_{\max}$ are the global minimum and global maximum of the function $f$, respectively. It then follows that
$$\mathcal{Z}_0(f) = \widetilde{P}_0(f) \cup \widetilde{P}_0(-f) \cup \{(f_{\min}, f_{\max})\}. \qquad \text{(A.1)}$$
Hence one can compute $\mathcal{Z}_0(f)$ by computing the 0th ordinary persistent homology induced by the sublevel set filtrations of $f$ and of $-f$, respectively. This finishes the proof of Theorem 2.1.
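This characterization translates directly into a procedure. Below is a minimal, self-contained Python sketch (our illustration, with a hypothetical function name and an assumed list-of-pairs diagram format, not the paper's implementation): given the 0-dimensional sublevel-set diagrams of the classifier function and of its negation, plus the global extrema, it collects the pairs whose range covers zero and reports the robustness of each boundary component as the smaller of the two function-value perturbations needed to remove it.

```python
def boundary_component_robustness(dg_f, dg_neg_f, f_min, f_max):
    """Given 0-dimensional sublevel-set persistence diagrams of f and
    of -f (each a list of (birth, death) pairs), return the robustness
    of each connected component of the zero level set of f.

    A pair whose range covers 0 (birth < 0 < death) contributes one
    boundary component; its robustness is min(|birth|, |death|).
    """
    robustness = []
    # pairs of f whose range covers 0
    for b, d in dg_f:
        if b < 0.0 < d:
            robustness.append(min(-b, d))
    # pairs of -f whose range covers 0 (values of -f; since b < 0 < d,
    # the robustness is likewise min(|b|, |d|) = min(-b, d))
    for b, d in dg_neg_f:
        if b < 0.0 < d:
            robustness.append(min(-b, d))
    # the essential pair (global min, global max) of f, if it spans 0
    if f_min < 0.0 < f_max:
        robustness.append(min(-f_min, f_max))
    return robustness
```

For instance, with diagrams `[(-0.5, 0.8), (0.1, 0.3)]` for f and `[(-0.2, 0.9)]` for -f, and extrema -1.0 and 2.0, only the pairs spanning zero contribute, giving robustness values 0.5, 0.2, and 1.0.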
Remark 1. Finally, we can naturally extend the above definition by considering persistence pairs and diagrams corresponding to the birth and death of higher-dimensional topological features, e.g., handles, voids, etc.