## 1 Introduction

Learning informative feature representations of symbolic data, such as text documents or graphs, is a key factor determining the success of downstream pattern recognition tasks. Recently, embedding data into hyperbolic space—a class of non-Euclidean spaces with constant negative curvature—has been receiving increasing attention due to its effectiveness in capturing latent hierarchical structure AlanisLobato16 ; Chamberlain17 ; DeSa18 ; Krioukov10 ; Nickel17 ; Papadopoulos15 . This capability is likely due to the key property of hyperbolic space that the amount of space grows *exponentially* with the distance from a reference point, in contrast to the slower, polynomial growth in Euclidean space. The geometry of tree-structured data, which similarly expands exponentially with distance from the root, can thus be accurately captured in hyperbolic space, but not in Euclidean space Krioukov10 .

Motivated by this observation, a number of recent studies have focused on developing effective algorithms for embedding data in hyperbolic space in various domains, including natural language processing DeSa18 ; Nickel17 and network science AlanisLobato16 ; Chamberlain17 ; Papadopoulos15 . Using hyperbolic embeddings with only a small number of dimensions, these methods were able to achieve superior performance in their respective downstream tasks (e.g., answering semantic queries of words or link prediction in complex networks) compared to their Euclidean counterparts. These results agree with the intuition that better accounting for the inherent geometry of the data can improve downstream predictions.

However, the current literature is largely limited regarding methods for standard pattern recognition tasks, such as classification and clustering, for data points that lie in hyperbolic space. Unless the task of interest calls for only rudimentary analysis of the embeddings, such as calculating the (hyperbolic) distances or angles between pairs of data points, practitioners are limited to applying algorithms that are designed for data points in Euclidean spaces. For example, when Chamberlain et al. Chamberlain17 set out to classify nodes in a graph after embedding them into hyperbolic space, they resorted to performing logistic regression on the embedding coordinates, which relies on decision boundaries that are linear in the Euclidean sense, but are somewhat arbitrary when viewed in the underlying hyperbolic space.

To enable principled analytic pipelines where the inherent geometry of the data is respected in an *end-to-end* fashion, we generalize linear support vector classifiers, which are one of the most widely-used methods for classification, to data points in hyperbolic space.
Despite the complexities of hyperbolic distance calculation, we prove that support vector classification in hyperbolic space can in fact be performed by solving a simple optimization problem that resembles the Euclidean formulation of SVM, elucidating the close connection between the two.
We experimentally demonstrate the superior performance of hyperbolic SVM over the Euclidean version on two types of simulated datasets (Gaussian point clouds and evolving scale-free networks) as well as real network datasets analyzed by Chamberlain et al. Chamberlain17 .

The rest of the paper is organized as follows. We review hyperbolic geometry and support vector classification in Sections 2 and 3 and introduce our method, hyperbolic SVM, in Section 4. We provide experimental evaluations of hyperbolic SVM in Section 5, and conclude with discussion and future directions in Section 6.

## 2 Review of Hyperbolic Space Models

While hyperbolic space cannot be isometrically embedded in Euclidean space, there are several useful models of hyperbolic geometry formulated as a subset of Euclidean space, each of which provides different insights into the properties of hyperbolic geometry Anderson06 . Our work makes use of three standard models of hyperbolic space—hyperboloid, Poincaré ball, and Poincaré half-space—as briefly described in the following.

Consider an $(n+1)$-dimensional real-valued space $\mathbb{R}^{n+1}$, equipped with an inner product of the form

$$x * y := x_1 y_1 - x_2 y_2 - \cdots - x_{n+1} y_{n+1}. \tag{1}$$

This is commonly known as the Minkowski space. The $n$-dimensional *hyperboloid model* $\mathbb{H}^n$ sits inside $\mathbb{R}^{n+1}$ as the upper half (one of the two connected components) of a unit “sphere” with respect to the Minkowski inner product:

$$\mathbb{H}^n := \{x \in \mathbb{R}^{n+1} : x * x = 1,\; x_1 > 0\}. \tag{2}$$

The distance between two points $x, y \in \mathbb{H}^n$ is defined as the length of the geodesic path on the hyperboloid that connects the two points, given by $d(x, y) = \cosh^{-1}(x * y)$. It is known that every geodesic curve (i.e., hyperbolic line) in $\mathbb{H}^n$ is an intersection between $\mathbb{H}^n$ and a 2D plane that goes through the origin in the ambient Euclidean space $\mathbb{R}^{n+1}$, and vice versa.
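As a concrete reference for these definitions, the following is a minimal NumPy sketch, assuming the sign convention $x * y = x_1 y_1 - x_2 y_2 - \cdots - x_{n+1} y_{n+1}$ adopted here; the `lift` helper is our own naming, not from the original text:

```python
import numpy as np

def minkowski(x, y):
    """Minkowski inner product x * y = x_1 y_1 - x_2 y_2 - ... - x_{n+1} y_{n+1}."""
    return x[0] * y[0] - np.dot(x[1:], y[1:])

def hyperboloid_dist(x, y):
    """Geodesic distance on H^n: d(x, y) = arccosh(x * y)."""
    # Clamp against round-off pushing the argument below 1.
    return np.arccosh(max(minkowski(x, y), 1.0))

def lift(v):
    """Lift a Euclidean point v onto H^n via the constraint x * x = 1, x_1 > 0."""
    return np.concatenate(([np.sqrt(1.0 + np.dot(v, v))], v))

x = lift(np.array([0.3, -0.2]))
y = lift(np.array([-0.5, 0.1]))
assert abs(minkowski(x, x) - 1.0) < 1e-12        # lift lands on H^n
assert hyperboloid_dist(x, x) < 1e-6             # zero distance to itself
assert hyperboloid_dist(x, y) > 0.1              # distinct points are separated
```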

Next, projecting each point of $\mathbb{H}^n$ onto the hyperplane $\{x \in \mathbb{R}^{n+1} : x_1 = 0\}$ using the rays emanating from $(-1, 0, \ldots, 0)$ gives the *Poincaré ball model*

$$\mathbb{B}^n := \{x \in \mathbb{R}^n : \|x\| < 1\}, \tag{3}$$

where the correspondence to the hyperboloid model is given by

$$(x_1, \ldots, x_{n+1}) \in \mathbb{H}^n \;\longmapsto\; \frac{(x_2, \ldots, x_{n+1})}{1 + x_1} \in \mathbb{B}^n. \tag{4}$$

Here, hyperbolic lines are either straight lines that go through the center of the ball or inner arcs of Euclidean circles that intersect the boundary of the ball at right angles.

Another useful model of hyperbolic space is the *Poincaré half-space model*

$$\mathbb{U}^n := \{x \in \mathbb{R}^n : x_1 > 0\}, \tag{5}$$

which is obtained by taking the inversion of $\mathbb{B}^n$ with respect to a sphere centered at a boundary point of $\mathbb{B}^n$. If we center the inversion sphere at $(-1, 0, \ldots, 0)$ with radius $\sqrt{2}$, the resulting correspondence between $\mathbb{B}^n$ and $\mathbb{U}^n$ is given by

$$(x_1, \ldots, x_n) \in \mathbb{B}^n \;\longmapsto\; \frac{\left(1 - \|x\|^2,\; 2x_2,\; \ldots,\; 2x_n\right)}{\|x\|^2 + 2x_1 + 1} \in \mathbb{U}^n. \tag{6}$$

In this model, hyperbolic lines are straight lines that are perpendicular to the boundary of $\mathbb{U}^n$ or Euclidean half-circles that are centered on the boundary of $\mathbb{U}^n$.

Existing methods for hyperbolic space embedding (e.g., AlanisLobato16 ; Chamberlain17 ; Nickel17 ) output the embedding coordinates in their model of choice, but as described above the coordinates of the corresponding points in other models can be easily calculated.
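For instance, converting coordinates between the three models can be sketched as follows. This is a hypothetical utility consistent with the correspondences above; the half-space map follows one standard inversion convention (centered at the boundary point $(-1, 0, \ldots, 0)$), and other references may differ by an isometry:

```python
import numpy as np

def hyperboloid_to_ball(x):
    """Central projection of H^n onto the unit ball (Eq. 4)."""
    return x[1:] / (1.0 + x[0])

def ball_to_hyperboloid(b):
    """Inverse of the projection in Eq. 4."""
    s = np.dot(b, b)
    return np.concatenate(([1.0 + s], 2.0 * b)) / (1.0 - s)

def ball_to_halfspace(b):
    """Inversion centered at the boundary point (-1, 0, ..., 0)."""
    s = np.dot(b, b)
    denom = s + 2.0 * b[0] + 1.0
    u = 2.0 * b / denom
    u[0] = (1.0 - s) / denom
    return u

b = np.array([0.4, -0.3])
x = ball_to_hyperboloid(b)
assert abs(x[0] ** 2 - np.dot(x[1:], x[1:]) - 1.0) < 1e-12   # lands on H^n
assert np.allclose(hyperboloid_to_ball(x), b)                # round trip
assert ball_to_halfspace(b)[0] > 0.0                         # lands in U^n
```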

## 3 Review of Support Vector Classification

Let $\{(x_j, y_j)\}_{j=1}^{m}$ be a set of training data instances, where the feature vector $x_j$ is a point in a metric space $\mathcal{X}$ with distance function $d(\cdot, \cdot)$, and $y_j \in \{-1, +1\}$ denotes the true label for all $j \in \{1, \ldots, m\}$. Let $h : \mathcal{X} \to \mathbb{R}$ be any decision rule, whose predicted label for $x$ is $\mathrm{sign}(h(x))$. The geometric margin of $h$ with respect to a single data instance $(x_j, y_j)$ can be defined as

$$\gamma_j(h) := y_j \,\mathrm{sign}(h(x_j)) \inf_{x' \in \mathcal{X} :\; \mathrm{sign}(h(x')) \neq \mathrm{sign}(h(x_j))} d(x_j, x'). \tag{7}$$

Intuitively, the geometric margin measures how far one needs to travel from a given point to obtain a different classification. Note that $\gamma_j(h)$ is a signed quantity; it is positive for points where our prediction is correct (i.e., $\mathrm{sign}(h(x_j)) = y_j$) and negative otherwise. Increasing the value of $\gamma_j(h)$ across the training data points is desirable; for correct classifications, we increase our confidence, and for incorrect classifications, we minimize the error.

Maximum margin learning of the optimal decision rule $h^*$, which provides the foundation for support vector machines, can now be formalized as

$$h^* = \operatorname*{arg\,max}_{h \in \mathcal{H}} \; \min_{j \in \{1, \ldots, m\}} \gamma_j(h), \tag{8}$$

where $\mathcal{H}$ is the set of candidate decision rules that we consider.

If we let the data space $\mathcal{X}$ be $\mathbb{R}^n$ and $d$ be the Euclidean distance function, and consider only linear classifiers, i.e., $\mathcal{H} = \{h_w : w \in \mathbb{R}^n\}$ where

$$h_w(x) := w^\top x, \tag{9}$$

then it can be shown that the max-margin problem given in Eq. 8 becomes equivalent to solving the following convex optimization problem:

$$\min_{w} \;\; \frac{1}{2} \|w\|^2 \tag{10}$$

$$\text{subject to} \;\; y_j \,(w^\top x_j) \geq 1, \quad j = 1, \ldots, m. \tag{11}$$

The resulting algorithm that solves this problem (via its dual) is known as support vector machines (SVM). Introducing a relaxation for the separability constraints gives the more commonly used soft-margin variant of SVM:

$$\min_{w} \;\; \frac{1}{2} \|w\|^2 + C \sum_{j=1}^{m} \max\left(0,\; 1 - y_j \,(w^\top x_j)\right), \tag{12}$$

where the parameter $C$ determines the tradeoff between minimizing misclassification and maximizing the margin. Solving this optimization problem, either in its primal form or via its dual, has been established as a standard tool for classification in a wide range of domains Fan08 . Note that in our formulation the bias parameter is implicitly handled by appending $1$ to the end of each $x_j$.
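As a point of reference for the hyperbolic variant introduced later, the soft-margin objective in Eq. 12 can be minimized with a few lines of (sub)gradient descent. This is a pedagogical sketch, not the dual-based solvers such as LIBLINEAR used in practice; all names here are our own:

```python
import numpy as np

def euclidean_svm(X, y, C=1.0, lr=1e-3, epochs=2000):
    """Subgradient descent on (1/2)||w||^2 + C sum_j max(0, 1 - y_j w^T x_j).

    X: (m, n) data with a trailing all-ones column absorbing the bias;
    y: labels in {-1, +1}.
    """
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        margins = y * (X @ w)
        viol = margins < 1.0                     # points violating the margin
        grad = w - C * (y[viol, None] * X[viol]).sum(axis=0)
        w -= lr * grad
    return w

# Toy linearly separable problem.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(2.0, 0.5, (20, 2)), rng.normal(-2.0, 0.5, (20, 2))])
X = np.hstack([X, np.ones((40, 1))])             # bias feature, as in the text
y = np.array([1.0] * 20 + [-1.0] * 20)
w = euclidean_svm(X, y)
assert np.all(np.sign(X @ w) == y)               # training set separated
```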

## 4 Hyperbolic Support Vector Classification

We now tackle the problem of solving the max-margin problem in Eq. 8 where the data points lie in hyperbolic space. In particular, we adopt the hyperboloid model to let $\mathcal{X} = \mathbb{H}^n$ and let $d$ be the hyperbolic distance function. Note that the data points need not be initially specified using the hyperboloid model, since coordinates in other models of hyperbolic space (e.g., the Poincaré ball model) can be easily converted to $\mathbb{H}^n$.

Analogous to the Euclidean SVM, we consider a set of decision functions that lead to linear decision boundaries *in the hyperbolic space*.
It is known that any hyperbolic line (geodesic) in $\mathbb{H}^n$ is an intersection between $\mathbb{H}^n$ and a 2D Euclidean plane in the ambient space $\mathbb{R}^{n+1}$.
Thus, a natural way to define decision hyperplanes in $\mathbb{H}^n$ is to use $n$-dimensional hyperplanes in $\mathbb{R}^{n+1}$ as a proxy.
More precisely, we let

$$h_w(x) := \mathrm{sign}(w * x), \tag{13}$$

where

$$w \in \mathbb{R}^{n+1}, \quad w * w < 0, \tag{14}$$

and $*$ denotes the Minkowski inner product (Eq. 1). This decision function has the decision hyperplane $\{z \in \mathbb{R}^{n+1} : w * z = 0\}$, which is an $n$-dimensional hyperplane in $\mathbb{R}^{n+1}$. Thus, the corresponding decision boundary in $\mathbb{H}^n$, obtained by its intersection with $\mathbb{H}^n$, is the hyperbolic space-equivalent of a linear hyperplane, which can also be viewed as a union of geodesic curves. We provide examples of linear decision hyperplanes in a two-dimensional hyperbolic space in Figure 1. For example, for $n = 2$, where $\mathbb{H}^2$ can be visualized as an upper hyperboloid in 3D, our choice of $\mathcal{H}$ consists of every geodesic curve on the hyperboloid. Interestingly, our formulation obviates the need for a bias term, as every hyperbolic hyperplane of codimension one is covered by our parametrization.

The condition that $w$ has negative Minkowski norm squared ($w * w < 0$) is needed to ensure we obtain a non-trivial decision function; otherwise, the decision hyperplane in $\mathbb{R}^{n+1}$ does not intersect $\mathbb{H}^n$, and thus all points in $\mathbb{H}^n$ are classified with the same label.

Our first main result gives a simple closed-form expression for the geometric margin of a given data point to a decision hyperplane in hyperbolic space:

###### Theorem 1.

Given $w \in \mathbb{R}^{n+1}$ such that $w * w < 0$ and a data point $x \in \mathbb{H}^n$, the minimum hyperbolic distance from $x$ to the decision boundary associated with $w$, i.e., $\{z \in \mathbb{H}^n : w * z = 0\}$, is given by

$$\sinh^{-1}\left(\frac{|w * x|}{\sqrt{-\,w * w}}\right). \tag{15}$$

To obtain this result, we reduce the problem of calculating the minimum hyperbolic distance to the decision hyperplane to a Euclidean geometry problem by mapping the decision boundary and the data point onto the Poincaré half-space model, in which the decision boundary is characterized as a Euclidean half-sphere. A full proof of Theorem 1 is provided in Supplementary Information.
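Assuming the closed form in Theorem 1, with the sign of $w * x$ indicating on which side of the boundary the point falls, the (signed) margin computation is a one-liner:

```python
import numpy as np

def minkowski(u, v):
    return u[0] * v[0] - np.dot(u[1:], v[1:])

def hyperbolic_margin(w, x):
    """Signed distance (Theorem 1, without the absolute value) from x in H^n
    to the boundary {z : w * z = 0}; requires w * w < 0."""
    ww = minkowski(w, w)
    assert ww < 0, "w must have negative Minkowski norm squared"
    return np.arcsinh(minkowski(w, x) / np.sqrt(-ww))

v = np.array([0.6, -0.2])
x = np.concatenate(([np.sqrt(1.0 + v @ v)], v))  # a point on H^2
w = np.array([0.1, 1.0, 0.5])                    # w * w = 0.01 - 1.25 < 0
m = hyperbolic_margin(w, x)
assert np.isfinite(m)
assert np.isclose(hyperbolic_margin(-w, x), -m)  # odd in w, as expected
```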

Given this formula, one can apply a sequence of transformations to the max-margin classification problem in Eq. 8 in the hyperbolic setting to obtain the following result.

###### Theorem 2.

The max-margin problem (Eq. 8) over $\mathcal{X} = \mathbb{H}^n$ with the set of decision functions given by Eqs. 13 and 14 is equivalent to the optimization problem

$$\min_{w} \;\; -\frac{1}{2}\,(w * w) \tag{16}$$

$$\text{subject to} \;\; y_j \,(w * x_j) \geq 1, \quad j = 1, \ldots, m, \qquad w * w < 0. \tag{17}$$

The proof of Theorem 2 is exactly analogous to the Euclidean version, and is provided in the Supplementary Information.

Our result suggests that despite the apparent complexity of hyperbolic distance calculation, the optimal (linear) maximum margin classifiers in the hyperbolic space can be identified via a relatively simple optimization problem that closely resembles the Euclidean version of SVM, where Euclidean inner products are replaced with Minkowski inner products.

Note that if we restrict $\mathcal{H}$ to decision functions with $w_1 = 0$, then our formulation coincides with Euclidean SVM. Thus, Euclidean SVM can be viewed as a special case of our formulation in which the first coordinate (corresponding to the time axis in Minkowski spacetime) is neglected.

Unlike Euclidean SVM, however, our optimization problem has a non-convex objective as well as a non-convex constraint. Yet, if we restrict our attention to non-trivial, finite-sized problems, where it is necessary and sufficient to consider only the set of $w$ for which at least one data point lies on either side of the decision boundary, then the negative-norm constraint can be replaced with a convex alternative that intuitively maps out the convex hull of the given data points in the ambient Euclidean space of $\mathbb{H}^n$.

Finally, the soft-margin formulation of hyperbolic SVM can be derived by relaxing the separability constraints as in the Euclidean case.
Instead of imposing a linear penalty on misclassification errors, which in the Euclidean case has an intuitive interpretation as being proportional to the minimum Euclidean distance to the correct classification, we impose a penalty proportional to the *hyperbolic* distance to the correct classification.
Analogous to the Euclidean case, we fix the scale of the penalty so that the margin of the closest correctly classified point to the decision boundary is set to $\sinh^{-1}(1)$.
This leads to the optimization problem

$$\min_{w} \;\; -\frac{1}{2}\,(w * w) + C \sum_{j=1}^{m} \max\left(0,\; \sinh^{-1}(1) - \sinh^{-1}\!\left(y_j \,(w * x_j)\right)\right) \tag{19}$$

$$\text{subject to} \;\; w * w < 0. \tag{20}$$

In all our experiments in the following section, we consider the simplest approach of solving the above formulation of hyperbolic SVM via projected gradient descent. The initial $w$ is determined based on the solution of a soft-margin SVM in the ambient Euclidean space of the hyperboloid model. This provides a good initialization for the optimization and has the additional benefit of improving the stability of the algorithm in the presence of potentially many local optima.
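A bare-bones sketch of such a projected gradient descent is given below. It reflects our reading of the soft-margin objective (Eq. 19) with the hinge expressed through $\sinh^{-1}$; it uses a simple feasible start instead of the Euclidean-SVM warm start described above, and the projection step is a heuristic. None of this is the authors' reference implementation:

```python
import numpy as np

ASINH1 = np.arcsinh(1.0)

def mink(u, v):
    """Minkowski inner product with signature (+, -, ..., -)."""
    return u[0] * v[0] - u[1:] @ v[1:]

def hsvm_fit(X, y, C=1.0, lr=1e-3, epochs=2000, eps=1e-6):
    """Projected (sub)gradient descent on the soft-margin objective.

    X: (m, n+1) array of points on the hyperboloid; y: labels in {-1, +1}.
    """
    d = X.shape[1]
    G = np.diag([1.0] + [-1.0] * (d - 1))        # Minkowski metric matrix
    w = np.zeros(d)
    w[1] = 1.0                                   # feasible start: w * w = -1
    for _ in range(epochs):
        z = y * (X @ (G @ w))                    # y_j (w * x_j)
        viol = np.arcsinh(z) < ASINH1            # hinge-active points
        grad = -(G @ w)                          # gradient of -(1/2) w * w
        if viol.any():
            coef = y[viol] / np.sqrt(1.0 + z[viol] ** 2)
            grad -= C * (G @ (coef @ X[viol]))
        w -= lr * grad
        # Heuristic projection back into {w : w * w < 0}.
        sp = np.linalg.norm(w[1:])
        if w[0] ** 2 >= sp ** 2:
            w[0] = np.sign(w[0]) * (sp - eps)
    return w

# Toy problem: two well-separated classes lifted onto the hyperboloid.
rng = np.random.default_rng(1)
V = np.vstack([rng.normal(1.0, 0.3, (15, 2)), rng.normal(-1.0, 0.3, (15, 2))])
X = np.hstack([np.sqrt(1.0 + (V ** 2).sum(1, keepdims=True)), V])
y = np.array([1.0] * 15 + [-1.0] * 15)
w = hsvm_fit(X, y)
assert mink(w, w) < 0                            # decision function is non-trivial
pred = np.sign([mink(w, x) for x in X])
assert (pred == y).mean() > 0.9
```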

## 5 Experimental Results

In the following, we compare hyperbolic SVM to the original Euclidean formulation of SVM (i.e., L2-regularized hinge-loss optimizer) on three different types of datasets—two simulated and one real. For fair comparison, we restricted our attention to Euclidean SVM with a linear kernel, which has the same degrees of freedom as our method. We discuss extending our work to non-linear classifications in hyperbolic space in Section 6.

### 5.1 Evaluation Setting

To enable multi-class classification for datasets with more than two classes, we adopt a one-vs-all (OVA) strategy, where a collection of binary classifiers are independently trained to distinguish each class from the rest. For each method, the resulting prediction scores on the holdout data are transformed into probability outputs via Platt scaling Platt1999 across all classes and collectively analyzed to quantify the overall classification accuracy. Note that for hyperbolic SVM we use the Minkowski inner product between the learned weight vector and the data point in the hyperboloid model as the prediction score, which is a monotonic transformation of the geometric margin.

In both hyperbolic and Euclidean SVMs, the tradeoff between minimizing misclassification and maximizing the margin is determined by the parameter $C$ (see Eqs. 12 and 19). In all our experiments, we determined the optimal $C$ separately for each run via a nested cross-validation procedure.
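A bare-bones stand-in for the Platt scaling step is sketched below: it fits $P(y = 1 \mid s) = \sigma(a s + b)$ to held-out scores by gradient descent on the logistic loss. The original method Platt1999 uses regularized targets and a Newton-style solver, so this is an illustrative simplification:

```python
import numpy as np

def platt_scale(scores, labels, lr=0.1, epochs=2000):
    """Fit P(y = 1 | s) = sigmoid(a * s + b); labels are in {-1, +1}."""
    a, b = 1.0, 0.0
    t = (labels + 1.0) / 2.0                     # map {-1, +1} to {0, 1}
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(a * scores + b)))
        g = p - t                                # d(loss)/d(logit)
        a -= lr * np.mean(g * scores)
        b -= lr * np.mean(g)
    return a, b

scores = np.array([-2.0, -1.0, 1.0, 2.0])        # raw SVM prediction scores
labels = np.array([-1.0, -1.0, 1.0, 1.0])
a, b = platt_scale(scores, labels)
prob = lambda s: 1.0 / (1.0 + np.exp(-(a * s + b)))
assert prob(2.0) > 0.5 and prob(-2.0) < 0.5      # probabilities track the labels
```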

Our main performance metric is macro-averaged area under the precision recall curve (AUPR), which is obtained by computing the AUPR of predicting each class against the rest separately, then taking the average across all classes. The results based on other performance metrics, such as the area under the ROC curve and the micro-average variants of both metrics, led to similar conclusions across all our experiments.

### 5.2 Classifying Mixture of Gaussians

To evaluate hyperbolic SVM, we first generated a collection of 100 toy datasets by sampling data points from a Gaussian mixture model defined in the Poincaré disk model $\mathbb{B}^2$. Note that, analogous to the Euclidean setting, the probability density function of an (isotropic) hyperbolic Gaussian distribution decays exponentially with the squared hyperbolic distance from the centroid, inversely scaled by the variance parameter. For each dataset, we sampled four centroids from a zero-mean hyperbolic Gaussian distribution with variance parameter 1.5. Then, we sampled 100 data points from a unit-variance hyperbolic Gaussian distribution centered at each of the four centroids to form a dataset of 400 points assigned to 4 classes.
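Sampling from such a hyperbolic Gaussian can be sketched via rejection sampling, using the fact that in the hyperbolic plane the density of the radius $r$ is proportional to $\sinh(r)\exp(-r^2/(2\sigma^2))$ and that $\sinh(r) \le e^r/2$. This is a hypothetical re-implementation; in particular, whether the "variance parameter" above corresponds to $\sigma$ or $\sigma^2$ is our assumption:

```python
import numpy as np

def sample_hyperbolic_gaussian(center, sigma, size, rng):
    """Points in the Poincare disk (as complex numbers) whose hyperbolic
    distance r from `center` has density proportional to
    sinh(r) * exp(-r^2 / (2 sigma^2)); sinh(r) is the circumference
    factor of the hyperbolic plane."""
    out = np.empty(size, dtype=complex)
    i = 0
    while i < size:
        # Envelope sinh(r) <= exp(r)/2 gives proposal r ~ N(sigma^2, sigma)
        # with acceptance probability 1 - exp(-2 r).
        r = rng.normal(sigma ** 2, sigma)
        if r <= 0.0 or rng.random() > 1.0 - np.exp(-2.0 * r):
            continue
        theta = rng.uniform(0.0, 2.0 * np.pi)
        z = np.tanh(r / 2.0) * np.exp(1j * theta)    # radius r around the origin
        # Mobius translation (a disk isometry) moving the origin to `center`.
        out[i] = (z + center) / (1.0 + np.conjugate(center) * z)
        i += 1
    return out

rng = np.random.default_rng(0)
centers = sample_hyperbolic_gaussian(0j, 1.5, 4, rng)        # four centroids
clouds = [sample_hyperbolic_gaussian(c, 1.0, 100, rng) for c in centers]
assert all(np.abs(cl).max() < 1.0 for cl in clouds)          # inside the disk
```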

The results from five independent trials of two-fold cross validation on each of the 100 datasets are summarized in Figure 2a. We observed a strongly significant improvement of hyperbolic SVM over the Euclidean version in terms of prediction accuracy (one-sided paired-sample $t$-test).

We attribute the performance improvement of hyperbolic SVM to the fact that the class of decision functions it considers (linear hyperplanes in hyperbolic space) better matches the underlying data distributions, which follow hyperbolic geometry by design. The learned decision functions for both methods on an example dataset, generated in the same manner as above, are shown in Figures 2b and 2c. Note that the apparent non-linearity of hyperbolic SVM decision boundaries is due to our use of the Poincaré disk model for visualization; in the hyperbolic space, these decision boundaries are in fact linear.

### 5.3 Node Classification in Evolving Networks

One of the key applications of hyperbolic space embedding is the modeling of complex networks that exhibit scale-free properties AlanisLobato16 ; Papadopoulos15 ; Papadopoulos12 , such as protein-protein interaction networks. Here, we set out to test whether hyperbolic SVM can improve classification performance for the embedding of such networks.

To this end, we generated 10 random scale-free networks using the popularity-vs-similarity (PS) model, which was shown to faithfully reproduce the properties of networks in many real-world applications Papadopoulos12 (Figure 3a). The PS model starts with an empty graph and, at each time point, creates a new node and attaches it to a certain number of existing nodes where the likelihood of an edge depends on the degree of the node (popularity) as well as node-node similarities. We embedded each of the simulated networks into hyperbolic space using LaBNE AlanisLobato16 (Figure 3b), a network embedding method directly based on the PS model.

Inspired by the gene function prediction task in biology Cho16 , we then generated a multi-class, multi-label dataset based on each of the simulated networks as follows. For each new label, we randomly choose a node in the network to be the first “pioneer” node annotated with the label. Then, we replay the evolution of the network, and every time a new node is connected to an existing node carrying the label, the label is propagated to the new node with a set probability (0.8 in our experiments). This procedure results in a relatively clustered set of nodes within the network being assigned the same label. Such patterns are prevalent in protein-protein interaction networks, where genes or proteins that belong to the same functional category tend to be proximal and share many connections in the network.
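The label-propagation step described above can be sketched as follows. This is a simplified stand-in: `adj` is assumed to map each node to its earlier-arriving neighbors, and we apply the propagation probability once per labeled neighbor:

```python
import random

def simulate_label(arrival_order, adj, p=0.8, rng=None):
    """Propagate one label while replaying network growth.

    `adj` maps each node to its neighbors among earlier-arriving nodes;
    a new node inherits the label with probability p per labeled neighbor.
    """
    rng = rng or random.Random(0)
    labeled = {rng.choice(arrival_order)}        # the "pioneer" node
    for node in arrival_order:
        if node in labeled:
            continue
        if any(nb in labeled and rng.random() < p for nb in adj[node]):
            labeled.add(node)
    return labeled

# A growing chain: node i attaches to node i - 1.
order = list(range(10))
adj = {0: []}
adj.update({i: [i - 1] for i in range(1, 10)})
labels = simulate_label(order, adj, p=1.0, rng=random.Random(3))
assert labels == set(range(min(labels), 10))     # label spreads from the pioneer on
```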

For each target size range for the label, where size refers to the final number of nodes annotated with the label, we created 10 such labels for each network to obtain a multi-label classification dataset with 10 classes. This process was repeated 5 times for each of the 10 networks to generate a total of 150 datasets with varying label sizes (20-50, 50-100, and 100-200). We evaluated the prediction performance of hyperbolic SVM via two-fold cross validation procedure, where we held out the labels of half the nodes in the network and predicted them based on the other half.

Our results are summarized in Figure 3c. Across all label size ranges and networks, hyperbolic SVM matched or outperformed Euclidean SVM. The overall improvement of hyperbolic SVM was statistically significant (one-sided paired-sample $t$-test). Notably, our performance improvement was more pronounced for smaller size ranges (20-50 and 50-100).

### 5.4 Node Classification in Real Networks

To demonstrate the performance of hyperbolic SVM on real-world datasets, we tested it on four network datasets used by Chamberlain et al. Chamberlain17 for benchmarking their hyperbolic network embedding algorithm. These datasets include: (1) *karate* Zachary77 : a social network of 34 people divided into two factions, (2) *polbooks* (http://www-personal.umich.edu/~mejn/netdata/): co-purchasing patterns of 105 political books around the time of the 2004 presidential election, divided into 3 affiliations, (3) *football* Girvan02 : football matches among 115 Division IA colleges in Fall 2000, divided into 12 leagues, and (4) *polblogs* Lada05 : a hyperlink network of 1224 political blogs in 2005, divided into two affiliations.
Note that we excluded the *adjnoun* dataset due to the near-random performance of all methods we considered.

For each dataset, we embedded the network into a two-dimensional hyperbolic space using Chamberlain et al.’s embedding method based on random walks Chamberlain17 , which closely follows an existing algorithm called DeepWalk Perozzi14 except Euclidean inner products are replaced with a measure of hyperbolic angle. Given the hyperbolic embedding of each network, we performed two-fold cross validation to compare the prediction accuracy of hyperbolic SVM with Euclidean SVM. Note that Chamberlain et al. performed logistic regression on the embedding coordinates, which can be viewed as a variant of Euclidean SVM with a different penalty function on the misclassification margins.

For all four datasets, hyperbolic SVM matched or outperformed the performance achieved by Euclidean SVM. Notably, the datasets where the performance was comparable between the two methods (karate and polblogs) consisted of only two well-separated classes, in which case a linear decision boundary is expected to achieve a reasonably high performance.

In addition, we tested Euclidean SVM based on the Euclidean embeddings obtained by DeepWalk with dimensions 2, 5, 10, and 25. Even with as many as 25 dimensions, Euclidean SVM was not able to achieve competitive prediction accuracy based on the Euclidean embeddings across all datasets. This supports the conclusion that hyperbolic geometry likely underlies these networks and that increasing the number of dimensions for the Euclidean embedding does not necessarily lead to representations that are as informative as the hyperbolic embedding.

| Classifier | Embedding (Dimension) | karate | polbooks | football | polblogs |
|---|---|---|---|---|---|
| Hyperbolic SVM | Hyperbolic (2) | | | | |
| Euclidean SVM | Hyperbolic (2) | | | | |
| Euclidean SVM | Euclidean (2) | | | | |
| Euclidean SVM | Euclidean (5) | | | | |
| Euclidean SVM | Euclidean (10) | | | | |
| Euclidean SVM | Euclidean (25) | | | | |

## 6 Discussion and Future Work

We proposed support vector classification in hyperbolic space and demonstrated its effectiveness in classifying points in hyperbolic space on three different types of datasets. Although we focused on decision functions that are linear in hyperbolic space (i.e., based on hyperbolically geodesic decision hyperplanes), our formulation of hyperbolic SVM may potentially allow the development of non-linear classifiers, drawing intuition from kernel methods for SVM. In particular, we are interested in exploring the use of radial basis function kernels in hyperbolic space, which are widely-used in the Euclidean setting. In addition, while our experimental results were based on two-dimensional hyperbolic spaces, our formulation naturally extends to higher dimensional hyperbolic spaces, which may be of interest in future applications.

More broadly, our work belongs to a growing body of literature that aims to develop learning algorithms that directly operate over a Riemannian manifold Porikli10 ; Tuzel08 . Linear hyperplane-based classifiers and clustering algorithms have previously been formulated for spherical spaces Dhillon01 ; Lebanon04 ; Wilson10 . To the best of our knowledge, our work is the first to develop and experimentally demonstrate support vector classification in hyperbolic geometry. We envision further development of hyperbolic space-equivalents of other standard machine learning tools in the near future.

## References

- [1] Gregorio Alanis-Lobato, Pablo Mier, and Miguel A Andrade-Navarro. Efficient embedding of complex networks to hyperbolic space via their Laplacian. Scientific reports, 6:30108, 2016.
- [2] James W Anderson. Hyperbolic geometry. Springer Science & Business Media, 2006.
- [3] B P Chamberlain, J Clough, and M P Deisenroth. Neural Embeddings of Graphs in Hyperbolic Space. CoRR, stat.ML, May 2017.
- [4] Hyunghoon Cho, Bonnie Berger, and Jian Peng. Compact Integration of Multi-Network Topology for Functional Analysis of Genes. Cell Systems, 3(6):540–548.e5, 2016.
- [5] Christopher De Sa, Albert Gu, Christopher Ré, and Frederic Sala. Representation Tradeoffs for Hyperbolic Embeddings. CoRR, abs/1804.03329, 2018.
- [6] Inderjit S Dhillon and Dharmendra S Modha. Concept Decompositions for Large Sparse Text Data Using Clustering. Machine Learning, 42(1):143–175, January 2001.
- [7] Rong-En Fan, Kai-Wei Chang, Cho-Jui Hsieh, Xiang-Rui Wang, and Chih-Jen Lin. LIBLINEAR: A library for large linear classification. Journal of Machine Learning Research, 9(Aug):1871–1874, 2008.
- [8] Michelle Girvan and Mark EJ Newman. Community structure in social and biological networks. Proceedings of the national academy of sciences, 99(12):7821–7826, 2002.
- [9] Dmitri Krioukov, Fragkiskos Papadopoulos, Maksim Kitsak, Amin Vahdat, and Marián Boguñá. Hyperbolic geometry of complex networks. Phys. Rev. E, 82:036106, September 2010.
- [10] Lada A Adamic and Natalie Glance. The political blogosphere and the 2004 US election. In Proceedings of the 3rd international workshop on Link discovery, volume 1, pages 36–43, 2005.
- [11] Guy Lebanon and John Lafferty. Hyperplane Margin Classifiers on the Multinomial Manifold. In Proceedings of the Twenty-first International Conference on Machine Learning, pages 66–66, New York, NY, USA, 2004. ACM.
- [12] Maximillian Nickel and Douwe Kiela. Poincaré embeddings for learning hierarchical representations. In Advances in Neural Information Processing Systems, pages 6341–6350, 2017.
- [13] Fragkiskos Papadopoulos, Rodrigo Aldecoa, and Dmitri Krioukov. Network geometry inference using common neighbors. Physical Review E, 92(2):022807, 2015.
- [14] Fragkiskos Papadopoulos, Maksim Kitsak, M Ángeles Serrano, Marián Boguñá, and Dmitri Krioukov. Popularity versus similarity in growing networks. Nature, 489(7417):537–540, September 2012.
- [15] Bryan Perozzi, Rami Al-Rfou, and Steven Skiena. DeepWalk: Online Learning of Social Representations. In Proceedings of the 20th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 701–710, New York, NY, USA, 2014. ACM.
- [16] John Platt et al. Probabilistic outputs for support vector machines and comparisons to regularized likelihood methods. Advances in large margin classifiers, 10(3):61–74, 1999.
- [17] Fatih Porikli. Learning on Manifolds. In Edwin R Hancock, Richard C Wilson, Terry Windeatt, Ilkay Ulusoy, and Francisco Escolano, editors, Structural, Syntactic, and Statistical Pattern Recognition, pages 20–39, Berlin, Heidelberg, 2010. Springer Berlin Heidelberg.
- [18] O Tuzel, F Porikli, and P Meer. Pedestrian Detection via Classification on Riemannian Manifolds. IEEE Transactions on Pattern Analysis and Machine Intelligence, 30(10):1713–1727, October 2008.
- [19] Richard C Wilson and Edwin R Hancock. Spherical Embedding and Classification. In Edwin R Hancock, Richard C Wilson, Terry Windeatt, Ilkay Ulusoy, and Francisco Escolano, editors, Structural, Syntactic, and Statistical Pattern Recognition, pages 589–599, Berlin, Heidelberg, 2010. Springer Berlin Heidelberg.
- [20] Wayne W Zachary. An information flow model for conflict and fission in small groups. Journal of anthropological research, 33(4):452–473, 1977.

## Proof of Theorem 1

To derive margin-based classifiers in hyperbolic space, we first derive a closed-form expression for the geometric margin of a given data point to a “linear” decision hyperplane defined by a weight vector $w$, as described in the main text.

First, we perform an isometric transformation to simplify calculations. Let $R$ be an orthogonal matrix in $\mathbb{R}^{n \times n}$. Then, the block-diagonal matrix $\mathrm{diag}(1, R)$ represents an isometric, orthogonal transformation of the Minkowski space, since it preserves the associated inner product for any $x$ and $y$. If we set the first column of $R$ to $(w_2, \ldots, w_{n+1})^\top / \|(w_2, \ldots, w_{n+1})\|$, where $\|\cdot\|$ denotes the Euclidean norm, then, due to the preservation of geodesics under isometry, the margin of interest becomes equivalent to the margin of the transformed point $\bar{x} = \mathrm{diag}(1, R^\top)\, x$ with respect to the decision hyperplane parameterized by $\bar{w} = \mathrm{diag}(1, R^\top)\, w$.

The first two coordinates of $\bar{w}$ are given in terms of the original coordinates as $\bar{w}_1 = w_1$ and $\bar{w}_2 = \|(w_2, \ldots, w_{n+1})\|$, and all remaining coordinates of $\bar{w}$ vanish.

Given that such a transformation exists for any point in $\mathbb{H}^n$, it is sufficient to derive the margin of an arbitrary point $x \in \mathbb{H}^n$ with respect to a decision hyperplane represented by a weight vector of the form $w = (w_1, w_2, 0, \ldots, 0)$.

We use $\alpha$ to denote the ratio between the first two coordinates of $w$, i.e., $\alpha := w_1 / w_2$.

Importantly, the condition that $w * w < 0$, required for $w$ to represent a non-trivial decision function, is equivalent to the condition that $\alpha \in (-1, 1)$, which we will assume in our derivation.

The following lemma characterizes the decision hyperplane defined by such :

###### Lemma 1.

The decision hyperplane corresponding to a weight vector $w = (w_1, w_2, 0, \ldots, 0)$ is equivalently represented in the Poincaré half-space model as a Euclidean hypersphere centered at the origin with radius $r = \left(\frac{1 - \alpha}{1 + \alpha}\right)^{1/2}$, where $\alpha = w_1 / w_2$.

###### Proof.

It suffices to show that for any $x \in \mathbb{H}^n$ with $w * x = 0$, we have $\|\phi(x)\| = r$, where $\phi$ maps points in the hyperboloid model to the corresponding points in the half-space model and $\|\cdot\|$ denotes the Euclidean norm.

Let $u$, $b$, and $x$ be the points in the half-space model, the ball model, and the hyperboloid model, respectively, that represent the same point in the hyperbolic space. Since the half-space coordinates are obtained from the ball coordinates by an inversion centered at $c_0 = (-1, 0, \ldots, 0)$ (Eq. 6), we have

$$u = c_0 + \frac{2 (b - c_0)}{\|b - c_0\|^2}.$$

Note that

$$\langle u - c_0, c_0 \rangle = \frac{2 \langle b - c_0, c_0 \rangle}{\|b - c_0\|^2} = \frac{-2 (b_1 + 1)}{\|b - c_0\|^2}, \qquad \|u - c_0\|^2 = \frac{4}{\|b - c_0\|^2},$$

which gives us

$$\|u\|^2 = \|u - c_0\|^2 + 2 \langle u - c_0, c_0 \rangle + \|c_0\|^2 = \frac{\|b\|^2 - 2 b_1 + 1}{\|b\|^2 + 2 b_1 + 1},$$

where we used $\|b - c_0\|^2 = \|b\|^2 + 2 b_1 + 1$.

Next, recall from Eq. 4 that $b_i = x_{i+1} / (1 + x_1)$, which leads to

$$\|b\|^2 = \frac{x_1 - 1}{x_1 + 1}, \qquad 2 b_1 = \frac{2 x_2}{x_1 + 1},$$

where we used the fact that $\sum_{i=2}^{n+1} x_i^2 = x_1^2 - 1$ since $x * x = 1$.

We can now express $\|u\|^2$ in terms of $x$ as

$$\|u\|^2 = \frac{x_1 - x_2}{x_1 + x_2}.$$

Finally, note that any point on the decision hyperplane satisfies $w * x = w_1 x_1 - w_2 x_2 = 0$, i.e., $x_2 = \alpha x_1$, so that

$$\|u\|^2 = \frac{1 - \alpha}{1 + \alpha} = r^2,$$

where we used the fact that $x_1 > 0$ since $x \in \mathbb{H}^n$. ∎

It is a known fact that, in a two-dimensional hyperbolic space, the set of points that are equidistant to a hyperbolic line on the same side of the line forms what is called a *hypercycle*, which takes the shape of a Euclidean circle in the Poincaré half-plane model that goes through the same two *ideal points* as the reference line. Note that the ideal points refer to the two end points of a hyperbolic line in the half-plane model (a circular arc representing a geodesic curve) where the hyperbolic line meets the boundary of the half-plane.

In a high-dimensional setting, an analogous property is that the set of points equidistant to a hyperbolic hyperplane takes the shape of a Euclidean hypersphere that intersects the boundary of the Poincaré half-space at the same ideal points as the hyperplane.

Because our decision hyperplane, as characterized in Lemma 1, has its center at the origin of the half-space model, any hypersphere that intersects the boundary of the half-space at the ideal points of the decision hyperplane must have a center $(c, 0, \ldots, 0)$ for some $c \in \mathbb{R}$. In other words, any hypersphere representing a hypercycle with respect to our given decision boundary is centered on the first coordinate axis (which is perpendicular to the boundary of the half-space).

Let $u$ be the point in the half-space model that corresponds to the transformed data point $\bar{x}$ we described earlier. One way to reason about the margin of $u$ with respect to the decision hyperplane defined by $\bar{w}$ is to find which hypercycle $u$ belongs to. We can do so using the fact that, in addition to $u$, an ideal point of the decision hyperplane, $(0, r, 0, \ldots, 0)$ with $r = \left(\frac{1 - \alpha}{1 + \alpha}\right)^{1/2}$ from Lemma 1, lies on the hypercycle. More precisely, we can solve for the center $(c, 0, \ldots, 0)$ of the hypercycle, parameterized by $c$, using the equation

$$\|u - (c, 0, \ldots, 0)\|^2 = c^2 + r^2,$$

which states that $u$ and $(0, r, 0, \ldots, 0)$ are equidistant from the center of the hypercycle. This gives us

$$c = \frac{\|u\|^2 - r^2}{2 u_1}.$$

Now, using the relations

$$\|u\|^2 = \frac{\bar{x}_1 - \bar{x}_2}{\bar{x}_1 + \bar{x}_2}, \qquad u_1 = \frac{1}{\bar{x}_1 + \bar{x}_2},$$

which can be derived based on the mapping between the hyperboloid and the half-space models, we obtain that

$$c = \frac{\alpha \bar{x}_1 - \bar{x}_2}{1 + \alpha}.$$

Next, we find the point that lies on the first coordinate axis and has the same margin with respect to the decision hyperplane as the given data point $u$.

Since we know that the radius of this hypercycle is given by $\sqrt{c^2 + r^2}$ (i.e., the distance from its center $(c, 0, \ldots, 0)$ to the ideal point $(0, r, 0, \ldots, 0)$), this point is $u^* = (c + \sqrt{c^2 + r^2},\, 0, \ldots, 0)$.

Finally, using the hyperbolic distance formula in the half-space model, we obtain that the hyperbolic distance between $u^*$ and $(r, 0, \ldots, 0)$, which is equal to the unsigned geometric margin of our data point, is given by the log-ratio

$$\left|\log \frac{c + \sqrt{c^2 + r^2}}{r}\right| = \left|\sinh^{-1}\left(\frac{c}{r}\right)\right|.$$

Using the expression for $c$ we previously obtained, note that

$$\frac{c}{r} = \frac{\alpha \bar{x}_1 - \bar{x}_2}{(1 + \alpha)\, r} = \frac{\alpha \bar{x}_1 - \bar{x}_2}{\sqrt{1 - \alpha^2}}.$$

Since $\alpha = \bar{w}_1 / \bar{w}_2$, multiplying the numerator and denominator by $\bar{w}_2$ (taking $\bar{w}_2 > 0$ without loss of generality) gives

$$\frac{c}{r} = \frac{\bar{w}_1 \bar{x}_1 - \bar{w}_2 \bar{x}_2}{\sqrt{\bar{w}_2^2 - \bar{w}_1^2}} = \frac{\bar{w} * \bar{x}}{\sqrt{-\,\bar{w} * \bar{w}}},$$

using the fact that $\bar{w} * \bar{w} = \bar{w}_1^2 - \bar{w}_2^2 < 0$.

Finally, since we have shown earlier that our initial transformation of $w$ and $x$ to $\bar{w}$ and $\bar{x}$ preserves the Minkowski inner product, we obtain the geometric margin in terms of the original variables as

$$\sinh^{-1}\left(\frac{|w * x|}{\sqrt{-\,w * w}}\right).$$

## Proof of Theorem 2

Note that if we make the rescaling $w \to \kappa w$ for any $\kappa > 0$, the distance of any point to the decision surface is unchanged. Indeed, using Theorem 1 and the bilinearity of the Minkowski inner product,

$$\sinh^{-1}\left(\frac{|(\kappa w) * x|}{\sqrt{-\,(\kappa w) * (\kappa w)}}\right) = \sinh^{-1}\left(\frac{\kappa\, |w * x|}{\kappa \sqrt{-\,w * w}}\right) = \sinh^{-1}\left(\frac{|w * x|}{\sqrt{-\,w * w}}\right),$$

so scaling by $\kappa$ does not matter. Using this freedom, we can assume that $y_{j^*}\,(w * x_{j^*}) = 1$, where $x_{j^*}$ is the point with minimum hyperbolic distance to the decision plane specified by $w$, and $y_{j^*}$ is the corresponding label. This is the analogue of the “canonical representation” of decision hyperplanes familiar from Euclidean SVMs. When $w$ is thus scaled, we have

$$y_j\,(w * x_j) \geq 1$$

for all $j$.

Therefore, the optimization problem now simply maximizes

$$\sinh^{-1}\left(\frac{1}{\sqrt{-\,w * w}}\right),$$

which is equivalent to minimizing $-\frac{1}{2}(w * w)$ subject to the above constraint. We add the factor of $\frac{1}{2}$ to simplify gradient calculations. The additional constraint $w * w < 0$ ensures that the decision function specified by $w$ is nontrivial (see main text).
