Provably Accurate and Scalable Linear Classifiers in Hyperbolic Spaces

Many high-dimensional practical data sets have hierarchical structures induced by graphs or time series. Such data sets are hard to process in Euclidean spaces and one often seeks low-dimensional embeddings in other space forms to perform the required learning tasks. For hierarchical data, the space of choice is a hyperbolic space because it guarantees low-distortion embeddings for tree-like structures. The geometry of hyperbolic spaces has properties not encountered in Euclidean spaces that pose challenges when trying to rigorously analyze algorithmic solutions. We propose a unified framework for learning scalable and simple hyperbolic linear classifiers with provable performance guarantees. The gist of our approach is to focus on Poincaré ball models and formulate the classification problems using tangent space formalisms. Our results include a new hyperbolic perceptron algorithm as well as an efficient and highly accurate convex optimization setup for hyperbolic support vector machine classifiers. Furthermore, we adapt our approach to accommodate second-order perceptrons, where data is preprocessed based on second-order information (correlation) to accelerate convergence, and strategic perceptrons, where potentially manipulated data arrives in an online manner and decisions are made sequentially. The excellent performance of the Poincaré second-order and strategic perceptrons shows that the proposed framework can be extended to general machine learning problems in hyperbolic spaces. Our experimental results, pertaining to synthetic, single-cell RNA-seq expression measurements, CIFAR10, Fashion-MNIST and mini-ImageNet, establish that all algorithms provably converge and have complexity comparable to those of their Euclidean counterparts. Accompanying codes can be found at:





I Introduction

Representation learning in hyperbolic spaces has received significant interest due to its effectiveness in capturing latent hierarchical structures [15, 30, 29, 23, 25, 35]. It is known that arbitrarily low-distortion embeddings of tree structures in Euclidean spaces are impossible even when using an unbounded number of dimensions [18]. In contrast, precise and simple embeddings are possible in the Poincaré disk, a hyperbolic space model with only two dimensions [30].

Despite their representational power, hyperbolic spaces are still lacking foundational analytical results and algorithmic solutions for a wide variety of downstream machine learning tasks. In particular, the question of designing highly-scalable classification algorithms with provable performance guarantees that exploit the structure of hyperbolic spaces remains open. While a few prior works have proposed specific algorithms for learning classifiers in hyperbolic space, they are primarily empirical in nature and do not come with theoretical convergence guarantees [8, 21]. The work [38] described the first attempt to establish performance guarantees for the hyperboloid perceptron, but the proposed algorithm is not transparent and fails to converge in practice. Furthermore, the methodology used does not naturally generalize to other important classification methods such as support vector machines (SVMs) [9]. Hence, a natural question arises: Is there a unified framework that allows one to generalize classification algorithms for Euclidean spaces to hyperbolic spaces, make them highly scalable and rigorously establish their performance guarantees?

We give an affirmative answer to this question for a wide variety of classification algorithms. By redefining the notion of separation hyperplanes in hyperbolic spaces, we describe the first known Poincaré ball perceptron and SVM methods with provable performance guarantees. Our perceptron algorithm resolves the convergence problems associated with the perceptron method of [38]. This is of importance because perceptrons represent a form of online learning in hyperbolic spaces and are basic components of hyperbolic neural networks. On the other hand, our Poincaré SVM method successfully addresses the issues associated with solving and analyzing the nontrivial nonconvex optimization problem used to formulate hyperboloid SVMs in [8]. In the latter case, a global optimum may not be attainable using projected gradient descent methods, and consequently that SVM method does not provide tangible guarantees. Our proposed algorithms may be viewed as “shallow” one-layer neural networks for hyperbolic spaces that are not only scalable but also (unlike deep networks [10, 32]) exhibit an extremely small storage footprint. They are also of significant relevance for few-shot meta-learning [17] and for applications such as single-cell subtyping and image data processing, as described in our experimental analysis (see Figure 1 for the Poincaré embeddings of these data sets).

Fig. 1: Visualization of four embedded data sets: Olsson’s single-cell RNA expression data (top left), CIFAR10 (top right), Fashion-MNIST (bottom left) and mini-ImageNet (bottom right), each annotated with its number of classes and the dimension of the embedded Poincaré ball. Data points from mini-ImageNet are mapped into two dimensions using tSNE for viewing purposes only and thus may not lie in the unit Poincaré disk. Different colors represent different classes.

For our algorithmic solutions we choose to work with the Poincaré ball model for several practical and mathematical reasons. First, this model lends itself to easy data visualization and is known to be conformal. Furthermore, many recent deep learning models are designed to operate on the Poincaré ball model, and our detailed analysis and experimental evaluation of the perceptron, SVM and related algorithms can improve our understanding of these learning methods. The key insight is that tangent spaces of points in the Poincaré ball model are Euclidean. This, along with the fact that logarithmic and exponential maps are readily available to switch between the relevant spaces, simplifies otherwise complicated derivations and allows for addressing classification tasks in a unified manner using convex programs. To estimate the reference points for tangent spaces we make use of convex hull algorithms over the Poincaré ball model and explain how to select the free parameters of the classifiers. Further contributions include generalizations of the work [6] on second-order perceptrons and the work [1] on strategic perceptrons in Euclidean spaces.

Fig. 2: Classification accuracy and run time for points selected uniformly at random in the Poincaré ball. The upper and lower boundaries of the shaded regions represent the first and third quartiles, respectively; the line shows the median (second quartile) and the marker indicates the mean. Detailed explanations pertaining to the test results can be found in Section VI.

The proposed perceptron and SVM methods easily operate on massive synthetic data sets comprising millions of points and up to one thousand dimensions. All perceptron algorithms converge to an error-free result provided that the data satisfies a margin assumption. The second-order Poincaré perceptron converges using significantly fewer iterations than the standard perceptron method, which matches the advantages offered by its Euclidean counterpart [6]. This is of particular interest in online learning settings and also results in lower excess risk [5]. Our Poincaré SVM formulation, which unlike [8] comes with provable performance guarantees, operates orders of magnitude faster than its nonconvex counterpart (minutes versus hours on large point sets) and also offers improved classification accuracy. Real-world data experiments involve single-cell RNA expression measurements [24], CIFAR10 [16], Fashion-MNIST [39] and mini-ImageNet [28]. These data sets have challenging overlapping-class structures that are hard to process in Euclidean spaces, yet Poincaré SVMs still offer outstanding classification accuracy gains compared to their Euclidean counterparts.

This paper is organized as follows. Section II describes an initial set of experimental results illustrating the scalability and high-quality performance of our Poincaré SVM method compared to the corresponding Euclidean and other hyperbolic classifiers. This section also contains a discussion of prior works on hyperbolic perceptrons and SVMs that do not use the tangent space formalism and hence fail to converge and/or provide provable convergence guarantees, as well as a review of variants of perceptron algorithms. Section III describes relevant concepts from differential geometry needed for the analysis at hand. Section IV contains our main results, analytical convergence guarantees for the proposed Poincaré perceptron and SVM learners. Section V includes examples pertaining to generalizations of two variants of perceptron algorithms in Euclidean spaces. A more detailed set of experimental results, pertaining to real-world single-cell RNA expression measurements for cell-typing and three collections of image data sets, is presented in Section VI. These results illustrate the expressive power of hyperbolic spaces for hierarchical data and highlight the unique features and performance of our techniques.

II Relevance and Related Work

To motivate the need for new classification methods in hyperbolic spaces we start by presenting illustrative numerical results for synthetic data sets. We compare the performance of our Poincaré SVM with the previously proposed hyperboloid SVM [8] and Euclidean SVM. The hyperbolic perceptron outlined in [38] does not converge and is hence not used in our comparative study. Rigorous descriptions of all mathematical concepts and pertinent proofs are postponed to the next sections of this paper.

One can clearly observe from Figure 2 that the accuracy of Euclidean SVMs may fall significantly short of the optimum, as the data points are not linearly separable in Euclidean space but only in hyperbolic space. Furthermore, the nonconvex SVM method of [8] does not scale well as the number of points increases: it takes hours to complete the classification process on large point sets while our Poincaré SVM takes only minutes. The algorithm of [8] also breaks down when the data dimension increases, due to its intrinsic instability. Only our Poincaré SVM achieves nearly optimal classification accuracy with extremely low time complexity for all data sets considered. More extensive experimental results on synthetic and real-world data can be found in Section VI.

The exposition in our subsequent sections explains what makes our classifiers fast and accurate, especially when compared to the handful of other existing hyperbolic space methods. In the first line of work to address SVMs in hyperbolic spaces [8], the authors chose to work with the hyperboloid model, which resulted in a nonconvex optimization problem formulation. The nonconvex problem was solved via projected gradient descent, which can provably find only a local optimum. In contrast, as we will show, our Poincaré SVM provably converges to a global optimum. The second related line of work [38] studied hyperbolic perceptrons and a hyperbolic version of robust large-margin classifiers for which a performance analysis was included. That work also solely focused on the hyperboloid model, and the hyperbolic perceptron method outlined therein does not converge. The main difference between our work and previous works is that we resort to a straightforward, universal and simple proof technique that “transfers” the classification problem from a Poincaré ball back to a Euclidean space through the use of tangent space formalisms. Our analytical convergence results are extensively validated experimentally, as described above and in Section VI.

There exist many variants of perceptron-type algorithms in the literature. One variant uses second-order information (correlation) in the samples to accelerate the convergence of standard perceptron algorithms [6]. The method, termed the second-order perceptron, makes use of the data correlation matrix to ensure fast convergence to the optimal perceptron classifier. Another line of work is related to strategic classification [1]. Strategic classification deals with the problem of learning a classifier when the learner relies on data that is provided by strategic agents in an online manner [3, 12]. This problem is of great importance in decision theory and fair learning. It was shown in [1] that standard perceptrons can oscillate and fail to converge when the data is manipulated, and the strategic perceptron was proposed to mitigate this problem. We successfully extend our framework to these two settings and describe Poincaré second-order and strategic perceptrons based on their Euclidean counterparts.

In addition to the perceptrons and SVMs described above, a number of hyperbolic neural network solutions have been put forward as well [10, 32]. These networks are built upon the idea of Poincaré hyperplanes and motivated our approach for designing Poincaré-type perceptrons and SVMs. One should also point out that there are several other deep learning methods specifically designed for the Poincaré ball model, including hyperbolic graph neural networks [19] and variational autoencoders [22, 20, 33]. Despite the excellent empirical performance of these methods, theoretical guarantees are still unavailable due to the complex formalism of deep learners. Our algorithms and proof techniques illustrate for the first time why elementary components of such networks, such as perceptrons, perform exceptionally well when properly formulated for a Poincaré ball.

III Review of Hyperbolic Spaces

We start with a review of basic notions pertinent to hyperbolic spaces. We then proceed to introduce the notion of separation hyperplanes in the Poincaré ball model of hyperbolic space which is crucial for all our subsequent derivations. The relevant notation used is summarized in Table I.

Notation Definition
$n$ Dimension of the Poincaré ball
$c$ Absolute value of the negative curvature, $c > 0$
$\mathbb{B}_c^n$ Poincaré ball model, $\mathbb{B}_c^n = \{x \in \mathbb{R}^n : c\|x\|^2 < 1\}$
TABLE I: Notation and Definitions.

The Poincaré ball model. Despite the existence of a multitude of equivalent models for hyperbolic spaces, Poincaré ball models have received the broadest attention in the machine learning and data mining communities. This is due to the fact that the Poincaré ball model provides conformal representations of shapes and point sets, i.e., it preserves Euclidean angles. The model has also been successfully used for designing hyperbolic neural networks [10, 32] with excellent heuristic performance. Nevertheless, the field of learning in hyperbolic spaces – under the Poincaré or other models – still remains largely unexplored.

The Poincaré ball model is a Riemannian manifold. For an absolute curvature value $c > 0$, its domain is the open ball of radius $1/\sqrt{c}$:
$$\mathbb{B}_c^n = \{x \in \mathbb{R}^n : c\|x\|^2 < 1\};$$
here and elsewhere $\|\cdot\|$ stands for the $\ell_2$ norm and $\langle \cdot, \cdot \rangle$ stands for the standard inner product. The Riemannian metric of the Poincaré model is defined as
$$g_x^c = (\lambda_x^c)^2 I_n, \quad \lambda_x^c = \frac{2}{1 - c\|x\|^2}.$$
For $c \to 0$, we recover the Euclidean space. For simplicity, we focus on the case $c = 1$ in this paper, albeit our results can be generalized to hold for arbitrary $c > 0$; we accordingly write $\mathbb{B}$ and $\lambda_x$. Furthermore, for a reference point $p \in \mathbb{B}$, we denote its tangent space, the first-order linear approximation of $\mathbb{B}$ around $p$, by $T_p\mathbb{B}$.

In the following, we introduce Möbius addition and scalar multiplication — two basic operators in the Poincaré ball [36]. These operators represent analogues of vector addition and scalar-vector multiplication in Euclidean spaces. The Möbius addition of $x, y \in \mathbb{B}$ is defined as
$$x \oplus y = \frac{(1 + 2\langle x, y\rangle + \|y\|^2)\,x + (1 - \|x\|^2)\,y}{1 + 2\langle x, y\rangle + \|x\|^2\|y\|^2}.$$
Unlike its vector-space counterpart, this addition is noncommutative and nonassociative. The Möbius version of multiplication of $x \in \mathbb{B}$ by a scalar $r \in \mathbb{R}$ is defined according to
$$r \otimes x = \tanh\left(r \tanh^{-1}(\|x\|)\right)\frac{x}{\|x\|} \;\; \text{for } x \neq 0, \qquad r \otimes 0 = 0.$$
For detailed properties of these operations, see [37, 10]. The distance function in the Poincaré model is
$$d(x, y) = 2\tanh^{-1}\left(\|-x \oplus y\|\right).$$
Using Möbius operations one can also describe geodesics (analogues of straight lines in Euclidean spaces) in $\mathbb{B}$. The geodesic connecting two points $x, y \in \mathbb{B}$ is given by
$$\gamma_{x \to y}(t) = x \oplus \left(t \otimes (-x \oplus y)\right), \quad t \in [0, 1].$$
Note that $\gamma_{x \to y}(0) = x$, $\gamma_{x \to y}(1) = y$, and $d(\gamma_{x \to y}(0), \gamma_{x \to y}(t)) = t\,d(x, y)$.
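As a concrete illustration, the operations above can be prototyped in a few lines of NumPy for the $c = 1$ ball. This is our own minimal sketch with hypothetical function names, not the code released with the paper.

```python
import numpy as np

def mobius_add(x, y):
    # Mobius addition x (+) y in the unit Poincare ball (c = 1).
    xy = float(np.dot(x, y))
    nx2, ny2 = float(np.dot(x, x)), float(np.dot(y, y))
    return ((1 + 2 * xy + ny2) * x + (1 - nx2) * y) / (1 + 2 * xy + nx2 * ny2)

def mobius_scalar(r, x):
    # Mobius scalar multiplication r (x) x for c = 1.
    nx = np.linalg.norm(x)
    if nx == 0.0:
        return np.zeros_like(x)
    return np.tanh(r * np.arctanh(nx)) * x / nx

def poincare_dist(x, y):
    # Geodesic distance d(x, y) = 2 artanh(|| -x (+) y ||).
    return 2.0 * np.arctanh(np.linalg.norm(mobius_add(-x, y)))

def geodesic(x, y, t):
    # Point at parameter t on the geodesic from x to y.
    return mobius_add(x, mobius_scalar(t, mobius_add(-x, y)))
```

A quick sanity check confirms that the geodesic endpoints recover $x$ and $y$, that left cancellation $-x \oplus (x \oplus y) = y$ holds, and that the distance agrees with the classical $\cosh^{-1}$ formula for the Poincaré metric.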

The following result explains how to construct a geodesic with a given starting point and a tangent vector.

Lemma III.1 (Lemma 1 in [10])

For any $x \in \mathbb{B}$ and $v \in T_x\mathbb{B}$ s.t. $\lambda_x\|v\| = 1$ (i.e., $v$ has unit norm in the metric $g_x$), the geodesic starting at $x$ with tangent vector $v$ equals:
$$\gamma(t) = x \oplus \left(\tanh\left(\frac{t}{2}\right)\frac{v}{\|v\|}\right).$$
We complete the overview by introducing logarithmic and exponential maps.

Lemma III.2 (Lemma 2 in [10])

For any point $p \in \mathbb{B}$, the exponential map $\exp_p : T_p\mathbb{B} \to \mathbb{B}$ and the logarithmic map $\log_p : \mathbb{B} \to T_p\mathbb{B}$ are given for $v \neq 0$ and $y \neq p$ by:
$$\exp_p(v) = p \oplus \left(\tanh\left(\frac{\lambda_p\|v\|}{2}\right)\frac{v}{\|v\|}\right), \qquad \log_p(y) = \frac{2}{\lambda_p}\tanh^{-1}\left(\|-p \oplus y\|\right)\frac{-p \oplus y}{\|-p \oplus y\|}.$$

The geometric interpretation of $\log_p(y)$ is that it gives the tangent vector at the starting point $p$ of the geodesic from $p$ to $y$. On the other hand, $\exp_p(v)$ returns the destination point if one starts at the point $p$ with tangent vector $v$. Hence, a geodesic from $x$ to $y$ may be written as
$$\gamma_{x \to y}(t) = \exp_x\left(t \log_x(y)\right).$$
See Figure 3 for the visual illustration.

Fig. 3: Figure (a): A linear classifier in the Poincaré disk. Figure (b): The corresponding tangent space.
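The exponential and logarithmic maps of Lemma III.2 can likewise be prototyped directly. The sketch below (our own, for $c = 1$, with hypothetical names) checks that the two maps invert each other and that the norm of $\log_p(y)$ measured in the metric $g_p$ equals the geodesic distance $d(p, y)$.

```python
import numpy as np

def mobius_add(x, y):
    xy = float(np.dot(x, y))
    nx2, ny2 = float(np.dot(x, x)), float(np.dot(y, y))
    return ((1 + 2 * xy + ny2) * x + (1 - nx2) * y) / (1 + 2 * xy + nx2 * ny2)

def lam(p):
    # Conformal factor lambda_p = 2 / (1 - ||p||^2).
    return 2.0 / (1.0 - float(np.dot(p, p)))

def exp_map(p, v):
    nv = np.linalg.norm(v)
    if nv == 0.0:
        return p.copy()
    return mobius_add(p, np.tanh(lam(p) * nv / 2.0) * v / nv)

def log_map(p, y):
    u = mobius_add(-p, y)
    nu = np.linalg.norm(u)
    if nu == 0.0:
        return np.zeros_like(p)
    return (2.0 / lam(p)) * np.arctanh(nu) * u / nu
```

The round trip $\exp_p(\log_p(y)) = y$ is exactly the switch between the ball and the tangent space that the unified framework exploits.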

IV Classification in Hyperbolic Spaces

IV-A Classification Algorithms for Poincaré Balls

Classification with Poincaré hyperplanes. The recent work [10] introduced the notion of a Poincaré hyperplane, which generalizes the concept of a hyperplane in Euclidean space. The Poincaré hyperplane with reference point $p \in \mathbb{B}$ and normal vector $w \in T_p\mathbb{B}$ in the above context is defined as
$$H_{p, w} = \{x \in \mathbb{B} : \langle -p \oplus x, w\rangle = 0\},$$
where $w \neq 0$. The minimum distance of a point $x \in \mathbb{B}$ to $H_{p,w}$ has the following closed form
$$d(x, H_{p, w}) = \sinh^{-1}\left(\frac{2\,|\langle -p \oplus x, w\rangle|}{(1 - \|-p \oplus x\|^2)\,\|w\|}\right). \qquad (10)$$
We find it useful to restate (10) so that it only depends on vectors in the tangent space as follows.

Lemma IV.1

Let $z = \log_p(x)$ (and thus $x = \exp_p(z)$); then we have
$$d(x, H_{p, w}) = \sinh^{-1}\left(\sinh\left(\lambda_p\|z\|\right)\frac{|\langle z, w\rangle|}{\|z\|\,\|w\|}\right). \qquad (11)$$
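The tangent-space restatement of Lemma IV.1 can be checked numerically against the ambient-space distance (10). The snippet below is our own sanity check of the two (reconstructed) formulas, not code from the paper, and the function names are ours.

```python
import numpy as np

def mobius_add(x, y):
    xy = float(np.dot(x, y))
    nx2, ny2 = float(np.dot(x, x)), float(np.dot(y, y))
    return ((1 + 2 * xy + ny2) * x + (1 - nx2) * y) / (1 + 2 * xy + nx2 * ny2)

def lam(p):
    return 2.0 / (1.0 - float(np.dot(p, p)))

def log_map(p, y):
    u = mobius_add(-p, y)
    nu = np.linalg.norm(u)
    return (2.0 / lam(p)) * np.arctanh(nu) * u / nu

def dist_ambient(p, w, x):
    # d(x, H_{p,w}) via the closed form (10) in the ambient ball.
    u = mobius_add(-p, x)
    return np.arcsinh(2 * abs(np.dot(u, w)) / ((1 - np.dot(u, u)) * np.linalg.norm(w)))

def dist_tangent(p, w, x):
    # The same distance expressed through z = log_p(x), as in Lemma IV.1.
    z = log_map(p, x)
    nz = np.linalg.norm(z)
    return np.arcsinh(np.sinh(lam(p) * nz) * abs(np.dot(z, w)) / (nz * np.linalg.norm(w)))
```

The two expressions agree because $-p \oplus x = \tanh(\lambda_p\|z\|/2)\,z/\|z\|$, and $\sinh(2a) = 2\tanh(a)\cosh^2(a)$.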


Equipped with the above definitions, we now focus on binary classification in Poincaré models. To this end, let $\{(x_k, y_k)\}_{k=1}^N$ be a set of data points, where $x_k \in \mathbb{B}$ and $y_k \in \{-1, +1\}$ represent the true labels. Note that, based on Lemma IV.1, the decision function induced by $H_{p,w}$ is $x \mapsto \mathrm{sgn}(\langle \log_p(x), w\rangle)$. This is due to the fact that $\sinh^{-1}$ does not change the sign of its input and that all other terms in (11) are positive whenever $x \neq p$ and $w \neq 0$; in the border case $x = p$ one has $z = 0$ and the sign remains unchanged. For linear classification, the goal is to learn a hyperplane $H_{p,w}$ that correctly classifies all points. For large-margin classification, we further require that the learned hyperplane achieve the largest possible margin, i.e., the largest minimum distance $d(x_k, H_{p,w})$ over all data points.
In what follows we outline the key idea behind our approach to classification and analysis of the underlying algorithms. We start with the perceptron classifier, which is the simplest approach yet of relevance for online settings. We then proceed to describe our SVM method which offers excellent performance with provable guarantees.

Our approach builds upon the result of Lemma IV.1. For each $k$, let $z_k = \log_p(x_k)$. We assign a corresponding weighted tangent vector as
$$\tilde{z}_k = \sinh\left(\lambda_p\|z_k\|\right)\frac{z_k}{\|z_k\|}. \qquad (13)$$
Without loss of generality, we also assume that the optimal normal vector has unit norm, $\|w\| = 1$. Then (11) can be rewritten as
$$d(x_k, H_{p, w}) = \sinh^{-1}\left(|\langle \tilde{z}_k, w\rangle|\right). \qquad (14)$$
Note that $\tilde{z}_k = 0$ if and only if $z_k = 0$, which corresponds to the case $x_k = p$. Nevertheless, this “border” case can be easily eliminated under a margin assumption. Hence, the problem of finding an optimal classifier becomes similar to the Euclidean case if one focuses on the tangent space of the Poincaré ball model (see Figure 3 for an illustration).

IV-B The Poincaré Perceptron

We first restate the standard assumptions needed for the analysis of the perceptron algorithm in Euclidean space for the Poincaré model.

Assumption IV.1

The first assumption (15) postulates the existence of a classifier that correctly classifies every point. The margin assumption is listed in (16), while (17) ensures that all points lie in a bounded region.

Using (14) we can easily design the Poincaré perceptron update rule. If a mistake happens at instance $t$ (i.e., $y_t\langle w_t, \tilde{z}_t\rangle \leq 0$), then
$$w_{t+1} = w_t + y_t \tilde{z}_t. \qquad (18)$$
The complete Poincaré perceptron is summarized in the algorithm pseudocode.
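A minimal instantiation of this tangent-space perceptron is sketched below. We assume the reference point $p$ is known, compute the weighted tangent vectors, and run standard perceptron updates on them; the names and the toy data-generation scheme are ours, not the paper's.

```python
import numpy as np

def mobius_add(x, y):
    xy = float(np.dot(x, y))
    nx2, ny2 = float(np.dot(x, x)), float(np.dot(y, y))
    return ((1 + 2 * xy + ny2) * x + (1 - nx2) * y) / (1 + 2 * xy + nx2 * ny2)

def lam(p):
    return 2.0 / (1.0 - float(np.dot(p, p)))

def weighted_tangent(p, x):
    # z~ = sinh(lambda_p ||z||) z / ||z||, with z = log_p(x).
    u = mobius_add(-p, x)
    nu = np.linalg.norm(u)
    z = (2.0 / lam(p)) * np.arctanh(nu) * u / nu
    nz = np.linalg.norm(z)
    return np.sinh(lam(p) * nz) * z / nz

def poincare_perceptron(points, labels, p, max_epochs=2000):
    # Euclidean perceptron run on the weighted tangent vectors.
    Z = [weighted_tangent(p, x) for x in points]
    w = np.zeros(len(Z[0]))
    for _ in range(max_epochs):
        mistakes = 0
        for z, y in zip(Z, labels):
            if y * np.dot(w, z) <= 0:   # mistake: perceptron update
                w = w + y * z
                mistakes += 1
        if mistakes == 0:
            break
    return w

# Toy separable data, labeled by a ground-truth tangent hyperplane at p = 0.
rng = np.random.default_rng(1)
p = np.zeros(2)
w_star = np.array([1.0, -0.5])
pts, ys = [], []
while len(pts) < 100:
    x = rng.uniform(-0.8, 0.8, size=2)
    if np.linalg.norm(x) < 0.9 and abs(np.dot(x, w_star)) > 0.2:  # margin filter
        pts.append(x)
        ys.append(1 if np.dot(x, w_star) > 0 else -1)
w_hat = poincare_perceptron(pts, ys, p)
train_acc = np.mean([np.sign(np.dot(weighted_tangent(p, x), w_hat)) == y
                     for x, y in zip(pts, ys)])
```

Since the weighted tangent vectors are linearly separable with a positive margin, the classical perceptron mistake bound guarantees a mistake-free final pass.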


The Poincaré perceptron comes with the following convergence guarantees.

Theorem IV.1

Under Assumption IV.1, the Poincaré perceptron will correctly classify all points, with the number of updates bounded in terms of the margin in (16) and the data-region bound in (17).

Proof. To prove Theorem IV.1, we need the technical lemma below.

Lemma IV.2

Let $x, y \in \mathbb{B}$. Then
$$\|x \oplus y\| \leq \frac{\|x\| + \|y\|}{1 + \|x\|\,\|y\|},$$
with equality if and only if $x$ and $y$ point in the same direction.
If we replace Möbius addition with ordinary vector addition, the result can be interpreted as follows: the norm of the sum is maximized when the two vectors point in the same direction. This can be easily proved by invoking the Cauchy–Schwarz inequality. However, it is nontrivial to establish the analogous result under Möbius addition. The proof of Lemma IV.2 is given in the Appendix.

As already mentioned in Section IV-A, the key idea is to work in the tangent space $T_p\mathbb{B}$, in which case the Poincaré perceptron becomes similar to the Euclidean perceptron. First, we establish the boundedness of the weighted tangent vectors by invoking the definition of the Poincaré ball from Section III. Here, step (a) of the corresponding chain of inequalities follows from Lemma IV.2 and the defining condition (1), combined with the fact that the relevant scaling function is non-decreasing in its argument.

The remainder of the analysis is similar to that of the standard Euclidean perceptron. We first lower bound the inner product between the current iterate and the optimal normal vector, where step (b) follows from the Cauchy–Schwarz inequality and (c) is a consequence of the margin assumption (16). Next, we upper bound the norm of the iterate, where (d) is a consequence of the boundedness result above and the fact that the relevant function is nondecreasing. Combining the lower and upper bounds yields the claimed bound on the number of updates, which completes the proof.

IV-C Discussion

It is worth pointing out that the authors of [38] also designed and analyzed a different version of a hyperbolic perceptron in the hyperboloid model, in which the Minkowski inner product plays the role of the standard inner product (a detailed description of the hyperboloid model can be found in the Appendix). Their proposed update rule includes a “normalization” step. Although a convergence result was claimed in [38], we demonstrate by simple counterexamples that their hyperbolic perceptron does not converge, mainly due to the choice of the update direction. This can be easily seen by applying the proposed update rule to suitably chosen standard basis vectors, for which the normalization step is ill-defined. Other counterexamples lead to normalization with complex numbers, which is not acceptable.

The algorithm of [38] in fact fails to converge for most of the data sets we tested. The results on synthetic data sets are shown in Figure 4. For this test, data points satisfying a margin assumption are generated in a hyperboloid model and then converted into a Poincaré ball model for use with our Poincaré perceptron. The two accuracy plots shown in Figure 4 (red and green) represent the best achievable results for their corresponding algorithms within the theoretical upper bound on the number of updates of the weight vector. The experiment was performed for different values of the margin. From the generated results, one can easily conclude that our algorithm always converges within the theoretical upper bound provided in Theorem IV.1, while the other algorithm is unstable.

Fig. 4: A comparison between the classification accuracy of our Poincaré perceptron and the algorithm in [38], for different values of the margin. The classification accuracy is the average value over five independent random trials. The stopping criterion is to either achieve error-free classification or reach the corresponding theoretical upper bound on the number of updates of the weight vector.

IV-D Learning Reference Points

Fig. 5: The effect of changing the reference point via parallel transport between the corresponding tangent spaces. Note that the images of the data points in the tangent space change with the choice of the reference point.

So far we have tacitly assumed that the reference point $p$ is known in advance. While the reference point and normal vector can be learned in a simple manner in Euclidean spaces, this is not the case for the Poincaré ball model due to the non-linearity of its logarithmic map and Möbius addition, as illustrated in Figure 5.

Fig. 6: Learning a reference point $p$. Step 1 (left): Construct the convex hull of each cluster; black lines are geodesics defining the surface of the convex hull. Step 2 (right): Find a minimum-distance pair and choose $p$ as their midpoint.

Importantly, we have the following observation: a hyperplane correctly classifies all points if and only if it separates their convex hulls (the definition of “convexity” in hyperbolic spaces follows from replacing lines with geodesics [27]). Hence we can easily generalize known convex hull algorithms to the Poincaré ball model, including the Graham scan [11] and Quickhull [2] (see the Appendix). Note that in a two-dimensional space the described convex hull algorithm has $O(N \log N)$ complexity and is hence very efficient. Next, denote the sets of points on the convex hulls of the two classes by $S_+$ and $S_-$. A minimum-distance pair $(x_+, x_-) \in S_+ \times S_-$ is found by minimizing the Poincaré distance over all cross-class pairs. Then, the hyperbolic midpoint of $x_+$ and $x_-$ serves as the reference point $p$ (see Figure 6). Our strategy of learning $p$ works well on real-world data sets; see Section VI.
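The hyperbolic midpoint used in Step 2 can be computed as the geodesic point $\gamma_{x \to y}(1/2)$. The small sketch below (our own, for $c = 1$) verifies that this midpoint is equidistant from both endpoints.

```python
import numpy as np

def mobius_add(x, y):
    xy = float(np.dot(x, y))
    nx2, ny2 = float(np.dot(x, x)), float(np.dot(y, y))
    return ((1 + 2 * xy + ny2) * x + (1 - nx2) * y) / (1 + 2 * xy + nx2 * ny2)

def mobius_scalar(r, x):
    nx = np.linalg.norm(x)
    if nx == 0.0:
        return np.zeros_like(x)
    return np.tanh(r * np.arctanh(nx)) * x / nx

def poincare_dist(x, y):
    return 2.0 * np.arctanh(np.linalg.norm(mobius_add(-x, y)))

def hyperbolic_midpoint(x, y):
    # Midpoint of the geodesic from x to y: x (+) (0.5 (x) (-x (+) y)).
    return mobius_add(x, mobius_scalar(0.5, mobius_add(-x, y)))
```

Because the geodesic is parameterized with constant speed, the midpoint splits the geodesic distance exactly in half.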

It is important to point out that the computational cost of learning a reference point scales with the dimension of the ambient space, which is undesirable. To resolve this issue we propose a different type of hyperbolic perceptron that operates in the hyperboloid model [34]; it is discussed in the Appendix for completeness.


IV-E The Poincaré SVM

We conclude our theoretical analysis by describing how to formulate and solve SVMs in the Poincaré ball model with performance guarantees. For simplicity, we only consider binary classification. Techniques for dealing with multiple classes are given in Section VI.

When data points from two different classes are linearly separable, the goal of the SVM is to find a “max-margin hyperplane” that correctly classifies all data points and has the maximum distance from the nearest point. This is equivalent to selecting two parallel separating hyperplanes with maximum mutual distance. Points lying on these two hyperplanes are referred to as support vectors, following the convention for the Euclidean SVM; they are critical for the process of selecting the normal vector $w$.

Let $x$ be a support vector; by the Cauchy–Schwarz inequality, its weighted tangent vector satisfies a tight inner-product bound with respect to $w$. Combining this with (14) leads to a lower bound (28) on the distance of a data point to the separating hyperplane, where equality is achieved for the support vectors. Thus we can obtain a max-margin classifier in the Poincaré ball that correctly classifies all data points by maximizing the lower bound in (28). Through a sequence of simple reformulations, the optimization problem of maximizing the lower bound (28) can be cast as an easily solvable convex problem, described in Theorem IV.2.

Theorem IV.2

Maximizing the margin (28) is equivalent to solving the convex problem of either primal (P) or dual (D) form:


which is guaranteed to achieve a global optimum with linear convergence rate by stochastic gradient descent.

The Poincaré SVM formulation from Theorem IV.2 is inherently different from the only other known hyperbolic SVM approach [8]. There, the problem is nonconvex and thus offers no convergence guarantees to a global optimum when using projected gradient descent. In contrast, since both (P) and (D) are smooth and strongly convex, variants of stochastic gradient descent are guaranteed to reach a global optimum at a linear convergence rate. This makes the Poincaré SVM numerically stable and scalable to millions of data points. Another advantage of our formulation is that a solution to (D) directly produces the support vectors, i.e., the data points whose corresponding dual variables are nonzero and which are critical for the classification problem.
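A toy soft-margin version of this idea can be obtained by mapping the data to weighted tangent vectors and minimizing a regularized hinge loss by (sub)gradient descent. The sketch below is our simplification for illustration, not the exact program (P) or (D) of the theorem; function names and the data-generation scheme are ours.

```python
import numpy as np

def mobius_add(x, y):
    xy = float(np.dot(x, y))
    nx2, ny2 = float(np.dot(x, x)), float(np.dot(y, y))
    return ((1 + 2 * xy + ny2) * x + (1 - nx2) * y) / (1 + 2 * xy + nx2 * ny2)

def lam(p):
    return 2.0 / (1.0 - float(np.dot(p, p)))

def weighted_tangent(p, x):
    # z~ = sinh(lambda_p ||z||) z / ||z||, with z = log_p(x).
    u = mobius_add(-p, x)
    nu = np.linalg.norm(u)
    z = (2.0 / lam(p)) * np.arctanh(nu) * u / nu
    nz = np.linalg.norm(z)
    return np.sinh(lam(p) * nz) * z / nz

def tangent_svm(points, labels, p, reg=1e-3, lr=0.1, epochs=300):
    # Full-batch subgradient descent on reg/2 ||w||^2 + mean hinge loss.
    Z = np.array([weighted_tangent(p, x) for x in points])
    y = np.array(labels, dtype=float)
    w = np.zeros(Z.shape[1])
    for t in range(1, epochs + 1):
        active = y * (Z @ w) < 1.0       # margin violators
        grad = reg * w
        if active.any():
            grad = grad - (y[active, None] * Z[active]).sum(axis=0) / len(Z)
        w = w - (lr / np.sqrt(t)) * grad
    return w

# Toy separable data, labeled by a ground-truth tangent hyperplane at p = 0.
rng = np.random.default_rng(2)
p = np.zeros(2)
w_star = np.array([0.8, 0.6])
pts, ys = [], []
while len(pts) < 200:
    x = rng.uniform(-0.8, 0.8, size=2)
    if np.linalg.norm(x) < 0.9 and abs(np.dot(x, w_star)) > 0.1:
        pts.append(x)
        ys.append(1 if np.dot(x, w_star) > 0 else -1)
w_hat = tangent_svm(pts, ys, p)
acc = np.mean([np.sign(np.dot(weighted_tangent(p, x), w_hat)) == y
               for x, y in zip(pts, ys)])
```

Since the objective is convex, any descent method with an appropriate step size drives the training accuracy toward its optimum, mirroring the guarantees of Theorem IV.2.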

When the two data classes are not linearly separable (i.e., the problem is soft- rather than hard-margin), the goal of the SVM method is to maximize the margin while controlling the number of misclassified data points. Below we define a soft-margin Poincaré SVM that trades off the margin and the classification accuracy.

Theorem IV.3

Solving soft-margin Poincaré SVM is equivalent to solving the convex problem of either primal (P) or dual (D) form:


which is guaranteed to achieve a global optimum with sublinear convergence rate by stochastic gradient descent.

The algorithmic procedure behind the soft-margin Poincaré SVM is summarized in the corresponding pseudocode.


V Perceptron Variants in Hyperbolic Spaces

V-A The Poincaré Second-Order Perceptron

The reason behind our interest in the second-order perceptron is that it leads to fewer mistakes during training compared to the classical perceptron. It has been shown in [5] that the error bounds have corresponding statistical risk bounds in online learning settings, which strongly motivates the use of second-order perceptrons for online classification. The performance improvement of the modified perceptron comes from accounting for second-order data information, as the standard perceptron is essentially a gradient descent (first-order) method.

Equipped with the key idea of our unified analysis, we again compute the weighted tangent vectors of the data points. Following the same idea, we can extend the second-order perceptron to the Poincaré ball model. Our Poincaré second-order perceptron comes with the following theoretical guarantee.


Theorem V.1

For all sequences of points satisfying Assumption IV.1, the total number of mistakes made by the Poincaré second-order perceptron is bounded in terms of the margin and of the eigenvalues of the (regularized) correlation matrix of the weighted tangent vectors.
The bound in Theorem V.1 has a form that is almost identical to that of its Euclidean counterpart [6]. However, it is important to observe that the geometry of the Poincaré ball model plays an important role when evaluating the eigenvalues of the correlation matrix. Another important observation is that our tangent-space analysis is not restricted to first-order perceptron methods.
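For intuition, a Euclidean second-order perceptron in the style of [6] can be run directly on the weighted tangent vectors. The sketch below is our own (with regularization parameter $a = 1$ and toy data); it cycles over the data until a mistake-free pass.

```python
import numpy as np

def mobius_add(x, y):
    xy = float(np.dot(x, y))
    nx2, ny2 = float(np.dot(x, x)), float(np.dot(y, y))
    return ((1 + 2 * xy + ny2) * x + (1 - nx2) * y) / (1 + 2 * xy + nx2 * ny2)

def lam(p):
    return 2.0 / (1.0 - float(np.dot(p, p)))

def weighted_tangent(p, x):
    # z~ = sinh(lambda_p ||z||) z / ||z||, with z = log_p(x).
    u = mobius_add(-p, x)
    nu = np.linalg.norm(u)
    z = (2.0 / lam(p)) * np.arctanh(nu) * u / nu
    nz = np.linalg.norm(z)
    return np.sinh(lam(p) * nz) * z / nz

def second_order_perceptron(points, labels, p, a=1.0, max_epochs=1000):
    Z = [weighted_tangent(p, x) for x in points]
    d = len(Z[0])
    S = np.zeros((d, d))   # sum of z z^T over past mistakes
    u = np.zeros(d)        # sum of y z over past mistakes
    for _ in range(max_epochs):
        mistakes = 0
        for z, y in zip(Z, labels):
            M = a * np.eye(d) + S + np.outer(z, z)  # includes the current trial
            s = float(np.dot(np.linalg.solve(M, u), z))
            pred = 1 if s > 0 else -1
            if pred != y:
                S += np.outer(z, z)
                u += y * z
                mistakes += 1
        if mistakes == 0:
            return True
    return False

rng = np.random.default_rng(3)
p = np.zeros(2)
w_star = np.array([-0.4, 1.0])
pts, ys = [], []
while len(pts) < 60:
    x = rng.uniform(-0.8, 0.8, size=2)
    if np.linalg.norm(x) < 0.9 and abs(np.dot(x, w_star)) > 0.2:
        pts.append(x)
        ys.append(1 if np.dot(x, w_star) > 0 else -1)
converged = second_order_perceptron(pts, ys, p)
```

The whitening matrix $M$ incorporates the correlation of past mistaken examples, which is what accelerates convergence relative to the first-order update.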

V-B The Poincaré Strategic Perceptron

Strategic classification deals with the problem of learning a classifier when the learner relies on data provided by strategic agents in an online manner, meaning that the observed data can be manipulated, in a controlled way, based on the utilities of the agents. This is a challenging yet important problem in practice because the learner can only observe the potentially modified data. It has been shown in [1] that standard perceptron algorithms in this setting can fail to converge and may make an unbounded number of mistakes in Euclidean spaces, even when a perfect classifier exists. The authors of [1] thus proposed a modified version of the Euclidean perceptron to deal with this problem. Following the same idea, we extend the strategic perceptron to the Poincaré ball model and establish performance guarantees.

Before introducing our algorithm, we introduce some additional notation and discuss relevant modeling assumptions. We again consider a binary classification problem, but this time in a strategic setting. Here, true (unmanipulated) features from different agents arrive in order, with the corresponding binary labels in . The assumption is that all agents want to be classified as positive regardless of their true labels; to achieve this goal, they can choose to manipulate their data, and the decision whether or not to manipulate is based on the gain and the cost of manipulation. The classifier can only receive observed data points , which may or may not be manipulated. The first assumption is that all agents are utility maximizers, where utility is defined as gain minus cost. More precisely, we assume for simplicity that all agents have a gain equal to when classified as positive, and otherwise. To be more specific, an agent with true data will modify their data to , where val if is classified as “positive” by the current classifier and val otherwise; here, cost refers to the cost of changing (manipulating) to . Second, the cost we consider is proportional to the magnitude of movement between and in the Poincaré ball model, i.e., cost, where is the cost per unit of movement and is the largest amount of movement that a rational agent would undertake. For simplicity, we also assume that is known in advance; otherwise, it can be estimated from the data.
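The utility-maximizing behavior described above can be sketched in a one-dimensional toy setting (all names are illustrative, not the paper's construction): the classifier predicts positive iff the feature exceeds a threshold, and a rational agent moves exactly to the threshold whenever the movement cost does not exceed the gain.

```python
def best_response(x, boundary, gain=1.0, unit_cost=0.5):
    """1-D sketch of a utility-maximizing agent's best response.
    Classifier: predict positive iff x >= boundary.
    The agent manipulates only when it flips the label and the
    movement cost (unit_cost * distance) is at most the gain."""
    if x >= boundary:
        return x                # already classified positive: no manipulation
    move = boundary - x         # minimal movement needed to flip the label
    if unit_cost * move <= gain:
        return boundary         # move exactly to the boundary, no further
    return x                    # too expensive: report the true feature
```

Note that a rational agent never moves past the boundary, which is precisely why manipulated points pile up on the decision surface, the fact exploited by the strategic perceptron.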

One simple example illustrating why Algorithm LABEL:alg:PP can fail to converge in a strategic classification setting is as follows.

Consider from and let , where the points are classified as negative, and as positive, and . Suppose is the first data point to arrive, followed by and . Upon observing , is set to based on (18). Observing will not change the classifier, since it is correctly classified as positive. The next input, , can be manipulated from to in order to mislead the current classifier into misclassifying it as positive. In this case the manipulation cost is , which is within the budget. After the learner receives the true label of , it performs the update . However, when is reexamined next, it will be classified as negative and no manipulation will take place, because the minimum cost for to be classified as positive is . Hence, the learner will perform the update . This causes an infinite loop of updates, and the upper bound on the number of weight-vector updates in Theorem IV.1 no longer holds. Thus, although a perfect classifier with exists in this case, Algorithm LABEL:alg:PP cannot identify it.

For this reason, we propose the following Poincaré strategic perceptron described in Algorithm LABEL:alg:PSP, based on the idea described in [1].


Theorem V.2

If Assumption IV.1 holds for unmanipulated data points , then Algorithm LABEL:alg:PSP makes at most mistakes in the strategic setting for a given cost parameter . Here, .

The proof of Theorem V.2 is delegated to Appendix Proof of Theorem V.2, but two observations are in order to explain the intuition behind Algorithm LABEL:alg:PSP:

  • The updated decision hyperplane at each step will take the form , and is a manipulated data point only if . This is due to the fact that all agents are utility maximizers, so they will only move in the direction of if needed.

  • No observed data point will fall in the region . This can be shown by contradiction. A point lying in this region is either manipulated or not. If it is manipulated, the agent is not rational, as the point is still classified as negative after manipulation; if it is not manipulated, the agent is again not rational, as the cost of modifying the data to be classified as positive is less than , which is within the budget. Since both cases lead to a contradiction, no observed data point satisfies .
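Both observations suggest correcting for manipulation before updating. A minimal Euclidean sketch of this proxy-point idea is given below; the names and the exact correction rule are illustrative assumptions, not the paper's Poincaré construction, which works with tangent vectors and the hyperbolic movement budget.

```python
import math

def strategic_update(w, x, y, delta):
    """One update of a Euclidean strategic-perceptron sketch.
    Points observed on the positive side may have been moved by up to
    `delta` along w by a rational agent, so a proxy point shifted back
    by delta is used in place of x before the usual perceptron step."""
    nw = math.sqrt(sum(wi*wi for wi in w)) or 1.0   # guard against w = 0
    score = sum(wi*xi for wi, xi in zip(w, x))
    if score >= 0:   # positive side: possibly manipulated, undo the shift
        proxy = [xi - delta*wi/nw for xi, wi in zip(x, w)]
    else:            # negative side: a rational agent never lands here
        proxy = list(x)
    pred = 1 if score >= 0 else -1
    if pred != y:    # mistake: perceptron step on the proxy point
        w = [wi + y*pi for wi, pi in zip(w, proxy)]
    return w
```

Updating on the proxy rather than the observed point is what breaks the infinite update loop illustrated in the example above.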

For the cases where the manipulation budget is unknown, one can follow the same estimation procedure presented in [1] which is independent of the curvature of the hyperbolic space.

VI Experiments

To put the performance of our proposed algorithms in the context of existing works on hyperbolic classification, we perform extensive numerical experiments on both synthetic and real-world data sets. In particular, we compare our Poincaré perceptron, second-order perceptron and SVM method with the hyperboloid SVM of [8] and the Euclidean SVM. Detailed descriptions of the experimental settings are provided in the Appendix.

Fig. 7: (left) Decision boundaries for different choices of . (right) Geometry of different choices of margin .

VI-A Synthetic Data Sets

Fig. 8: Experiments on synthetic data sets and . Panels (a)–(d) show accuracy and run time as the dimension is varied, panels (e)–(h) as the number of points is varied, and panels (i)–(l) as the margin is varied. The upper and lower boundaries of the shaded regions represent the first and third quartiles, respectively; the line corresponds to the median (second quartile) and the marker indicates the mean. The first two columns plot the accuracy of the SVM methods while the last two columns plot the corresponding time complexity. For the first row we vary the dimension from to ; for the second row, the number of points from to ; for the third row, the margin from to . The default setting is .

In the first set of experiments, we generate points uniformly at random on the Poincaré disk and perform a binary classification task. To satisfy Assumption IV.1, we restrict the points to have norm at most (boundedness condition). For a decision boundary , we remove all points within margin (margin assumption). Note that the decision boundary looks more “curved” when is larger, which makes it more different from the Euclidean case (Figure 7). When , the optimal decision boundary is also linear in the Euclidean sense. On the other hand, if we choose too large, it is likely that all points are assigned the same label. Hence, we consider the cases and , and let the direction of be generated uniformly at random. Results for the case are shown in Figure 2, while the others are in Figure 8. All results are averaged over independent runs.
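The sampling procedure can be sketched as follows. This is a simplification of the paper's setup: the boundary here is a random diameter through the origin (a geodesic of the Poincaré disk) and the margin filter uses a Euclidean surrogate of the hyperbolic margin; the function and parameter names are illustrative.

```python
import math, random

def sample_poincare_disk(n, r_max=0.9, margin=0.1, seed=0):
    """Sample n labeled points on the Poincaré disk (sketch).
    Points are drawn uniformly (area-wise) with norm at most r_max
    (boundedness condition), labeled by a random diameter through the
    origin, and points too close to the boundary are rejected
    (margin assumption, via a Euclidean surrogate)."""
    rng = random.Random(seed)
    theta = rng.uniform(0, 2*math.pi)
    w = (math.cos(theta), math.sin(theta))     # boundary normal direction
    pts = []
    while len(pts) < n:
        a = rng.uniform(0, 2*math.pi)
        rad = r_max * math.sqrt(rng.random())  # sqrt => uniform in area
        x = (rad*math.cos(a), rad*math.sin(a))
        s = w[0]*x[0] + w[1]*x[1]              # signed surrogate margin
        if abs(s) >= margin:
            pts.append((x, 1 if s > 0 else -1))
    return w, pts
```

Rejection sampling keeps the construction simple; for small margins the acceptance rate stays high, so the cost of resampling is negligible.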

We first vary from to and fix . The accuracy and time complexity are shown in Figure 8 (e)-(h). One can clearly observe that the Euclidean SVM fails to achieve a accuracy, as the data points are linearly separable only in the hyperbolic, but not in the Euclidean, sense. This phenomenon becomes even more pronounced as increases, due to the geometry of the Poincaré disk (Figure 7). On the other hand, the hyperboloid SVM does not scale to such a large number of points. As an example, it takes hours ( hours for the case , Figure 2) to process points; in comparison, our Poincaré SVM only takes minute. Hence, only the Poincaré SVM is highly scalable while offering the highest achievable accuracy.

Next, we vary the margin from to and fix . The accuracy and time complexity are shown in Figure 8 (i)-(l). As the margin shrinks, the accuracy of the Euclidean SVM deteriorates. This is again due to the geometry of the Poincaré disk (Figure 7) and the fact that the classifier needs to be tailor-made for hyperbolic spaces. Interestingly, the hyperboloid SVM performs poorly for and a margin value , as its accuracy is significantly below . This may be attributed to the fact that the cluster sizes are highly unbalanced, which causes numerical issues in the underlying optimization process. Once again, the Poincaré SVM outperforms all other methods in terms of both accuracy and time complexity.

Finally, we examine the influence of the data dimension on the performance of the classifiers. To this end, we vary the dimension from to and fix . The accuracy and time complexity results are shown in Figure 8 (a)-(d). Surprisingly, the hyperboloid SVM fails to learn well when is large and close to . This again reaffirms the importance of the convex formulation of our Poincaré SVM, which is guaranteed to converge to a global optimum independently of the choice of and . We also find that the Euclidean SVM improves its performance as increases, albeit at the price of a high execution time.

We now turn our attention to the evaluation of our perceptron algorithms, which are online in nature. The results are summarized in Table II. There, one can observe that the Poincaré second-order perceptron requires a significantly smaller number of updates than the Poincaré perceptron, especially when the margin is small. This parallels the results observed in Euclidean spaces [6]. Furthermore, the results validate Theorem IV.1, which provides a worst-case upper bound on the number of updates.

Margin 1 0.1 0.01 0.001
S-perceptron 26 82 342 818
(34) (124) (548) (1,505)
perceptron 51 1,495
(65) (2,495) () ()
Theorem IV.1 594 81,749
S-perceptron 29 101 340 545
(41) (159) (748) (986)
perceptron 82 1,158
(138) (2,240) () ()
Theorem IV.1 3,670
TABLE II: Average number of updates for the Poincaré second-order perceptron (S-perceptron) and the Poincaré perceptron for a varying margin and fixed . Bold numbers indicate the best results, with the maximum number of updates over runs shown in parentheses. Also shown is the theoretical upper bound on the number of updates for the Poincaré perceptron from Theorem IV.1.

VI-B Real-World Data Sets

For real-world data sets, we choose to work with more than two collections of points. To enable -class classification for , we use binary classifiers that are independently trained on the same training set to separate each single class from the remaining classes. For each classifier, we transform the resulting prediction scores into probabilities via the Platt scaling technique [26]. The predicted labels are then decided by a maximum a posteriori criterion based on the probability of each class.
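The one-vs-rest decision rule can be sketched as below. The sigmoid parameters per classifier are assumed to have been fitted on held-out scores as in [26]; their names and the surrounding interface are illustrative.

```python
import math

def platt_prob(score, A, B):
    """Platt scaling: map a raw binary-classifier score to a
    probability via the fitted sigmoid P(y=1|s) = 1/(1 + exp(A*s + B)),
    with A typically negative so larger scores mean higher probability."""
    return 1.0 / (1.0 + math.exp(A*score + B))

def ovr_predict(scores, platt_params):
    """One-vs-rest decision: convert each binary classifier's score
    into a probability with its own Platt sigmoid, then pick the class
    whose classifier is most confident (maximum a posteriori)."""
    probs = [platt_prob(s, A, B) for s, (A, B) in zip(scores, platt_params)]
    return max(range(len(probs)), key=probs.__getitem__)
```

Because each sigmoid is fitted separately, Platt scaling makes the scores of independently trained binary classifiers comparable, which is what justifies taking the arg-max.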

The data sets of interest include Olsson’s single-cell expression profiles, containing single-cell (sc) RNA-seq expression data from classes (cell types) [24]; CIFAR10, containing images of common objects from classes [16]; Fashion-MNIST, containing Zalando’s article images with classes [39]; and mini-ImageNet, containing subsampled images from the original ImageNet data set, with classes [28]. Following the procedure described in [14, 13], we embed the single-cell, CIFAR10 and Fashion-MNIST data sets into a -dimensional Poincaré disk, and mini-ImageNet into a -dimensional Poincaré ball, all with curvature (our methods can be easily adapted to work with other curvature values as well); see Figure 1. Other details about the data sets, including the number of samples and the strategy for splitting the training and testing sets, are described in the Appendix. Since the real-world embedded data sets are not linearly separable, we only report classification results for the Poincaré SVM method.

We compare the performance of the Poincaré SVM, hyperboloid SVM and Euclidean SVM for soft-margin classification of the above-described data points. For the Poincaré SVM, the reference point for each binary classifier is estimated via the technique introduced in Section IV-D. The resulting classification accuracy and time complexity are shown in Table III. From the results one can easily see that our Poincaré SVM consistently achieves the best classification accuracy across all data sets while being roughly x faster than the hyperboloid SVM. It is also worth pointing out that, for most data sets embedded into the Poincaré ball model, the Euclidean SVM does not perform well since it does not exploit the geometry of the data; the good performance of the Euclidean SVM on mini-ImageNet can be attributed to the implicit Euclidean metric used in the embedding framework of [13]. Note that since the Poincaré SVM and Euclidean SVM are guaranteed to achieve a global optimum, the standard deviation of their classification accuracy is zero.

Algorithm Accuracy (%) Time (sec)
Olsson’s scRNA-seq Euclidean SVM
Hyperboloid SVM
Poincaré SVM
CIFAR10 Euclidean SVM
Hyperboloid SVM
Poincaré SVM
Fashion-MNIST Euclidean SVM
Hyperboloid SVM
Poincaré SVM
mini-ImageNet Euclidean SVM
Hyperboloid SVM
Poincaré SVM
TABLE III: Performance of the SVM algorithms based on independent trials.

VII Conclusion

We generalized classification algorithms such as the second-order perceptron, the strategic perceptron and the SVM method to Poincaré balls. Our Poincaré classification algorithms come with theoretical guarantees that ensure convergence to a global optimum, and were validated experimentally on both synthetic and real-world data sets. The developed methodology appears well-suited for extensions to other machine learning problems in hyperbolic spaces, an example of which is classification in mixed constant curvature spaces [34].


This work was supported by NSF grant 1956384.


  • [1] S. Ahmadi, H. Beyhaghi, A. Blum, and K. Naggita (2021) The strategic perceptron. In Proceedings of the 22nd ACM Conference on Economics and Computation, pp. 6–25. Cited by: §I, §II, §V-B, §V-B, §V-B.
  • [2] C. B. Barber, D. P. Dobkin, and H. Huhdanpaa (1996) The quickhull algorithm for convex hulls. ACM Transactions on Mathematical Software (TOMS) 22 (4), pp. 469–483. Cited by: Convex hull algorithms in Poincaré ball model, §IV-D.
  • [3] M. Brückner and T. Scheffer (2011) Stackelberg games for adversarial prediction problems. In Proceedings of the 17th ACM SIGKDD international conference on Knowledge discovery and data mining, pp. 547–555. Cited by: §II.
  • [4] J. W. Cannon, W. J. Floyd, R. Kenyon, W. R. Parry, et al. (1997) Hyperbolic geometry. Flavors of geometry 31 (59-115), pp. 2. Cited by: Hyperboloid perceptron.
  • [5] N. Cesa-Bianchi, A. Conconi, and C. Gentile (2004) On the generalization ability of online learning algorithms. IEEE Transactions on Information Theory 50 (9), pp. 2050–2057. Cited by: §I, §V-A.
  • [6] N. Cesa-Bianchi, A. Conconi, and C. Gentile (2005) A second-order perceptron algorithm. SIAM Journal on Computing 34 (3), pp. 640–668. Cited by: 2.Lemma, Proof of Theorem V.1, Detailed experimental setting, §I, §I, §II, §V-A, §VI-A.
  • [7] E. Chien, C. Pan, P. Tabaghi, and O. Milenkovic (2021) Highly scalable and provably accurate classification in poincaré balls. In 2021 IEEE International Conference on Data Mining (ICDM), pp. 61–70. Cited by: footnote 1.
  • [8] H. Cho, B. DeMeo, J. Peng, and B. Berger (2019) Large-margin classification in hyperbolic space. In International Conference on Artificial Intelligence and Statistics, pp. 1832–1840. Cited by: §I, §I, §I, §II, §II, §II, §IV-E, §VI.
  • [9] C. Cortes and V. Vapnik (1995) Support-vector networks. Machine Learning 20 (3), pp. 273–297. Cited by: §I.
  • [10] O. Ganea, G. Bécigneul, and T. Hofmann (2018) Hyperbolic neural networks. In Advances in Neural Information Processing Systems, pp. 5345–5355. Cited by: §I, §II, Lemma III.1, Lemma III.2, §III, §III, §IV-A.
  • [11] R. L. Graham (1972) An efficient algorithm for determining the convex hull of a finite planar set. Info. Pro. Lett. 1, pp. 132–133. Cited by: §IV-D.
  • [12] M. Hardt, N. Megiddo, C. Papadimitriou, and M. Wootters (2016) Strategic classification. In Proceedings of the 2016 ACM conference on innovations in theoretical computer science, pp. 111–122. Cited by: §II.
  • [13] V. Khrulkov, L. Mirvakhabova, E. Ustinova, I. Oseledets, and V. Lempitsky (2020) Hyperbolic image embeddings. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 6418–6428. Cited by: §VI-B, §VI-B.
  • [14] A. Klimovskaia, D. Lopez-Paz, L. Bottou, and M. Nickel (2020) Poincaré maps for analyzing complex hierarchies in single-cell data. Nature communications 11 (1), pp. 1–9. Cited by: §VI-B.
  • [15] D. Krioukov, F. Papadopoulos, M. Kitsak, A. Vahdat, and M. Boguná (2010) Hyperbolic geometry of complex networks. Physical Review E 82 (3). Cited by: §I.
  • [16] A. Krizhevsky, G. Hinton, et al. (2009) Learning multiple layers of features from tiny images. Cited by: §I, §VI-B.
  • [17] K. Lee, S. Maji, A. Ravichandran, and S. Soatto (2019) Meta-learning with differentiable convex optimization. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 10657–10665. Cited by: §I.
  • [18] N. Linial, E. London, and Y. Rabinovich (1995) The geometry of graphs and some of its algorithmic applications. Combinatorica 15 (2), pp. 215–245. Cited by: §I.
  • [19] Q. Liu, M. Nickel, and D. Kiela (2019) Hyperbolic graph neural networks. In Advances in Neural Information Processing Systems, pp. 8230–8241. Cited by: §II.
  • [20] E. Mathieu, C. L. Lan, C. J. Maddison, R. Tomioka, and Y. W. Teh (2019) Continuous hierarchical representations with poincaré variational auto-encoders. In Advances in Neural Information Processing Systems, Cited by: §II.
  • [21] N. Monath, M. Zaheer, D. Silva, A. McCallum, and A. Ahmed (2019) Gradient-based hierarchical clustering using continuous representations of trees in hyperbolic space. In ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pp. 714–722. Cited by: §I.
  • [22] Y. Nagano, S. Yamaguchi, Y. Fujita, and M. Koyama (2019) A wrapped normal distribution on hyperbolic space for gradient-based learning. In International Conference on Machine Learning, pp. 4693–4702. Cited by: §II.
  • [23] M. Nickel and D. Kiela (2017) Poincaré embeddings for learning hierarchical representations. In Advances in Neural Information Processing Systems, pp. 6338–6347. Cited by: §I.
  • [24] A. Olsson, M. Venkatasubramanian, V. K. Chaudhri, B. J. Aronow, N. Salomonis, H. Singh, and H. L. Grimes (2016) Single-cell analysis of mixed-lineage states leading to a binary cell fate choice. Nature 537 (7622), pp. 698–702. Cited by: §I, §VI-B.
  • [25] F. Papadopoulos, R. Aldecoa, and D. Krioukov (2015) Network geometry inference using common neighbors. Physical Review E 92 (2). Cited by: §I.
  • [26] J. Platt et al. (1999) Probabilistic outputs for support vector machines and comparisons to regularized likelihood methods. Advances in Large Margin Classifiers 10 (3), pp. 61–74. Cited by: §VI-B.
  • [27] J. G. Ratcliffe, S. Axler, and K. Ribet (2006) Foundations of hyperbolic manifolds. Vol. 149, Springer. Cited by: §IV-D.
  • [28] S. Ravi and H. Larochelle (2017) Optimization as a model for few-shot learning. In International Conference on Learning Representations, External Links: Link Cited by: §I, §VI-B.
  • [29] F. Sala, C. De Sa, A. Gu, and C. Re (2018) Representation tradeoffs for hyperbolic embeddings. In International Conference on Machine Learning, Vol. 80, pp. 4460–4469. Cited by: §I.
  • [30] R. Sarkar (2011) Low distortion delaunay embedding of trees in hyperbolic plane. In International Symposium on Graph Drawing, pp. 355–366. Cited by: §I.
  • [31] J. Sherman and W. J. Morrison (1950) Adjustment of an inverse matrix corresponding to a change in one element of a given matrix. The Annals of Mathematical Statistics 21 (1), pp. 124–127. Cited by: 1.Lemma, Proof of Theorem V.1.
  • [32] R. Shimizu, Y. Mukuta, and T. Harada (2021) Hyperbolic neural networks++. In International Conference on Learning Representations, External Links: Link Cited by: §I, §II, §III.
  • [33] O. Skopek, O. Ganea, and G. Bécigneul (2020) Mixed-curvature variational autoencoders. In International Conference on Learning Representations, External Links: Link Cited by: §II.
  • [34] P. Tabaghi, C. Pan, E. Chien, J. Peng, and O. Milenković (2021) Linear classifiers in product space forms. arXiv preprint arXiv:2102.10204. Cited by: Hyperboloid perceptron, Hyperboloid perceptron, §IV-D, §VII.
  • [35] A. Tifrea, G. Becigneul, and O. Ganea (2019) Poincaré glove: hyperbolic word embeddings. In International Conference on Learning Representations, External Links: Link Cited by: §I.
  • [36] A. A. Ungar (2008) Analytic hyperbolic geometry and albert einstein’s special theory of relativity. World Scientific. Cited by: §III.
  • [37] J. Vermeer (2005) A geometric interpretation of ungar’s addition and of gyration in the hyperbolic plane. Topology and its Applications 152 (3), pp. 226–242. Cited by: §III.
  • [38] M. Weber, M. Zaheer, A. S. Rawat, A. Menon, and S. Kumar (2020) Robust large-margin learning in hyperbolic space. In Advances in Neural Information Processing Systems, Cited by: Fig. 9, Hyperboloid perceptron, §I, §I, §II, §II, Fig. 4, §IV-C, §IV-C.
  • [39] H. Xiao, K. Rasul, and R. Vollgraf (2017) Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms. arXiv preprint arXiv:1708.07747. Cited by: §I, §VI-B.

Proof of Lemma IV.2

By the definition of Möbius addition, we have



Next, use and in the above expression:
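The Möbius addition used throughout this proof can also be checked numerically. The sketch below implements the standard formula on the unit Poincaré ball with curvature −1, u ⊕ v = ((1 + 2⟨u,v⟩ + ‖v‖²)u + (1 − ‖u‖²)v) / (1 + 2⟨u,v⟩ + ‖u‖²‖v‖²); the function name is ours.

```python
def mobius_add(u, v):
    """Mobius addition on the unit Poincare ball (curvature -1):
    u + v = ((1 + 2<u,v> + |v|^2) u + (1 - |u|^2) v)
            / (1 + 2<u,v> + |u|^2 |v|^2).
    The origin is the identity, and -u is the inverse of u."""
    uv = sum(a*b for a, b in zip(u, v))     # inner product <u, v>
    nu2 = sum(a*a for a in u)               # |u|^2
    nv2 = sum(b*b for b in v)               # |v|^2
    denom = 1 + 2*uv + nu2*nv2
    return [((1 + 2*uv + nv2)*a + (1 - nu2)*b) / denom
            for a, b in zip(u, v)]
```

Useful sanity checks: adding the origin returns the other point unchanged, adding −u to u returns the origin, and the result of any addition stays inside the unit ball.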