Highly Scalable and Provably Accurate Classification in Poincaré Balls

09/08/2021 · Eli Chien et al. · University of Illinois at Urbana-Champaign

Many high-dimensional and large-volume data sets of practical relevance have hierarchical structures induced by trees, graphs or time series. Such data sets are hard to process in Euclidean spaces and one often seeks low-dimensional embeddings in other space forms to perform required learning tasks. For hierarchical data, the space of choice is a hyperbolic space since it guarantees low-distortion embeddings for tree-like structures. Unfortunately, the geometry of hyperbolic spaces has properties not encountered in Euclidean spaces that pose challenges when trying to rigorously analyze algorithmic solutions. Here, for the first time, we establish a unified framework for learning scalable and simple hyperbolic linear classifiers with provable performance guarantees. The gist of our approach is to focus on Poincaré ball models and formulate the classification problems using tangent space formalisms. Our results include a new hyperbolic and second-order perceptron algorithm as well as an efficient and highly accurate convex optimization setup for hyperbolic support vector machine classifiers. All algorithms provably converge and are highly scalable as they have complexities comparable to those of their Euclidean counterparts. Their performance is demonstrated on synthetic data sets comprising millions of points, as well as on complex real-world data sets such as single-cell RNA-seq expression measurements, CIFAR10, Fashion-MNIST and mini-ImageNet.

Code Repositories

PoincareLinearClassification

Official implementation of Highly Scalable and Provably Accurate Classification in Poincare Balls (ICDM regular paper 2021)


I Introduction

Representation learning in hyperbolic spaces has received significant interest due to its effectiveness in capturing latent hierarchical structures [10, 25, 24, 18, 20, 30]. It is known that arbitrarily low-distortion embeddings of tree structures in Euclidean spaces are impossible even when using an unbounded number of dimensions [13]. In contrast, precise and simple embeddings are possible in the Poincaré disk, a hyperbolic space model with only two dimensions [25].

Despite their representational power, hyperbolic spaces still lack foundational analytical results and algorithmic solutions for a wide variety of downstream machine learning tasks. In particular, the question of designing highly scalable classification algorithms with provable performance guarantees that exploit the structure of hyperbolic spaces remains open. While a few prior works have proposed specific algorithms for learning classifiers in hyperbolic space, they are primarily empirical in nature and do not come with theoretical convergence guarantees [4, 16]. The work [33] described the first attempt to establish performance guarantees for the hyperboloid perceptron, but the proposed algorithm is not transparent and fails to converge in practice. Furthermore, the methodology used does not naturally generalize to other important classification methods such as support vector machines (SVMs) [5]. Hence, a natural question arises: Is there a unified framework that allows one to generalize classification algorithms for Euclidean spaces to hyperbolic spaces, make them highly scalable and rigorously establish their performance guarantees?

We give an affirmative answer to this question for a wide variety of classification algorithms. By redefining the notion of separation hyperplanes in hyperbolic spaces, we describe the first known Poincaré ball perceptron, second-order perceptron and SVM methods with provable performance guarantees. Our perceptron algorithm resolves the convergence problems associated with the perceptron method of [33], while our second-order algorithm generalizes the work [3] on second-order perceptrons in Euclidean spaces. Both methods are of importance as they represent a form of online learning in hyperbolic spaces and are basic components of hyperbolic neural networks. On the other hand, our Poincaré SVM method successfully addresses issues associated with solving and analyzing the nontrivial nonconvex optimization problem used to formulate hyperboloid SVMs in [4]. In the latter case, a global optimum may not be attainable using projected gradient descent methods and consequently this SVM method does not provide tangible guarantees. Our proposed algorithms may be viewed as “shallow” one-layer neural networks for hyperbolic spaces that are not only scalable but also (unlike deep networks [6, 27]) exhibit an extremely small storage footprint. They are also of significant relevance for few-shot meta-learning [12] and for applications such as single-cell subtyping and image data processing, as described in our experimental analysis (see Figure 1 for the Poincaré embeddings of these data sets).

Fig. 1: Visualization of four embedded data sets: Olsson’s single-cell RNA expression data (top left), CIFAR10 (top right), Fashion-MNIST (bottom left) and mini-ImageNet (bottom right); the number of classes and the dimension of the embedded Poincaré ball vary across data sets. Data points from mini-ImageNet are mapped into two dimensions using tSNE for viewing purposes only and thus may not lie in the unit Poincaré disk. Different colors represent different classes.

For our algorithmic solutions we choose to work with the Poincaré ball model for several practical and mathematical reasons. First, this model lends itself to easy data visualization and is known to be conformal. Furthermore, many recent deep learning models are designed to operate on the Poincaré ball model, and our detailed analysis and experimental evaluation of the perceptron, SVM and related algorithms can improve our understanding of these learning methods. The key insight is that tangent spaces of points in the Poincaré ball model are Euclidean. This, along with the fact that logarithmic and exponential maps are readily available to switch between the relevant spaces, simplifies otherwise complicated derivations and allows for addressing classification tasks in a unified manner using convex programs. In our proofs we also use new convex hull algorithms over the Poincaré ball model and explain how to select the free parameters of the classifiers.

Fig. 2: Classification accuracy and runtime for points selected uniformly at random in the Poincaré ball. The upper and lower boundaries of the shaded region represent the first and third quartiles, respectively, the line shows the median (second quartile) and the marker indicates the mean. Detailed explanations pertaining to the test results can be found in Section IV.

The proposed Poincaré perceptron, second-order perceptron and SVM methods easily operate on massive synthetic data sets comprising millions of points and up to one thousand dimensions. Both the Poincaré perceptron and its second-order variant converge to an error-free result provided that the data satisfies a margin assumption. The second-order Poincaré perceptron converges using significantly fewer iterations than the perceptron method, which matches the advantages offered by its Euclidean counterpart [3]; this is of particular interest in online learning settings and also results in lower excess risk [2]. Our Poincaré SVM formulation, which unlike [4] comes with provable performance guarantees, also operates significantly faster than its nonconvex counterpart (roughly a minute versus hours on large point sets) and offers improved classification accuracy. Real-world data experiments involve single-cell RNA expression measurements [19], CIFAR10 [11], Fashion-MNIST [34] and mini-ImageNet [23]. These data sets have challenging overlapping-class structures that are hard to process in Euclidean spaces, while Poincaré SVMs still offer outstanding classification accuracy with significant gains compared to their Euclidean counterparts.

The paper is organized as follows. Section II describes an initial set of experimental results illustrating the scalability and high-quality performance of our SVM method compared to the corresponding Euclidean and other hyperbolic classifiers. This section also contains a discussion of prior works on hyperbolic perceptrons and SVMs that do not use the tangent space formalism and hence fail to converge and/or provide provable convergence guarantees. Section III contains a review of relevant concepts from differential geometry needed to describe the classifiers, as well as our main results: analytical convergence guarantees for the proposed learners. A more detailed set of experimental results, pertaining to real-world single-cell RNA expression measurements for cell-typing and three collections of image data sets, is presented in Section IV. These results illustrate the expressive power of hyperbolic spaces for hierarchical data and highlight the unique features and performance of our techniques.

II Relevance and Related Work

To motivate the need for new classification methods in hyperbolic spaces we start by presenting illustrative numerical results for synthetic data sets. We compare the performance of our Poincaré SVM with the previously proposed hyperboloid SVM [4] and Euclidean SVM. The hyperbolic perceptron outlined in [33] does not converge and is hence not used in our comparative study. Rigorous descriptions of all mathematical concepts and pertinent proofs are postponed to the next sections and/or the full version of this paper.

One can clearly observe from Figure 2 that the accuracy of Euclidean SVMs may be significantly reduced, as the data points are not linearly separable in Euclidean space but only in hyperbolic space. Furthermore, the nonconvex SVM method of [4] does not scale well as the number of points increases: it takes hours to complete the classification process on large point sets, while our Poincaré SVM takes only about a minute. That algorithm also breaks down when the data dimension increases, due to its intrinsic instability. Only our Poincaré SVM achieves nearly optimal classification accuracy with extremely low time complexity for all data sets considered. More extensive experimental results on synthetic and real-world data can be found in Section IV.

The exposition in our subsequent sections explains what makes our classifiers as fast and accurate as demonstrated, especially when compared to the handful of other existing hyperbolic space methods. In the first line of work to address SVMs in hyperbolic spaces [4], the authors chose to work with the hyperboloid model, which resulted in a nonconvex optimization problem formulation. The nonconvex problem was solved via projected gradient descent, which can only be guaranteed to find a local optimum. In contrast, as we will show, our Poincaré SVM provably converges to a global optimum. The second related line of work [33] studied hyperbolic perceptrons and a hyperbolic version of robust large-margin classifiers, and included a performance analysis. This work also solely focused on the hyperboloid model, and the hyperbolic perceptron method outlined therein does not converge. Since we choose to work with the Poincaré ball instead of the hyperboloid model, we can resort to straightforward, universal and simple proof techniques that “transfer” the classification problem from the Poincaré ball back to Euclidean space through the use of tangent spaces. Our analytical convergence results are extensively validated experimentally.

In addition to the two linear classification procedures described above, a number of hyperbolic neural network solutions have been put forward as well [6, 27]. These networks were built upon the idea of Poincaré hyperplanes and motivated our approach for designing Poincaré-type perceptrons and SVMs. One should also point out that there are several other deep learning methods specifically designed for the Poincaré ball model, including hyperbolic graph neural networks [14] and variational autoencoders [17, 15, 28]. Despite the excellent empirical performance of these methods, theoretical guarantees are still unavailable due to the complex formalism of deep learners. Our algorithms and proof techniques illustrate for the first time why elementary components of such networks, such as perceptrons, perform exceptionally well when properly formulated for a Poincaré ball.

III Classification in Hyperbolic Spaces

We start with a review of basic notions pertinent to hyperbolic spaces. We then proceed to introduce the notion of separation hyperplanes in the Poincaré model of hyperbolic space, which is crucial for all our subsequent derivations.

The Poincaré ball model. Despite the existence of a multitude of equivalent models of hyperbolic spaces, Poincaré ball models have received the broadest attention in the machine learning and data mining communities. This is due to the fact that the Poincaré ball model provides conformal representations of shapes and point sets, i.e., it preserves Euclidean angles. The model has also been successfully used for designing hyperbolic neural networks [6, 27] with excellent heuristic performance. Nevertheless, the field of learning in hyperbolic spaces – under the Poincaré or other models – still remains largely unexplored.

The Poincaré ball model is a Riemannian manifold $(\mathbb{B}_c^d, g^{\mathbb{B}})$. For an absolute value of the curvature $c > 0$, its domain is the open ball of radius $1/\sqrt{c}$, i.e., $\mathbb{B}_c^d = \{x \in \mathbb{R}^d : c\|x\|^2 < 1\}$. Here and elsewhere $\|\cdot\|$ stands for the $\ell_2$ norm and $\langle \cdot, \cdot \rangle$ stands for the standard inner product. The Riemannian metric is defined as $g_x^{\mathbb{B}} = \lambda_x^2\, g^{E}$, where $\lambda_x = \frac{2}{1 - c\|x\|^2}$ and $g^{E}$ denotes the Euclidean metric. For $c = 0$, we recover the Euclidean space, i.e., $\mathbb{B}_0^d = \mathbb{R}^d$. For simplicity, we focus on the case $c = 1$ (and write $\mathbb{B}^d$), albeit our results can be generalized to hold for arbitrary $c > 0$. Furthermore, for a reference point $x \in \mathbb{B}^d$, we denote its tangent space, the first-order linear approximation of $\mathbb{B}^d$ around $x$, by $T_x\mathbb{B}^d$.

In the following, we introduce Möbius addition and scalar multiplication, two basic operators on the Poincaré ball [31]. These operators represent analogues of vector addition and scalar-vector multiplication in Euclidean spaces. The Möbius addition of $x, y \in \mathbb{B}^d$ is defined as

$$x \oplus y = \frac{(1 + 2\langle x, y\rangle + \|y\|^2)\, x + (1 - \|x\|^2)\, y}{1 + 2\langle x, y\rangle + \|x\|^2 \|y\|^2}. \qquad (1)$$

Unlike its vector-space counterpart, this addition is noncommutative and nonassociative. The Möbius version of multiplication of $x \in \mathbb{B}^d \setminus \{0\}$ by a scalar $r \in \mathbb{R}$ is defined according to

$$r \otimes x = \tanh\!\left(r \tanh^{-1}(\|x\|)\right) \frac{x}{\|x\|}, \qquad r \otimes 0 = 0. \qquad (2)$$

For detailed properties of these operations, see [32, 6]. The distance function in the Poincaré model is

$$d(x, y) = 2 \tanh^{-1}\!\left(\| -x \oplus y \|\right). \qquad (3)$$

Using Möbius operations one can also describe geodesics (analogues of straight lines in Euclidean spaces) in $\mathbb{B}^d$. The geodesic connecting two points $x, y \in \mathbb{B}^d$ is given by

$$\gamma_{x \to y}(t) = x \oplus \left(t \otimes (-x \oplus y)\right), \quad t \in [0, 1]. \qquad (4)$$

Note that $\gamma_{x \to y}(0) = x$, $\gamma_{x \to y}(1) = y$ and $d(x, \gamma_{x \to y}(t)) = t\, d(x, y)$.
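As a concrete illustration of the operations above, the following minimal numerical sketch (curvature $c = 1$; the function names are purely illustrative and not taken from any released code) implements the Möbius addition (1), the Möbius scalar multiplication (2), the distance (3) and the geodesic (4):

```python
import numpy as np

def mobius_add(x, y):
    """Möbius addition x ⊕ y on the unit Poincaré ball, eq. (1)."""
    xy, nx2, ny2 = np.dot(x, y), np.dot(x, x), np.dot(y, y)
    num = (1 + 2 * xy + ny2) * x + (1 - nx2) * y
    return num / (1 + 2 * xy + nx2 * ny2)

def mobius_scalar(r, x):
    """Möbius scalar multiplication r ⊗ x, eq. (2)."""
    nx = np.linalg.norm(x)
    return x if nx == 0 else np.tanh(r * np.arctanh(nx)) * x / nx

def poincare_dist(x, y):
    """Geodesic distance, eq. (3): d(x, y) = 2 artanh(|| -x ⊕ y ||)."""
    return 2 * np.arctanh(np.linalg.norm(mobius_add(-x, y)))

# Geodesic (4): gamma(t) = x ⊕ (t ⊗ (-x ⊕ y)), so gamma(0) = x, gamma(1) = y, and the
# point at t = 1/2 is the hyperbolic midpoint, equidistant from both endpoints.
x, y = np.array([0.1, 0.2]), np.array([-0.3, 0.4])
mid = mobius_add(x, mobius_scalar(0.5, mobius_add(-x, y)))
assert np.isclose(poincare_dist(x, mid), poincare_dist(mid, y))
```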

The following result explains how to construct a geodesic with a given starting point and tangent vector.

Lemma III.1 ([6])

For any $x \in \mathbb{B}^d$ and $v \in T_x\mathbb{B}^d$ such that $g_x^{\mathbb{B}}(v, v) = 1$, the unit-speed geodesic starting at $x$ with tangent vector $v$ equals

$$\gamma_{x, v}(t) = x \oplus \left(\tanh\!\left(\frac{t}{2}\right) \frac{v}{\|v\|}\right).$$

We complete the overview by introducing logarithmic and exponential maps.

Lemma III.2 (Lemma 2 in [6])

For any point $p \in \mathbb{B}^d$, the exponential map $\exp_p: T_p\mathbb{B}^d \to \mathbb{B}^d$ and the logarithmic map $\log_p: \mathbb{B}^d \to T_p\mathbb{B}^d$ are given for $v \neq 0$ and $x \neq p$ by:

$$\exp_p(v) = p \oplus \left(\tanh\!\left(\frac{\lambda_p \|v\|}{2}\right) \frac{v}{\|v\|}\right), \qquad (5)$$
$$\log_p(x) = \frac{2}{\lambda_p} \tanh^{-1}\!\left(\| -p \oplus x \|\right) \frac{-p \oplus x}{\| -p \oplus x \|}. \qquad (6)$$

The geometric interpretation of $\log_p(x)$ is that it gives the tangent vector at $p$ of the geodesic from $p$ to $x$. On the other hand, $\exp_p(v)$ returns the destination point if one starts at the point $p$ with tangent vector $v$. Hence, a geodesic from $p$ to $x$ may be written as

$$\gamma_{p \to x}(t) = \exp_p\!\left(t \log_p(x)\right). \qquad (7)$$

See Figure 3 for an illustration.
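For concreteness, a self-contained sketch of the exponential and logarithmic maps (5)-(6) at a base point $p$ (again for $c = 1$, with illustrative helper names) is given below; the final assertion checks that the two maps are mutual inverses.

```python
import numpy as np

def mobius_add(x, y):
    """Möbius addition x ⊕ y, eq. (1)."""
    xy, nx2, ny2 = np.dot(x, y), np.dot(x, x), np.dot(y, y)
    return ((1 + 2 * xy + ny2) * x + (1 - nx2) * y) / (1 + 2 * xy + nx2 * ny2)

def conformal_factor(p):
    """lambda_p = 2 / (1 - ||p||^2), the conformal factor of the Poincaré metric."""
    return 2.0 / (1.0 - np.dot(p, p))

def exp_map(p, v):
    """exp_p(v): point reached from p along the geodesic with tangent vector v, eq. (5)."""
    nv = np.linalg.norm(v)
    return p if nv == 0 else mobius_add(p, np.tanh(conformal_factor(p) * nv / 2) * v / nv)

def log_map(p, x):
    """log_p(x): tangent vector at p of the geodesic from p to x, eq. (6)."""
    u = mobius_add(-p, x)
    nu = np.linalg.norm(u)
    return np.zeros_like(x) if nu == 0 else (2.0 / conformal_factor(p)) * np.arctanh(nu) * u / nu

p, x = np.array([0.2, -0.1]), np.array([0.5, 0.3])
assert np.allclose(exp_map(p, log_map(p, x)), x)
```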

Classification with Poincaré hyperplanes. The recent work [6] introduced the notion of a Poincaré hyperplane, which generalizes the concept of a hyperplane in Euclidean space. The Poincaré hyperplane with reference point $p \in \mathbb{B}^d$ and normal vector $w \in T_p\mathbb{B}^d \setminus \{0\}$ is defined in the above context as

$$H_{w, p} = \{x \in \mathbb{B}^d : \langle -p \oplus x, w\rangle = 0\},$$

where $\oplus$ is the Möbius addition (1). The minimum distance of a point $x \in \mathbb{B}^d$ to $H_{w, p}$ has the following closed form

$$d(x, H_{w, p}) = \sinh^{-1}\!\left(\frac{2\,|\langle -p \oplus x, w\rangle|}{(1 - \| -p \oplus x \|^2)\, \|w\|}\right). \qquad (8)$$

We find it useful to restate (8) so that it only depends on vectors in the tangent space $T_p\mathbb{B}^d$, as follows.

Lemma III.3

Let $z = \log_p(x)$ (and thus $x = \exp_p(z)$); then we have

$$d(x, H_{w, p}) = \sinh^{-1}\!\left(\sinh\!\left(\lambda_p \|z\|\right) \frac{|\langle z, w\rangle|}{\|z\|\, \|w\|}\right). \qquad (9)$$

Equipped with the above definitions, we now focus on binary classification in Poincaré models. To this end, let $\{(x_k, y_k)\}_{k=1}^{N}$ be a set of data points, where $x_k \in \mathbb{B}^d$ and $y_k \in \{-1, +1\}$ represent the true labels. Note that, based on Lemma III.3, the decision function induced by $H_{w,p}$ is $\hat{y}(x) = \mathrm{sign}\!\left(\langle \log_p(x), w\rangle\right)$. This is due to the fact that $\sinh^{-1}(\cdot)$ does not change the sign of its input and that all other terms in (9) are positive if $z \neq 0$ and $w \neq 0$. For the case that either $z$ or $w$ is $0$, the distance equals $0$ and thus the sign remains unchanged. For linear classification, the goal is to learn a pair $(w, p)$ that correctly classifies all points. For large-margin classification, we further require that the learned $(w, p)$ achieves the largest possible margin,

$$\max_{w \in T_p\mathbb{B}^d \setminus \{0\}} \; \min_{k \in [N]} \; d(x_k, H_{w, p}) \quad \text{s.t. } \mathrm{sign}\!\left(\langle \log_p(x_k), w\rangle\right) = y_k \;\; \forall k \in [N]. \qquad (10)$$

III-A Classification algorithms for Poincaré balls

Fig. 3: (a) A linear classifier in the Poincaré disk. (b) The corresponding tangent space at the reference point.

In what follows we outline the key idea behind our approach to classification and the analysis of the underlying algorithms. We start with the perceptron classifier, which is the simplest approach yet one of relevance for online settings. We then proceed to describe its second-order extension, which offers significant reductions in the number of data passes, and then introduce our SVM method, which offers excellent performance with provable guarantees.

Our approach builds upon the result of Lemma III.3. For each $x_k$, let $z_k = \log_p(x_k) \in T_p\mathbb{B}^d$. We assign a corresponding weight as

$$\bar{z}_k = \begin{cases} \sinh\!\left(\lambda_p \|z_k\|\right) \frac{z_k}{\|z_k\|}, & z_k \neq 0, \\ 0, & z_k = 0. \end{cases} \qquad (11)$$

Without loss of generality, we also assume that the optimal normal vector has unit norm, $\|w^{*}\| = 1$. Then (9) can be rewritten as

$$d(x_k, H_{w^{*}, p}) = \sinh^{-1}\!\left(|\langle \bar{z}_k, w^{*}\rangle|\right). \qquad (12)$$

Note that $\bar{z}_k = 0$ if and only if $z_k = 0$, which corresponds to the case $x_k = p$. Nevertheless, this “border” case can be easily eliminated under a margin assumption. Hence, the problem of finding an optimal classifier becomes similar to the Euclidean case if one focuses on the tangent space of the Poincaré ball model (see Figure 3 for an illustration).
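To make the tangent-space rewriting concrete, the sketch below numerically checks that, for a unit-norm normal vector, $\sinh^{-1}(|\langle \bar{z}, w\rangle|)$ agrees with the closed-form distance (8). It reuses mobius_add, conformal_factor and log_map from the earlier sketches, and all names are illustrative.

```python
import numpy as np

def scaled_tangent(p, x):
    """Scaled tangent vector of eq. (11): sinh(lambda_p ||z||) z / ||z|| with z = log_p(x)."""
    z = log_map(p, x)
    nz = np.linalg.norm(z)
    return np.zeros_like(z) if nz == 0 else np.sinh(conformal_factor(p) * nz) * z / nz

def dist_to_hyperplane(p, w, x):
    """Closed-form distance (8) of x to the Poincaré hyperplane with reference p and normal w."""
    u = mobius_add(-p, x)
    return np.arcsinh(2 * abs(np.dot(u, w)) / ((1 - np.dot(u, u)) * np.linalg.norm(w)))

p, x = np.array([0.2, -0.1]), np.array([0.4, 0.3])
w = np.array([0.6, 0.8])                                    # unit-norm normal vector in T_p B
d_tangent = np.arcsinh(abs(np.dot(scaled_tangent(p, x), w)))
assert np.isclose(dist_to_hyperplane(p, w, x), d_tangent)   # (12) agrees with (8)
# The induced label is sign(<log_p(x), w>), which equals sign(<scaled_tangent(p, x), w>).
```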

III-B Poincaré perceptron

We first restate, for the Poincaré model, the standard assumptions needed for the analysis of the perceptron algorithm in Euclidean space.

Assumption III.1

$$\exists\, w^{*} \in T_p\mathbb{B}^d \ \text{with} \ \|w^{*}\| = 1 \ \text{such that} \ y_k \langle \bar{z}_k, w^{*}\rangle > 0 \quad \forall k \in [N], \qquad (13)$$
$$y_k \langle \bar{z}_k, w^{*}\rangle \geq \sinh(\varepsilon) \quad \forall k \in [N], \ \text{for some margin} \ \varepsilon > 0, \qquad (14)$$
$$\|x_k\| \leq R < 1 \quad \forall k \in [N]. \qquad (15)$$

The first assumption (13) postulates the existence of a classifier that correctly classifies every point. The margin assumption is listed in (14), while (15) ensures that the points lie in a bounded region.

Using (12) we can easily design the Poincaré perceptron update rule. If a mistake happens at instance $k$ (i.e., $y_k \langle w_t, \bar{z}_k\rangle \leq 0$), then

$$w_{t+1} = w_t + y_k \bar{z}_k. \qquad (16)$$
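A minimal training-loop sketch of the update rule (16) is given below. It assumes the reference point $p$ is known, reuses the scaled_tangent helper from the earlier sketch, and is intended as an illustration of the rule rather than a reproduction of the authors' implementation.

```python
import numpy as np

def poincare_perceptron(X, y, p, max_passes=100):
    """X: (N, d) array of points in the Poincaré ball, y: labels in {-1, +1}, p: reference point."""
    Z = np.stack([scaled_tangent(p, x) for x in X])    # work entirely in the tangent space T_p B
    w = np.zeros(X.shape[1])
    for _ in range(max_passes):
        mistakes = 0
        for z, label in zip(Z, y):
            if label * np.dot(w, z) <= 0:              # misclassified (or on the boundary)
                w = w + label * z                      # update rule (16)
                mistakes += 1
        if mistakes == 0:                              # no mistakes left: all points classified
            break
    return w                                           # predict via sign(<scaled_tangent(p, x), w>)
```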

The Poincaré perceptron algorithm (16) comes with the following convergence guarantees.

Theorem III.1

Under Assumption III.1, the Poincaré perceptron (16) will correctly classify all points with at most updates, where .

To prove Theorem III.1, we need the technical lemma below.

Lemma III.4

Let $x, y \in \mathbb{B}^d$. Then

$$\|x \oplus y\| \leq \frac{\|x\| + \|y\|}{1 + \|x\|\,\|y\|}, \qquad (17)$$
$$\|x \oplus y\| \geq \frac{\big|\,\|x\| - \|y\|\,\big|}{1 - \|x\|\,\|y\|}. \qquad (18)$$

If we were to replace $\oplus$ by ordinary vector addition, the result could be interpreted as follows: the norm of $x \oplus y$ is maximized when $y$ has the same direction as $x$. This can be easily proved by invoking the Cauchy-Schwarz inequality. However, it is nontrivial to establish the result under Möbius addition.

Proof. As already mentioned in Section III-A, the key idea is to work in the tangent space $T_p\mathbb{B}^d$, in which case the Poincaré perceptron becomes similar to the Euclidean perceptron. First, we establish the boundedness of the scaled tangent vectors $\bar{z}_k$ by invoking their definition (11) from Section III:

(19)

where (a) can be shown to hold by invoking Lemma III.4 together with the definition of Möbius addition (1), performing some algebraic manipulations, and using the fact that $\sinh(\cdot)$ is nondecreasing.

The remainder of the analysis is similar to that of the standard Euclidean perceptron. We first lower bound $\langle w_{T+1}, w^{*}\rangle$ as

(20)

where (b) follows from the Cauchy-Schwarz inequality and (c) is a consequence of the margin assumption (14). Next, we upper bound $\|w_{T+1}\|$ as

(21)

where (d) is a consequence of (19) and the fact that the relevant function is nondecreasing on the domain of interest. Combining (20) and (21) completes the proof.

It is worth pointing out that the authors of [33] also designed and analyzed a different version of a hyperbolic perceptron in the hyperboloid model $\mathbb{L}^d = \{x \in \mathbb{R}^{d+1} : [x, x] = -1, \ x_0 > 0\}$, where $[\cdot, \cdot]$ denotes the Minkowski product. Their proposed update rule is

(22)
(23)

where (23) is a normalization step. Although a convergence result was claimed in [33], we demonstrate by simple counterexamples that their hyperbolic perceptron does not converge. Moreover, we find that the algorithm does not converge to a meaningful solution for most of the data sets tested; this can easily be seen by applying the proposed update rule to suitably labeled standard basis vectors, for which the normalization step becomes ill-defined. Other counterexamples lead to normalization involving complex numbers, which is not acceptable.

III-C Poincaré second-order perceptron

The reason behind our interest in the second-order perceptron is that it leads to fewer mistakes during training compared to the classical perceptron. It has been shown in [2] that the error bounds have corresponding statistical risk bounds in online learning settings, which strongly motivates the use of second-order perceptrons for online classification. The performance improvement of the modified perceptron comes from accounting for second-order data information, as the standard perceptron is essentially a gradient descent (first-order) method.

Equipped with the key idea behind our unified analysis, we compute the scaled tangent vectors $\bar{z}_k$ and, following the same approach, extend the second-order perceptron to the Poincaré ball model. Our Poincaré second-order perceptron (an illustrative sketch is given below) comes with the following theoretical guarantee.

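The sketch below adapts the Euclidean second-order perceptron of [3] to the scaled tangent vectors; it is one plausible reading of the omitted algorithm listing, not a verbatim reproduction, and again reuses scaled_tangent from the earlier sketch. The parameter a > 0 is the usual regularization constant of the second-order perceptron.

```python
import numpy as np

def poincare_second_order_perceptron(X, y, p, a=1.0, max_passes=100):
    """Second-order perceptron run on the scaled tangent vectors of the data."""
    d = X.shape[1]
    Z = np.stack([scaled_tangent(p, x) for x in X])
    v = np.zeros(d)                  # sum of label * z over past mistakes
    M = np.zeros((d, d))             # sum of outer products z z^T over past mistakes
    for _ in range(max_passes):
        mistakes = 0
        for z, label in zip(Z, y):
            A = a * np.eye(d) + M + np.outer(z, z)   # second-order (whitening) matrix
            w = np.linalg.solve(A, v)                # w = (a I + M + z z^T)^{-1} v
            if label * np.dot(w, z) <= 0:            # mistake: store the example
                v += label * z
                M += np.outer(z, z)
                mistakes += 1
        if mistakes == 0:
            break
    return np.linalg.solve(a * np.eye(d) + M, v)     # final classifier direction
```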

Theorem III.2

For all sequences of examples satisfying Assumption III.1, the total number of mistakes made by the Poincaré second-order perceptron satisfies

(24)

where the quantities appearing in the bound are determined by the margin and by the eigenvalues of the correlation matrix of the scaled tangent vectors stored at the mistake rounds.

The bound in Theorem III.2 has a form that is almost identical to that of its Euclidean counterpart [3]. However, it is important to observe that the geometry of the Poincaré ball model plays an important role when evaluating the eigenvalues and related quantities. Another important observation is that our tangent-space analysis is not restricted to first-order perceptron methods.

III-D Poincaré SVM

We conclude our theoretical analysis by describing how to formulate and solve SVMs in the Poincaré ball model with performance guarantees. For simplicity, we only consider binary classification. Techniques for dealing with multiple classes are given in Section IV.

When data points from two different classes are linearly separable the goal of SVM is to find a “max-margin hyperplane” that correctly classifies all data points and has the maximum distance from the nearest point. This is equivalent to selecting two parallel hyperplanes with maximum distance that can separate two classes. Following this approach and assuming that the data points are normalized, we can choose these two hyperplanes as and with . Points lying on these two hyperplanes are referred to as support vectors following the convention for Euclidean SVM. They are critical for the process of selecting .

Let $w$ be such that the normalization condition above holds. Then, by the Cauchy-Schwarz inequality, the support vectors satisfy

(25)

Combining the above result with (12) leads to a lower bound on the distance of a data point to the hyperplane,

(26)

where equality is achieved for the support vectors.

where equality is achieved for . Thus we can obtain a max-margin classifier in the Poincaré ball that can correctly classify all data points by maximizing the lower bound in (26). Through a sequence of simple reformulations, the optimization problem of maximizing the lower bound (26) can be cast as an easily-solvable convex problem described in Theorem III.3.

Theorem III.3

Maximizing the margin (26) is equivalent to solving the convex problem of either primal (P) or dual (D) form:

(27)
(28)

which is guaranteed to achieve a global optimum with linear convergence rate by stochastic gradient descent.

The Poincaré SVM formulation from Theorem III.3 is inherently different from the only other known hyperbolic SVM approach [4]. There, the problem is nonconvex and thus does not offer convergence guarantees to a global optimum when using projected gradient descent. In contrast, since both (P) and (D) are smooth and strongly convex, variants of stochastic gradient descent are guaranteed to reach a global optimum at a linear convergence rate. This makes the Poincaré SVM numerically more stable and scalable to millions of data points. Another advantage of our formulation is that a solution to (D) directly produces the support vectors, i.e., the data points with nonzero dual variables, which are critical for the classification problem.
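Since the exact primal/dual programs (27)-(28) are not reproduced above, the following hedged sketch only illustrates the general recipe suggested by the analysis: map the data to scaled tangent vectors and run a stochastic (sub)gradient method on a regularized hinge loss (Pegasos-style). It reuses scaled_tangent from the earlier sketch and should not be read as the paper's exact formulation.

```python
import numpy as np

def poincare_svm_sgd(X, y, p, lam=0.01, epochs=50, seed=0):
    """Hinge-loss SGD on the scaled tangent vectors; lam is the regularization strength."""
    rng = np.random.default_rng(seed)
    Z = np.stack([scaled_tangent(p, x) for x in X])
    w = np.zeros(X.shape[1])
    t = 0
    for _ in range(epochs):
        for i in rng.permutation(len(Z)):
            t += 1
            eta = 1.0 / (lam * t)                     # standard Pegasos step size
            if y[i] * np.dot(w, Z[i]) < 1:            # hinge loss active at the current iterate
                w = (1 - eta * lam) * w + eta * y[i] * Z[i]
            else:
                w = (1 - eta * lam) * w
    return w                                          # predict via sign(<scaled_tangent(p, x), w>)
```

In the separable case, because $\sinh^{-1}$ is monotone, maximizing the minimum margin over unit-norm normal vectors reduces to a standard max-margin problem on the scaled tangent vectors, which is why such off-the-shelf solvers are a reasonable proxy for illustration purposes.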

When two data classes are not linearly separable (i.e., the problem is soft- rather than hard-margin), the goal of the SVM method is to maximize the margin while controlling the number of misclassified data points. Below we define a soft-margin Poincaré SVM that trades off the margin and the classification accuracy.

Theorem III.4

Solving the soft-margin Poincaré SVM is equivalent to solving the convex problem in either its primal (P) or dual (D) form:

(29)
(30)

which is guaranteed to achieve a global optimum with sublinear convergence rate by stochastic gradient descent.

The algorithmic procedure behind the soft-margin Poincaré SVM is summarized in an algorithm listing in the full version of the paper.


III-E Learning a reference point

Fig. 4: The effect of changing the reference point via parallel transport between the corresponding tangent spaces. Note that the images of the data points in the tangent spaces change with the choice of the reference point.

So far we have tacitly assumed that the reference point is known in advance. While the reference point and normal vector can be learned in a simple manner in Euclidean spaces, this is not the case for the Poincaré ball model due to the nonlinearity of its logarithmic map and Möbius addition (Figure 4).

Fig. 5: Learning a reference point $p$. Step 1 (left): construct a convex hull for each cluster; black lines are geodesics defining the surface of the convex hull. Step 2 (right): find a minimum-distance pair and choose $p$ as their hyperbolic midpoint.

Importantly, we have the following observation: a hyperplane correctly classifies all points if and only if it separates their convex hulls (the definition of “convexity” in hyperbolic spaces follows from replacing lines with geodesics [22]). Hence we can easily generalize known convex hull algorithms to the Poincaré ball model, including the Graham scan [7] and Quickhull [1] (see the Appendix for further discussion); the resulting convex hull algorithm is computationally very efficient. Next, consider the sets of points on the convex hulls of the two classes. A minimum-distance pair between these two sets can be found as

(31)

Then, the hyperbolic midpoint of this pair serves as the reference point $p$ (see Figure 5). Our strategy for learning $p$, combined with the algorithms introduced in Section III-A, works well on real-world data sets; see Section IV.
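A simple sketch of this heuristic follows; for brevity it searches over all points of the two classes rather than only their hull points, and it reuses mobius_add, mobius_scalar and poincare_dist from the first sketch.

```python
import numpy as np

def reference_point(X_pos, X_neg):
    """Pick p as the hyperbolic midpoint of a minimum-distance pair between the two classes."""
    best = (np.inf, None, None)
    for a in X_pos:                   # in practice, restrict the search to the convex hulls
        for b in X_neg:
            dist = poincare_dist(a, b)
            if dist < best[0]:
                best = (dist, a, b)
    _, a, b = best
    # Hyperbolic midpoint: the point at t = 1/2 on the geodesic (4) from a to b.
    return mobius_add(a, mobius_scalar(0.5, mobius_add(-a, b)))
```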

IV Experiments

To put the performance of our proposed algorithms in the context of existing works on hyperbolic classification, we perform extensive numerical experiments on both synthetic and real-world data sets. In particular, we compare our Poincaré perceptron, second-order perceptron and SVM method with the hyperboloid SVM of [4] and the Euclidean SVM. Detailed descriptions of the experimental settings are provided in the Appendix.

Fig. 6: (left) Decision boundaries in the Poincaré disk for different parameter choices. (right) Geometry of different choices of the margin.

IV-A Synthetic data sets

Fig. 7: Experiments on synthetic data sets; the panels report accuracy and runtime as functions of the dimension (first row), the number of points (second row) and the margin (third row). The upper and lower boundaries of the shaded region represent the first and third quartiles, respectively; the line corresponds to the median (second quartile) and the marker indicates the mean. The first two columns plot the accuracy of the SVM methods while the last two columns plot the corresponding time complexity.

In the first set of experiments, we generate points uniformly at random on the Poincaré disk and perform a binary classification task. To satisfy Assumption III.1, we restrict the points to have bounded norm (boundedness condition). For a given decision boundary, we remove all points within the prescribed margin (margin assumption). Note that the decision boundary looks more “curved” the further it lies from the origin, which makes the setting increasingly different from the Euclidean case (Figure 6); a boundary passing through the origin is also linear in the Euclidean sense. On the other hand, if the boundary is placed too close to the border of the disk, then it is likely that all points are assigned the same label. Hence, we consider two intermediate placements and let the direction of the normal vector be generated uniformly at random. Results for one case are shown in Figure 2 while the others are shown in Figure 7. All results are averaged over independent runs.
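A sketch of this data-generation protocol is given below; the norm bound R and the margin eps are hypothetical placeholder values (the paper's exact settings are not reproduced here), and the helpers mobius_add and dist_to_hyperplane are reused from the earlier sketches.

```python
import numpy as np

def make_synthetic(n, d, p_star, w_star, R=0.95, eps=0.01, seed=0):
    """Sample points in the ball of radius R, label them by the hyperplane (p_star, w_star),
    and reject points that violate the margin assumption (distance to the boundary < eps)."""
    rng = np.random.default_rng(seed)
    X, y = [], []
    while len(X) < n:
        x = rng.uniform(-1.0, 1.0, size=d)                # rejection sampling; fine for small d
        if np.linalg.norm(x) > R:                         # boundedness condition
            continue
        if dist_to_hyperplane(p_star, w_star, x) < eps:   # enforce the margin assumption
            continue
        X.append(x)
        y.append(np.sign(np.dot(mobius_add(-p_star, x), w_star)))   # ground-truth label
    return np.array(X), np.array(y)
```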

We first vary the number of points while keeping the dimension fixed. The accuracy and time complexity are shown in Figure 7 (e)-(h). One can clearly observe that the Euclidean SVM fails to achieve perfect accuracy, as the data points are not linearly separable in the Euclidean but only in the hyperbolic sense. This phenomenon becomes even more obvious as the decision boundary moves away from the origin, due to the geometry of the Poincaré disk (Figure 6). On the other hand, the hyperboloid SVM is not scalable enough to accommodate such a large number of points: it takes hours to process the largest data sets (see also Figure 2), while our Poincaré SVM only takes about a minute. Hence, only the Poincaré SVM is highly scalable while offering the highest achievable accuracy.

Next we vary the margin while keeping the number of points and the dimension fixed. The accuracy and time complexity are shown in Figure 7 (i)-(l). As the margin shrinks, the accuracy of the Euclidean SVM deteriorates. This is again due to the geometry of the Poincaré disk (Figure 6) and the fact that the classifier needs to be tailor-made for hyperbolic spaces. Interestingly, the hyperboloid SVM performs poorly for certain margin values, with accuracy significantly below that of the other methods. This may be attributed to the fact that the cluster sizes are highly unbalanced, which causes numerical issues in the underlying optimization process. Once again, the Poincaré SVM outperforms all other methods in terms of accuracy and time complexity.

Finally, we examined the influence of the data dimension on the performance of the classifiers. To this end, we varied the dimension while keeping the other parameters fixed. The accuracy and time complexity results are shown in Figure 7 (a)-(d). Surprisingly, the hyperboloid SVM fails to learn well when the dimension is large. This again reaffirms the importance of the convex formulation of our Poincaré SVM, which is guaranteed to converge to a global optimum independent of the choice of dimension and number of points. We also find that the Euclidean SVM improves its performance as the dimension increases, albeit at the price of a high execution time.

We now turn our attention to the evaluation of our perceptron algorithms, which are online in nature. The results are summarized in Table I. There, one can observe that the Poincaré second-order perceptron requires a significantly smaller number of updates than the Poincaré perceptron, especially when the margin is small. This parallels the results observed in Euclidean spaces [3]. Furthermore, we validate Theorem III.1, which provides a worst-case upper bound on the number of updates.

Margin 1 0.1 0.01 0.001
S-perceptron 26 82 342 818
(34) (124) (548) (1,505)
perceptron 51 1,495
(65) (2,495) () ()
Theorem III.1 594 81,749
S-perceptron 29 101 340 545
(41) (159) (748) (986)
perceptron 82 1,158
(138) (2,240) () ()
Theorem III.1 3,670
TABLE I: Average number of updates for the Poincaré second-order perceptron (S-perceptron) and the Poincaré perceptron for a varying margin and a fixed number of points. Bold numbers indicate the best results, with the maximum number of updates over the independent runs shown in parentheses. Also shown is the theoretical upper bound on the number of updates for the Poincaré perceptron based on Theorem III.1.

IV-B Real-world data sets

For the real-world data sets, we work with more than two classes of points. To enable multi-class classification, we use one binary classifier per class, independently trained on the same training set to separate that class from the remaining classes (one-versus-rest). For each classifier, we transform the resulting prediction scores into probabilities via the Platt scaling technique [21]. The predicted labels are then decided by a maximum a posteriori criterion based on the probability of each class, as sketched below.
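The sketch below illustrates this one-versus-rest wrapper; it uses scikit-learn's LogisticRegression as a stand-in for the Platt calibration step, and the per-class score functions (e.g., the signed margins of the binary Poincaré classifiers sketched earlier) are assumptions of this example rather than part of the original text.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def fit_platt(scores, binary_labels):
    """Platt scaling: fit a 1-D logistic model mapping raw scores to probabilities."""
    return LogisticRegression().fit(np.asarray(scores).reshape(-1, 1), binary_labels)

def predict_multiclass(score_fns, platt_models, X):
    """score_fns[k](x) is the raw one-vs-rest score for class k; returns MAP class indices."""
    probs = np.column_stack([
        platt_models[k].predict_proba(
            np.array([score_fn(x) for x in X]).reshape(-1, 1))[:, 1]
        for k, score_fn in enumerate(score_fns)
    ])
    return probs.argmax(axis=1)                        # maximum a posteriori class
```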

The data sets of interest include Olsson’s single-cell expression profiles, containing single-cell (sc) RNA-seq expression data from several classes (cell types) [19]; CIFAR10, containing images of common objects from ten classes [11]; Fashion-MNIST, containing Zalando’s article images from ten classes [34]; and mini-ImageNet, containing a subsample of images from the original ImageNet data set [23]. Following the procedure described in [9, 8], we embed the single-cell, CIFAR10 and Fashion-MNIST data sets into a two-dimensional Poincaré disk, and mini-ImageNet into a higher-dimensional Poincaré ball, all with the same fixed curvature (note that our methods can be easily adapted to work with other curvature values as well); see Figure 1. Other details about the data sets, including the number of samples and the training/testing split strategy, are described in the Appendix. Since the embedded real-world data sets are not linearly separable, we only report classification results for the Poincaré SVM method.

We compare the performance of the Poincaré SVM, the hyperboloid SVM and the Euclidean SVM for soft-margin classification of the above data points. For the Poincaré SVM, the reference point for each binary classifier is estimated via the technique introduced in Section III-E. The resulting classification accuracies and time complexities are shown in Table II. From the results one can easily see that our Poincaré SVM consistently achieves the best classification accuracy over all data sets while being substantially faster than the hyperboloid SVM. It is also worth pointing out that for most data sets embedded into the Poincaré ball model, the Euclidean SVM method does not perform well as it does not exploit the geometry of the data; the good performance of the Euclidean SVM algorithm on mini-ImageNet can be attributed to the implicit Euclidean metric used in the embedding framework of [8]. Note that since the Poincaré SVM and Euclidean SVM are guaranteed to achieve a global optimum, the standard deviation of their classification accuracy is zero.

TABLE II: Performance (classification accuracy in % and running time in seconds) of the Euclidean SVM, the hyperboloid SVM and the Poincaré SVM on Olsson’s scRNA-seq data, CIFAR10, Fashion-MNIST and mini-ImageNet, averaged over independent trials.

V Conclusion

We generalize classification algorithms such as the (second-order) perceptron and the SVM to Poincaré balls. Our Poincaré classification algorithms come with theoretical guarantees of convergence to a global optimum, improving upon previous attempts in the literature. We validate our algorithms through experiments on both synthetic and real-world data sets, which show that our methods are highly scalable and accurate, in line with our theoretical results. Our methodology appears amenable to extensions to other machine learning problems in hyperbolic geometry; one example, pertaining to classification in mixed constant curvature spaces, can be found in [29].

Acknowledgment

The work was supported in part by the NSF grant 1956384.

References

  • [1] C. B. Barber, D. P. Dobkin, and H. Huhdanpaa (1996) The quickhull algorithm for convex hulls. ACM Transactions on Mathematical Software (TOMS) 22 (4), pp. 469–483. Cited by: Convex hull algorithms in Poincaré ball model, §III-E.
  • [2] N. Cesa-Bianchi, A. Conconi, and C. Gentile (2004) On the generalization ability of online learning algorithms. IEEE Transactions on Information Theory 50 (9), pp. 2050–2057. Cited by: §I, §III-C.
  • [3] N. Cesa-Bianchi, A. Conconi, and C. Gentile (2005) A second-order perceptron algorithm. SIAM Journal on Computing 34 (3), pp. 640–668. Cited by: 2.Lemma, Proof of Theorem III.2, Detailed experimental setting, §I, §I, §III-C, §IV-A.
  • [4] H. Cho, B. DeMeo, J. Peng, and B. Berger (2019) Large-margin classification in hyperbolic space. In International Conference on Artificial Intelligence and Statistics, pp. 1832–1840. Cited by: §I, §II, §III-D, §IV.
  • [5] C. Cortes and V. Vapnik (1995) Support-vector networks. Machine Learning 20 (3), pp. 273–297. Cited by: §I.
  • [6] O. Ganea, G. Bécigneul, and T. Hofmann (2018) Hyperbolic neural networks. In Advances in Neural Information Processing Systems, pp. 5345–5355. Cited by: §I, §II, Lemma III.1, Lemma III.2, §III, §III, §III.
  • [7] R. L. Graham (1972) An efficient algorithm for determining the convex hull of a finite planar set. Info. Pro. Lett. 1, pp. 132–133. Cited by: §III-E.
  • [8] V. Khrulkov, L. Mirvakhabova, E. Ustinova, I. Oseledets, and V. Lempitsky (2020) Hyperbolic image embeddings. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 6418–6428. Cited by: §IV-B.
  • [9] A. Klimovskaia, D. Lopez-Paz, L. Bottou, and M. Nickel (2020) Poincaré maps for analyzing complex hierarchies in single-cell data. Nature communications 11 (1), pp. 1–9. Cited by: §IV-B.
  • [10] D. Krioukov, F. Papadopoulos, M. Kitsak, A. Vahdat, and M. Boguná (2010) Hyperbolic geometry of complex networks. Physical Review E 82 (3). Cited by: §I.
  • [11] A. Krizhevsky, G. Hinton, et al. (2009) Learning multiple layers of features from tiny images. Cited by: §I, §IV-B.
  • [12] K. Lee, S. Maji, A. Ravichandran, and S. Soatto (2019) Meta-learning with differentiable convex optimization. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 10657–10665. Cited by: §I.
  • [13] N. Linial, E. London, and Y. Rabinovich (1995) The geometry of graphs and some of its algorithmic applications. Combinatorica 15 (2), pp. 215–245. Cited by: §I.
  • [14] Q. Liu, M. Nickel, and D. Kiela (2019) Hyperbolic graph neural networks. In Advances in Neural Information Processing Systems, pp. 8230–8241. Cited by: §II.
  • [15] E. Mathieu, C. L. Lan, C. J. Maddison, R. Tomioka, and Y. W. Teh (2019) Continuous hierarchical representations with poincaré variational auto-encoders. In Advances in Neural Information Processing Systems, Cited by: §II.
  • [16] N. Monath, M. Zaheer, D. Silva, A. McCallum, and A. Ahmed (2019) Gradient-based hierarchical clustering using continuous representations of trees in hyperbolic space. In ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pp. 714–722. Cited by: §I.
  • [17] Y. Nagano, S. Yamaguchi, Y. Fujita, and M. Koyama (2019) A wrapped normal distribution on hyperbolic space for gradient-based learning. In International Conference on Machine Learning, pp. 4693–4702. Cited by: §II.
  • [18] M. Nickel and D. Kiela (2017) Poincaré embeddings for learning hierarchical representations. In Advances in Neural Information Processing Systems, pp. 6338–6347. Cited by: §I.
  • [19] A. Olsson, M. Venkatasubramanian, V. K. Chaudhri, B. J. Aronow, N. Salomonis, H. Singh, and H. L. Grimes (2016) Single-cell analysis of mixed-lineage states leading to a binary cell fate choice. Nature 537 (7622), pp. 698–702. Cited by: §I, §IV-B.
  • [20] F. Papadopoulos, R. Aldecoa, and D. Krioukov (2015) Network geometry inference using common neighbors. Physical Review E 92 (2). Cited by: §I.
  • [21] J. Platt et al. (1999) Probabilistic outputs for support vector machines and comparisons to regularized likelihood methods. Advances in Large Margin Classifiers 10 (3), pp. 61–74. Cited by: §IV-B.
  • [22] J. G. Ratcliffe, S. Axler, and K. Ribet (2006) Foundations of hyperbolic manifolds. Vol. 149, Springer. Cited by: §III-E.
  • [23] S. Ravi and H. Larochelle (2017) Optimization as a model for few-shot learning. In International Conference on Learning Representations, External Links: Link Cited by: §I, §IV-B.
  • [24] F. Sala, C. De Sa, A. Gu, and C. Re (2018) Representation tradeoffs for hyperbolic embeddings. In International Conference on Machine Learning, Vol. 80, pp. 4460–4469. Cited by: §I.
  • [25] R. Sarkar (2011) Low distortion delaunay embedding of trees in hyperbolic plane. In International Symposium on Graph Drawing, pp. 355–366. Cited by: §I.
  • [26] J. Sherman and W. J. Morrison (1950) Adjustment of an inverse matrix corresponding to a change in one element of a given matrix. The Annals of Mathematical Statistics 21 (1), pp. 124–127. Cited by: 1.Lemma, Proof of Theorem III.2.
  • [27] R. Shimizu, Y. Mukuta, and T. Harada (2021) Hyperbolic neural networks++. In International Conference on Learning Representations, External Links: Link Cited by: §I, §II, §III.
  • [28] O. Skopek, O. Ganea, and G. Bécigneul (2020) Mixed-curvature variational autoencoders. In International Conference on Learning Representations, External Links: Link Cited by: §II.
  • [29] P. Tabaghi, E. Chien, C. Pan, and O. Milenković (2021) Linear classifiers in mixed constant curvature spaces. arXiv preprint arXiv:2102.10204. Cited by: §V.
  • [30] A. Tifrea, G. Becigneul, and O. Ganea (2019) Poincaré glove: hyperbolic word embeddings. In International Conference on Learning Representations, External Links: Link Cited by: §I.
  • [31] A. A. Ungar (2008) Analytic hyperbolic geometry and albert einstein’s special theory of relativity. World Scientific. Cited by: §III.
  • [32] J. Vermeer (2005) A geometric interpretation of ungar’s addition and of gyration in the hyperbolic plane. Topology and its Applications 152 (3), pp. 226–242. Cited by: §III.
  • [33] M. Weber, M. Zaheer, A. S. Rawat, A. Menon, and S. Kumar (2020) Robust large-margin learning in hyperbolic space. In Advances in Neural Information Processing Systems, Cited by: §I, §I, §II, §II, §III-B.
  • [34] H. Xiao, K. Rasul, and R. Vollgraf (2017) Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms. arXiv preprint arXiv:1708.07747. Cited by: §I, §IV-B.

Proof of Lemma III.4

By the definition of Möbius addition, we have

Thus,

(32)

Next, use and in the above expression:

(33)

The function in (33) attains its maximum at and . We also observe that (32) is symmetric in . Thus, the same argument holds for .

Proof of Theorem III.2

We generalize the arguments in [3] to hyperbolic spaces. Let . The matrix can be recursively computed from , or equivalently . Without loss of generality, let be the time index of the error.

where is due to the Sherman-Morrison formula [26] below.

Lemma .1 ([26])

Let $A$ be an arbitrary positive-definite matrix and let $x$ be a vector of compatible dimension. Then $A + xx^\top$ is also a positive-definite matrix and

$$(A + xx^\top)^{-1} = A^{-1} - \frac{A^{-1} x x^\top A^{-1}}{1 + x^\top A^{-1} x}. \qquad (34)$$

Note that the inequality holds since is a positive-definite matrix and thus so is its inverse. Therefore, we have

where are the eigenvalues of . Claim follows from Lemma .2 while is due to the fact .

Lemma .2 ([3])

Let be an arbitrary positive-semidefinite matrix. Let and . Then

(35)

where is the product of non-zero eigenvalues of .

This leads to the upper bound for . For the lower bound, we have

Also, recall the margin assumption (14). Combining the bounds yields the claimed mistake bound (24), which completes the proof.