A Unified Algorithmic Framework for Multi-Dimensional Scaling

03/02/2010 · Arvind Agarwal et al.

In this paper, we propose a unified algorithmic framework for solving many known variants of MDS. Our algorithm is a simple iterative scheme with guaranteed convergence, and is modular; by changing the internals of a single subroutine in the algorithm, we can switch cost functions and target spaces easily. In addition to the formal guarantees of convergence, our algorithms are accurate; in most cases, they converge to better quality solutions than existing methods, in comparable time. We expect that this framework will be useful for a number of variants that have not yet been studied. Our framework extends to embedding high-dimensional points lying on a sphere to points on a lower dimensional sphere, preserving geodesic distances. As a complement to this result, we also extend the Johnson-Lindenstrauss Lemma to this spherical setting, where projecting to a random O((1/ε²) log n)-dimensional sphere causes only ε-distortion of the geodesic distances.


1 Introduction

Multidimensional scaling (MDS) [23, 10, 3] is a widely used method for embedding a general distance matrix into a low dimensional Euclidean space, used both as a preprocessing step for many problems, as well as a visualization tool in its own right. MDS has been studied and used in psychology since the 1930s [35, 33, 22] to help visualize and analyze data sets where the only input is a distance matrix. More recently MDS has become a standard dimensionality reduction and embedding technique to manage the complexity of dealing with large high dimensional data sets [8, 9, 31, 6].

In general, the problem of embedding an arbitrary distance matrix into a fixed dimensional Euclidean space with minimum error is nonconvex (because of the dimensionality constraint). Thus, in addition to the standard formulation [12], many variants of MDS have been proposed, based on changing the underlying error function [35, 8]. There are also applications where the target space, rather than being a Euclidean space, is a manifold (e.g. a low dimensional sphere), and various heuristics for MDS in this setting have also been proposed [13, 6].

Each such variant is typically addressed by a different heuristic, including majorization, the singular value decomposition, semidefinite programming, subgradient methods, and standard Lagrange-multiplier-based methods (in both primal and dual settings). Some of these heuristics are efficient and others are not; in general, every new variant of MDS seems to require different ideas for efficient heuristics.

1.1 Our Work

In this paper, we present a unified algorithmic framework for solving many variants of MDS. Our approach is based on an iterative local improvement method, and can be summarized as follows: “Pick a point and move it so that the cost function is locally optimal. Repeat this process until convergence.” The improvement step reduces to a well-studied and efficient family of iterative minimization techniques, where the specific algorithm depends on the variant of MDS.

A central result of this paper is a single general convergence result for all variants of MDS that we examine. This single result is a direct consequence of the way in which we break down the general problem into an iterative algorithm combined with a point-wise optimization scheme. Our approach is generic, efficient, and simple. The high level framework can be written in 10-12 lines of MATLAB code, with individual function-specific subroutines needing only a few more lines each. Further, our approach compares well with the best methods for all the variants of MDS. In each case our method is consistently either the best performer or is close to the best, regardless of the data profile or cost function used, while other approaches have much more variable performance. Another useful feature of our method is that it is parameter-free, requiring no tuning parameters or Lagrange multipliers in order to perform at its best.

Spherical MDS.

An important application of our approach is the problem of performing spherical MDS. Spherical MDS is the problem of embedding a matrix of distances onto a (low-dimensional) sphere. Spherical MDS has applications in texture mapping and image analysis [6], and is a generalization of the spherical dimensionality reduction problem, where the goal is to map points from a high-dimensional sphere onto a low-dimensional sphere. This latter problem is closely related to dimensionality reduction for finite dimensional distributions. A well-known isometric embedding takes a distribution, represented as a point on the d-dimensional simplex, to the d-dimensional sphere while preserving the Hellinger distance between distributions. A spherical dimensionality reduction result is an important step to representing high dimensional distributions in a lower-dimensional space of distributions, and will have considerable impact in domains that represent data natively as histograms or distributions, such as document processing [29, 18, 2], image analysis [25, 11] and speech recognition [16].

Our above framework applies directly to this setting, where for the local improvement step we adapt a technique first developed by Karcher for finding geodesic means on a manifold. In addition, we prove a Johnson-Lindenstrauss-type result for the sphere; namely, that n points lying on a high-dimensional sphere can be embedded onto an O((1/ε²) log n)-dimensional sphere while approximately preserving the geodesic distances between pairs of points, that is, no distance changes by more than a relative (1 + ε)-factor. This latter result can be seen as complementary to the local improvement scheme; the formal embedding result guarantees the error while being forced to use O((1/ε²) log n) dimensions, while the local improvement strategy generates a mapping into a hypersphere of any desired dimension but provides no formal guarantees on the error.

Summary of contributions.

The main contributions of this paper can be summarized as follows:

  • In Section 4 we present our iterative framework, illustrate how it is applied to specific MDS variants and prove a convergence result.

  • In Section 5 we present a comprehensive experimental study that compares our approach to the prior best known methods for different MDS variants.

  • In Section 6 we prove a formal dimensionality reduction result that embeds a set of n points on a high-dimensional sphere into a sphere of dimension O((1/ε²) log n) while preserving all pairwise geodesic distances to within relative error ε, for any ε > 0.

2 Background and Existing Methods

Multidimensional scaling is a family of methods for embedding a distance matrix into a low-dimensional Euclidean space. There is a general taxonomy of MDS methods [10]; in this paper we will focus primarily on the metric and generalized MDS problems.

The traditional formulation of MDS [23] assumes that the distance matrix D arises from points in some d-dimensional Euclidean space. Under this assumption, a simple transformation (double centering of the squared distances) takes D to a matrix of similarities M whose entries correspond to inner products between the underlying points. These similarities also arise from many psychology data sets directly [33, 35]. The problem then reduces to finding a set of points in k-dimensional space whose inner products approximate M. This can be done optimally using the top k singular values and vectors from the singular value decomposition of M.
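As a point of reference, the classical construction just described can be sketched in a few lines. The following snippet is illustrative only (the paper's experiments use MATLAB, and the function name and numpy-based conventions here are my own, not the authors' code); it double-centers the squared distance matrix and keeps the top-k spectral factors.

    import numpy as np

    def classical_mds(D, k):
        """Classical MDS sketch: double-center the squared distances and
        take the top-k spectral factors. Illustrative, not the authors' code."""
        n = D.shape[0]
        J = np.eye(n) - np.ones((n, n)) / n          # centering matrix
        B = -0.5 * J @ (D ** 2) @ J                  # similarity (inner-product) matrix
        vals, vecs = np.linalg.eigh(B)               # eigenvalues in ascending order
        idx = np.argsort(vals)[::-1][:k]             # top-k eigenpairs
        L = np.sqrt(np.maximum(vals[idx], 0.0))      # clip small negative eigenvalues
        return vecs[:, idx] * L                      # n x k embedding

    # Example: recover 2-D coordinates from a Euclidean distance matrix.
    X = np.random.randn(20, 2)
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    Y = classical_mds(D, 2)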

A more general approach, called SMACOF, drops the Euclidean assumption and uses a technique known as stress majorization [27, 12, 13]. It has been adapted to many other MDS variants as well, including restrictions of the data to quadratic surfaces and specifically to spheres [13].

Since the sum-of-squares error metric is sensitive to outliers, Cayton and Dasgupta [8] proposed a robust variant based on an absolute-error metric. They separate the rank and cost constraints, solving the latter using either semidefinite programming or a subgradient heuristic, followed by a singular value decomposition to enforce the rank constraints.

Many techniques have been proposed for performing spherical MDS. Among them are majorization methods ([30] and SMACOF-Q [13]), a multiresolution approach due to Elad, Keller, and Kimmel [14] and an approach based on computing the classical MDS and renormalizing [31].

Embeddings that guarantee bounded error.

A complementary line of work in dimensionality reduction fixes an error bound for every pair of distances (rather than computing an average error), and asks for the minimum dimension a data set can be embedded in while maintaining this error. The Johnson-Lindenstrauss Lemma [19] states that any collection of n points in a Euclidean space can be embedded in an O((1/ε²) log n)-dimensional Euclidean space that preserves all distances within a relative error of ε. If the points instead define an abstract metric space, then the best possible result is an embedding into O(log n)-dimensional Euclidean space that preserves distances up to a factor of O(log n). An exhaustive survey of the different methods for dimensionality reduction is beyond the scope of this paper; the reader is directed to the survey by Indyk and Matousek for more information [17].

The Johnson-Lindenstrauss Lemma can be extended to data lying on manifolds. Any manifold with bounded "linearization dimension" (a measure of its complexity) can be embedded into a space whose dimension depends only on this complexity and on 1/ε, so that all pairwise Euclidean distances between points on the manifold are distorted by at most a relative ε-factor [1, 32, 26]. A d-dimensional sphere has linearization dimension O(d), so this bound applies directly for preserving the chordal (i.e. Euclidean) distance between points on a sphere. The geodesic distance between points on a sphere can be interpreted as the angle between the points in radians, and a result by Magen [26] shows that a random projection of suitable dimension preserves angles to within a bounded relative factor (a guarantee weaker than our result, which preserves the geodesic distance to within a relative factor of 1 + ε).

3 Definitions

Let D be an n × n matrix representing distances between all pairs of points in a set P of n points. In general, we assume that D is symmetric (D_ij = D_ji), although our method does not formally require this. The multidimensional scaling problem takes as input the matrix D and a target dimension k, and asks for a mapping from P to a set X = {x_1, …, x_n} of points in a k-dimensional space such that the difference between the original and resulting distances is minimized.

There are many different ways to measure the difference between the sets of distances, and these can be captured by the following general function:

C(X, D) = Σ_{i < j} Err(d(x_i, x_j), D_ij),

where Err measures the discrepancy between the source and target distances, and d denotes the function that measures distance in the target space.

  • Err(a, b) = (a − b)² and d(x, y) = ‖x − y‖ (Euclidean distance): This is a general form of the MDS problem, which we refer to as fMDS.

  • Err(a, b) = |a − b| and d(x, y) = ‖x − y‖: This is a robust variant of MDS called rMDS, first suggested by Cayton and Dasgupta [8].

  • Err(a, b) = |a − b| or Err(a, b) = (a − b)², where d is either the chordal (c) or geodesic (g) distance on the sphere. We refer to this family of problems as {c,g}-{1,2}-sMDS.

It will be convenient to split the expression into component terms. We define

C_i(X, D) = Σ_{j ≠ i} Err(d(x_i, x_j), D_ij),

which allows us to write C(X, D) = (1/2) Σ_i C_i(X, D).
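For concreteness, the cost and its per-point components can be written down directly. This is a minimal sketch under assumed conventions (numpy arrays, Euclidean target distance, Err(a, b) = |a − b|^p); the helper names err, cost_i, and cost are illustrative and not taken from the paper.

    import numpy as np

    def err(a, b, p=2):
        """Err(a, b) = |a - b|**p; p=2 gives fMDS, p=1 gives rMDS."""
        return abs(a - b) ** p

    def cost_i(i, X, D, p=2):
        """C_i: the terms of the cost involving point x_i."""
        d_i = np.linalg.norm(X - X[i], axis=1)   # Euclidean distances in the target space
        mask = np.arange(len(X)) != i
        return np.sum(err(d_i[mask], D[i, mask], p))

    def cost(X, D, p=2):
        """C(X, D) = (1/2) * sum_i C_i(X, D)."""
        return 0.5 * sum(cost_i(i, X, D, p) for i in range(len(X)))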

Notes.

The actual measure studied by Cayton and Dasgupta [8] is not rMDS. It is a variant which takes the absolute difference of the squared distance matrices, i.e. Err(a, b) = |a² − b²|; we call this measure r2MDS. Also, classical MDS does not appear in this list, since it minimizes the error between similarities rather than between distances. We refer to this measure as cMDS.

4 Algorithm

We now present our algorithm for finding a mapping X that minimizes C(X, D). For now we assume that we are given an initial embedding to seed our algorithm. Our experiments indicate that the SVD-based approach [35] is almost always the best way to seed the algorithm, and we use it unless specifically indicated otherwise.

  Run any MDS strategy to obtain an initial seed X = {x_1, …, x_n}.
  repeat
     C_old ← C(X, D)
     for i = 1 to n do
        x_i ← Place(x_i, X, D) {this updates X}
     end for
  until (C_old − C(X, D) < τ) {for a fixed threshold τ}
  return X
Algorithm 1 PlaceCenter(D)

PlaceCenter operates by employing a technique from the block-relaxation class of heuristics. The cost function C(X, D) can be expressed as a sum of costs C_i for each point x_i, and so in each step of the inner loop we find the best placement for x_i while keeping all other points fixed, using the algorithm Place. A key insight driving our approach is that Place can be implemented either iteratively or exactly for a wide class of distance functions. The process terminates when, over all i, invoking Place does not reduce the cost by more than a threshold τ. Each pass through the inner loop takes O(n²) time, since each call to Place takes O(n) time and computing C(X, D) takes O(n²) time.
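A minimal sketch of this outer loop follows, assuming callables cost (as in the earlier sketch) and place (such as the Place sketch given after Algorithm 2). It seeds randomly for brevity, whereas the paper seeds with the SVD-based solution; all names and defaults are illustrative.

    import numpy as np

    def place_center(D, k, place, cost, max_iter=100, tol=1e-6):
        """Sketch of the PlaceCenter outer loop (Algorithm 1).
        `place(i, X, D)` returns an improved location for point i;
        `cost(X, D)` evaluates C(X, D)."""
        n = D.shape[0]
        X = np.random.randn(n, k)                # seed; the paper uses the SVD-based embedding
        for _ in range(max_iter):
            c_old = cost(X, D)
            for i in range(n):
                X[i] = place(i, X, D)            # locally optimal placement of x_i
            if c_old - cost(X, D) < tol:         # stop once the improvement is below a threshold
                break
        return X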

4.1 A Geometric Perspective on Place

The routine Place is the heart of our algorithm. This routine finds the optimal placement of a single point x_i with respect to the cost function C_i, with the remaining points X \ {x_i} held fixed. The optimal placement of x_i is given by the point x minimizing the function

C_i(x) = Σ_{j ≠ i} Err(d(x, x_j), D_ij).

Note that the excluded term (j = i) would involve d(x_i, x_i) and D_ii, both of which are zero, so we can ignore its presence in the summation for ease of notation.

There is a natural geometric interpretation of C_i, illustrated in Figure 1. Consider a sphere S_j around the point x_j of radius D_ij. Let p_j be the point on this sphere that intersects the ray from x_j towards x_i. Then the distance d(x_i, p_j) = |d(x_i, x_j) − D_ij|. Thus, we can rewrite C_i as

C_i(x_i) = Σ_{j ≠ i} d(x_i, p_j)^r,

where r = 2 for Err(a, b) = (a − b)² and r = 1 for Err(a, b) = |a − b|.

Figure 1: A geometric interpretation of the error term C_i.

This function is well-known in combinatorial optimization as the min-sum problem. For r = 2, minimizing C_i amounts to finding the point minimizing the sum of squared distances to a collection of fixed points (the 1-mean), which is their centroid. For r = 1, it amounts to finding the 1-median, the point minimizing the sum of distances to a collection of fixed points. Although there is no closed form expression for the 1-median, there are numerous algorithms for solving this problem both exactly [34] and approximately [4]. Methods that converge to the global optimum exist for any r with 1 ≤ r ≤ 2; it is known that if r is sufficiently larger than 2, then convergent methods may not exist [5].

While C_i can be minimized optimally for the error functions Err of interest, the locations of the points p_j depend on the location of the solution x_i, which is itself unknown! This motivates an alternating optimization procedure, where the current iterate x_i is used to compute the points p_j, and these p_j are then used as input to the min-sum problem to solve for the next value of x_i.

  repeat
     C_old ← C_i(x_i)
     for j = 1 to n, j ≠ i do
        p_j ← intersection of the sphere of radius D_ij around x_j with the ray from x_j towards x_i
     end for
     x_i ← Recenter(p_1, …, p_n)
  until (C_old − C_i(x_i) < τ) {for a fixed threshold τ}
  return x_i
Algorithm 2 Place(x_i, X, D)
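The following sketch mirrors Algorithm 2 in the Euclidean setting: it repeatedly projects each neighbor's sphere onto the ray towards the current estimate, then re-solves the min-sum problem. The default recenter here is the centroid (the fMDS case discussed below); plugging in a 1-median solver gives the rMDS case. The stopping rule watches the movement of x_i rather than the cost, and all names are illustrative rather than the authors'.

    import numpy as np

    def place(i, X, D, recenter=None, max_iter=50, tol=1e-9):
        """Sketch of Place (Algorithm 2) for Euclidean target spaces.
        `recenter` solves the min-sum problem for the projected points p_j."""
        if recenter is None:
            recenter = lambda P: P.mean(axis=0)  # 1-mean; swap in a 1-median solver for rMDS
        x = X[i].copy()
        others = [j for j in range(len(X)) if j != i]
        for _ in range(max_iter):
            P = []
            for j in others:
                v = x - X[j]
                nv = np.linalg.norm(v)
                if nv < 1e-12:                   # degenerate case: pick an arbitrary direction
                    v = np.random.randn(len(x)); nv = np.linalg.norm(v)
                P.append(X[j] + D[i, j] * v / nv)   # p_j on the sphere of radius D_ij around x_j
            x_prev, x = x, recenter(np.asarray(P))
            if np.linalg.norm(x - x_prev) < tol: # the paper stops on cost improvement instead
                break
        return x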

4.2 Implementing Recenter

Up to this point, the description of PlaceCenter and Place has been generic, requiring no specification of Err and . In fact, all the domain-specificity of the method appears in Recenter, which solves the min-sum problem. We now demonstrate how different implementations of Recenter allow us to solve the different variants of MDS discussed above.

4.2.1 The original MDS: fMDS

Recall from Section 3 that the fMDS problem is defined by Err(a, b) = (a − b)² and d(x, y) = ‖x − y‖. Thus, C_i(x_i) = Σ_{j ≠ i} ‖x_i − p_j‖². As mentioned earlier, the minimum of this function is attained at the centroid of the points p_j. Thus, Recenter merely outputs the centroid of the p_j, and takes O(n) time per invocation.

4.2.2 Robust MDS: rMDS

The robust MDS problem rMDS is defined by Err(a, b) = |a − b| and d(x, y) = ‖x − y‖. Minimizing the resulting function C_i(x_i) = Σ_{j ≠ i} ‖x_i − p_j‖ yields the famous Fermat-Weber problem, or the 1-median problem as it is commonly known. An exact iterative algorithm for solving this problem was given by Weiszfeld [34], and works as follows. At each step the estimate x_i is updated by

x_i ← ( Σ_{j ≠ i} p_j / ‖x_i − p_j‖ ) / ( Σ_{j ≠ i} 1 / ‖x_i − p_j‖ ).

This algorithm is guaranteed to converge to the optimal solution [24, 28], and in most settings converges quadratically [21].
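A minimal sketch of the Weiszfeld iteration, assuming the points are rows of a numpy array; the eps guard against division by zero is an implementation convenience of this sketch, not part of the original scheme.

    import numpy as np

    def weiszfeld(P, x0, max_iter=100, eps=1e-12):
        """Weiszfeld iteration for the 1-median (Fermat-Weber point) of the
        rows of P, started at x0."""
        x = x0.copy()
        for _ in range(max_iter):
            d = np.linalg.norm(P - x, axis=1)
            w = 1.0 / np.maximum(d, eps)             # inverse-distance weights
            x_new = (w[:, None] * P).sum(axis=0) / w.sum()
            if np.linalg.norm(x_new - x) < 1e-9:
                break
            x = x_new
        return x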

Other norms and distances.

If 1 ≤ r ≤ 2, then an iterative algorithm along the same lines as the Weiszfeld algorithm can be used to minimize C_i optimally [5]. In practice, this is the most interesting range of values for r. It is also known that for r sufficiently larger than 2, this iterative scheme may not converge.

We can also tune PlaceCenter to the r2MDS problem (which compares squared distances) by setting Err(a, b) = |a² − b²|.

4.2.3 Spherical MDS

Spherical MDS poses special challenges for the implementation of Recenter. Firstly, it is no longer obvious what the definition of p_j should be, since the "spheres" surrounding points must themselves lie on the target sphere. Secondly, consider the case where Err(a, b) = (a − b)² and d is given by the geodesic distance on the sphere. Unlike in the Euclidean case, we can no longer solve for the minimizer of C_i by computing the centroid of the given points, because this centroid will not in general lie on the sphere, and even computing the centroid followed by a projection onto the sphere will not guarantee optimality.

The first problem can be solved easily. Rather than drawing spheres around each x_j, we draw geodesic spheres, which are the sets of points at a fixed geodesic distance from x_j. On the sphere, such a set can be described as the intersection of an appropriately chosen hyperplane with the sphere. Next, instead of computing the intersection of this geodesic sphere with the ray from x_j towards the current estimate of x_i, we compute the intersection with the geodesic ray from x_j towards x_i.
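On the unit sphere, this intersection can be computed in closed form by moving an angle D_ij along the great circle from x_j towards x_i. A sketch (points as unit vectors, distances as angles in radians; the function name is illustrative):

    import numpy as np

    def geodesic_ray_point(a, b, theta):
        """Point at geodesic distance `theta` from unit vector `a`, moving along
        the great circle from `a` towards unit vector `b`.
        Assumes a and b are not parallel or antiparallel."""
        t = b - np.dot(a, b) * a                     # component of b tangent to the sphere at a
        t = t / np.linalg.norm(t)                    # unit tangent direction
        return np.cos(theta) * a + np.sin(theta) * t # result lies on the unit sphere

    # Usage (illustrative): p_j = geodesic_ray_point(x_j, x_i, D[i, j])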

The second problem can be addressed by prior work on computing min-sums on manifolds. Karcher [20] proposed an iterative scheme for the geodesic sum-of-squares problem that always converges as long as the points do not span the entire sphere. His work extends (for the same functions Err) to points defined on more general Riemannian manifolds satisfying certain technical conditions. It runs in O(n) time per iteration.

For the robust case (Err(a, b) = |a − b|), the Karcher scheme no longer works. For this case, we make use of a Weiszfeld-like adaptation [15] that again works on general Riemannian manifolds, and on the sphere in particular. Like the Weiszfeld scheme, this approach takes O(n) time per iteration.
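A sketch of a Karcher-style fixed-point iteration for the geodesic 1-mean on the unit sphere, using the standard log/exp maps; replacing the plain average of tangent vectors with inverse-distance weights gives a Weiszfeld-like step for the geodesic median. Function names and the hemisphere assumption are illustrative, not the authors' implementation.

    import numpy as np

    def log_map(x, p):
        """Log map on the unit sphere: tangent vector at x pointing towards p,
        with length equal to the geodesic distance."""
        c = np.clip(np.dot(x, p), -1.0, 1.0)
        v = p - c * x
        nv = np.linalg.norm(v)
        return np.zeros_like(x) if nv < 1e-12 else (np.arccos(c) / nv) * v

    def exp_map(x, v):
        """Exp map on the unit sphere: move from x along tangent vector v."""
        nv = np.linalg.norm(v)
        return x if nv < 1e-12 else np.cos(nv) * x + np.sin(nv) * (v / nv)

    def spherical_mean(P, x0, max_iter=100, tol=1e-9):
        """Karcher-style iteration for the geodesic 1-mean of the rows of P
        (unit vectors), started at unit vector x0. Assumes the points lie
        well inside a hemisphere so the iteration is stable."""
        x = x0.copy()
        for _ in range(max_iter):
            g = np.mean([log_map(x, p) for p in P], axis=0)   # mean tangent direction
            if np.linalg.norm(g) < tol:
                break
            x = exp_map(x, g)
        return x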

4.3 Convergence Proofs

Here we prove that each step of PlaceCenter converges as long as the recursively called procedures reduce the relevant cost functions. Convergence is defined with respect to a cost function C, so that an algorithm converges if C decreases at each step until the algorithm terminates.

If each call to Place decreases the cost C_i, then PlaceCenter converges with respect to C.

Proof.

Let X' result from running an iteration of Place on the point x_i, let x_i' be the new location of that point, and let X' = (X \ {x_i}) ∪ {x_i'}. Then we can argue

C(X', D) = C_i(X', D) + Σ_{j < l; j, l ≠ i} Err(d(x_j, x_l), D_jl) < C_i(X, D) + Σ_{j < l; j, l ≠ i} Err(d(x_j, x_l), D_jl) = C(X, D).

The inequality follows because X and X' only differ at x_i versus x_i', and by assumption on Place this sub-cost function C_i must decrease while the remaining terms are unchanged. ∎

If each call to Recenter reduces the min-sum cost Σ_{j ≠ i} d(x_i, p_j)^r, then Place converges with respect to C_i.

Proof.

First we can rewrite

C_i(x_i) = Σ_{j ≠ i} Err(d(x_i, x_j), D_ij) = Σ_{j ≠ i} d(x_i, p_j)^r,

where p_j is the point at which the ray from x_j towards x_i crosses the sphere S_j; p_j is also the closest point on S_j to x_i. Since d(x_i, p_j) measures the distance from x_i to the sphere S_j, choosing a new location x_i' to minimize (or decrease) Σ_{j ≠ i} d(x_i', p_j)^r must decrease the sum of distances to these points on the spheres. Now let p_j' be the closest point to x_i' on S_j, and thus

C_i(x_i') = Σ_{j ≠ i} d(x_i', p_j')^r ≤ Σ_{j ≠ i} d(x_i', p_j)^r < Σ_{j ≠ i} d(x_i, p_j)^r = C_i(x_i),

where equality only holds if x_i' = x_i, in which case the algorithm terminates. ∎

5 Experiments

In this section we evaluate the performance of PlaceCenter (PC). Since PC generalizes to many different cost functions, we compare it with the best known algorithm for each cost function, if one exists. For the fMDS problem the leading algorithm is SMACOF [13]; for the r2MDS problem the leading algorithm is by Cayton and Dasgupta (CD) [8]. We know of no previous scalable algorithm designed for rMDS. We note that the Cayton-Dasgupta algorithm REE does not exactly solve the r2MDS problem. Instead, it takes a non-Euclidean distance matrix and finds a Euclidean distance matrix that minimizes the error without any rank restrictions. Thus, as suggested by the authors [8], to properly compare the algorithms, we let CD refer to running REE and then projecting the result to a k-dimensional subspace using the SVD technique [35] (our plots show this projection after each step). With regard to each of these Euclidean measures we compare our algorithm with SMACOF and CD. We also compare with the popular SVD-based method [35], which solves the related cMDS problem based on similarities, by seeding all three iterative techniques with the results of the closed-form SVD-based solution.

Then we consider the family of spherical MDS problems {c,g}-{1,2}-sMDS. We compare against a version of SMACOF-Q [13] that is designed for data restricted to a low dimensional sphere, specifically for the c-2-sMDS measure. We compare this algorithm to ours under the c-2-sMDS measure (for a fair comparison with SMACOF-Q) and under the g-1-sMDS measure, which is the most robust to noise.

The subsections that follow focus on individual cost measures. We then discuss the overall behavior of our algorithm in Section 5.5.

Data sets, code, and setup.

Test inputs for the algorithms are generated as follows. We start with input consisting of a random point set of n points in R^d, with target space R^k for k < d. Many data sets in practice have much larger parameters n and d, but we limit ourselves to moderate sizes because for larger values CD becomes prohibitively slow. The data is generated to first lie on a k-dimensional subspace, and then (full-dimensional) Poisson noise is applied to all points up to a magnitude of 30% of the variation in any dimension. Finally, we construct the Euclidean distance matrix D which is provided as input to the algorithms.

These data sets are Euclidean, but "close" to k-dimensional. To examine the behavior of the algorithms on distance matrices that are non-Euclidean, we generate data as before in a k-dimensional subspace and compute the resulting distance matrix D. Then we perturb a fraction of the elements of D (rather than perturbing the points) with Poisson noise; the fraction perturbed is varied across experiments.
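A rough sketch of this data-generation recipe follows. The parameter values are placeholders, Gaussian noise stands in for the Poisson noise used in the paper, and the function name is illustrative.

    import numpy as np
    rng = np.random.default_rng(0)

    def make_test_input(n=200, d=10, k=3, noise=0.3, frac_perturbed=0.0):
        """Points near a k-dimensional subspace of R^d, plus optional
        perturbation of a fraction of the distance-matrix entries."""
        basis = np.linalg.qr(rng.standard_normal((d, k)))[0]    # random k-dim subspace of R^d
        X = rng.standard_normal((n, k)) @ basis.T               # points lying in that subspace
        X = X + noise * X.std(axis=0) * rng.standard_normal((n, d))
        D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
        if frac_perturbed > 0:                                  # perturb matrix entries directly
            rows, cols = np.triu_indices(n, 1)
            hit = rng.random(rows.size) < frac_perturbed
            r, c = rows[hit], cols[hit]
            D[r, c] = np.abs(D[r, c] + 0.3 * D.std() * rng.standard_normal(hit.sum()))
            D[c, r] = D[r, c]                                   # keep D symmetric
        return D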

All algorithms were implemented in MATLAB. For SMACOF, we used the implementation provided by Bronstein [7], and built our own implementation of SMACOF-Q around it. For all other algorithms, we used our own implementations (all of our code may be found at http://www.cs.utah.edu/~arvind/smds.html). In all cases, we compare performance in terms of the error function Err as a function of clock time.

5.1 The rMDS Problem

Figure 2 shows the cost function Err associated with rMDS plotted with respect to runtime. PlaceCenter always reaches the best local minimum, partially because only PlaceCenter can be adjusted for the rMDS problem. We also observe that its runtime is comparable to SMACOF and much faster than CD in reaching the same Err value. Although SMACOF initially reaches a smaller cost than PC, it later converges to a larger cost because it optimizes a different cost function (fMDS).

Figure 2: (a) rMDS: Typical behavior of PC, CD, and SMACOF. (b) rMDS: Variation with the target dimension k.

We repeat this experiment in Figure 2 for different values of the target dimension k to analyze the performance as a function of k. Note that PC performs even better relative to CD for lower k. This is likely a result of CD's reliance on the SVD technique to reduce the dimension: at smaller k, the SVD technique has a tougher job to do, and optimizes the wrong metric. Also, for large k note that CD oscillates in its cost; this is again because the REE part finds a nearby Euclidean distance matrix which may be inherently very high dimensional, and the SVD projection is very susceptible to changes in this matrix for such large k. We observe that SMACOF is the fastest method to reach a low cost, but does not converge to the lowest cost value. The reason it achieves a cost close to that of PC is that for this type of data the rMDS and fMDS cost functions are fairly similar.

In Figure 3 we evaluate the effect of changing the amount of noise added to the input distance matrix D, as described above. We consider two variants of the CD algorithm: one seeded with an SVD-based seed (marked CD+SVD) and one seeded with a random projection to a k-dimensional subspace (marked CD+rand). In both cases the plots show the results of the REE algorithm after SVD-type projections back to a k-dimensional space.

The CD+SVD technique consistently behaves poorly and does not improve with further iterations. This is probably because the REE component finds the closest Euclidean distance matrix, which may correspond to points in a much higher dimensional space, after which it is difficult for the SVD to help. The CD+rand approach does much better, likely because the random projection initializes the procedure in a reasonably low dimensional space, so REE can find a relatively low dimensional Euclidean distance matrix nearby. SMACOF is again the fastest algorithm, but with more noise the difference between fMDS and rMDS is larger, and thus SMACOF converges to a configuration with much higher cost than PC. We reiterate that PC consistently converges to the lowest cost solution among the different methods, and is consistently either the fastest or comparable to the fastest algorithm. We will see this trend repeated with other cost measures as well.

Figure 3: rMDS: Variation with noise.

5.2 The fMDS Problem

We next evaluate the algorithms PC, SMACOF, and CD under the fMDS distance measure. The results are very similar to the rMDS case, except that now both SMACOF and PC are optimizing the correct distance measure and converge to the same local minimum. SMACOF is still slightly faster than PC, but since they both run very fast, the difference is on the order of less than a second even in the very worst part of the cost/time tradeoff curve shown in Figure 4. Note that CD performs poorly under this cost function except for intermediate values of k: for smaller k the SVD step does not optimize the correct distance, and for larger k the REE part is likely finding an inherently very high dimensional Euclidean distance matrix, making the SVD projection very noisy.

Figure 4: (a) fMDS: Variation with the target dimension k. (b) fMDS: Variation with noise.

For the fMDS measure, SMACOF and PC perform very similarly under different levels of noise, both converging to solutions of similar cost with SMACOF running a bit faster, as seen in Figure 4. CD consistently runs slower and converges to a higher cost solution.

5.3 The r2MDS Problem

In this setting we would expect CD to perform consistently as well as PC, because both minimize the same cost function. However, this is not always the case, because CD requires the SVD step to generate a point set in R^k. As seen in Figure 5, this becomes a problem when k is small. For medium values of k, CD converges slightly faster than PC and sometimes to a slightly lower cost solution, but again for large k, the REE part has trouble handling the amount of error and the solution cost oscillates. SMACOF is again consistently the fastest to converge, but unless k is very large it converges to a significantly worse solution because the fMDS and r2MDS error functions are different.

Figure 5: r2MDS: Variation with the target dimension k.

5.4 The Spherical MDS Problem

For the spherical MDS problem we compare PC against SMACOF-Q, an adaptation of SMACOF that restricts data points to a low-dimensional sphere, and a technique of Elad, Keller, and Kimmel [14]. It turns out that the Elad et al. approach consistently performs poorly compared to the other two techniques, so we do not display it in our reported results. SMACOF-Q essentially runs SMACOF on the original data set, but adds one additional point at the center of the sphere. The distance between this point and every other point is set to the radius of the sphere, thus encouraging all other points to lie on the sphere; this constraint is controlled by a penalty weight, with a larger weight implying a stronger emphasis on satisfying the constraint. Since the solution produced by this procedure may not lie exactly on the sphere, we normalize all points to the sphere after each step for a fair comparison.

Here we compare PC against SMACOF-Q on the g-1-sMDS problem (Figure 6(a)) and the c-2-sMDS problem (Figure 6(b)). For g-1-sMDS, PC does not converge as quickly as SMACOF-Q with a small penalty weight, but it reaches a better cost value. However, when SMACOF-Q is run with a larger penalty weight, PC runs faster and reaches nearly the same cost value. For our input data, the solution has similar g-1-sMDS and c-1-sMDS cost. When we compare SMACOF-Q with PC under c-2-sMDS (Figure 6(b)), then for an optimal choice of the penalty weight in SMACOF-Q, both PC and SMACOF-Q perform very similarly, converging to the same cost in about the same time. But for larger choices of the penalty weight, SMACOF-Q does much worse than PC.

Figure 6: (a) g-1-sMDS: Comparing PC with SMACOF-Q for different values of the penalty weight. (b) c-2-sMDS: Comparing PC with SMACOF-Q for different values of the penalty weight.

In both cases, it is possible to find a value of the penalty weight that allows SMACOF-Q to match PC. However, this value is different for different settings, and varies from input to input. The key observation here is that since PC is parameter-free, it can be run regardless of the choice of input or cost function, and it consistently performs well.

5.5 Summary Of Results

In summary, the main conclusion that can be drawn from this experimental study is that PC is consistently among the top performing methods, regardless of the choice of cost function, the nature of the input, or the level of noise in the problem. Occasionally other methods converge faster, but they do not in general return a better quality answer, and their behavior is much more variable with changing inputs and noise levels.

6 A JL Lemma for Spherical Data

In this section we present a Johnson-Lindenstrauss-style bound for mapping data from a high-dimensional sphere to a low-dimensional sphere while preserving the geodesic distances to within a multiplicative error of ε.

Consider a set P of n points on a high-dimensional sphere, defining a distance matrix D whose element D_ij represents the geodesic distance between points p_i and p_j on that sphere. We seek an embedding of P into a low-dimensional sphere that preserves pairwise distances as much as possible. For a set P and a map φ, we say that φ(P) has ε-distortion from P if there exists a constant c such that for all p_i, p_j ∈ P,

(1 − ε) D_ij ≤ c · d(φ(p_i), φ(p_j)) ≤ (1 + ε) D_ij.

For a subspace H, let φ_H(p) be the projection of p onto H, scaled by an appropriate constant. For points on the sphere, let φ_H^S be the projection onto the lower-dimensional sphere cut out by H; that is, for all p ∈ P, the corresponding point is φ_H(p)/‖φ_H(p)‖.

When P lies in Euclidean space, d(·,·) is the Euclidean distance, and φ = φ_H, the Johnson-Lindenstrauss (JL) Lemma [19] says that if H is a random k-dimensional linear subspace with k = O((1/ε²) log n), then φ_H(P) has ε-distortion from P with high probability.
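The spherical map φ_H^S can be realized by projecting onto a random subspace and renormalizing. The sketch below also computes the geodesic distance matrices before and after projection, which gives a quick empirical check of the distortion; sizes and names are illustrative.

    import numpy as np
    rng = np.random.default_rng(0)

    def project_to_sphere(P, k):
        """Project unit vectors (rows of P) onto a random (k+1)-dimensional
        subspace and renormalize onto the lower-dimensional unit sphere.
        The scaling constant is irrelevant after normalization."""
        dim = P.shape[1]
        H = np.linalg.qr(rng.standard_normal((dim, k + 1)))[0]  # orthonormal basis of a random subspace
        Q = P @ H                                               # coordinates within the subspace
        return Q / np.linalg.norm(Q, axis=1, keepdims=True)     # back onto the unit sphere

    # Geodesic distances before and after, for an empirical look at the distortion.
    P = rng.standard_normal((50, 200)); P /= np.linalg.norm(P, axis=1, keepdims=True)
    G_before = np.arccos(np.clip(P @ P.T, -1, 1))
    Pk = project_to_sphere(P, 20)
    G_after = np.arccos(np.clip(Pk @ Pk.T, -1, 1))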

We now present the main result of this section. We note that recent results [1] have shown similar bounds for points on a variety of manifolds (including spheres), where the projections preserve Euclidean distances. We reiterate that our result extends this to geodesic distances on spheres, which can be interpreted as the angles between the vectors to the points. Another recent result [26] shows that random projections preserve angles with bounded distortion, which is weaker than the following result.

Let P be a set of n points on a high-dimensional sphere, and let H be a random subspace of dimension k + 1 with k = O((1/ε²) log n). Let d(·,·) measure the geodesic distance on the source sphere (or on the target sphere, as appropriate). Then φ_H^S(P) has ε-distortion from P with high probability.

This implies that if we project n data points that lie on any high-dimensional sphere to a low-dimensional sphere with k = O((1/ε²) log n), then the pairwise geodesic distances are each individually preserved up to relative error ε. Before we proceed with the proof, we require a key technical lemma.

For and ,

  1. , and

  2. .

Proof.

Let . We will show that for and , is concave. This implies that it achieves its minimum value at the boundary. Now for all , and it can be easily shown that for . This will therefore imply that in the specified range.

It remains to show that is concave in .

which is always negative for and since is increasing in the range .

This proves the first part of the lemma. For the second part, observe that can be rewritten as . The rest of the argument follows along the same lines, by showing that is concave in the desired range using that .

While the upper bound of on is not tight, it is close. The actual bound (evaluated by direct calculation) is slightly over . ∎

Proof of Theorem 6.

Let . We consider two cases, (Short Case) when and (Long Case) when .

Short Case: First consider points such that . Note that , since . By JL, we know that there exists a constant such that

Figure 7: Illustration of the bounds on when . The angle is the largest when is as small as possible (lies on inner circle) and is as large as possible (on the outer edges of the disks of diameter shifted down from dashed line of length . Bounds for and are derived symmetrically.

We need to compare the angle with that of . The largest can be is when is as small as possible, and so is as large as possible. See Figure 7. In this case, we have

which for implies

Similarly, we can show when is as small as possible (when and ), then

We can also show (via Lemma 6) that since implies we have

and

Thus, we have

Long Case: For , we consider additional points (for ) equally spaced between and on the shortest great circle connecting them. Let be the set plus all added points . Note that , so by JL we have that

For notational convenience let and . Since for then , for . This follows since the geodesic length of the great circular arc through and is at most , and . Then the chordal distance for each pair is upper bounded by the geodesic distance. Furthermore, by invoking the short case, for any pair

Then since projections preserve coplanarity (specifically, the points and for are coplanar, hence and for are coplanar), we can add together the bounds on angles which all lie on a single great circle.

References

  • [1] P. K. Agarwal, S. Har-Peled, and H. Yu. On embeddings of moving points in Euclidean space. In Proceedings 23rd Symposium on Computational Geometry, 2007.
  • [2] D. M. Blei, A. Y. Ng, and M. I. Jordan. Latent Dirichlet allocation. J. Mach. Learn. Res., 3:993–1022, 2003.
  • [3] I. Borg and P. J. F. Groenen. Modern Multidimensional Scaling. Springer, 2005.
  • [4] P. Bose, A. Maheshwari, and P. Morin. Fast approximations for sums of distances, clustering and the Fermat–Weber problem. Comput. Geom. Theory Appl., 24(3):135–146, 2003.
  • [5] J. Brimberg and R. F. Love. Global convergence of a generalized iterative procedure for the minisum location problem with distances. Operations Research, 41(6):1153–1163, 1993.
  • [6] A. M. Bronstein, M. M. Bronstein, and R. Kimmel. Numerical Geometry of Non-Rigid Shapes. Springer, 2008.
  • [7] M. Bronstein. Accelerated MDS. http://tosca.cs.technion.ac.il/book/resources_sw.html.
  • [8] L. Cayton and S. Dasgupta. Robust Euclidean embedding. In ICML '06: Proceedings of the 23rd International Conference on Machine Learning, pages 169–176, 2006.
  • [9] L. Chen and A. Buja. Local multidimensional scaling for nonlinear dimension reduction, graph drawing, and proximity analysis. Journal of the American Statistical Association, 104:209–219, 2009.
  • [10] T. F. Cox and M. A. A. Cox. Multidimensional Scaling, Second Edition. Chapman & Hall/CRC, September 2000.
  • [11] N. Dalal and B. Triggs. Histograms of oriented gradients for human detection. In CVPR '05: Proceedings of the 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pages 886–893, 2005.
  • [12] J. de Leeuw. Applications of convex analysis to multidimensional scaling. In J. Barra, F. Brodeau, G. Romier, and B. Van Custem, editors, Recent Developments in Statistics, pages 133–146. North Holland Publishing Company, 1977.
  • [13] J. de Leeuw and P. Mair. Multidimensional scaling using majorization: SMACOF in R. Technical Report 537, UCLA Statistics Preprints Series, 2009.
  • [14] A. E. Elad, Y. Keller, and R. Kimmel. Texture mapping via spherical multi-dimensional scaling. In R. Kimmel, N. A. Sochen, and J. Weickert, editors, Scale-Space, volume 3459 of Lecture Notes in Computer Science, pages 443–455. Springer, 2005.
  • [15] P. T. Fletcher, S. Venkatasubramanian, and S. Joshi. The Geometric Median on Riemannian Manifolds with Application to Robust Atlas Estimation. Neuroimage (invited to special issue), 45(1):S143–S152, March 2009.
  • [16] R. Gray, A. Buzo, A. Gray Jr., and Y. Matsuyama. Distortion measures for speech processing. IEEE Transactions on Acoustics, Speech and Signal Processing, 28(4):367–376, 1980.
  • [17] P. Indyk and J. Matousek. Low-distortion embeddings of finite metric spaces. In Handbook of Discrete and Computational Geometry, pages 177–196. CRC Press, 2004.
  • [18] T. Joachims. Learning to Classify Text Using Support Vector Machines – Methods, Theory, and Algorithms. Kluwer/Springer, 2002.
  • [19] W. B. Johnson and J. Lindenstrauss. Extensions of Lipschitz mappings into Hilbert space. Contemporary Mathematics, 26:189–206, 1984.
  • [20] H. Karcher. Riemannian center of mass and mollifier smoothing. Comm. on Pure and Appl. Math., 30:509–541, 1977.
  • [21] I. N. Katz. Local convergence in Fermat’s problem. Mathematical Programming, 6:89–104, 1974.
  • [22] J. B. Kruskal. Multidimensional scaling by optimizing goodness of fit to nonmetric hypothesis. Psychometrika, 29:1–27, 1964.
  • [23] J. B. Kruskal and M. Wish. Multidimensional scaling. In E. M. Uslander, editor, Quantitative Applications in the Social Sciences, volume 11. Sage Publications, 1978.
  • [24] H. W. Kuhn. A note on Fermat’s problem. Mathematical Programming, 4:98–107, 1973.
  • [25] D. G. Lowe. Distinctive image features from scale-invariant keypoints. Int. J. Comput. Vision, 60(2), 2004.
  • [26] A. Magen. Dimensionality reductions that preserve volumes and distance to affine spaces, and their algorithmic applications. In Proceedings of the 6th International Workshop on Randomization and Approximation Techniques, Lecture Notes In Computer Science; Vol. 2483, 2002.
  • [27] A. W. Marshall and I. Olkin. Inequalities: Theory of Majorization and Its Applications. Academic Press, 1979.
  • [28] L. M. Ostresh. On the convergence of a class of iterative methods for solving the Weber location problem. Operations Research, 26:597–609, 1978.
  • [29] F. Pereira, N. Tishby, and L. Lee. Distributional clustering of English words. In Proceedings of the 31st Annual Meeting of the Association for Computational Linguistics, pages 183–190, 1993.
  • [30] R. Pietersz and P. J. F. Groenen. Rank reduction of correlation matrices by majorization. Technical Report 519086, SSRN, 2004.
  • [31] R. Pless and I. Simon. Embedding images in non-flat spaces. In Proc. of the International Conference on Imaging Science, Systems, and Technology, 2002.
  • [32] T. Sarlós. Improved approximation algorithms for large matrices via random projections. In Proceedings 47th Annual IEEE Symposium on Foundations of Computer Science, 2006.
  • [33] W. S. Torgerson. Multidimensional scaling: I. theory and method. Psychometrika, 17:401–419, 1952.
  • [34] E. Weiszfeld. Sur le point pour lequel la somme des distances de n points donnés est minimum. Tohoku Math. J., 43:355–386, 1937.
  • [35] G. Young and A. S. Householder. Discussion of a set of points in terms of their mutual distances. Psychometrika, 3:19–22, 1938.