Approximate Euclidean lengths and distances beyond Johnson-Lindenstrauss

by Aleksandros Sobczyk, et al.

A classical result of Johnson and Lindenstrauss states that a set of n high-dimensional data points can be projected down to O(log n/ϵ^2) dimensions such that the squares of their pairwise distances are preserved up to a small distortion ϵ∈(0,1). The JL lemma has been proved to be optimal in the general case, so improvements can only be sought for special cases. This work aims to improve the ϵ^-2 dependency based on techniques inspired by the Hutch++ algorithm, which reduces ϵ^-2 to ϵ^-1 for the related problem of implicit matrix trace estimation. For ϵ=0.01, for example, this translates to 100 times fewer matrix-vector products in the matrix-vector query model to achieve the same accuracy as previous estimators. We first present an algorithm to estimate the Euclidean lengths of the rows of a matrix. We prove element-wise probabilistic bounds that are at least as good as standard JL approximations in the worst case, and asymptotically better for matrices with decaying spectrum. Moreover, for any matrix, regardless of its spectrum, the algorithm achieves ϵ-accuracy for the total, Frobenius-norm-wise relative error using only O(ϵ^-1) queries, a quadratic improvement over the norm-wise error of standard JL approximations. We finally show how these results can be extended to estimate the Euclidean distances between data points and to approximate the statistical leverage scores of a tall-and-skinny data matrix, which are ubiquitous in many applications. Proof-of-concept numerical experiments are presented to validate the theoretical analysis.
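To make the Hutch++-style idea concrete, the following is a minimal sketch (not the paper's exact algorithm) of how variance reduction can be applied to row-norm estimation: the squared row norms are split into an exact contribution on a rank-k subspace found by a randomized range finder, plus a JL-style sketch of the residual. The function name `approx_row_norms` and the parameters k (subspace rank) and m (number of Gaussian query vectors) are illustrative assumptions, not identifiers from the paper.

```python
import numpy as np

def approx_row_norms(A, k=10, m=10, seed=0):
    """Hutch++-style row-norm estimator (illustrative sketch).

    Decomposes ||a_i||^2 = ||a_i Q||^2 + ||a_i (I - Q Q^T)||^2, where Q is an
    orthonormal basis of an approximate dominant rank-k row subspace. The first
    term is computed exactly; only the (small) residual is estimated with a
    Gaussian JL sketch, which reduces variance when the spectrum decays.
    """
    rng = np.random.default_rng(seed)
    n, d = A.shape
    # 1) Randomized range finder: Q spans an approximate top rank-k row subspace.
    Y = A.T @ rng.standard_normal((n, k))
    Q, _ = np.linalg.qr(Y)                      # d x k, orthonormal columns
    # 2) Exact contribution of each row inside the subspace.
    AQ = A @ Q                                  # n x k
    exact = np.sum(AQ ** 2, axis=1)
    # 3) JL estimate of the residual energy outside the subspace.
    R = A - AQ @ Q.T                            # residual: A (I - Q Q^T)
    G = rng.standard_normal((d, m)) / np.sqrt(m)
    resid = np.sum((R @ G) ** 2, axis=1)
    return np.sqrt(exact + resid)
```

When A has (numerical) rank at most k, the residual vanishes and the estimate is exact; for general matrices with decaying spectrum, most of the energy is captured exactly and only a small residual is left to the randomized sketch.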
