Bulk Johnson-Lindenstrauss Lemmas

07/15/2023
by Michael P. Casey, et al.

For a set X of N points in ℝ^D, the Johnson-Lindenstrauss lemma provides random linear maps that approximately preserve all pairwise distances in X – up to multiplicative error (1±ϵ) with high probability – using a target dimension of O(ϵ^-2 log(N)). Certain known point sets actually require a target dimension this large: any smaller dimension forces at least one distance to be stretched or compressed too much. What happens to the remaining distances? If we only allow a fraction η of the distances to be distorted beyond the tolerance (1±ϵ), we show that a target dimension of O(ϵ^-2 log(4e/η) log(N)/R) suffices for the remaining distances. With the stable rank of a matrix A defined as ‖A‖_F^2/‖A‖^2, the parameter R is the minimal stable rank over certain log(N)-sized subsets of X−X or their unit-normalized versions, involving each point of X exactly once. The linear maps may be taken as random matrices with i.i.d. zero-mean unit-variance sub-gaussian entries. When the data is sampled i.i.d. as a given random vector ξ, refined statements are provided; the greatest improvement occurs when ξ or the unit-normalized ξ−ξ′ is isotropic, with ξ′ an independent copy of ξ, which includes the case of i.i.d. coordinates.
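The quantities in the abstract can be made concrete with a small numerical sketch. The snippet below (an illustration with arbitrary parameters, not the paper's construction or proof) projects a random point set with a Gaussian JL map, measures the fraction η of pairwise distances distorted beyond (1±ϵ), and computes the stable rank ‖A‖_F^2/‖A‖^2 of the data matrix:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters (chosen for the demo, not taken from the paper):
# N points in ambient dimension D, target dimension d, tolerance eps.
N, D, d, eps = 200, 1000, 100, 0.5

X = rng.standard_normal((N, D))  # point set X in R^D

# JL map: i.i.d. zero-mean unit-variance (sub-)Gaussian entries,
# scaled by 1/sqrt(d) so squared norms are preserved in expectation.
Phi = rng.standard_normal((d, D)) / np.sqrt(d)
Y = X @ Phi.T

def pairwise_dists(Z):
    """Euclidean pairwise distance matrix via the Gram matrix."""
    G = Z @ Z.T
    n = np.diag(G)
    sq = np.maximum(n[:, None] + n[None, :] - 2 * G, 0.0)
    return np.sqrt(sq)

iu = np.triu_indices(N, k=1)          # each unordered pair once
ratio = pairwise_dists(Y)[iu] / pairwise_dists(X)[iu]

# Empirical fraction of distances distorted beyond (1 +- eps):
eta = np.mean((ratio < 1 - eps) | (ratio > 1 + eps))

def stable_rank(A):
    """Stable rank ||A||_F^2 / ||A||^2 (squared Frobenius over squared spectral norm)."""
    s = np.linalg.svd(A, compute_uv=False)
    return (s ** 2).sum() / s[0] ** 2

print(f"fraction of distances distorted beyond (1±{eps}): {eta:.3f}")
print(f"stable rank of X: {stable_rank(X):.1f}")
```

For a generic Gaussian point set the stable rank is large, which is the favorable regime for the bound above: a larger minimal stable rank R directly reduces the sufficient target dimension when a fraction η of distorted distances is tolerated.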


