Exponentially Improved Dimensionality Reduction for ℓ_1: Subspace Embeddings and Independence Testing

04/27/2021
by Yi Li, et al.

Despite many applications, dimensionality reduction in the ℓ_1-norm is much less understood than in the Euclidean norm. We give two new oblivious dimensionality reduction techniques for the ℓ_1-norm which improve exponentially over prior ones.

1. We design a distribution over random matrices S ∈ ℝ^(r × n), where r = 2^poly(d/(εδ)), such that given any matrix A ∈ ℝ^(n × d), with probability at least 1 − δ, simultaneously for all x, ‖SAx‖_1 = (1 ± ε)‖Ax‖_1. Note that S is linear, does not depend on A, and maps ℓ_1 into ℓ_1. Our distribution provides an exponential improvement on the previous best known map of Wang and Woodruff (SODA, 2019), which required r = 2^(2^Ω(d)), even for constant ε and δ. Our bound is optimal, up to a polynomial factor in the exponent, given a known 2^√d lower bound for constant ε and δ.

2. We design a distribution over matrices S ∈ ℝ^(k × n), where k = 2^O(q^2) (ε^-1 q log d)^O(q), such that given any q-mode tensor A ∈ (ℝ^d)^⊗q, one can estimate the entrywise ℓ_1-norm ‖A‖_1 from S(A). Moreover, S = S^1 ⊗ S^2 ⊗ ⋯ ⊗ S^q, and so given vectors u_1, …, u_q ∈ ℝ^d, one can compute S(u_1 ⊗ u_2 ⊗ ⋯ ⊗ u_q) in time 2^O(q^2) (ε^-1 q log d)^O(q), which is much faster than the d^q time required to form u_1 ⊗ u_2 ⊗ ⋯ ⊗ u_q. Our linear map gives a streaming algorithm for independence testing using space 2^O(q^2) (ε^-1 q log d)^O(q), improving the previous doubly exponential (ε^-1 log d)^(q^O(q)) space bound of Braverman and Ostrovsky (STOC, 2010).
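
The speedup claimed in the second result rests on the Kronecker structure S = S^1 ⊗ ⋯ ⊗ S^q: by the mixed-product property, applying S to a rank-1 tensor reduces to q small matrix-vector products. The sketch below illustrates only this identity, with Gaussian matrices as generic stand-ins for S^1, …, S^q (the paper's actual distribution is not reproduced here); the dimensions d, k, q are illustrative assumptions.

```python
# A minimal sketch of the mixed-product property:
# (S1 ⊗ S2 ⊗ S3)(u1 ⊗ u2 ⊗ u3) = (S1 u1) ⊗ (S2 u2) ⊗ (S3 u3).
# Gaussian matrices are generic stand-ins, NOT the paper's construction.
import numpy as np

rng = np.random.default_rng(0)
d, k, q = 20, 5, 3                      # mode dimension, sketch size per mode, number of modes

S = [rng.standard_normal((k, d)) for _ in range(q)]   # stand-in sketch matrices S^1, ..., S^q
u = [rng.standard_normal(d) for _ in range(q)]        # defines the rank-1 tensor u1 ⊗ ... ⊗ uq

# Slow route: materialize the d^q-dimensional tensor and the k^q x d^q map, then multiply.
full_vec = u[0]
full_map = S[0]
for ui, Si in zip(u[1:], S[1:]):
    full_vec = np.kron(full_vec, ui)
    full_map = np.kron(full_map, Si)
slow = full_map @ full_vec              # cost grows like (kd)^q

# Fast route: sketch each mode separately, then combine the q small k-vectors.
fast = S[0] @ u[0]
for Si, ui in zip(S[1:], u[1:]):
    fast = np.kron(fast, Si @ ui)       # cost grows like q*k*d + k^q

print(np.allclose(slow, fast))          # True: both routes compute S(u1 ⊗ ... ⊗ uq)
```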

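For context on the independence-testing application, the quantity estimated in the stream is the entrywise ℓ_1-distance between the empirical joint distribution and the product of its empirical marginals, which is 0 exactly when the empirical distributions are independent. The snippet below computes that quantity exactly and offline for q = 2 as a point of reference; it does not use the paper's sketch, and the sample-generation details are illustrative assumptions.

```python
# Exact, offline computation of the independence-testing statistic for q = 2:
# A = P - p ⊗ r, where P is the empirical joint distribution and p, r its marginals.
# The streaming algorithm instead estimates ||A||_1 from the small sketch S(A).
import numpy as np

rng = np.random.default_rng(1)
d, n = 10, 100_000

# Illustrative correlated samples over [d] x [d]: the second coordinate copies the
# first with probability 0.3, otherwise it is uniform.
x = rng.integers(0, d, size=n)
copy = rng.random(n) < 0.3
y = np.where(copy, x, rng.integers(0, d, size=n))

joint = np.zeros((d, d))
np.add.at(joint, (x, y), 1.0 / n)             # empirical joint distribution P
p, r = joint.sum(axis=1), joint.sum(axis=0)   # empirical marginals

A = joint - np.outer(p, r)                    # A = P - p ⊗ r
print(np.abs(A).sum())                        # entrywise ℓ_1-norm ||A||_1; ≈ 0 iff independent
```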