Stable Recovery Of Sparse Vectors From Random Sinusoidal Feature Maps

01/23/2017
by   Mohammadreza Soltani, et al.

Random sinusoidal features are a popular approach for speeding up kernel-based inference on large datasets. Prior to the inference stage, the approach performs dimensionality reduction by first multiplying each data vector by a random Gaussian matrix and then applying an element-wise sinusoid. Theoretical analysis shows that a sufficient number of such features can be reliably used for subsequent inference in kernel classification and regression. In this work, we demonstrate that with a mild increase in the dimension of the embedding, it is also possible to reconstruct the data vector from such random sinusoidal features, provided that the underlying data is sufficiently sparse. In particular, we propose a numerically stable algorithm for reconstructing the data vector given the nonlinear features, and analyze its sample complexity. Our algorithm can be extended to other types of structured inverse problems, such as demixing a pair of sparse (but incoherent) vectors. We support the efficacy of our approach via numerical experiments.
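The forward map described above (a random Gaussian projection followed by an element-wise sinusoid, applied to a sparse vector) can be sketched in a few lines of NumPy. This is a minimal illustration of the measurement model only, not the paper's reconstruction algorithm; the dimensions, sparsity level, and random seed are arbitrary choices for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 100, 400   # ambient dimension n, number of random features m (illustrative)
s = 5             # sparsity level of the data vector (illustrative)

# Random Gaussian matrix used for the projection
A = rng.standard_normal((m, n))

# An s-sparse data vector: s randomly placed nonzero entries
x = np.zeros(n)
x[rng.choice(n, size=s, replace=False)] = rng.standard_normal(s)

# Random sinusoidal feature map: element-wise sinusoid of the projection
z = np.sin(A @ x)

print(z.shape)  # (m,) feature vector; each entry lies in [-1, 1]
```

Recovering `x` from `z` is the nonlinear inverse problem the paper addresses; the point of the sketch is that the features discard sign and magnitude information through the periodic nonlinearity, which is what makes stable recovery nontrivial.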

