
Sparse Power Factorization: Balancing peakiness and sample complexity

by Jakob Geppert et al.

In many applications, one is faced with an inverse problem in which the measured signal depends bilinearly on two unknown input vectors. Often at least one of the input vectors is assumed to be sparse, i.e., to have only a few non-zero entries. Sparse Power Factorization (SPF), proposed by Lee, Wu, and Bresler, aims to tackle this problem. They established recovery guarantees for a somewhat restrictive class of signals under the assumption that the measurements are random. We generalize these recovery guarantees to a significantly enlarged and more realistic signal class, at the expense of a moderately increased number of measurements.
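To make the bilinear setup concrete, here is a minimal sketch of an SPF-style recovery: given measurements y = A vec(u vᵀ) with u sparse, alternate least-squares updates for u and v, hard-thresholding u after each update, starting from a spectral initialization. This is an illustrative sketch under our own assumptions (Gaussian measurements, noiseless data, known sparsity level); the function and parameter names are ours, not the paper's, and the exact algorithm analyzed by Lee, Wu, and Bresler differs in its details.

```python
import numpy as np


def hard_threshold(x, s):
    """Keep the s largest-magnitude entries of x, zero out the rest."""
    out = np.zeros_like(x)
    idx = np.argsort(np.abs(x))[-s:]
    out[idx] = x[idx]
    return out


def spf(A, y, n1, n2, s, iters=50):
    """Recover a rank-one matrix u v^T from y = A @ vec(u v^T).

    A : (m, n1*n2) measurement matrix (rows act on vectorized n1 x n2 matrices)
    s : assumed sparsity level of u
    """
    m = len(y)
    T = A.reshape(m, n1, n2)
    # Spectral initialization: top right singular vector of the backprojection
    M0 = (A.T @ y).reshape(n1, n2)
    v = np.linalg.svd(M0)[2][0]
    for _ in range(iters):
        # With v fixed, y is linear in u: y ~ (T @ v) @ u
        B_u = T @ v
        u, *_ = np.linalg.lstsq(B_u, y, rcond=None)
        u = hard_threshold(u, s)
        # With u fixed, y is linear in v: y ~ (sum_i T[:, i, :] u_i) @ v
        B_v = np.einsum('mij,i->mj', T, u)
        v, *_ = np.linalg.lstsq(B_v, y, rcond=None)
    return np.outer(u, v)
```

Note that only the product u vᵀ is identifiable (the factors can trade a scalar), so a recovery test should compare outer products rather than the individual vectors.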




Sparse Phase Retrieval via Sparse PCA Despite Model Misspecification: A Simplified and Extended Analysis

We consider the problem of high-dimensional misspecified phase retrieval...

Dictionary-Sparse Recovery From Heavy-Tailed Measurements

The recovery of signals that are sparse not in a basis, but rather spars...

Optimal deep neural networks for sparse recovery via Laplace techniques

This paper introduces Laplace techniques for designing a neural network,...

Recovery from Power Sums

We study the problem of recovering a collection of n numbers from the ev...

Batch Sparse Recovery, or How to Leverage the Average Sparsity

We introduce a batch version of sparse recovery, where the goal is to re...

Rakeness in the design of Analog-to-Information Conversion of Sparse and Localized Signals

Design of Random Modulation Pre-Integration systems based on the restric...

A convex program for bilinear inversion of sparse vectors

We consider the bilinear inverse problem of recovering two vectors, x∈R^...