Non-Convex Compressed Sensing with Training Data

01/20/2021
by G. Welper, et al.

Efficient algorithms for the sparse solution of under-determined linear systems Ax = b are known for matrices A satisfying suitable assumptions such as the restricted isometry property (RIP). Without such assumptions little is known, and without any assumptions on A the problem is NP-hard. A common approach is to replace ℓ_1 by ℓ_p minimization for 0 < p < 1, which is no longer convex and typically requires some form of local initial values for provably convergent algorithms. In this paper, we consider an alternative: instead of suitable initial values, we are provided with extra training problems Ax = B_l, l = 1, …, p, that are related to our compressed sensing problem. They allow us to find the solution of the original problem Ax = b, with high probability, in the range of a one-layer linear neural network, under comparatively few assumptions on the matrix A.
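For context, the sketch below illustrates the kind of non-convex ℓ_p minimization (0 < p < 1) the abstract refers to, using a generic iteratively reweighted least squares (IRLS) scheme on a random under-determined system. It is not the training-data-based method of the paper; the function name irls_lp, the parameters p, n_iter, eps, the smoothing schedule, and the synthetic test problem are all illustrative assumptions.

```python
# Minimal IRLS sketch for approximate l_p minimization (0 < p < 1),
# i.e. approximately solving  min ||x||_p^p  subject to  A x = b.
# This is a generic illustration, NOT the algorithm proposed in the paper.
import numpy as np

def irls_lp(A, b, p=0.5, n_iter=100, eps=1.0):
    """Approximately minimize ||x||_p^p subject to A x = b via IRLS."""
    x = np.linalg.lstsq(A, b, rcond=None)[0]      # least-squares initialization
    for _ in range(n_iter):
        w = (x**2 + eps) ** (p / 2 - 1)           # smoothed l_p reweighting
        Winv = 1.0 / w
        # weighted minimum-norm solution of A x = b with weights w
        G = A @ (Winv[:, None] * A.T)
        x = Winv * (A.T @ np.linalg.solve(G, b))
        eps = max(eps * 0.9, 1e-12)               # slowly tighten the smoothing
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    m, n, s = 40, 100, 5                          # under-determined: m < n, s-sparse truth
    A = rng.standard_normal((m, n)) / np.sqrt(m)
    x_true = np.zeros(n)
    x_true[rng.choice(n, s, replace=False)] = rng.standard_normal(s)
    b = A @ x_true
    x_hat = irls_lp(A, b)
    print("recovery error:", np.linalg.norm(x_hat - x_true))
```

As the abstract notes, such non-convex schemes typically need good initial values (or favorable properties of A) for convergence guarantees; the paper instead exploits related training problems Ax = B_l to recover the solution of Ax = b.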
