
# A convex program for bilinear inversion of sparse vectors

We consider the bilinear inverse problem of recovering two vectors, x∈R^L and w∈R^L, from their entrywise product. We consider the case where x and w have known signs and are sparse with respect to known dictionaries of size K and N, respectively. Here, K and N may be larger than, smaller than, or equal to L. We introduce ℓ_1-BranchHull, which is a convex program posed in the natural parameter space and does not require an approximate solution or initialization in order to be stated or solved. We study the case where x and w are S_1- and S_2-sparse with respect to a random dictionary and present a recovery guarantee that only depends on the number of measurements as L ≥ Ω((S_1+S_2) log²(K+N)). Numerical experiments verify that the scaling constant in the theorem is not too large. One application of this problem is the sweep distortion removal task in dielectric imaging, where one of the signals is a nonnegative reflectivity, and the other signal lives in a known subspace, for example that given by dominant wavelet coefficients. We also introduce variants of ℓ_1-BranchHull for the purposes of tolerating noise and outliers, and of recovering piecewise constant signals. We provide an ADMM implementation of these variants and show that they can extract piecewise constant behavior from real images.

## 1 Introduction

We study the problem of recovering two unknown signals w and x in R^L from observations y = A(w, x), where A is a bilinear operator. Let B ∈ R^{L×K} and C ∈ R^{L×N} be known dictionaries such that w = Bh♮ and x = Cm♮ with h♮ ∈ R^K and m♮ ∈ R^N. Let the bilinear operator A : R^L × R^L → R^L satisfy

 y=A(w,x)=w⊙x, (1)

where ⊙ denotes the entrywise product. The bilinear inverse problem (BIP) we consider is to find w and x from y, B, and C, up to the inherent scaling ambiguity.

BIPs, in general, have many applications in signal processing and machine learning and include fundamental practical problems like phase retrieval

[12, 7, 9], blind deconvolution [3, 29, 16, 2], non-negative matrix factorization [14, 20], self-calibration [24], blind source separation [11], dictionary learning [31], etc. These problems are in general challenging and suffer from identifiability issues that make the solution set non-unique and non-convex. A common identifiability issue, also shared by the BIP in (1), is the scaling ambiguity. In particular, if (w, x) solves a BIP, then so does (cw, c^{-1}x) for any nonzero c ∈ R. In this paper, we resolve this scaling ambiguity by finding the point in the solution set closest to the origin with respect to the ℓ_1 norm.

Another identifiability issue of the BIP in (1) is that if (w, x) solves (1), then so does (1, w ⊙ x), where 1 is the vector of ones. Unlike prior studies like [3], where the signals are only assumed to live in known subspaces, we resolve this structural ambiguity by additionally assuming the signals are sparse with respect to known bases. Natural choices for such bases include the standard basis, the Discrete Cosine Transform (DCT) basis, and a wavelet basis.

Recent work on sparse BIPs, specifically sparse blind deconvolution, in [21] provides an exact recovery guarantee for sparse vectors h♮ and m♮ that satisfy a "peakiness" condition, i.e. the largest entry of one of the vectors is at least a constant fraction of its norm. This result holds with high probability for random measurements if the number of measurements, up to a log factor, satisfies L ≳ S_1 + S_2. For general vectors without the peakiness condition, the same work shows exact recovery is possible if the number of measurements, up to a log factor, satisfies L ≳ S_1 S_2.

The main contribution of this paper is an algorithm for the sparse BIP described in (1) that recovers the sparse vectors, without requiring the peakiness condition and without an initialization, under near optimal sample complexity. Precisely, we present a convex program stated in the natural parameter space which, in the noiseless setting with random B and C, exactly recovers sparse vectors with at most S_1 + S_2 combined nonzero entries with high probability if the number of measurements satisfies L ≥ Ω((S_1+S_2) log²(K+N)).

### 1.1 Convex program and main results

We introduce a convex program written in the natural parameter space for the bilinear inverse problem described in (1). Let y = Bh♮ ⊙ Cm♮ with h♮ ∈ R^K and m♮ ∈ R^N. Let s_ℓ = sign(y_ℓ) and t_ℓ = sign(b_ℓ⊺h♮), where b_ℓ⊺ and c_ℓ⊺ are the ℓth rows of B and C. Also, let w♮ = Bh♮ and x♮ = Cm♮. The convex program we consider to recover (h♮, m♮) is the ℓ_1-BranchHull program

 ℓ_1-BH:  minimize_{h∈R^K, m∈R^N}  ∥h∥_1 + ∥m∥_1
      subject to  s_ℓ (b_ℓ⊺h)(c_ℓ⊺m) ≥ |y_ℓ|,   (2)
            t_ℓ b_ℓ⊺h ≥ 0,  ℓ = 1, 2, …, L.

The motivation for the feasible set in program (2) follows from the observation that each measurement y_ℓ defines a hyperbola in R². As shown in Figure 3, the sign information t_ℓ restricts (b_ℓ⊺h, c_ℓ⊺m) to one branch of the hyperbola. The feasible set in (2) corresponds to the convex hull of the selected branch of the hyperbola for each ℓ ∈ {1, …, L}. This also implies that the feasible set is convex, as it is the intersection of convex sets.
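To make the feasible set concrete, the sketch below (pure Python, with small hypothetical matrices B and C) checks that the true pair satisfies the constraints of (2); each measurement pins (b_ℓ⊺h, c_ℓ⊺m) to the convex region above one hyperbola branch:

```python
import random

def matvec_row(row, v):
    # inner product of one matrix row with a vector
    return sum(r * x for r, x in zip(row, v))

def is_feasible(h, m, B, C, y, s, t):
    """Check the l1-BranchHull constraints:
    s_l * (b_l^T h) * (c_l^T m) >= |y_l|  and  t_l * (b_l^T h) >= 0."""
    for bl, cl, yl, sl, tl in zip(B, C, y, s, t):
        bh, cm = matvec_row(bl, h), matvec_row(cl, m)
        if sl * bh * cm < abs(yl) - 1e-12:  # small numerical tolerance
            return False
        if tl * bh < -1e-12:
            return False
    return True

# Hypothetical tiny instance: L = 4 measurements, K = 3, N = 2.
random.seed(0)
B = [[random.gauss(0, 1) for _ in range(3)] for _ in range(4)]
C = [[random.gauss(0, 1) for _ in range(2)] for _ in range(4)]
h_true = [1.0, 0.0, -0.5]   # sparse coefficient vectors
m_true = [0.0, 2.0]
w = [matvec_row(bl, h_true) for bl in B]
x = [matvec_row(cl, m_true) for cl in C]
y = [wi * xi for wi, xi in zip(w, x)]        # entrywise product
s = [1 if yi >= 0 else -1 for yi in y]       # sign of each measurement
t = [1 if wi >= 0 else -1 for wi in w]       # sign of b_l^T h
print(is_feasible(h_true, m_true, B, C, y, s, t))  # True: the truth lies on each branch
```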

The objective function in (2) is an ℓ_1 minimization over (h, m) that finds a sparse point in the feasible set. The scaling of the minimizer of (2) is justified by the observation that (ch♮, c^{-1}m♮) is feasible for any non-zero c ∈ R; minimizing ∥ch♮∥_1 + ∥c^{-1}m♮∥_1 over c gives |c| = √(∥m♮∥_1/∥h♮∥_1). So, the minimizer of (2), under successful recovery, is (h♮√(∥m♮∥_1/∥h♮∥_1), m♮√(∥h♮∥_1/∥m♮∥_1)).
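The balancing claim can be checked numerically: along the scaling ambiguity c ↦ (ch♮, c^{-1}m♮), the objective is |c|·∥h♮∥_1 + ∥m♮∥_1/|c|, minimized at |c| = √(∥m♮∥_1/∥h♮∥_1). A quick sanity check with example ℓ_1 norms:

```python
import math

def objective(c, h_l1, m_l1):
    # l1 objective along the scaling ambiguity (c*h, m/c)
    return abs(c) * h_l1 + m_l1 / abs(c)

h_l1, m_l1 = 3.0, 12.0                       # example l1 norms
c_star = math.sqrt(m_l1 / h_l1)              # claimed minimizer, = 2.0
best = min(objective(0.01 * k, h_l1, m_l1) for k in range(1, 1000))
print(c_star)                                # 2.0
print(abs(objective(c_star, h_l1, m_l1) - best) < 1e-6)
```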

Our main result is that under the structural assumptions that w♮ = Bh♮ and x♮ = Cm♮ live in random subspaces, with h♮ and m♮ containing at most S_1 and S_2 non-zero entries, respectively, the ℓ_1-BranchHull program (2) recovers h♮ and m♮ (to within the scaling ambiguity) with high probability, provided the number of measurements, up to log factors, satisfies L ≳ S_1 + S_2.

###### Theorem 1.

Suppose we observe the pointwise product y of two vectors w♮ = Bh♮ and x♮ = Cm♮ through the bilinear measurement model in (1), where B ∈ R^{L×K} and C ∈ R^{L×N} are standard Gaussian random matrices. Then the ℓ_1-BranchHull program (2) uniquely recovers (h♮√(∥m♮∥_1/∥h♮∥_1), m♮√(∥h♮∥_1/∥m♮∥_1)) with high probability whenever L ≥ C(S_1+S_2) log²(K+N), where C is a constant.

### 1.2 Prior art for bilinear inverse problems

Recent approaches to solving bilinear inverse problems like blind deconvolution and phase retrieval have been to lift the problems into a low rank matrix recovery task, or to formulate optimization programs in the natural parameter space. Lifting transforms the problem of recovering w and x from bilinear measurements to the problem of recovering a low rank matrix from linear measurements. The low rank matrix can then be recovered using a semidefinite program. The result in [3] for blind deconvolution showed that if h♮ and m♮ are representations of the target signals with respect to Fourier and Gaussian subspaces, respectively, then the lifting method successfully recovers the low rank matrix. The recovery occurs with high probability under near optimal sample complexity. Unfortunately, solving the semidefinite program is prohibitively computationally expensive because it operates in a high-dimensional space. Also, it is not clear how to enforce additional structure like sparsity of h♮ and m♮ in the lifted formulation in a way that allows optimal sample complexity [23, 28].

In comparison to the lifting approach for blind deconvolution and phase retrieval, methods that operate in the natural parameter space, like alternating minimization and gradient descent based methods, are computationally efficient and also enjoy rigorous recovery guarantees under optimal or near optimal sample complexity [22, 8, 27, 30]. In fact, the work in [21] for sparse blind deconvolution is based on alternating minimization: the authors successively approximate the sparse vectors while enforcing the low rank property of the lifted matrix. However, because these methods are non-convex, convergence to the global optimum requires a good initialization [32, 10, 22].

Other approaches that operate in the natural parameter space include PhaseMax [5, 13] and BranchHull [1]. PhaseMax is a linear program which has been proven to find the target signal in phase retrieval under optimal sample complexity if a good anchor vector is available. As with alternating minimization and gradient descent based approaches, PhaseMax requires a good initialization; however, in PhaseMax the initialization is part of the optimization program, whereas in alternating minimization it is part of the algorithmic implementation. BranchHull is a convex program which solves the BIP described in (1), excluding the sparsity assumption, under optimal sample complexity. Like the ℓ_1-BranchHull presented in this paper, BranchHull does not require an initialization, but it does require the sign information of the signals.

The ℓ_1-BranchHull program (2) combines strengths of both the lifting method and the gradient descent based method. Specifically, the ℓ_1-BranchHull program is a convex program that operates in the natural parameter space, without a need for an initialization, and without restrictive assumptions on the class of recoverable signals. These strengths come at the cost of requiring the sign information of the target signals w♮ and x♮. However, the sign assumption can be justified in imaging applications where the goal might be to recover pixel values of a target image, which are non-negative. Also, as in PhaseMax, the sign information can be thought of as an anchor which ties the solution to one of the branches of the hyperbolic measurements.

### 1.3 Extension to noise and outliers

Extending the theory of the ℓ_1-BranchHull program (2) to the case with noise is important, as most real data contain significant noise. Formulation (2) may be particularly susceptible to noise that changes the sign of even a single measurement. For the bilinear inverse problem described in (1) with small dense noise and arbitrary outliers, we propose the following robust ℓ_1-BranchHull program:

 RBH:  minimize_{h∈R^K, m∈R^N, ξ∈R^L}  ∥h∥_1 + ∥m∥_1 + λ∥ξ∥_1
     subject to  s_ℓ (c_ℓ⊺m + ξ_ℓ) b_ℓ⊺h ≥ |y_ℓ|,   (3)
           t_ℓ b_ℓ⊺h ≥ 0,  ℓ = 1, …, L.

The slack variable ξ controls the shape of the feasible set. For measurements y_ℓ with incorrect sign, the corresponding slack variable ξ_ℓ shifts the feasible set so that the target signal is feasible. In the outlier case, the ℓ_1 penalty promotes sparsity of the slack variable ξ. We implement a slight variation of the above program, detailed in Section 1.4, to remove distortions from real and synthetic images.

### 1.4 Total variation extension of ℓ1-BranchHull

The robust ℓ_1-BranchHull program (3) is flexible and can be altered to remove distortions from an otherwise piecewise constant signal. In the case where w = Bh♮ is a piecewise constant signal, x = Cm♮ is a distortion signal, and y = w ⊙ x is the distorted signal, the total variation version (4) of the robust BranchHull program (3), under successful recovery, produces the piecewise constant signal w, up to a scaling.

 TV-BH:  minimize_{h∈R^K, m∈R^N, ξ∈R^L}  TV(Bh) + ∥m∥_1 + λ∥ξ∥_1
     subject to  s_ℓ (ξ_ℓ + c_ℓ⊺m) b_ℓ⊺h ≥ |y_ℓ|,   (4)
           t_ℓ b_ℓ⊺h ≥ 0,  ℓ = 1, 2, …, L.

In (4), TV(·) is a total variation operator: TV(Bh) is the ℓ_1 norm of the vector containing the pairwise differences of neighboring elements of the target signal Bh. We implement (4) to remove distortions from images in Section 3.2.
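The operator described above is the anisotropic 1-D total variation; a minimal sketch:

```python
def tv(v):
    """Anisotropic 1-D total variation: sum of |v[i+1] - v[i]|."""
    return sum(abs(a - b) for a, b in zip(v[1:], v[:-1]))

# A piecewise constant signal has small TV; a wiggly one does not.
piecewise = [1, 1, 1, 4, 4, 4, 2, 2]
wiggly = [1, 4, 1, 4, 1, 4, 1, 4]
print(tv(piecewise))  # 5: |4-1| + |2-4|
print(tv(wiggly))     # 21: seven jumps of size 3
```

Penalizing this quantity in the objective favors solutions Bh with few jumps, which is why (4) extracts the piecewise constant component.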

### 1.5 Notation

Vectors and matrices are written with boldface, while scalars and entries of vectors are written in plain font. For example, x_ℓ is the ℓth entry of the vector x. We write 1 as the vector of all ones with dimensionality appropriate for the context, and I_N as the N × N identity matrix. For any x ∈ R^L, let sign(x) ∈ R^L be such that sign(x)_ℓ = x_ℓ/|x_ℓ| if x_ℓ ≠ 0 and sign(x)_ℓ = 0 otherwise. For any matrix A, let ∥A∥_F be the Frobenius norm of A. For any vector x, let ∥x∥_0 be the number of non-zero entries in x. For h ∈ R^K and m ∈ R^N, (h, m) is the corresponding vector in R^{K+N}, and ∥(h, m)∥_1 = ∥h∥_1 + ∥m∥_1.

## 2 Algorithm

In this section, we present an Alternating Direction Method of Multipliers (ADMM) implementation of an extension of the robust ℓ_1-BranchHull program (3). The ADMM implementation of the ℓ_1-BranchHull program (2) is similar to the ADMM implementation of (5), and we leave it to the reader. The extension of the robust ℓ_1-BranchHull program we consider is

 minimize_{h∈R^K, m∈R^N, ξ∈R^L}  ∥Ph∥_1 + ∥m∥_1 + λ∥ξ∥_1
    subject to  s_ℓ (ξ_ℓ + c_ℓ⊺m) b_ℓ⊺h ≥ |y_ℓ|,   (5)
          t_ℓ b_ℓ⊺h ≥ 0,  ℓ = 1, 2, …, L,

where P ∈ R^{J×K} for some J. The above extension reduces to the robust ℓ_1-BranchHull program if P = I_K. Recalling that w = Bh and x = Cm, we make use of the following notation:

 u = (x; w; ξ),  v = (m; h; λξ),  E = blockdiag(C, B, λ^{-1}I_L),  and  Q = blockdiag(I_N, P, I_L).

Using this notation, our convex program can be compactly written as

 minimize_{v∈R^{N+K+L}, u∈R^{3L}}  ∥Qv∥_1  subject to  u = Ev,  u ∈ C.

Here C is the convex feasible set of (5), expressed in terms of u. Introducing a new variable z, the resulting convex program can be written as

 minimize_{v, u, z}  ∥v∥_1  subject to  u = Ez,  Qz = v,  u ∈ C.

We may now form the scaled ADMM steps as follows

 u^{k+1} = argmin_u  I_C(u) + (ρ/2)∥u + α^k − Ez^k∥²,   (6)
 v^{k+1} = argmin_v  ∥v∥_1 + (ρ/2)∥v + β^k − Qz^k∥²,   (7)
 z^{k+1} = argmin_z  (ρ/2)∥α^k + u^{k+1} − Ez∥² + (ρ/2)∥β^k + v^{k+1} − Qz∥²,   (8)
 α^{k+1} = α^k + u^{k+1} − Ez^{k+1},
 β^{k+1} = β^k + v^{k+1} − Qz^{k+1},

where I_C(u) in (6) is the indicator function of C, equal to 0 if u ∈ C and +∞ otherwise. We note that the first three steps of the proposed ADMM scheme can be written in closed form. The update in (6) is the following projection:

 u^{k+1} = proj_C(Ez^k − α^k),

where proj_C(z) denotes the projection of z onto C. Details of computing the projection onto C are presented in the Supplementary material. The update in (7) can be written in terms of the soft-thresholding operator:
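For intuition, in the noiseless two-variable case each index ℓ decouples, and the projection reduces to projecting a point (a, b) onto the region above one hyperbola branch {(w, x) : s·w·x ≥ |y|, t·w ≥ 0}. The sketch below is illustrative only — the golden-section search over the boundary parameterization (w, x) = (t·r, s·t·|y|/r) is our own construction, not the closed-form projection derived in the supplementary material:

```python
def project_branch(a, b, y, s, t, iters=200):
    """Numerically project (a, b) onto {(w, x): s*w*x >= |y|, t*w >= 0}."""
    if t * a > 0 and s * a * b >= abs(y):
        return a, b                      # already feasible
    # Golden-section search over r > 0 for the nearest boundary point
    # (w, x) = (t*r, s*t*abs(y)/r), which satisfies s*w*x = |y| exactly.
    def dist2(r):
        w, x = t * r, s * t * abs(y) / r
        return (w - a) ** 2 + (x - b) ** 2
    lo, hi = 1e-6, 1e6
    phi = (5 ** 0.5 - 1) / 2
    for _ in range(iters):
        m1 = hi - phi * (hi - lo)
        m2 = lo + phi * (hi - lo)
        if dist2(m1) < dist2(m2):
            hi = m2
        else:
            lo = m1
    r = (lo + hi) / 2
    return t * r, s * t * abs(y) / r

w, x = project_branch(0.0, 0.0, y=1.0, s=1, t=1)
print(abs(w * x - 1.0) < 1e-6)  # lands on the branch w*x = |y|
print(w > 0)                    # on the branch selected by t
```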

 v^{k+1} = S_{1/ρ}(Qz^k − β^k),  where  (S_c(v))_i = ⎧ v_i − c, v_i > c;  0, |v_i| ≤ c;  v_i + c, v_i < −c,

where c > 0 and v_i is the ith entry of v. Finally, the update in (8) takes the following form:
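The soft-thresholding operator acts elementwise; a minimal sketch:

```python
def soft_threshold(v, c):
    """Elementwise soft-thresholding S_c: shrink each entry toward zero by c."""
    out = []
    for vi in v:
        if vi > c:
            out.append(vi - c)
        elif vi < -c:
            out.append(vi + c)
        else:
            out.append(0.0)   # entries with |v_i| <= c are zeroed
    return out

print(soft_threshold([3.0, -0.2, 0.5, -2.5], 0.5))  # [2.5, 0.0, 0.0, -2.0]
```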

 z^{k+1} = (E⊺E + Q⊺Q)^{-1}(E⊺(α^k + u^{k+1}) + Q⊺(β^k + v^{k+1})).

In our implementation of the ADMM scheme, we initialize the algorithm with z⁰, α⁰, and β⁰ set to zero.

## 3 Numerical Experiments

In this section, we provide numerical experiments on synthetic and real data. The synthetic experiment numerically verifies Theorem 1 with a small scaling constant. The experiments on real data show that the total variation ℓ_1-BranchHull program can be used to remove distortions from an image.

### 3.1 Phase Portrait

We first show a phase portrait that verifies Theorem 1. Consider the following measurements: fix K and N, and let L and the sparsity level vary. Let the target signal (h♮, m♮) be such that both h♮ and m♮ have non-zero entries, with the nonzero indices randomly selected and the corresponding entries set to a fixed value. Let S_1 and S_2 be the number of nonzero entries in h♮ and m♮, respectively. Let B ∈ R^{L×K} and C ∈ R^{L×N} be standard Gaussian random matrices such that w♮ = Bh♮ and x♮ = Cm♮. Lastly, let y = w♮ ⊙ x♮, s = sign(y), and t = sign(w♮).

Figure 4 shows the fraction of successful recoveries from 10 independent trials using (2) for the bilinear inverse problem (1) with data as described above. Let (ĥ, m̂) be the output of (2) and let (~h, ~m) = (h♮√(∥m♮∥_1/∥h♮∥_1), m♮√(∥h♮∥_1/∥m♮∥_1)) be the candidate minimizer. We solve (2) using an ADMM implementation similar to the one detailed in Section 2. For each trial, we say (2) successfully recovers the target signal if the relative error ∥(ĥ, m̂) − (~h, ~m)∥_2/∥(~h, ~m)∥_2 falls below a small fixed tolerance. Black squares correspond to no successful recovery and white squares correspond to 100% successful recovery. The overlaid line, of the form L = C(S_1+S_2), indicates that the sample complexity constant in Theorem 1 is not very large.
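The synthetic data generation above can be sketched as follows (pure Python; the sizes, sparsity levels, and ±1 nonzero values are hypothetical placeholders for the experiment's actual settings):

```python
import random

def make_instance(L, K, N, S1, S2, seed=0):
    """Generate a random sparse bilinear instance y = (B h) ⊙ (C m)."""
    rng = random.Random(seed)
    B = [[rng.gauss(0, 1) for _ in range(K)] for _ in range(L)]
    C = [[rng.gauss(0, 1) for _ in range(N)] for _ in range(L)]
    h = [0.0] * K
    m = [0.0] * N
    for i in rng.sample(range(K), S1):   # S1 nonzero entries, set to +/-1
        h[i] = rng.choice([-1.0, 1.0])
    for i in rng.sample(range(N), S2):   # S2 nonzero entries, set to +/-1
        m[i] = rng.choice([-1.0, 1.0])
    w = [sum(bi * hi for bi, hi in zip(row, h)) for row in B]
    x = [sum(ci * mi for ci, mi in zip(row, m)) for row in C]
    y = [wi * xi for wi, xi in zip(w, x)]
    return B, C, h, m, y

B, C, h, m, y = make_instance(L=50, K=20, N=20, S1=3, S2=3)
print(len(y), sum(1 for v in h if v != 0), sum(1 for v in m if v != 0))  # 50 3 3
```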

### 3.2 Distortion removal from images

We use the total variation BranchHull program (4) to remove distortions from real images. In the experiments, the observation y is the column-wise vectorization of the observed image, the target signal w is the vectorization of the piecewise constant image, and x corresponds to the distortions in the image. We use (4) to recover piecewise constant target images like the one in the foreground of Figure 4(a); the dictionaries B and C used in each experiment are described below.

Lastly, we solve (4) using the ADMM algorithm detailed in Section 2.

We now show two experiments on real images. The first image, shown in Figure 4(a), was captured using a camera and resized. The measurement y is the vectorization of the image. Let B be the identity matrix and let F be the inverse DCT matrix. Let C be the matrix with its first column set to the vector of ones and its remaining columns randomly selected from the columns of F without replacement. The matrix C is scaled so that ∥B∥_F = ∥C∥_F. The vector of known signs is set to t = 1. Let (ĥ, m̂, ξ̂) be the output of (4). Figure 4(b) corresponds to Bĥ and shows that the object in the center was successfully recovered.

The second real image, shown in Figure 4(c), is an image of rice grains. The measurement y is the vectorization of the image. Let B be the identity matrix. Let C be a matrix with its first column set to the vector of ones and its remaining columns sampled from Bessel functions of the first kind, with each column corresponding to a fixed order. The matrix C is scaled so that ∥B∥_F = ∥C∥_F. The vector of known signs is set to t = 1. Let (ĥ, m̂, ξ̂) be the output of (4). Figure 4(d) corresponds to Bĥ.

## 4 Proof Outline

In this section, we provide a proof of Theorem 1 by considering a related linear program with a larger feasible set. Let y = Bh♮ ⊙ Cm♮ with h♮ ∈ R^K and m♮ ∈ R^N. Let s_ℓ = sign(y_ℓ) and t_ℓ = sign(b_ℓ⊺h♮). Also, let ~h = h♮√(∥m♮∥_1/∥h♮∥_1) and ~m = m♮√(∥h♮∥_1/∥m♮∥_1). We will show that (2) recovers (~h, ~m).

Consider program (9), which has a linear constraint set that contains the feasible set of the ℓ_1-BranchHull program (2):

 LP:  minimize_{h∈R^K, m∈R^N}  ∥h∥_1 + ∥m∥_1
    subject to  s_ℓ (b_ℓ⊺h c_ℓ⊺~m + b_ℓ⊺~h c_ℓ⊺m) ≥ 2|y_ℓ|,   (9)
          ℓ = 1, 2, …, L.

Let

 S := {(h, m) ∈ R^K × R^N : (h, m) = α(−~h, ~m), α ∈ [−1, 1]}.   (10)

Observe that if (~h, ~m) is a minimizer of (9), then so are all the points in (~h, ~m) + S.

###### Lemma 1.

If the optimization program (9) recovers (ĥ, m̂) ∈ (~h, ~m) + S, then the ℓ_1-BranchHull program (2) recovers (~h, ~m).

A proof of Lemma 1, provided in the Supplementary material, follows from the observations that the feasible set of (9) contains the feasible set of (2), and that (~h, ~m) is the only feasible point in (2) among all points in (~h, ~m) + S.

We now show that the minimizers of (9) lie in (~h, ~m) + S. Let a_ℓ⊺ denote the ℓth row of a matrix A. The linear constraints in (9) are simply s_ℓ(b_ℓ⊺h c_ℓ⊺~m + b_ℓ⊺~h c_ℓ⊺m) ≥ 2|y_ℓ|. Note that s_ℓ(b_ℓ⊺~h c_ℓ⊺~m + b_ℓ⊺~h c_ℓ⊺~m) = 2|y_ℓ|, so (~h, ~m) is feasible.

Our strategy will be to show that for any feasible perturbation (~h + δh, ~m + δm) with (δh, δm) ∈ N^⊥ \ {0}, the objective of the linear program (9) strictly increases, where N^⊥ is the orthogonal complement of the subspace N := span{(−~h, ~m)}. This is equivalent to showing that the solution of (9) lies in the set (~h, ~m) + S.

The subgradient of the ℓ_1-norm at the proposed solution (~h, ~m) is

 ∂∥(~h, ~m)∥_1 := {g ∈ R^{K+N} : ∥g∥_∞ ≤ 1,  g_{Γ_h} = sign(h♮_{Γ_h}),  g_{Γ_m} = sign(m♮_{Γ_m})},

where Γ_h and Γ_m denote the supports of the non-zero entries of h♮ and m♮, respectively. To show that the linear program converges to a solution in (~h, ~m) + S, it suffices to show that the set of descent directions

 {(δh, δm) ∈ N^⊥ : ⟨g, (δh, δm)⟩ ≤ 0, ∀g ∈ ∂∥(~h, ~m)∥_1}
  ⊆ {(δh, δm) ∈ N^⊥ : ⟨g_{Γ_h}, δh_{Γ_h}⟩ + ⟨g_{Γ_m}, δm_{Γ_m}⟩ + ∥(δh_{Γ_h^c}, δm_{Γ_m^c})∥_1 ≤ 0}
  ⊆ {(δh, δm) ∈ N^⊥ : −∥g_{Γ_h∪Γ_m}∥_2 ∥(δh_{Γ_h}, δm_{Γ_m})∥_2 + ∥(δh_{Γ_h^c}, δm_{Γ_m^c})∥_1 ≤ 0}
  = {(δh, δm) ∈ N^⊥ : ∥(δh_{Γ_h^c}, δm_{Γ_m^c})∥_1 ≤ √(S_1+S_2) ∥(δh_{Γ_h}, δm_{Γ_m})∥_2} =: D   (11)

does not contain any vector that is consistent with the constraints. We do this by quantifying the "width" of the set D through a Rademacher complexity, and through the probability that the gradients of the constraint functions lie in a certain half space. This allows us to use the small ball method [15, 26] to ultimately show that it is highly unlikely that descent directions in D meet the constraints in (9). We now concretely state the definitions of the Rademacher complexity and the probability term mentioned above.

Define linear functions

 f_ℓ(h, m) := ⟨(c_ℓ⊺~m b_ℓ, b_ℓ⊺~h c_ℓ), (h, m)⟩,  ℓ = 1, 2, …, L.

The linear constraints in the LP (9) can then be expressed as s_ℓ f_ℓ(h, m) ≥ 2|y_ℓ|. The gradients of f_ℓ with respect to (h, m) are simply ∇f_ℓ = (c_ℓ⊺~m b_ℓ, b_ℓ⊺~h c_ℓ). Define the Rademacher complexity of a set D ⊆ R^{K+N} as

 C(D) := E sup_{(h,m)∈D} (1/√L) Σ_{ℓ=1}^L ε_ℓ ⟨∇f_ℓ, (h, m)/∥(h, m)∥_2⟩,   (12)

where the ε_ℓ are iid Rademacher random variables, independent of everything else. For a set D, the quantity C(D) is a measure of the width of D around the origin in terms of the gradients of the constraint functions. For example, an equally distributed random set of gradient vectors might lead to a smaller value of C(D).

Our results also depend on a probability p_τ(D) and a positive parameter τ, introduced below:

 pτ(D)=inf(h,m)∈DP(⟨∇fℓ,(h,m)∥(h,m)∥2⟩≥τ). (13)

Intuitively, p_τ(D) quantifies the size of D through the gradient vectors. For a small enough fixed parameter τ, a small value of p_τ(D) means that D is mainly invisible to the gradient vectors.

###### Lemma 2.

Let D be the set of descent directions, already characterized in (11), for which C(D) and p_τ(D) can be determined using (12) and (13). Choose L ≳ (C(D)/(τ p_τ(D)))². Then the solution of the LP in (9) lies in the set (~h, ~m) + S¹ with high probability. (¹For a set S and a vector v, we define v + S := {v + s : s ∈ S}, the set obtained by shifting every element of S by v.)

The proof of this lemma is based on the small ball method developed in [15, 26] and further studied in [18, 17]. The proof mainly repeats the argument in [6], and is provided in the Supplementary material for completeness. We now state the main theorem for the linear program (9). The theorem states that if L ≳ (S_1 + S_2) log²(K + N), then the minimizer of the linear program (9) lies in the set (~h, ~m) + S with high probability.

###### Theorem 2 (Exact recovery).

Suppose we observe the pointwise product y of two vectors w♮ = Bh♮ and x♮ = Cm♮ through the bilinear measurement model in (1), where B ∈ R^{L×K} and C ∈ R^{L×N} are standard Gaussian random matrices. Then the linear program (9) recovers (ĥ, m̂) ∈ (~h, ~m) + S with high probability whenever L ≥ C(S_1 + S_2) log²(K + N), where C is a constant.

In light of Lemma 2, the proof of Theorem 2 reduces to computing the Rademacher complexity C(D) defined in (12), and the tail probability estimate p_τ(D) defined in (13), for the set of descent directions D defined in (11). These quantities are computed in the Supplementary material. The proof of Theorem 1 follows by applying Lemma 1 to Theorem 2.

## Appendix A Supplementary material

### A.1 Proof of Lemma 1

We first show that the feasible set of (2) is contained in the feasible set of (9). We do this by using the fact that a convex set with a smooth boundary is contained in the halfspace defined by the tangent hyperplane at any point of the boundary of the convex set. Note that (2) is equivalent to the formulation

 minimize_{h∈R^K, m∈R^N}  ∥h∥_1 + ∥m∥_1
    subject to  y_ℓ b_ℓ⊺h c_ℓ⊺m ≥ y_ℓ²,
          t_ℓ b_ℓ⊺h ≥ 0,  ℓ = 1, …, L.

Consider a point (~w_ℓ, ~x_ℓ) on the boundary of the convex set defined by the constraints above, and observe that

 {(w_ℓ, x_ℓ) ∈ R² : y_ℓ w_ℓ x_ℓ ≥ y_ℓ², sign(w_ℓ) = t_ℓ} ⊆ {(w_ℓ, x_ℓ) ∈ R² : (y_ℓ ~x_ℓ, y_ℓ ~w_ℓ) · (w_ℓ − ~w_ℓ, x_ℓ − ~x_ℓ) ≥ 0}.   (14)

Plugging in ~w_ℓ = b_ℓ⊺~h and ~x_ℓ = c_ℓ⊺~m, we have that any (h, m) feasible for (2) satisfies

 yℓc⊺ℓ~mb⊺ℓh+yℓb⊺ℓ~hc⊺ℓm≥2y2ℓ,ℓ=1,…,L,

which implies s_ℓ(b_ℓ⊺h c_ℓ⊺~m + b_ℓ⊺~h c_ℓ⊺m) ≥ 2|y_ℓ| for all ℓ. So, the feasible set of (9) contains the feasible set of (2).

Lastly, note that among all points in (~h, ~m) + S, only (~h, ~m) is feasible in (2). So, if (ĥ, m̂) ∈ (~h, ~m) + S solves (9), then (~h, ~m) solves (2).∎

### A.2 Proof of Lemma 2

Define a one-sided loss function

 L(h, m) := (1/L) Σ_{ℓ=1}^L [s_ℓ(2y_ℓ − b_ℓ⊺~h c_ℓ⊺m − c_ℓ⊺~m b_ℓ⊺h)]_+,

where [·]_+ denotes the positive part. The LP in (9) can now be equivalently expressed as

 (^h,^m):=argmin(h,m)∈RK+N ∥h∥1+∥m∥1  subject to  L(h,m)≤0. (15)

We want to show that there is no feasible descent direction around the true solution (~h, ~m). Suppose (~h + δh, ~m + δm) is a feasible perturbation of the proposed optimum (~h, ~m); then from (15) we have

 L(~h+δh,~m+δm)≤0. (16)

We begin by expanding the loss function below

 L(~h + δh, ~m + δm) = (1/L) Σ_{ℓ=1}^L [s_ℓ(2y_ℓ − b_ℓ⊺~h c_ℓ⊺(~m + δm) − c_ℓ⊺~m b_ℓ⊺(~h + δh))]_+
   ≥ (1/L) Σ_{ℓ=1}^L [−s_ℓ b_ℓ⊺~h c_ℓ⊺δm − s_ℓ c_ℓ⊺~m b_ℓ⊺δh]_+.   (17)

Let ψ_τ(a) := min([a]_+, τ). Using the facts that [a]_+ ≥ ψ_τ(a), and that ψ_{ατ}(αa) = αψ_τ(a) for every α > 0 and a ∈ R, we have

 1LL∑ℓ=1[−sℓb⊺ℓ~hc⊺ℓδm−sℓc⊺ℓ~mb⊺ℓδh]+≥1LL∑ℓ=1ψτ∥(δh,δm)∥2(−sℓb⊺ℓ~hc⊺ℓδm−sℓc⊺ℓ~mb⊺ℓδh) =∥(δh,δm)∥2⋅1LL∑ℓ=1ψτ(−sℓ⟨(c⊺ℓ~mbℓ,b⊺ℓ~hcℓ),(δh,δm)∥(δh,δm)∥2⟩