We study the problem of recovering two unknown signals and in from observations , where is a bilinear operator. Let and such that and with and . Let the bilinear operator satisfy
where denotes entrywise product. The bilinear inverse problem (BIP) we consider is to find and from , , and , up to the inherent scaling ambiguity.
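The entrywise-product measurement model and its scaling ambiguity can be sketched numerically. This is an illustrative sketch, not the paper's code: the dimensions, Gaussian bases `B` and `C`, and variable names are hypothetical choices for demonstration only.

```python
import numpy as np

rng = np.random.default_rng(0)

L, K, N = 256, 20, 20            # hypothetical dimensions (assumptions)
B = rng.standard_normal((L, K))  # known basis for the first signal
C = rng.standard_normal((L, N))  # known basis for the second signal
h = rng.standard_normal(K)       # unknown coefficient vector
x = rng.standard_normal(N)       # unknown coefficient vector

# Bilinear measurements: entrywise product of the two subspace signals.
y = (B @ h) * (C @ x)

# Inherent scaling ambiguity: (alpha*h, x/alpha) yields the same measurements.
alpha = 3.7
y_scaled = (B @ (alpha * h)) * (C @ (x / alpha))
assert np.allclose(y, y_scaled)
```

Any nonzero rescaling of one factor, compensated in the other, leaves the measurements unchanged, which is exactly the ambiguity the program resolves by selecting the solution closest to the origin.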
BIPs, in general, have many applications in signal processing and machine learning and include fundamental practical problems like phase retrieval [12, 7, 9], blind deconvolution [3, 29, 16, 2], non-negative matrix factorization [14, 20], self-calibration, blind source separation, dictionary learning, etc. These problems are in general challenging and suffer from identifiability issues that make the solution set non-unique and non-convex. A common identifiability issue, also shared by the BIP in (1), is the scaling ambiguity. In particular, if solves a BIP, then so does for any nonzero . In this paper, we resolve this scaling ambiguity by finding the point in the solution set closest to the origin with respect to the norm.
Another identifiability issue of the BIP in (1) is that if solves (1), then so does , where is the vector of ones. Unlike prior studies like , where the signals are assumed to live in known subspaces, we resolve this structural ambiguity by additionally assuming the signals are sparse with respect to those known bases. Natural choices for such bases include the standard basis, the Discrete Cosine Transform (DCT) basis, and a wavelet basis.
Recent work on sparse BIPs, specifically sparse blind deconvolution, in  provides an exact recovery guarantee for sparse vectors and that satisfy a "peakiness" condition, i.e., for some absolute constant . This result holds with high probability for random measurements if the number of measurements, up to a log factor, satisfies . For general vectors without the peakiness condition, the same work shows exact recovery is possible if the number of measurements, up to a log factor, satisfies .
The main contribution of this paper is an algorithm for the sparse BIP described in (1) that recovers the sparse vectors under near-optimal sample complexity, without requiring the peakiness condition or an initialization. Precisely, we present a convex program stated in the natural parameter space which, in the noiseless setting with random and , exactly recovers the sparse vectors with at most combined nonzero entries with high probability, provided the number of measurements satisfies .
1.1 Convex program and main results
We introduce a convex program written in the natural parameter space for the bilinear inverse problem described in (1). Let with and . Let , and , where and are the th row of and . Also, let and . The convex program we consider to recover is the -BranchHull program
The motivation for the feasible set in program (2) follows from the observation that each measurement defines a hyperbola in . As shown in Figure 3, the sign information restricts to one branch of the hyperbola. The feasible set in (2) corresponds to the convex hull of the selected branch of the hyperbola for each . The feasible set is thus convex, as it is an intersection of convex sets.
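The convexity of the hull of a single hyperbola branch can be checked numerically. The following is a small illustrative check, not from the paper: for one measurement with positive value and positive sign, it samples points on the branch and verifies that convex combinations stay in the region on or above the hyperbola (its convex hull).

```python
import numpy as np

rng = np.random.default_rng(1)

# One measurement y > 0 with positive sign selects the branch
# {(u, v) : u * v = y, u > 0}.  Its convex hull is {(u, v) : u*v >= y, u > 0}:
# convex combinations of branch points stay on or above the hyperbola.
y = 2.0
t = np.exp(rng.uniform(-2, 2, size=50))     # branch points (t, y/t), t > 0
pts = np.column_stack([t, y / t])

for _ in range(1000):
    i, j = rng.integers(0, len(pts), size=2)
    lam = rng.uniform()
    u, v = lam * pts[i] + (1 - lam) * pts[j]
    assert u > 0 and u * v >= y - 1e-12      # remains in the convex hull
```

The inequality follows from the Cauchy–Schwarz (or AM–GM) inequality, which is why the convex hull of each branch yields a convex constraint.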
The objective function in (2) is an minimization over that selects a sparse point with . This scaling of the minimizer of (2) is justified by the observation that is feasible for any nonzero . So, under successful recovery, the minimizer of (2) is .
Our main result is that under the structural assumptions that and live in random subspaces with and containing at most and nonzero entries, the -BranchHull program (2) recovers , and (to within the scaling ambiguity) with high probability, provided the number of measurements, up to log factors, satisfies .
1.2 Prior art for bilinear inverse problems
Recent approaches to solving bilinear inverse problems like blind deconvolution and phase retrieval have been to lift the problem into a low rank matrix recovery task or to formulate an optimization program in the natural parameter space. Lifting transforms the problem of recovering and from bilinear measurements into the problem of recovering a low rank matrix from linear measurements. The low rank matrix can then be recovered using a semidefinite program. The result in  for blind deconvolution showed that if and are representations of the target signals with respect to Fourier and Gaussian subspaces, respectively, then the lifting method successfully recovers the low rank matrix. The recovery occurs with high probability under near optimal sample complexity. Unfortunately, solving the semidefinite program is prohibitively computationally expensive because it operates in a high-dimensional space. Also, it is not clear how to enforce additional structure like sparsity of and in the lifted formulation in a way that allows optimal sample complexity [23, 28].
In comparison to the lifting approach for blind deconvolution and phase retrieval, methods that operate in the natural parameter space, like alternating minimization and gradient descent based methods, are computationally efficient and also enjoy rigorous recovery guarantees under optimal or near optimal sample complexity [22, 8, 27, 30]. In fact, the work in  for sparse blind deconvolution is based on alternating minimization: the authors successively approximate the sparse vectors while enforcing the low rank property of the lifted matrix. However, because these methods are non-convex, convergence to the global optimum requires a good initialization [32, 10, 22].
PhaseMax is a linear program which has been proven to find the target signal in phase retrieval under optimal sample complexity if a good anchor vector is available. As with alternating minimization and gradient descent based approaches, PhaseMax requires a good initialization; however, in PhaseMax the initialization is part of the optimization program, whereas in alternating minimization it is part of the algorithmic implementation. BranchHull is a convex program which solves the BIP described in (1), excluding the sparsity assumption, under optimal sample complexity. Like the -BranchHull presented in this paper, BranchHull does not require an initialization but requires the sign information of the signals.
The -BranchHull program (2) combines the strengths of both the lifting method and the gradient descent based methods. Specifically, the -BranchHull program is a convex program that operates in the natural parameter space, without a need for an initialization, and without restrictive assumptions on the class of recoverable signals. These strengths come at the cost of requiring the sign information of the target signals and . However, the sign assumption can be justified in imaging applications where the goal might be to recover the pixel values of a target image, which are non-negative. Also, as in PhaseMax, the sign information can be thought of as an anchor vector which anchors the solution to one of the branches of the hyperbolic measurements.
1.3 Extension to noise and outliers
Extending the theory of the -BranchHull program (2) to the case with noise is important, as most real data contain significant noise. Formulation (2) may be particularly susceptible to noise that changes the sign of even a single measurement. For the bilinear inverse problem as described in (1) with small dense noise and arbitrary outliers, we propose the following robust -BranchHull program
The slack variable controls the shape of the feasible set. For measurements with incorrect sign, the corresponding slack variable shifts the feasible set so that the target signal is feasible. In the outlier case, the penalty promotes sparsity of the slack variable . We implement a slight variation of the above program, detailed in Section 1.4, to remove distortions from real and synthetic images.
1.4 Total variation extension of -BranchHull
The robust -BranchHull program (3) is flexible and can be altered to remove distortions from an otherwise piecewise constant signal. In the case where is a piecewise constant signal, is a distortion signal and is the distorted signal, the total variation version (4) of the robust BranchHull program (3), under successful recovery, produces the piecewise constant signal , up to a scaling.
In (4), is a total variation operator and is the norm of the vector containing pairwise difference of neighboring elements of the target signal . We implement (4) to remove distortions from images in Section 3.2.
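The pairwise-difference total variation operator described above can be sketched as a small matrix construction. This is an illustrative sketch under the stated definition (the norm of neighboring differences); the function name and the example signal are ours, not the paper's.

```python
import numpy as np

def tv_operator(n: int) -> np.ndarray:
    """Pairwise-difference (total variation) operator for length-n signals."""
    D = np.zeros((n - 1, n))
    idx = np.arange(n - 1)
    D[idx, idx] = -1.0       # subtract current element
    D[idx, idx + 1] = 1.0    # add next element
    return D

# A piecewise constant signal has small total variation:
w = np.array([1.0, 1.0, 1.0, 4.0, 4.0, 2.0])
D = tv_operator(len(w))
tv_norm = np.abs(D @ w).sum()   # l1 norm of neighboring differences
assert tv_norm == 5.0           # |4-1| + |2-4|
```

Penalizing this norm favors piecewise constant reconstructions, which is why the total variation variant suits the distortion-removal experiments in Section 3.2.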
Vectors and matrices are written with boldface, while scalars and entries of vectors are written in plain font. For example, is the entry of the vector . We write as the vector of all ones with dimensionality appropriate for the context. We write as the identity matrix. For any , let such that . For any matrix , let be the Frobenius norm of . For any vector , let be the number of non-zero entries in . For and , is the corresponding vector in , and .
In this section, we present an Alternating Direction Method of Multipliers (ADMM) implementation of an extension of the robust -BranchHull program (3). The ADMM implementation of the -BranchHull program (2) is similar to the ADMM implementation of (5) and we leave it to the reader. The extension of the robust -BranchHull program we consider is
where for some . The above extension reduces to the robust -BranchHull program if . Recalling that and , we make use of the following notation
Using this notation, our convex program can be compactly written as
Here is the convex feasible set of . Introducing a new variable, the resulting convex program can be written as
We may now form the scaled ADMM steps as follows
where in (6) is the indicator function on such that if and infinity otherwise. We would like to note that the first three steps of the proposed ADMM scheme can be presented in closed form. The update in (6) is the following projection
where is the projection of onto . Details of computing the projection onto C are presented in the Supplementary material. The update in (7) can be written in terms of the soft-thresholding operator
where and is the th entry of . Finally, the update in (8) takes the following form
In our implementation of the ADMM scheme, we initialize the algorithm with , , and .
3 Numerical Experiments
In this section, we provide numerical experiments on synthetic and real data. The synthetic experiment numerically verifies Theorem 1 with a low scaling constant. The experiment on real data shows that the total variation -BranchHull program can be used to remove distortions from an image.
3.1 Phase Portrait
We first show a phase portrait that verifies Theorem 1. Consider the following measurements: fix , and let . Let the target signal be such that both and have nonzero entries with the nonzero indices randomly selected and set to . Let and be the number of nonzero entries in and , respectively. Let and such that and . Lastly, let and .
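The synthetic data generation described above can be sketched as follows. This is an illustrative sketch, not the paper's code: the dimensions, sparsity levels, and the choice of setting nonzero entries to 1 are assumptions made for demonstration, and the sign vector `s` is one plausible form of the known sign information.

```python
import numpy as np

rng = np.random.default_rng(2)

L, K, N = 128, 64, 64        # hypothetical problem sizes (assumptions)
S1, S2 = 5, 5                # sparsity levels (assumptions)

# Sparse coefficient vectors with random supports; nonzeros set to 1.
h = np.zeros(K)
h[rng.choice(K, size=S1, replace=False)] = 1.0
x = np.zeros(N)
x[rng.choice(N, size=S2, replace=False)] = 1.0

B = rng.standard_normal((L, K))   # random subspace bases
C = rng.standard_normal((L, N))
y = (B @ h) * (C @ x)             # entrywise-product measurements
s = np.sign(B @ h)                # assumed form of the known sign data

assert np.count_nonzero(h) == S1 and np.count_nonzero(x) == S2
```

Sweeping the sparsity levels and the number of measurements over a grid, and recording the fraction of successful recoveries per cell, produces the phase portrait.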
Figure 4 shows the fraction of successful recoveries from 10 independent trials using (2) for the bilinear inverse problem (1) from data as described above. Let be the output of (2) and let be the candidate minimizer. We solve (2) using an ADMM implementation similar to the ADMM implementation detailed in Section 2 with the step size parameter . For each trial, we say (2) successfully recovers the target signal if . Black squares correspond to no successful recovery and white squares correspond to 100% successful recovery. The line corresponds to with and indicates that the sample complexity constant in Theorem 1 is not very large.
3.2 Distortion removal from images
We use the total variation BranchHull program (4) to remove distortions from real images. In the experiments, the observation is the column-wise vectorization of the image , the target signal is the vectorization of the piecewise constant image , and corresponds to the distortions in the image. We use (4) to recover piecewise constant target images like in the foreground of Figure 4(a) with , where in block form. Here, and with
We now show two experiments on real images. The first image, shown in Figure 4(a), was captured using a camera and resized to a image. The measurement is the vectorization of the image with . Let be the identity matrix. Let be the inverse DCT matrix. Let with the first column set to and remaining columns randomly selected from columns of without replacement. The matrix is scaled so that . The vector of known sign is set to . Let be the output of (4) with and . Figure 4(b) corresponds to and shows that the object in the center was successfully recovered.
The second real image, shown in Figure 4(c), is an image of rice grains. The size of the image is . The measurement is the vectorization of the image with . Let be the identity matrix. Let with the first column set to . The remaining columns of are sampled from Bessel functions of the first kind, with each column corresponding to a fixed . Specifically, let with . For each remaining column of , let and . The matrix is scaled so that . The vector of known sign is set to . Let be the output of (4) with and . Figure 4(d) corresponds to .
4 Proof Outline
In this section, we provide a proof of Theorem 1 by considering a related linear program with a larger feasible set. Let with and . Let , and . Also, let and . We will show that (2) recovers such that .
Observe that if is a minimizer of (9) then so are all the points .
Our strategy will be to show that for any feasible perturbation the objective of the linear program (9) strictly increases, where is the orthogonal complement of the subspace . This will be equivalent to showing that the solution of (9) lies in the set .
The subgradient of the -norm at the proposed solution is
where and denote the support of non-zeros in , and , respectively. To show the linear program recovers a solution , it suffices to show that the following set of descent directions
does not contain any vector that is consistent with the constraints. We do this by quantifying the "width" of the set through a Rademacher complexity, and a probability that the gradients of the constraint functions lie in a certain half space. This allows us to use the small ball method [15, 26] to ultimately show that it is highly unlikely for descent directions in to meet the constraints in (9). We now concretely state the definitions of the Rademacher complexity and probability term mentioned above.
Define linear functions
The linear constraints in the LP (9) can be expressed as . The gradients of w.r.t. are then simply . Define the Rademacher complexity of a set as
where are i.i.d. Rademacher random variables independent of everything else. For a set , the quantity is a measure of the width of around the origin in terms of the gradients of the constraint functions. For example, an equally distributed random set of gradient functions might lead to a smaller value of .
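The Rademacher complexity of a set with respect to a collection of gradient vectors can be estimated by Monte Carlo simulation. The sketch below is a toy illustration, not from the paper: it uses a finite set of candidate directions as a surrogate for the set, and all names and dimensions are our assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

def rademacher_complexity(gradients: np.ndarray, directions: np.ndarray,
                          n_trials: int = 2000) -> float:
    """Monte Carlo estimate of E[ sup_v (1/L) sum_l eps_l <g_l, v> ]
    over a finite surrogate set of directions v."""
    L = gradients.shape[0]
    vals = np.empty(n_trials)
    for t in range(n_trials):
        eps = rng.choice([-1.0, 1.0], size=L)        # Rademacher signs
        # inner product of the sign-weighted gradient sum with each v
        scores = (directions @ (gradients.T @ eps)) / L
        vals[t] = scores.max()                        # sup over the finite set
    return vals.mean()

# Toy example: 50 random gradient vectors, 200 unit-norm candidate directions.
G = rng.standard_normal((50, 10))
V = rng.standard_normal((200, 10))
V /= np.linalg.norm(V, axis=1, keepdims=True)
est = rademacher_complexity(G, V)
assert est > 0
```

A wider set of directions relative to the gradient vectors yields a larger estimate, matching the interpretation of this quantity as a width of the descent set.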
Our results also depend on a probability , and a positive parameter introduced below
Intuitively, quantifies the size of through the gradient vectors. For a small enough fixed parameter, a small value of means that is mainly invisible to the gradient vectors.
Let be the set of descent directions, already characterized in (4), for which , and can be determined using (12) and (13). Choose for any . Then the solution of the LP in (9) lies in the set with probability at least . (For a set and a vector , we define by the set obtained by incrementing every element of by .)
The proof of this lemma is based on the small ball method developed in [15, 26] and further studied in [18, 17]. The proof mainly repeats the argument in , and is provided in the supplementary material for completeness. We now state the main theorem for the linear program (9). The theorem states that if , then the minimizer of the linear program (9) is in the set with high probability.
Theorem 2 (Exact recovery).
), and the tail probability estimate defined in (13) of the set of descent directions defined in (4). These quantities are computed in the supplementary material. The proof of Theorem 1 follows by applying Lemma 1 to Theorem 2.
-  A. Aghasi, A. Ahmed, and P. Hand. Branchhull: Convex bilinear inversion from the entrywise product of signals with known signs. arXiv preprint arXiv:1312.0525v2, 2016.
-  A. Aghasi, B. Heshmat, A. Redo-Sanchez, J. Romberg, and R. Raskar. Sweep distortion removal from terahertz images via blind demodulation. Optica, 3(7):754–762, 2016.
-  A. Ahmed, B. Recht, and J. Romberg. Blind deconvolution using convex programming. IEEE Trans. Inform. Theory, 60(3):1711–1732, 2014.
-  M. G. Akritas, S. Lahiri, and D. N. Politis. Topics in nonparametric statistics. Springer, 2016.
-  S. Bahmani and J. Romberg. Phase retrieval meets statistical learning theory: A flexible convex relaxation. arXiv preprint arXiv:1610.04210, 2016.
-  S. Bahmani and J. Romberg. Anchored regression: Solving random convex equations via convex programming. arXiv preprint arXiv:1702.05327, 2017.
-  E. Candès and X. Li. Solving quadratic equations via phaselift when there are about as many equations as unknowns. Found. Comput. Math., pages 1–10, 2012.
-  E. Candès, X. Li, and M. Soltanolkotabi. Phase retrieval via wirtinger flow: Theory and algorithms. IEEE Trans. Inform. Theory, 61(4):1985–2007, 2015.
-  E. Candès, T. Strohmer, and V. Voroninski. Phaselift: Exact and stable signal recovery from magnitude measurements via convex programming. Commun. Pure Appl. Math., 66(8):1241–1274, 2013.
-  Y. Chen and E. Candes. Solving random quadratic systems of equations is nearly as easy as solving linear systems. In Advances Neural Inform. Process. Syst., pages 739–747, 2015.
-  P. D. O'Grady, B. A. Pearlmutter, and S. T. Rickard. Survey of sparse and non-sparse methods in source separation. International Journal of Imaging Systems and Technology, 15(1):18–33, 2005.
-  J. R. Fienup. Phase retrieval algorithms: a comparison. Applied optics, 21(15):2758–2769, 1982.
-  T. Goldstein and C. Studer. Phasemax: Convex phase retrieval via basis pursuit. arXiv preprint arXiv:1610.07531, 2016.
-  P. O. Hoyer. Non-negative matrix factorization with sparseness constraints. Journal of machine learning research, 5(Nov):1457–1469, 2004.
-  V. Koltchinskii and S. Mendelson. Bounding the smallest singular value of a random matrix without concentration. Int. Math. Research Notices, 2015(23):12991–13008, 2015.
-  D. Kundur and D. Hatzinakos. Blind image deconvolution. IEEE signal processing magazine, 13(3):43–64, 1996.
-  G. Lecué and S. Mendelson. Regularization and the small-ball method ii: complexity dependent error rates. The Journal of Machine Learning Research, 18(1):5356–5403, 2017.
-  G. Lecué, S. Mendelson, et al. Regularization and the small-ball method i: sparse recovery. The Annals of Statistics, 46(2):611–641, 2018.
-  M. Ledoux and M. Talagrand. Probability in Banach Spaces: isoperimetry and processes. Springer Science & Business Media, 2013.
-  D. D. Lee and H. S. Seung. Algorithms for non-negative matrix factorization. In Advances in neural information processing systems, pages 556–562, 2001.
-  K. Lee, Y. Wu, and Y. Bresler. Near optimal compressed sensing of a class of sparse low-rank matrices via sparse power factorization. arXiv preprint arXiv:1702.04342, 2017.
-  X. Li, S. Ling, T. Strohmer, and K. Wei. Rapid, robust, and reliable blind deconvolution via nonconvex optimization. arXiv preprint arXiv:1606.04933, 2016.
-  X. Li and V. Voroninski. Sparse signal recovery from quadratic measurements via convex programming. SIAM Journal on Mathematical Analysis, 45(5):3019–3033, 2013.
-  S. Ling and T. Strohmer. Self-calibration and biconvex compressive sensing. Inverse Problems, 31(11):115002, 2015.
-  C. McDiarmid. On the method of bounded differences. Surveys in combinatorics, 141(1):148–188, 1989.
-  S. Mendelson. Learning without concentration. In Conference on Learning Theory, pages 25–39, 2014.
-  P. Netrapalli, P. Jain, and S. Sanghavi. Phase retrieval using alternating minimization. In Advances Neural Inform. Process. Syst., pages 2796–2804, 2013.
-  S. Oymak, A. Jalali, M. Fazel, Y. C. Eldar, and B. Hassibi. Simultaneously structured models with application to sparse and low-rank matrices. IEEE Trans. Inform. Theory, 61(5):2886–2908, 2015.
-  T. G. Stockham, T. M. Cannon, and R. B. Ingebretsen. Blind deconvolution through digital signal processing. Proceedings of the IEEE, 63(4):678–692, 1975.
-  J. Sun, Q. Qu, and J. Wright. A geometric analysis of phase retrieval. In Information Theory (ISIT), 2016 IEEE International Symposium on, pages 2379–2383. IEEE, 2016.
-  I. Tosic and P. Frossard. Dictionary learning. IEEE Signal Processing Magazine, 28(2):27–38, 2011.
-  S. Tu, R. Boczar, M. Simchowitz, M. Soltanolkotabi, and B. Recht. Low-rank solutions of linear matrix equations via procrustes flow. arXiv preprint arXiv:1507.03566, 2015.
-  S. van de Geer and J. Lederer. The bernstein–orlicz norm and deviation inequalities. Probability theory and related fields, 157(1-2):225–250, 2013.
-  A. W. van der Vaart and J. A. Wellner. Weak Convergence and Empirical Processes: With Applications to Statistics. Springer, 1996.
Appendix A Supplementary material
A.1 Proof of Lemma 1
We first show that the feasible set of (2) is contained in the feasible set of (9). We do this by using the fact that a convex set with a smooth boundary is contained in the halfspace defined by the tangent hyperplane at any point of its boundary. Note that (2) is equivalent to the formulation
Consider a point on the boundary of the convex set defined by the constraints above and observe that
Plugging in and , we have that any feasible satisfies
A.2 Proof of Lemma 2
Define a one-sided loss function:
where denotes the positive part.
where denotes the positive side. The LP in (9) can now be equivalently expressed as
We want to show that there is no feasible descent direction around the true solution . Since is a feasible perturbation from the proposed optimal , we have from (15)
We begin by expanding the loss function below
Let . Using the fact that , and that for every , and , , we have