Low-Rank Phase Retrieval via Variational Bayesian Learning

11/05/2018 ∙ by Kaihui Liu, et al.

In this paper, we consider the problem of low-rank phase retrieval whose objective is to estimate a complex low-rank matrix from magnitude-only measurements. We propose a hierarchical prior model for low-rank phase retrieval, in which a Gaussian-Wishart hierarchical prior is placed on the underlying low-rank matrix to promote the low-rankness of the matrix. Based on the proposed hierarchical model, a variational expectation-maximization (EM) algorithm is developed. The proposed method is less sensitive to the choice of the initialization point and works well with random initialization. Simulation results are provided to illustrate the effectiveness of the proposed algorithm.


I Introduction

Phase retrieval has attracted a lot of interest for its practical significance in many applications, including X-ray crystallography [1, 2], optics [3], astronomy [4], diffraction imaging [5], acoustics [6], quantum mechanics [7] and quantum information [8]. In these applications, the phase information of the measurements is difficult to obtain or completely missing, and one hopes to recover the unknown signal from only the intensity of the measurements. The problem of recovering a complex signal x from magnitude-only measurements can be cast as

find x   s.t.   y = |Ax|   (1)

where A is a known complex measurement matrix, y collects the magnitude measurements, and |·| is applied elementwise.
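To make the measurement model concrete, here is a small sketch (the symbol choices A, x, y are our own) that generates magnitude-only measurements and verifies that a global phase rotation of the signal leaves them unchanged:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 16, 64                       # signal dimension, number of measurements

# Complex Gaussian measurement matrix and ground-truth signal
A = (rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))) / np.sqrt(2)
x = rng.standard_normal(n) + 1j * rng.standard_normal(n)

y = np.abs(A @ x)                   # magnitude-only measurements

# A global phase rotation of x yields exactly the same measurements,
# so x is identifiable at best up to a global phase factor.
assert np.allclose(np.abs(A @ (x * np.exp(1j * 0.7))), y)
```

This global phase ambiguity is why recovery is only ever defined up to a unit-modulus factor.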

A variety of phase retrieval algorithms have been proposed over the past few years. One of the earliest is the Gerchberg-Saxton algorithm [9], which is based on alternating projection and iterates between the unknown phases of the measurements and the unknown signal. In [10], an alternating minimization technique with spectral initialization was developed. More recently, a convex relaxation-based method with a nice theoretical guarantee, named PhaseLift, was proposed in [11]. PhaseLift transforms phase retrieval into a semidefinite program via convex relaxation. In the process, however, the signal of interest is lifted to a higher-dimensional space, so PhaseLift incurs a high computational complexity. To avoid this drawback, nonconvex phase retrieval algorithms that work directly in the original signal space were developed, e.g. the alternating minimization algorithm [10] and the Wirtinger flow algorithm [12] and its variants [13, 14, 15, 16, 17, 18, 19, 20]. Almost all of these nonconvex methods require a sufficiently accurate initial point to guarantee finding a globally optimal solution. One exception is the gradient descent method with random initialization proposed in [21], which works as long as the sample complexity is sufficiently large. More recently, a non-lifting convex relaxation-based method called PhaseMax [22, 23, 24] was proposed, which relaxes the nonconvex equality constraints in (1) to convex inequality constraints and solves a linear program in the natural parameter space.

In recent years, signals with more complex structures have been considered in phase retrieval problems. For example, in many applications the signal of interest is sparse or approximately sparse, and this sparse structure can be exploited to significantly reduce the sample complexity for exact phase recovery [25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35]. In other applications, one may collect intensity measurements of a time-varying signal at different time instants, resulting in a multiple measurement vector (MMV) model. Due to the temporal correlation between time instants, the signal of interest, arranged in matrix form, usually exhibits a low-rank structure [36]. Such a low-rank phase retrieval problem was first introduced in [36], where an alternating minimization method was developed. In this paper, we address the low-rank phase retrieval problem from a Bayesian perspective. In particular, we propose a hierarchical prior model in which a Gaussian-Wishart hierarchical prior, originally introduced in [37], is employed to promote the low-rankness of the matrix. Based on the hierarchical Bayesian model, a variational expectation-maximization (EM) algorithm is developed with the phase information treated as unknown deterministic parameters. The proposed method is less sensitive to the choice of the initialization point and works well with random initialization. Simulation results show that our method achieves accurate and robust recovery.

The rest of the paper is organized as follows. In Section II, a hierarchical Gaussian prior model for low-rank phase retrieval is introduced. Based on this hierarchical model, a variational Bayesian learning method is developed in Section III. Simulation results are provided in Section IV, followed by concluding remarks in Section V.

II Bayesian Modeling

We consider a low-rank phase retrieval problem which was first introduced in [36]. Suppose we have a low-rank matrix X = [x_1, …, x_L] whose rank r is much smaller than its dimensions. For each column x_k of the low-rank matrix, we collect nonlinear intensity measurements of the form

y_{i,k} = |a_{i,k}^H x_k| + n_{i,k},   i = 1, …, m   (2)

where a_{i,k} is the measurement vector and n_{i,k} denotes the additive noise. The problem of interest is to recover the low-rank matrix X from the phaseless measurements {y_{i,k}}. Since we only have magnitude measurements, each column x_k of the low-rank matrix can only be recovered up to a phase ambiguity. Naturally, this low-rank phase retrieval problem can be formulated as

min_X  Σ_{i,k} ( y_{i,k} − |a_{i,k}^H x_k| )²   s.t.  rank(X) ≤ r   (3)

The optimization (3), however, is difficult to solve due to its nonconvexity. In this paper, we consider modeling the low-rank phase retrieval problem within a Bayesian framework. To facilitate our modeling, similar to [38], we introduce a set of deterministic parameters {θ_{i,k}}, where θ_{i,k} represents the missing phase of the observation y_{i,k}.

Therefore we can write

y_{i,k} e^{jθ_{i,k}} = a_{i,k}^H x_k + n_{i,k}   (4)

and the measurements in vector form can be written as

z_k ≜ y_k ∘ e^{jθ_k} = A_k^H x_k + n_k   (5)

where

A_k ≜ [a_{1,k}, …, a_{m,k}]   (6)

and ∘ denotes the elementwise product. We assume the entries of the noise n_k are independent and identically distributed (i.i.d.) random variables following a Gaussian distribution with zero mean and variance β^{-1}. To learn β, we place a Gamma hyperprior over β, i.e.

p(β) = Gamma(β | a, b) = Γ(a)^{-1} b^a β^{a−1} e^{−bβ}   (7)

where Γ(a) is the Gamma function. The parameters a and b are set to small values, which makes the Gamma distribution a non-informative prior.

To promote a low-rank solution of X, we place a two-layer hierarchical Gaussian prior [37] on X. It was shown in [37] that this two-layer hierarchical Gaussian prior model has the potential to encourage a low-rank solution. Specifically, in the first layer, the columns of X are assumed mutually independent and follow a common Gaussian distribution:

p(x_k | M) = N(x_k | 0, M^{-1})   (8)

where M is the precision matrix. The second layer specifies a Wishart distribution as a hyperprior over the precision matrix M:

p(M) = W(M | W, ν)   (9)

where ν and W denote the degrees of freedom and the scale matrix of the Wishart distribution, respectively. For the standard Wishart distribution, ν is an integer no smaller than the dimension of M. Nevertheless, if an improper prior¹ is allowed [40], the choice of ν can be relaxed to real values, which offers more flexibility.

¹In Bayesian inference, improper prior distributions can often be used provided that the corresponding posterior distribution can be correctly normalized [39].
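The two-layer prior can be simulated directly. The following real-valued sketch (the hyperparameter choices ν = n and W = I/n are ours, purely for illustration) draws a precision matrix from the Wishart hyperprior and then draws the columns from the induced Gaussian:

```python
import numpy as np
from scipy.stats import wishart

# Generative view of the two-layer prior (real-valued sketch; the paper's
# matrix is complex): draw a precision matrix from a Wishart hyperprior,
# then draw the columns i.i.d. Gaussian with that precision.
rng = np.random.default_rng(5)
n, L = 6, 20
nu, W = n, np.eye(n) / n            # degrees of freedom and scale (our choice)

M = wishart(df=nu, scale=W).rvs(random_state=rng)   # precision matrix draw
cov = np.linalg.inv(M)
cov = (cov + cov.T) / 2             # symmetrize against round-off
X = rng.multivariate_normal(np.zeros(n), cov, size=L).T   # columns x_k

assert X.shape == (n, L)
# The Wishart draw is symmetric positive definite with probability one
assert np.all(np.linalg.eigvalsh((M + M.T) / 2) > 0)
```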

For clarity, we plot the graphical model for our low-rank phase-retrieval problem in Fig. 1.

Fig. 1: Graphical model for the proposed Bayesian low-rank phase-retrieval method, in which double circles denote observable variables, single circles denote hidden variables, boxes denote deterministic hyperparameters, and diamonds denote pre-specified hyperparameters.

III Variational Bayesian Inference

III-A Review of the Variational Expectation-Maximization Methodology

We first provide a brief review of the variational expectation-maximization (VEM) methodology. More details about VEM can be found in [41]. In a probabilistic model, let y, z and θ denote the observed data, the hidden variables and the deterministic parameters, respectively. It is well known that the log-likelihood function can be decomposed into the sum of the following two terms [41]

ln p(y; θ) = L(q, θ) + KL(q ∥ p)   (10)

where

L(q, θ) = ∫ q(z) ln ( p(y, z; θ) / q(z) ) dz   (11)

and

KL(q ∥ p) = −∫ q(z) ln ( p(z | y; θ) / q(z) ) dz   (12)

where q(z) is an arbitrary probability density function and KL(q ∥ p) is the Kullback-Leibler divergence between q(z) and the posterior p(z | y; θ). Since KL(q ∥ p) ≥ 0, L(q, θ) is a lower bound on ln p(y; θ). It is usually difficult to directly maximize the log-likelihood function ln p(y; θ). To address this difficulty, we instead search for θ by maximizing its lower bound L(q, θ). This naturally leads to a variational EM methodology, which alternately maximizes L(q, θ) with respect to θ and q. First, given the current estimate of q, we update θ by maximizing L(q, θ), which is referred to as the maximization step. Then, given the current estimate of θ, we maximize L(q, θ) with respect to q, which is referred to as the expectation step. Specifically, to facilitate the optimization of q, a factorized form of q(z) is assumed [41], i.e. q(z) = Π_i q_i(z_i). The maximization of L(q, θ) with respect to q can be conducted in an alternating fashion for each latent variable, which yields [41]

ln q_k(z_k) = ⟨ln p(y, z; θ)⟩_{j≠k} + const   (13)

where ⟨·⟩_{j≠k} denotes an expectation with respect to the distributions q_j(z_j) for all j ≠ k.
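The E/M alternation above can be illustrated on a toy model of our own (not from the paper): y_i = z + e_i with a hidden scalar z ~ N(0, 1) and unknown noise variance θ. Here the E-step posterior is available in closed form (so the bound is tight), and the M-step maximizes the bound over θ:

```python
import numpy as np

# Toy variational-EM illustration (our example, not the paper's model):
# y_i = z + e_i, hidden z ~ N(0, 1), noise e_i ~ N(0, theta).
rng = np.random.default_rng(1)
N, theta_true = 200, 0.5
y = rng.normal() + rng.normal(0.0, np.sqrt(theta_true), size=N)

theta = 5.0                          # deliberately poor initialization
for _ in range(50):
    # E-step: q(z) is the exact Gaussian posterior under the current theta,
    # so the KL term in (10) vanishes and the bound touches the likelihood.
    v = 1.0 / (1.0 + N / theta)      # posterior variance of z
    mu = v * y.sum() / theta         # posterior mean of z
    # M-step: closed-form maximizer of the lower bound over theta
    theta = np.mean((y - mu) ** 2) + v

# The alternation recovers the noise variance far better than the init
assert abs(theta - theta_true) < abs(5.0 - theta_true)
```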

III-B Proposed Algorithm

We now proceed to perform Bayesian inference for our problem. Let z ≜ {X, M, β} denote the hidden variables, and θ ≜ {θ_{i,k}} denote the unknown deterministic parameters in our graphical model. Our objective is to obtain an estimate of θ, along with the approximate posterior distribution for the hidden variables. We assume that q has a factorized form over the hidden variables, i.e.

q(X, M, β) = q(X) q(M) q(β)   (14)

As discussed in the previous subsection, the maximization of the lower bound can be conducted in an alternating fashion, which leads to

E-step: update q(X), q(M) and q(β) via (13) with θ fixed;

M-step: update θ by maximizing L(q, θ) with q fixed;

where the expectations required in these updates are taken with respect to the current variational distributions. Details of the Bayesian inference are provided next.

E-Step: Update of q(X): Since the columns of X are mutually independent, the approximate posterior distribution of each column can be calculated independently as

ln q(x_k) = ⟨ln p(z_k | x_k, β) + ln p(x_k | M)⟩ + const   (15)

From (15), it can be seen that x_k follows a Gaussian distribution

q(x_k) = N(x_k | μ_k, C_k)   (16)

with its mean and covariance matrix given as

μ_k = ⟨β⟩ C_k A_k z_k   (17)
C_k = ( ⟨M⟩ + ⟨β⟩ A_k A_k^H )^{-1}   (18)

where z_k denotes the phase-augmented measurement vector in (5).

E-Step: Update of q(M): The approximate posterior can be obtained as

ln q(M) = ⟨ln p(X | M) + ln p(M)⟩ + const   (19)

From the above equation, we see that the posterior of M follows a Wishart distribution, that is

q(M) = W(M | Ŵ, ν̂)   (20)

where

Ŵ = ( W^{-1} + Σ_{k=1}^{L} ( μ_k μ_k^H + C_k ) )^{-1}   (21)
ν̂ = ν + L   (22)

E-Step: Update of q(β): The variational optimization of q(β) yields

ln q(β) = ⟨ln p({z_k} | X, β) + ln p(β)⟩ + const   (23)

It is easy to verify that β follows a Gamma distribution

q(β) = Gamma(β | ã, b̃)   (24)

where

ã = a + mL   (25)
b̃ = b + Σ_{k=1}^{L} ⟨ ‖z_k − A_k^H x_k‖² ⟩   (26)

M-Step: Update of θ: Taking the partial derivative of L(q, θ) with respect to θ_{i,k} yields

∂L(q, θ)/∂θ_{i,k} = 2⟨β⟩ y_{i,k} ( Im(w_{i,k}) cos θ_{i,k} − Re(w_{i,k}) sin θ_{i,k} )   (27)

where w_{i,k} ≜ a_{i,k}^H μ_k. Setting the above partial derivative equal to zero, we have

tan θ_{i,k} = Im(w_{i,k}) / Re(w_{i,k})   (28)

Then an estimate of θ_{i,k} can be obtained as

θ̂_{i,k} = arctan( Im(w_{i,k}) / Re(w_{i,k}) )   (29)

where Im(·) and Re(·) denote the imaginary and real parts, respectively.

For clarity, we summarize our algorithm as follows.

Algorithm 1: LRPR-Variational EM
while not converged do
E-step:
 Update q(X) via (16) with q(M), q(β) and θ fixed;
 Update q(M) via (20) with q(X), q(β) and θ fixed;
 Update q(β) via (24) with q(X), q(M) and θ fixed;
M-step:
 Estimate θ via (29).
end while
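Algorithm 1 can be sketched numerically as follows. This is only an illustration under our reconstructed notation (A_k for the k-th measurement matrix, μ_k and C_k for the posterior mean and covariance, ⟨M⟩ and ⟨β⟩ for posterior expectations) and with hyperparameters of our own choosing, not the authors' reference implementation:

```python
import numpy as np

rng = np.random.default_rng(2)
n, m, L, r = 8, 60, 10, 2           # dim, measurements/column, columns, rank

# Ground-truth rank-r matrix X = U V^H and per-column measurement matrices A_k
U = (rng.standard_normal((n, r)) + 1j * rng.standard_normal((n, r))) / np.sqrt(2)
V = (rng.standard_normal((L, r)) + 1j * rng.standard_normal((L, r))) / np.sqrt(2)
Xtrue = U @ V.conj().T
A = (rng.standard_normal((L, n, m)) + 1j * rng.standard_normal((L, n, m))) / np.sqrt(2)
Y = np.abs(np.einsum("knm,nk->mk", A.conj(), Xtrue))  # y_{i,k} = |a_{i,k}^H x_k|

def fit_error(est):
    """Relative misfit between |A_k^H est_k| and the observed magnitudes."""
    Yhat = np.abs(np.einsum("knm,nk->mk", A.conj(), est))
    return np.linalg.norm(Yhat - Y) / np.linalg.norm(Y)

# Hyperparameters (our choices) and a fully random initialization
a0 = b0 = 1e-6
nu, W = n, np.eye(n) / n
Eb, EM = 1.0, np.eye(n)             # <beta>, <M>
mu = rng.standard_normal((n, L)) + 1j * rng.standard_normal((n, L))
Theta = np.zeros((m, L))
fit0 = fit_error(mu)

for _ in range(100):
    Z = Y * np.exp(1j * Theta)      # phase-augmented measurements z_k
    S = np.zeros((n, n), dtype=complex)
    resid = 0.0
    for k in range(L):
        Ak = A[k]                                        # n x m
        C = np.linalg.inv(EM + Eb * (Ak @ Ak.conj().T))  # covariance, cf. (18)
        mu[:, k] = Eb * (C @ (Ak @ Z[:, k]))             # mean, cf. (17)
        S += C + np.outer(mu[:, k], mu[:, k].conj())
        w = Ak.conj().T @ mu[:, k]
        resid += np.sum(np.abs(Z[:, k] - w) ** 2) \
               + np.real(np.trace(Ak.conj().T @ C @ Ak))
        Theta[:, k] = np.angle(w)   # M-step for the phases, cf. (29)
    EM = (nu + L) * np.linalg.inv(np.linalg.inv(W) + S)  # <M>, cf. (20)-(22)
    Eb = (a0 + m * L) / (b0 + resid)                     # <beta>, cf. (24)-(26)

fit1 = fit_error(mu)
assert fit1 < fit0                  # the data fit improves over the iterations
```

Even from a random initialization, the coupled updates steadily reduce the misfit between the predicted and observed magnitudes on this toy instance.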
Fig. 2: Success rates of respective algorithms vs. the rank of the low-rank matrix.

Fig. 3: Success rates of respective algorithms vs. the number of measurements.

III-C Computational Complexity

The proposed algorithm requires calculating the inverse of a matrix at each iteration, i.e. (18). Its computational complexity scales cubically in the dimension of the matrix, which limits its application to large-scale problems. To address this issue, a conjugate gradient method [42] can be used to significantly speed up the numerical solution of the linear system, and can be straightforwardly applied to solve (18). Besides, a recent work studied a low-complexity sparse Bayesian learning algorithm [43], where the expectation step is replaced by an approximate message passing algorithm [44, 45] to reduce the computational complexity. In [37], an inverse-free Bayesian algorithm was developed via relaxed evidence lower bound maximization. That idea can also be applied to our problem to circumvent the cumbersome matrix inversion operations.
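For instance, the linear system behind (18) can be solved matrix-free with conjugate gradients. The sketch below (real-valued, with variable names of our own choosing and a diagonal stand-in for ⟨M⟩) applies the operator in O(nm) per step and never forms the matrix or its inverse:

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

rng = np.random.default_rng(3)
n, m = 500, 200
A = rng.standard_normal((n, m)) / np.sqrt(m)
M_diag = np.full(n, 0.5)            # stand-in for <M> (diagonal for the sketch)
beta = 10.0
z = rng.standard_normal(m)

# Matrix-free application of (<M> + beta * A A^T): O(nm) per matvec
op = LinearOperator((n, n), matvec=lambda v: M_diag * v + beta * (A @ (A.T @ v)))
b = beta * (A @ z)

mu, info = cg(op, b)
assert info == 0                    # CG converged

# Agrees with the explicit dense solve
mu_direct = np.linalg.solve(np.diag(M_diag) + beta * (A @ A.T), b)
assert np.linalg.norm(mu - mu_direct) / np.linalg.norm(mu_direct) < 1e-2
```

The same idea applies per column k with the matrix ⟨M⟩ + ⟨β⟩ A_k A_k^H, avoiding the cubic-cost inversion entirely.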

IV Simulation Results

In this section, simulation results are provided to illustrate the effectiveness of our proposed method (referred to as LRPR-VBL). The parameters used in our method are set as suggested in [37]. We compare our method with the state-of-the-art low-rank phase retrieval algorithm (referred to as LRPR-AM) [36], which performs low-rank phase retrieval via an alternating minimization strategy. To provide a fair comparison, both methods employ the same spectral initialization scheme introduced in [36].

We first examine the performance of the respective algorithms under different choices of the rank r. We randomly generate a rank-r matrix X by multiplying two matrices, i.e. X = UV^H. The entries of U and V are sampled from a circularly-symmetric complex normal distribution. A set of measurement matrices {A_k} is produced, with entries that are independent and follow a circularly-symmetric complex normal distribution. Fig. 2 plots the success rates vs. the rank of the low-rank matrix for three problem sizes. Results are averaged over 100 independent trials. A trial is considered successful if the relative error of the recovery is less than 0.1, where the relative error is defined as the phase-invariant distance

( Σ_k min_{φ_k} ‖x_k − e^{jφ_k} x̂_k‖² )^{1/2} / ‖X‖_F   (30)

From Fig. 2, we see that our proposed method presents a clear advantage over LRPR-AM. This performance improvement is probably due to the fact that the EM algorithm, compared with the alternating minimization technique, is less prone to being trapped in undesirable local minima.
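Since each column is recoverable only up to a phase, the success criterion relies on a phase-invariant error. A minimal sketch of such a metric (our own implementation of a per-column-aligned relative error, with names of our choosing) is:

```python
import numpy as np

def relative_error(X_hat, X):
    """Relative error after aligning each estimated column's global phase."""
    err2 = 0.0
    for k in range(X.shape[1]):
        phi = np.angle(np.vdot(X_hat[:, k], X[:, k]))   # optimal aligning phase
        err2 += np.linalg.norm(X[:, k] - np.exp(1j * phi) * X_hat[:, k]) ** 2
    return np.sqrt(err2) / np.linalg.norm(X)

rng = np.random.default_rng(4)
X = rng.standard_normal((8, 5)) + 1j * rng.standard_normal((8, 5))

# Per-column phase rotations are invisible to the metric
X_rot = X * np.exp(1j * rng.uniform(0, 2 * np.pi, size=5))
assert relative_error(X_rot, X) < 1e-10
```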

Next, we generate a matrix whose rank is fixed. The success rates of the respective methods as a function of the number of measurements are plotted in Fig. 3 for several problem settings. From Fig. 3, we see that to attain the same success rate, our proposed method needs fewer measurements than the LRPR-AM method, which again demonstrates the superiority of our proposed method.

Random initialization is a much simpler and model-agnostic procedure, which explains its wide use in practice. This raises an interesting question of whether our proposed LRPR-VBL algorithm is still effective with random initialization. We consider a setting similar to our previous experiment, except that the initialization point is chosen uniformly at random. LRPR-VBL+rand and LRPR-AM+rand stand for the LRPR-VBL and LRPR-AM algorithms with random initialization, respectively. We fix the dimension and vary the rank. For each rank, we randomly generate the sampling vectors in (2). Results are averaged over independent trials. As we can see from Fig. 4, our proposed LRPR-VBL method suffers a certain amount of performance loss when the initialization point is chosen randomly. But it still performs better than the LRPR-AM method, which completely fails with random initialization. This implies that our proposed LRPR-VBL algorithm is less sensitive to the choice of the initialization point.

V Conclusions

The problem of low-rank phase retrieval was studied in this paper. To estimate a complex low-rank matrix from magnitude-only measurements, we proposed a hierarchical prior model, in which a Gaussian-Wishart hierarchical prior is placed on the underlying matrix to promote the low-rankness. Based on this hierarchical prior model, we developed a variational EM method for low-rank phase retrieval. Simulation results showed that our proposed method offers competitive performance as compared with other existing methods.

Fig. 4: Success rates of respective algorithms with random initializations vs. the rank of the low-rank matrix.

References

  • [1] R. W. Harrison, “Phase problem in crystallography,” J. Opt. Soc. Amer. A, vol. 10, no. 5, pp. 1046–1055, 1993.
  • [2] R. P. Millane, “Phase retrieval in crystallography and optics,” J. Opt. Soc. Amer. A, vol. 7, no. 3, pp. 394–411, 1990.
  • [3] A. Walther, “The question of phase retrieval in optics,” J. Modern Opt., vol. 10, no. 1, pp. 41–49, 1963.
  • [4] C. Fienup and J. Dainty, “Phase retrieval and image reconstruction for astronomy,” Image Recovery: Theory and Application, pp. 231–275, 1987.
  • [5] O. Bunk, A. Diaz, F. Pfeiffer, C. David, B. Schmitt, D. K. Satapathy, and J. F. V. D. Veen, “Diffractive imaging for periodic samples: retrieving one-dimensional concentration profiles across microfluidic channels,” Acta Crystallograph. A, Found. Crystallogr., vol. 63, no. 4, pp. 306–314, 2007.
  • [6] R. Balan, P. Casazza, and D. Edidin, “On signal reconstruction without phase,” Appl. Comput. Harmon. Anal., vol. 20, no. 3, pp. 345–356, 2006.
  • [7] J. V. Corbett, “The pauli problem, state reconstruction and quantum-real numbers,” Rep. Math. Phys., vol. 57, no. 1, pp. 53–68, 2006.
  • [8] T. Heinosaari, L. Mazzarella, and M. M. Wolf, “Quantum tomography under prior information,” Commun. Math. Phys., vol. 318, no. 2, pp. 355–374, 2013.
  • [9] R. W. Gerchberg, “A practical algorithm for the determination of the phase from image and diffraction plane pictures,” Optik, vol. 35, pp. 237–246, 1972.
  • [10] P. Netrapalli, P. Jain, and S. Sanghavi, “Phase retrieval using alternating minimization,” in Proc. Adv. Neural Inf. Process. Syst., Dec. 2013, pp. 2796–2804.
  • [11] E. J. Candés, T. Strohmer, and V. Voroninski, “Phaselift: Exact and stable signal recovery from magnitude measurements via convex programming,” Commun. Pure Appl. Math., vol. 66, no. 8, pp. 1241–1274, 2013.
  • [12] E. J. Candés, X. Li, and M. Soltanolkotabi, “Phase retrieval via wirtinger flow: Theory and algorithms,” IEEE Trans. Inf. Theory, vol. 61, no. 4, pp. 1985–2007, Apr. 2015.
  • [13] Y. Chen and E. Candés, “Solving random quadratic systems of equations is nearly as easy as solving linear systems,” Commun. Pure Appl. Math., vol. 70, no. 5, pp. 822–883, 2017.
  • [14] Z. Yuan and H. Wang, “Phase retrieval via reweighted wirtinger flow,” Appl. opt., vol. 56, no. 9, pp. 2418–2427, 2017.
  • [15] H. Zhang, Y. Zhou, Y. Liang, and Y. Chi, “A nonconvex approach for phase retrieval: Reshaped wirtinger flow and incremental algorithms,” J. Mach. Learn. Res., vol. 18, no. 1, pp. 5164–5198, 2017.
  • [16] C. Ma, K. Wang, Y. Chi, and Y. Chen, “Implicit regularization in nonconvex statistical estimation: Gradient descent converges linearly for phase retrieval, matrix completion and blind deconvolution,” arXiv preprint arXiv:1711.10467, 2017.
  • [17] Z. Yang, L. F. Yang, E. X. Fang, T. Zhao, Z. Wang, and M. Neykov, “Misspecified nonconvex statistical optimization for phase retrieval,” arXiv preprint arXiv:1712.06245, 2017.
  • [18] G. Wang, G. B. Giannakis, and Y. C. Eldar, “Solving systems of random quadratic equations via truncated amplitude flow,” IEEE Trans. Inf. Theory, vol. 64, no. 2, pp. 773–794, Feb. 2018.
  • [19] G. Wang, G. B. Giannakis, Y. Saad, and J. Chen, “Phase retrieval via reweighted amplitude flow,” IEEE Trans. Signal Process., vol. 66, no. 11, pp. 2818–2833, Jun. 2018.
  • [20] S. Pinilla, J. Bacca, and H. Arguello, “Phase retrieval algorithm via nonconvex minimization using a smoothing function,” IEEE Trans. Signal Process., vol. 66, no. 17, pp. 4574–4584, Sep. 2018.
  • [21] Y. Chen, Y. Chi, J. Fan, and C. Ma, “Gradient descent with random initialization: Fast global convergence for nonconvex phase retrieval,” arXiv preprint arXiv:1803.07726, 2018.
  • [22] S. Bahmani and J. Romberg, “A flexible convex relaxation for phase retrieval,” Electron. J. Statist., vol. 11, no. 2, pp. 5254–5281, 2017.
  • [23] T. Goldstein and C. Studer, “Phasemax: Convex phase retrieval via basis pursuit,” IEEE Trans. Inf. Theory, vol. 64, no. 4, pp. 2675–2689, Apr. 2018.
  • [24] P. Hand and V. Voroninski, “An elementary proof of convex phase retrieval in the natural parameter space via the linear program phasemax,” arXiv preprint arXiv:1611.03935, 2016.
  • [25] M. L. Moravec, J. K. Romberg, and R. G. Baraniuk, “Compressive phase retrieval,” in Proceedings of SPIE Wavelets XII, vol. 6701, Sept. 2007, p. 67120.
  • [26] Y. Shechtman, A. Beck, and Y. C. Eldar, “GESPAR: Efficient phase retrieval of sparse signals,” IEEE Trans. Signal Process., vol. 62, no. 4, pp. 928–938, Feb. 2014.
  • [27] X. Li and V. Voroninski, “Sparse signal recovery from quadratic measurements via convex programming,” SIAM J. Math. Anal., vol. 45, no. 5, pp. 3019–3033, 2013.
  • [28] P. Schniter and S. Rangan, “Compressive phase retrieval via generalized approximate message passing,” IEEE Trans Signal Process., vol. 63, no. 4, pp. 1043–1055, Feb. 2015.
  • [29] Y. C. Eldar, P. Sidorenko, D. G. Mixon, S. Barel, and O. Cohen, “Sparse phase retrieval from short-time fourier measurements,” IEEE Signal Process. Lett., vol. 22, no. 5, pp. 638–642, 2015.
  • [30] T. T. Cai, X. Li, Z. Ma, et al., “Optimal rates of convergence for noisy sparse phase retrieval via thresholded wirtinger flow,” Ann. Statist., vol. 44, no. 5, pp. 2221–2251, 2016.
  • [31] M. Iwen, A. Viswanathan, and Y. Wang, “Robust sparse phase retrieval made easy,” Appl. Comput. Harmon. Anal, vol. 42, no. 1, pp. 135–142, 2017.
  • [32] R. Pedarsani, D. Yin, K. Lee, and K. Ramchandran, “Phasecode: Fast and efficient compressive phase retrieval based on sparse-graph codes,” IEEE Trans. Inf. Theory, vol. 63, no. 6, pp. 3663–3691, Jun. 2017.
  • [33] G. Jagatap and C. Hegde, “Fast, sample-efficient algorithms for structured phase retrieval,” in Proc. Adv. Neural Inf. Process. Syst., 2017, pp. 4924–4934.
  • [34] G. Wang, L. Zhang, G. B. Giannakis, M. Akçakaya, and J. Chen, “Sparse phase retrieval via truncated amplitude flow,” IEEE Trans. Signal Process., vol. 66, no. 2, pp. 479–491, Jan. 2018.
  • [35] P. Hand, O. Leong, and V. Voroninski, “Phase retrieval under a generative prior,” arXiv preprint arXiv:1807.04261, 2018.
  • [36] N. Vaswani, S. Nayer, and Y. C. Eldar, “Low-rank phase retrieval,” IEEE Trans. Signal Process., vol. 65, no. 15, pp. 4059–4074, Aug. 2017.
  • [37] L. Yang, J. Fang, H. Duan, H. Li, and B. Zeng, “Fast low-rank bayesian matrix completion with hierarchical gaussian prior models,” IEEE Trans. Signal Process., vol. 66, no. 11, pp. 2804–2817, Jun. 2018.
  • [38] A. Drémeau and F. Krzakala, “Phase recovery from a bayesian point of view: The variational approach,” in Proc. IEEE Int. Conf. Acoust., Speech, Signal Process.   IEEE, Apr. 2015, pp. 3661–3665.
  • [39] C. M. Bishop, Pattern recognition and machine learning.   Springer, 2007.
  • [40] P. J. Everson and C. N. Morris, “Inference for multivariate normal hierarchical models,” J. Roy. Statist. Soc. Ser. B, vol. 62, pp. 399–412, Mar. 2000.
  • [41] D. G. Tzikas, A. C. Likas, and N. P. Galatsanos, “The variational approximation for bayesian inference,” IEEE Signal Process. Mag., vol. 25, no. 6, pp. 131–146, 2008.
  • [42] M. Fornasier, S. Peter, H. Rauhut, and S. Worm, “Conjugate gradient acceleration of iteratively re-weighted least squares methods,” Comput. Opt. and Appl., vol. 65, no. 1, pp. 205–259, 2016.
  • [43] M. Al-Shoukairi, P. Schniter, and B. D. Rao, “A gamp-based low complexity sparse bayesian learning algorithm,” IEEE Trans. Signal Process., vol. 66, no. 2, pp. 294–308, Jan. 2018.
  • [44] D. L. Donoho, A. Maleki, and A. Montanari, “Message-passing algorithms for compressed sensing,” Proc. Nat. Acad. Sci., vol. 106, no. 45, pp. 18 914–18 919, 2009.
  • [45] S. Rangan, “Generalized approximate message passing for estimation with random linear mixing,” in Proc. IEEE Int. Symp. Inf. Theory (ISIT).   IEEE, 2011, pp. 2168–2172.