Solving Phase Retrieval with a Learned Reference

07/29/2020 ∙ by Rakib Hyder, et al. ∙ University of California, Riverside

Fourier phase retrieval is a classical problem that deals with the recovery of an image from the amplitude measurements of its Fourier coefficients. Conventional methods solve this problem via iterative (alternating) minimization by leveraging some prior knowledge about the structure of the unknown image. The inherent ambiguities about shift and flip in the Fourier measurements make this problem especially difficult, and most existing methods rely on several random restarts with different permutations. In this paper, we assume that a known (learned) reference is added to the signal before capturing the Fourier amplitude measurements. Our method is inspired by the principle of adding a reference signal in holography. To recover the signal, we implement an iterative phase retrieval method as an unrolled network. Then we use backpropagation to learn the reference that provides the best reconstruction for a fixed number of phase retrieval iterations. We performed a number of simulations on a variety of datasets under different conditions and found that our proposed method for phase retrieval via unrolled network and learned reference provides near-perfect recovery at fixed (small) computational cost. We compared our method with standard Fourier phase retrieval methods and observed significant performance enhancement using the learned reference.


1 Introduction

The problem of phase retrieval refers to the challenge of recovering a real- or complex-valued signal from its amplitude measurements. This problem arises in diffraction imaging, X-ray crystallography, and ptychography [14, 15, 21, 35, 43]. Fourier phase retrieval is a special class of phase retrieval problems aimed at the recovery of a signal from the amplitude of its Fourier coefficients. Let us assume that Fourier amplitude measurements are given as

    y = |F(x)| + e,    (1)

where F denotes the Fourier transform operator, x denotes the unknown signal or image, and e denotes the measurement noise. Our goal is to recover x given y.
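As a concrete sketch of the measurement model in (1), the snippet below (using NumPy's `fft2` as the operator F, with a random array standing in for x) simulates phaseless measurements and checks one trivial ambiguity: circularly shifting x leaves the noiseless Fourier amplitudes unchanged.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for the unknown image x (any real-valued array works here).
x = rng.random((32, 32))

# Phaseless measurements y = |F(x)| + e, with small Gaussian noise e.
e = 0.01 * rng.standard_normal(x.shape)
y = np.abs(np.fft.fft2(x)) + e

# Trivial ambiguity: a circular shift of x gives identical noiseless
# amplitudes, so y alone cannot distinguish x from its shifted copies.
x_shifted = np.roll(x, shift=5, axis=0)
assert np.allclose(np.abs(np.fft.fft2(x_shifted)), np.abs(np.fft.fft2(x)))
```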

Fourier phase retrieval is essential in many applications, especially in optical coherent imaging. Classical methods for phase retrieval utilize prior knowledge about the support and positivity of the signals [14, 15]. Subsequent work has considered the case where the unknown signal is structured and belongs to a low-dimensional manifold that is known a priori. Examples of such low-dimensional structures include sparsity [46, 27], low rank [26, 12], and neural generative models [25, 28]. Other techniques include Amplitude flow [47] and Wirtinger flow [7]. Many of these newer algorithms involve solving a non-convex problem using iterative, gradient-based methods; therefore, they need to be carefully initialized. The initialization technique of choice is spectral initialization, first proposed in the context of phase retrieval in [36] and extended to the sparse signal case in [46, 27].

The Fourier phase retrieval problem does not satisfy the assumptions needed for successful spectral initialization and remains highly sensitive to the choice of initialization. Furthermore, Fourier amplitude measurements have so-called trivial ambiguities with respect to possible shifts and flips of the images. Therefore, many Fourier phase retrieval methods test a number of random initializations with all possible flips and shifts and select the estimate with the smallest recovery error [34].

In this paper, we assume that a known (learned) reference is added to the signal before capturing the Fourier amplitude measurements. The main motivation for this comes from the empirical observation that knowing a part of the image can often help resolve the trivial ambiguities [3, 18, 22]. We extend this concept: we assume that a known reference signal is added to the target signal and aim to recover the target signal from the Fourier amplitude of the combined signal. Adding a reference may not be feasible in all cases, but our method will be applicable whenever we can add a reference or split the target signal into known and unknown parts. We can describe the Fourier amplitude (phaseless) measurements with a known reference signal as

    y = |F(x + u)| + e,    (2)

where u denotes the known reference signal.

Similar reference-based measurements and phase retrieval problems also arise in holographic optical coherence imaging [37].

Our goal is to recover the signal x from the amplitude measurements in (2). To do that, we implement a gradient descent method for phase retrieval. We present the algorithm as an unrolled network for a general system in Fig. 1. Every layer of the network implements one step of the gradient descent update. To minimize the computational complexity of the recovery algorithm, we seek to minimize the number of iterations (hence the layers in the network). In addition, we seek to learn the reference that maximizes the accuracy of the recovered signal for a given number of iterations. The learned references and reconstruction results for different datasets are summarized in Fig. 2.

1.1 Our Contributions

We present an iterative method to efficiently recover a signal from Fourier amplitude measurements using a fixed number of iterations. To achieve this goal, we first learn a reference signal that can be added to the signal before capturing the phaseless Fourier measurements, enabling an exact solution of the phase retrieval problem. We demonstrate that a reference learned on a very small training set performs remarkably well on the test dataset.

Our main contributions can be summarized as follows.

  • The proposed method uses a fixed number of gradient descent iterations (i.e., fixed computational cost) to solve the Fourier phase retrieval problem.

  • We formulate the gradient descent method as an unrolled network that allows us to learn a robust reference signal for a class of images. We demonstrate that reference learned on a very small dataset performs remarkably well on diverse and large test datasets. To the best of our knowledge, this is the first work on learning a reference for phase retrieval problems.

  • We tested our method extensively on different challenging datasets and demonstrated the superiority of our method.

  • We demonstrate the robustness of our approach by testing it with noisy measurements using a reference that was trained on noise-free measurements.

Figure 1: Our proposed approach for learning a reference signal by solving phase retrieval using an unrolled network. The unrolled network has K layers. Each layer gets the amplitude measurements y, the reference u, and the estimate x_k as inputs, and updates the estimate to x_{k+1}. The operations inside a layer are shown in the dashed box on the right, where A and B are both linear measurement operators, and A* is the adjoint operator of A.

2 Related Work

Holography. Digital holography is an interferometric imaging technique that does not require the use of any imaging lens. Utilizing the theory of diffraction of light, a hologram can be used to reconstruct three-dimensional (3D) images [39]. With this advantage, holography can be used to perform simultaneous imaging of multidimensional information, such as 3D structure, dynamics, quantitative phase, multiple wavelengths, and polarization state of light [44]. In the computational imaging community, many attempts have been made at solving holographic phase retrieval using references, among which [3] has been notably successful. Motivated by the reference design for holographic phase retrieval, we explore a way to design references for general phase retrieval.

Phase Retrieval. The phase retrieval problem has drawn considerable attention over the years, as many optical detection devices can only measure amplitudes of the Fourier transform of the underlying object (signal or image). Fourier phase retrieval is a particular instance of this problem that arises in optical coherent imaging, where we seek to recover an image from its Fourier modulus [14, 15, 41, 35, 43, 33]. Existing algorithms for solving phase retrieval can be broadly classified into convex and non-convex approaches [23]. Convex approaches usually solve a constrained optimization problem after lifting the problem; the PhaseLift algorithm [8] and its variations [17, 6] belong to this class. On the other hand, non-convex approaches usually depend on Amplitude flow [46, 45] and Wirtinger flow [7, 52, 11, 5]. Knowing some structure of the signal a priori helps in the reconstruction. Sparsity is a very popular signal prior; some of the approaches for sparse phase retrieval include [38, 32, 2, 24, 36, 5, 46]. Furthermore, [36, 27, 23] used alternating minimization (AltMin)-based approaches and [10] used total variation regularization to solve phase retrieval. Recently, various researchers have explored the idea of replacing sparsity priors with generative priors for solving inverse problems; some of the generative prior-based approaches can be found in [23, 28, 20, 42].

Data-Driven Approaches for Phase Retrieval.

The use of deep learning-based methods to solve computational imaging problems such as phase retrieval is becoming popular. Deep learning methods leverage the power of huge amounts of data and tend to provide superior performance compared to traditional methods, while also running significantly faster with the acceleration of GPU devices. A few examples demonstrating the benefit of data-driven approaches include [34] for robust phase retrieval, [30] for Fourier ptychographic microscopy, and [40] for holographic image reconstruction.

Unrolled Network for Inverse Problems. Unrolled networks, which are constructed by unrolling the iterations of a generic non-linear reconstruction algorithm, have also been gaining popularity for solving inverse problems in recent years [31, 13, 16, 48, 19, 50, 29, 4]. Iterative methods usually terminate once a convergence criterion is satisfied, which leaves the number of iterations uncertain in advance. An unrolled network has a fixed number of iterations (and a fixed cost) by construction; it produces good results in a small number of steps while enabling efficient usage of training data.

Reference Design. Fourier phase retrieval faces different trivial ambiguities because of the structure of the Fourier transformation. As a phase shift in the Fourier domain results in a circular shift in the spatial domain, we get the same Fourier amplitude measurements for any circular shift of the original signal. In recent papers [3, 51, 18, 22], the authors used side information with a sparsity prior to mitigate these ambiguities. However, in those studies, the reference and target signal are separated by some margin. If the separation between target and reference is large enough, then the nonlinear phase retrieval problem simplifies to a linear inverse problem [1, 3].

In this paper, we consider the reference signal to be additive and overlapping with the target signal. To the best of our knowledge, there has not been any study on such unrestricted reference design. While driven by data, our approach for reference design uses training samples very efficiently: the number of training images required by our network is small without limiting its generalizability. The reference learned by our network provides robust recovery of test images of different sizes. Apart from this flexibility, our unrolled network uses a well-defined routine in each layer and offers interpretability, as opposed to black-box deep neural networks.

3 Proposed Approach

We use the general formulation for phase retrieval from amplitude measurements. The formulation can be extended for phase retrieval with squared amplitude measurements as well. In our setup, we model amplitude measurements of a target signal x and a reference signal u as y = |Ax + Bu|, where A and B are linear measurement operators. Our goal is to learn a reference signal u that provides the best recovery of the target signal. We formulate this overall task as the following optimization problem:

    u* = argmin_u Σ_x || x − x̂(u) ||²,    (3)

where the sum is over the training signals x and x̂(u) denotes the solution of the phase retrieval problem for a given reference u. Our approach to learning u and solving (3) can be divided into two nested steps: (1) an outer step updates u to minimize the recovery error for phase retrieval, and (2) an inner step uses the learned u to recover target images by solving phase retrieval.

To solve the inner step of the phase retrieval problem, we use an unrolled network. Figure 1 depicts the structure of our phase retrieval algorithm. In the unrolled phase retrieval network, we have K blocks to represent K iterations of the phase retrieval algorithm. We minimize the following loss to solve the phase retrieval problem:

    L(x) = || y − |Ax + Bu| ||².    (4)

Every block of the unrolled phase retrieval network is equivalent to one gradient descent step for (4). For a given reference estimate u, we can represent the target signal estimate after the (k+1)-th block of the unrolled network as

    x_{k+1} = x_k − η ∇_x L(x_k),    (5)

where ∇_x L(x_k) is the gradient of L with respect to x evaluated at x_k and η is the step size. As the loss function in (4) is not differentiable, we can redefine it as

    L(x) = || p ⊙ y − (Ax + Bu) ||²,    (6)

where p = (Ax + Bu) / |Ax + Bu| denotes the phase of the current measurement estimate and ⊙ denotes element-wise multiplication. The expression of the gradient can be written as

    ∇_x L(x) = A*((Ax + Bu) − p ⊙ y),    (7)

where A* denotes the adjoint of A. After K blocks, we get the estimate of the target signal, which we denote as x̂(u) = x_K.
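A minimal NumPy sketch of this inner step, assuming A = B is an oversampled Fourier operator (zero-pad, then FFT) and absorbing the adjoint's constant scaling into the step size η; the sizes, step size, and block count here are illustrative choices, not the paper's settings.

```python
import numpy as np

OS, n = 2, 16  # oversampling factor and image side (toy sizes)

def A(v):
    """Oversampled Fourier operator: zero-pad, then FFT."""
    padded = np.zeros((OS * n, OS * n))
    padded[:n, :n] = v
    return np.fft.fft2(padded)

def A_adj(z):
    """Adjoint of A up to a constant scaling (absorbed into the step
    size): inverse FFT, then crop to the signal support."""
    return np.real(np.fft.ifft2(z))[:n, :n]

def grad_step(xk, u, y, eta=0.1, eps=1e-12):
    """One unrolled block: x_{k+1} = x_k - eta * A*((A x_k + B u) - p ⊙ y),
    cf. Eqs. (5)-(7), with B = A here."""
    z = A(xk) + A(u)
    p = z / (np.abs(z) + eps)  # phase of the current measurement estimate
    return xk - eta * A_adj(z - p * y)

# Demo: measure y = |A(x_true + u)|, then run K blocks from x_0 = 0.
rng = np.random.default_rng(1)
x_true = rng.random((n, n))
u = rng.random((n, n))
y = np.abs(A(x_true) + A(u))

xk = np.zeros((n, n))
loss0 = np.sum((y - np.abs(A(xk) + A(u))) ** 2)
for _ in range(200):
    xk = grad_step(xk, u, y)
lossK = np.sum((y - np.abs(A(xk) + A(u))) ** 2)
assert lossK < loss0  # the amplitude loss decreases across blocks
```

Because updating p to the current phase never increases the loss in (6), each block here is a descent step on the amplitude loss for a small enough η.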

In the learning phase, we are given a set of training signals, {x_1, …, x_N}, which share the same distribution as our target signals. We initialize u and x with some (feasible) values. We then minimize the following loss with respect to u:

    L_ref(u) = Σ_i || x_i − x̂_i(u) ||².    (8)

We can rewrite (8) using the gradient recursion in (5) as

    L_ref(u) = Σ_i || x_i − (x_{K−1}^i − η ∇_x L(x_{K−1}^i)) ||².    (9)

We can then use gradient descent to minimize L_ref(u). We can represent the t-th iteration of the gradient descent step as

    u_{t+1} = u_t − γ ∇_u L_ref(u_t),    (10)

where γ is the step size for the reference update. The expression for ∇_u L_ref(u) can be written as

    ∇_u L_ref(u) = Σ_i (∂x̂_i/∂u)^T (x̂_i(u) − x_i),    (11)

where (∂x̂_i/∂u)^T is a Jacobian matrix with rows and columns of the same size as u and x̂_i, respectively. Note that the measurement vector y_i is a function of u during training. Since we model x̂(u) as an unrolled network, we can think of the gradient step in (10) as a backpropagation step: to compute ∇_u L_ref(u), we backpropagate through the entire unrolled network. At the end of the outer iterations, we get our learned reference u*.

Once we have learned a reference u*, we can use it to capture (phaseless) amplitude measurements as y = |Ax + Bu*| for a target signal x. To solve the phase retrieval problem, we perform one forward pass through the unrolled network. Pseudocode for training and testing is provided in Algorithms 1 and 2.

In our Fourier phase retrieval experiments, A = B = F, where F is the Fourier transform operator. To implement a similar method for squared amplitude measurements, we can simply replace |Ax + Bu| with |Ax + Bu|². In all our experiments, we initialized the signal estimate x_0 as a zero vector. We can also add additional constraints on the reference while minimizing the loss function in (9). In our experiments, we used target signals with intensity values in the range [0, 1]; therefore, we restricted the range of entries in u to [0, 1] as well. We discuss other constraints in the experiments section.

Input: Training signals {x_i}, measurement operators A and B, step sizes η and γ.
Initialize u
for t = 1, …, T do
     for i = 1, …, N do
          x_0^i ← 0
          for k = 0, …, K − 1 do
               g ← ∇_x L(x_k^i) as in (7)
               x_{k+1}^i ← x_k^i − η g
          end for
     end for
     Compute ∇_u L_ref(u_t) as in (11) by backpropagating through the unrolled network
     u_{t+1} ← u_t − γ ∇_u L_ref(u_t)
end for
Output: Optimal reference, u*
Algorithm 1 Learning Reference Signal
Input: Measurements y, learned reference u*, measurement operators A and B, step size η.
Initialize x_0 ← 0
for k = 0, …, K − 1 do
     g ← ∇_x L(x_k) as in (7)
     x_{k+1} ← x_k − η g
end for
Output: Estimation of target signal, x̂ = x_K
Algorithm 2 Solving Phase Retrieval via Unrolled Network
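In practice, the gradient in (11) is computed by backpropagating through the unrolled network with an automatic-differentiation framework. The following is a framework-free toy sketch of Algorithm 1's outer loop; the tiny sizes, plain FFT operators, finite-difference gradient (standing in for backpropagation), and backtracking step size are all our own illustrative choices.

```python
import numpy as np

n, K, N = 8, 5, 2   # toy sizes: image side, unrolled blocks, training signals
eta = 0.1           # inner (phase retrieval) step size

def A(v):
    # Plain FFT operator for this toy sketch (no oversampling).
    return np.fft.fft2(v)

def recover(y, u):
    """Inner step: K unrolled gradient blocks (Eqs. (5)-(7)), x_0 = 0."""
    x = np.zeros((n, n))
    for _ in range(K):
        z = A(x) + A(u)
        p = z / (np.abs(z) + 1e-12)
        x = x - eta * np.real(np.fft.ifft2(z - p * y))
    return x

def ref_loss(u, signals):
    """Outer objective, Eq. (8): note y = |A(x) + A(u)| depends on u."""
    return sum(np.sum((x - recover(np.abs(A(x) + A(u)), u)) ** 2)
               for x in signals)

rng = np.random.default_rng(2)
signals = [rng.random((n, n)) for _ in range(N)]
u = rng.random((n, n))  # initialize the reference

for t in range(2):  # outer iterations, Eq. (10)
    # Finite-difference stand-in for the backpropagated gradient (11).
    g, h = np.zeros_like(u), 1e-5
    for i in range(n):
        for j in range(n):
            du = np.zeros_like(u)
            du[i, j] = h
            g[i, j] = (ref_loss(u + du, signals)
                       - ref_loss(u - du, signals)) / (2 * h)
    # Backtracking step size: shrink gamma until the loss does not increase.
    gamma, base = 1e-2, ref_loss(u, signals)
    while gamma > 1e-8 and ref_loss(u - gamma * g, signals) > base:
        gamma *= 0.5
    if ref_loss(u - gamma * g, signals) <= base:
        u = u - gamma * g

assert ref_loss(u, signals) <= base  # accepted outer steps never increase the loss
```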

4 Experiments

(a) MNIST

(b) EMNIST

(c) Fashion MNIST

(d) SVHN

(e) CIFAR10

(f) CelebA
Figure 2: Reconstruction results using learned references. Each block (a)-(f) shows results for a different dataset: (left) learned reference with a colorbar; (middle) sample original images and reconstruction with PSNR on top; (right) histogram of PSNR over the entire test dataset (vertical dashed line represents the mean PSNR).

Figure 3: Phase retrieval results using learned and random references. First Row: Original test images. Second Row: Reconstruction using random references with entries drawn uniformly at random (best result out of 100 trials). Third Row: Reconstruction using the reference learned on the CelebA dataset and resized to the test image size. (PSNR shown on top of images.)

Datasets. We used the MNIST digits, EMNIST letters, Fashion MNIST, CIFAR10, SVHN, and CelebA datasets, as well as several well-known standard images, for our experiments. We converted all images to grayscale and resized them to a common size. Although there are tens of thousands of training images in the MNIST, EMNIST letters, Fashion MNIST, CIFAR10, and SVHN datasets, we used only a few (e.g., 32) of them in training. We show that references learned on this small number of training images perform remarkably well on the entire test dataset. The MNIST, Fashion MNIST, and CIFAR10 test datasets contain 10,000 test images each; the EMNIST letters dataset contains 24,800 test images; the SVHN test dataset contains 26,032 test images. We used 1,032 images from CelebA, all center-cropped and resized; we selected 32 images for training and the rest for testing.

We present the results for these different datasets, using references learned from 32 images from the same dataset, in Fig. 2. We present results for six standard images from [34], using a resized reference learned from the CelebA dataset, in Fig. 3.

Measurements. We simulated amplitude measurements of the 2D Fourier transform with 4× oversampling in the spatial domain for both the reference and the target signal. Unless otherwise mentioned, we consider our measurements to be noise-free; we also report results for noisy measurements.

4.1 Configurations of Reference (u)

The reference signal u, which we are trying to learn, has a number of hyperparameters that inherently affect the performance of the phase retrieval process. We considered several constraints on u, including its support, size, range, position, and sparsity.

We tested reference signals with both complex and real values and found comparable results in the two domains. Since it is easy to physically create amplitude-only or phase-only reference signals, we constrain u to be in the real domain; thus u is a real h × w array, where h and w represent its height and width, respectively. The height and width of u determine the overlapping area between the target signal and the reference. We found that a larger u tends to perform better, especially when the values of u are constrained to a small range. The intensity values of u play a major role in its performance. If we constrain the values of u to lie within a certain range, 0 ≤ u[i] ≤ u_max for all i, we observe that a bigger range of u yields better performance. This is because when u is unconstrained, we can construct a u with a large norm. Consider the noiseless setting with quadratic measurements:

    |F(x + u)|² = |F(x)|² + |F(u)|² + 2 Re(F(x) ⊙ conj(F(u))),

where the last term involves the real part of the element-wise product of the target and reference Fourier transforms. We can remove |F(u)|² because it is known. If u is large compared to x, then we can also ignore the quadratic term |F(x)|² and recover x in a single iteration, provided all entries of F(u) are nonzero. To avoid this situation and to make the problem stable in the presence of noise, we restricted the values in the reference to be in the [0, 1] range.
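The expansion of the quadratic measurements above is easy to verify numerically (random arrays stand in for x and u here):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.random((16, 16))  # target (unknown in practice)
u = rng.random((16, 16))  # reference (known)
Fx, Fu = np.fft.fft2(x), np.fft.fft2(u)

# |F(x+u)|^2 = |F(x)|^2 + |F(u)|^2 + 2 Re(F(x) ⊙ conj(F(u)))
lhs = np.abs(Fx + Fu) ** 2
rhs = np.abs(Fx) ** 2 + np.abs(Fu) ** 2 + 2 * np.real(Fx * np.conj(Fu))
assert np.allclose(lhs, rhs)
```

The second and third terms on the right are known or linear in x, which is why a reference that dwarfs the target would make the recovery trivially linear.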

4.2 Setup of Training Samples and Sample Size

We observed that we can learn the reference signal from a small number of training images. In Table 1, we report test results for reference signals learned on the first N images from the MNIST training dataset, for N ∈ {32, 128, 512}. We kept the signal and reference strength (i.e., their value ranges) equal for this experiment. We observe that increasing the training size improves test performance; however, we can already get reasonable reconstruction performance on large test datasets (10k+ images) with a reference learned using only 32 images.

Train/Test MNIST EMNIST F. MNIST SVHN CIFAR10
Training size=32 66.54 58.72 57.81 57.51 41.60
Training size=128 76.25 64.16 55.86 59.50 44.34
Training size=512 79.14 62.34 52.01 59.78 48.90

Table 1: PSNR for different training sizes

4.3 Generalization of Reference on Different Classes

We are interested in evaluating the generalization of our learned reference (i.e., how the reference performs when trained on one dataset and tested on another). In this comparison study, we took the reference trained on each dataset and tested it on the remaining four datasets; the value range of the reference is [0, 1], and the number of steps K in the unrolled network is fixed. We observed that when the datasets share great similarity (e.g., MNIST and EMNIST both contain sparse digits or letters), the reference signal tends to work well on both datasets. Even when the datasets differ greatly in their distributions, the reference trained on one dataset provides good results on the others (with only a few dB of PSNR decrease in performance).

We also tested our method on shifted and rotated versions of test images. Results in Fig. 4 demonstrate that even though the reference was trained on upright and centered images, we can perfectly recover shifted and rotated images.

Our key insight about this generalization phenomenon is that the main challenge in Fourier phase retrieval methods is initialization and ambiguities that arise because of symmetries. We are able to solve these issues using a learned reference because of the following reasons: (1) A reference gives us a good initialization for the phase retrieval iterations. (2) The presence of a reference breaks the symmetries that arise in Fourier amplitude measurements. Moreover, we are not learning to solve the phase retrieval problem in an end-to-end manner or learn a signal-dependent denoiser to solve the inverse problem [34, 40]. We are learning reference signals to primarily help a predefined phase retrieval algorithm to recover the true signal from the phaseless measurements. Thus, the references learned on one class of images provide good results on other images, see Table 2. This study shows that the reference learned using our network has the ability to generalize to new datasets, thus making our method suitable for real-life applications where new test cases keep emerging.

(a) MNIST
(b) CIFAR10
Figure 4: Test results on shifted/flipped/rotated images using the reference learned on upright and centered (canonical) training images. (PSNR shown on top of images.)
Train/Test MNIST EMNIST F. MNIST SVHN CIFAR10
MNIST 66.54 55.12 40.87 41.87 31.72
EMNIST 72.84 58.72 52.18 55.42 48.16
F. MNIST 40.87 55.67 57.81 50.70 42.85
SVHN 41.87 46.76 49.60 57.51 51.54
CIFAR10 31.72 38.93 36.40 40.36 41.60
Table 2: PSNR with references trained and tested on different datasets

4.4 Noise Response

To test the robustness of our method in the presence of noise, we added Gaussian and Poisson noise at different levels to the measurements. Poisson (shot) noise is the most common in practical systems. We model the Poisson noise following the same approach as in [34]. We simulate the measurements as

    y = |F(x + u)| + e,    (12)

where e ~ N(0, σ²) for Gaussian noise and, for Poisson noise, e is signal-dependent with strength scaling with the measurement magnitude, following [34]. We varied the noise level to generate noise at different signal-to-noise ratios. Poisson noise affects the larger measurements with higher strength than the smaller measurements. As sensors can only record positive values, we kept the measurements positive by applying a ReLU function after noise addition. We can observe the effect of noise in Fig. 5. Even though we did not add noise during training, we get reasonable reconstructions, and performance degrades gracefully with increased noise.
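A sketch of this measurement-noise simulation, with the Poisson case approximated as signal-dependent Gaussian noise whose strength scales with the measurement magnitude (our reading of the description above; `level` is an illustrative noise-strength knob, not the paper's parameter):

```python
import numpy as np

def noisy_measurements(clean, mode, level, rng):
    """Add noise to clean amplitude measurements and clamp to nonnegative
    values (ReLU), since sensors only record positive measurements."""
    if mode == "gaussian":
        e = level * rng.standard_normal(clean.shape)
    elif mode == "poisson":
        # Signal-dependent: larger measurements get proportionally more noise.
        e = level * clean * rng.standard_normal(clean.shape)
    else:
        raise ValueError(mode)
    return np.maximum(clean + e, 0.0)

rng = np.random.default_rng(0)
clean = np.abs(np.fft.fft2(rng.random((16, 16))))
for mode in ("gaussian", "poisson"):
    y = noisy_measurements(clean, mode, level=0.1, rng=rng)
    assert y.shape == clean.shape and (y >= 0).all()
```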

(a) Gaussian
(b) Poisson
Figure 5: Reconstruction quality of the test images vs noise level of the measurements for different datasets. We learned the reference using noise-free measurements.

4.5 Random Reference versus Learned Reference

To demonstrate the advantage of the learned reference signal, we compared the performance of learned and random references on some standard images; the results are shown in Fig. 3. The learned reference was trained using 32 images from the CelebA dataset. The test images used in Fig. 3 are larger than the training images, so we resized the learned reference to match their size. For the random reference, we selected the entries of the reference uniformly at random from [0, 1] and selected the best result out of 100 trials for every test image; the number of steps K of the unrolled network was the same in both cases. We can observe from the results that our learned reference significantly outperforms the random reference, even though the test image distribution is distinct from the training data.

4.6 Comparison with Existing Phase Retrieval Methods

We show a comparison with other approaches in Table 3. We selected Kaczmarz [49] and Amplitude flow [11] for comparison, using the PhasePack package [9]. We also show Hybrid Input-Output (HIO), which is similar to our phase retrieval routine without any reference. We observe that our approach with the learned reference outperforms all other approaches on all the datasets. All the traditional phase retrieval methods suffer from the trivial circular shift, rotation, and flip ambiguities, and thus produce significantly worse reconstructions than our method does. Our method uses a reference signal to simplify the initialization and remove the shift/flip ambiguities. To explain this fact mathematically, a shifted or flipped version of x would not give us the same Fourier amplitude measurements of x + u if u is chosen appropriately, as we do with the learning procedure. As we showed in Fig. 4, our method can perfectly recover shifted and flipped versions of the images using a reference that was trained with upright and centered images.

Methods MNIST EMNIST F. MNIST SVHN CIFAR10
HIO 9.04 8.42 9.65 19.87 14.70
Amplitude Flow 9.99 9.79 11.90 20.25 15.04
Kaczmarz 11.81 11.47 13.44 19.48 15.01
Flat Reference 18.21 17.24 16.56 20.89 15.81
Random Reference 36.87 28.41 27.27 36.45 25.57
Learned Reference (Ours) 66.54 58.72 57.81 57.51 41.60
Table 3: Comparison with existing phase retrieval methods

4.7 Effects of Number of Layers (K)

We tested our unrolled network with different numbers of layers (i.e., different K) at training and test time. The results are summarized in Fig. 6. We first used the same value of K for training and testing and observed that, as K increases, the reconstruction quality (measured in PSNR) improves. Then we fixed K = 1 or K = 50 at training but used different values of K at testing. We observed that if we increase K at test time, PSNR improves up to a certain level and then plateaus. The PSNR achieved with the reference trained with K = 50 is better than what the reference trained with K = 1 provides. These results expose a trade-off between reconstruction speed and quality: as we increase K, the reconstruction quality improves, but the reconstruction requires more steps (computations and time).

Finally, we learned a reference and tested it with a single unrolled step (K = 1) at test time. To our surprise, our method was able to produce reasonable-quality reconstructions in this extreme setting. We present some single-step reconstructions for each dataset in Fig. 7.

(a) Training K = Testing K
(b) Training K = 1
(c) Training K = 50
Figure 6: Reconstruction PSNR vs. the number of blocks (K) in the unrolled network at training and testing. (a) K is the same for training and testing (the shaded region shows the standard deviation of PSNR). (b), (c) K is fixed at 1 and 50 during training, respectively, but tested using different K.

Figure 7: Single-step reconstruction with the learned reference. Each of the six sets (a)-(f) has the ground truth in the first row; the second row is the reconstruction. (PSNR shown on top of images.)

4.8 Localizing the Reference

We also evaluated the effect of localizing the reference to a small region, for example, constraining the reference to a small block in a corner or at the center of the target signal. We restricted u to a small square block and placed it at different positions. We found that corner positions provide better results, as shown in Fig. 8. As we bring the reference support closer to the center, the quality of reconstruction deteriorates. This observation is related to the methods in [3, 18, 1], where if the known reference signal is sufficiently separated from the target signal, then the phase retrieval problem can be solved as a linear inverse problem.

Note that signal recovery from Fourier amplitude measurements is equivalent to signal recovery from the autocorrelation of the signal. We can write the autocorrelation of the target plus reference signal as

    (x + u) ⋆ (x + u) = x ⋆ x + u ⋆ u + x ⋆ u + u ⋆ x,

where ⋆ denotes correlation. The first term is a quadratic function of x, the second term is known, and the last two terms are linear functions of x. If the supports of x and u are sufficiently separated, then we can separate the two linear terms from the first two terms and recover x by solving a linear problem. However, if x and u have a significant overlap, then we need to solve a nonlinear inverse problem, as we do in this paper.
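The autocorrelation identity above can be checked numerically with circular correlations computed via the FFT (a sketch with random stand-ins for x and u):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.random((8, 8))
u = rng.random((8, 8))

def corr(a, b):
    """Circular cross-correlation of real 2-D arrays via the FFT."""
    return np.real(np.fft.ifft2(np.fft.fft2(a) * np.conj(np.fft.fft2(b))))

# Squared Fourier amplitudes of s encode exactly the autocorrelation of s.
s = x + u
auto = np.real(np.fft.ifft2(np.abs(np.fft.fft2(s)) ** 2))
assert np.allclose(auto, corr(s, s))

# Four-term expansion: quadratic in x, known, and linear in x, respectively.
assert np.allclose(auto, corr(x, x) + corr(u, u) + corr(x, u) + corr(u, x))
```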

(a) MNIST
(b) CIFAR10
Figure 8: Performance of our method when the reference is a small square block placed at different positions. Fixing the minimum value at 0, we increased the maximum value of the learned reference. We observe that a small reference placed in a corner performs better than one placed at the center.

5 Conclusion

We presented a framework for learning a reference signal to solve the Fourier phase retrieval problem. The reference signal is learned using a small number of training images using an unrolled network as a solver for the phase retrieval problem. Once learned, the reference signal serves as a prior which significantly improves the efficiency of the signal reconstruction in the phase retrieval process. The learned reference generalizes to a broad class of datasets with different distribution compared to the training samples. We demonstrated the robustness and efficiency of our method through extensive experiments.

Acknowledgments

We would like to thank the anonymous reviewers for their insightful comments and suggestions. The first two authors contributed equally in this work. This research was supported in parts by an ONR grant N00014-19-1-2264, DARPA REVEAL Program, and a Google Faculty Award.

References

  • [1] F. Arab and M. S. Asif (2020) Fourier phase retrieval with arbitrary reference signal. In ICASSP 2020-2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 1479–1483. Cited by: §2, §4.8.
  • [2] S. Bahmani and J. Romberg (2015) Efficient compressive phase retrieval with constrained sensing vectors. In Proc. Adv. in Neural Inf. Proc. Sys. (NeurIPS), pp. 523–531. Cited by: §2.
  • [3] D.A. Barmherzig, J. Sun, P. Li, T.J. Lane, and E. Candès (2019) Holographic phase retrieval and reference design. Inverse Problems. Cited by: §1, §2, §2, §4.8.
  • [4] E. Bostan, U. S. Kamilov, and L. Waller (2018) Learning-based image reconstruction via parallel proximal algorithm. IEEE Signal Processing Letters 25 (7), pp. 989–993. Cited by: §2.
  • [5] T. Cai, X. Li, Z. Ma, et al. (2016) Optimal rates of convergence for noisy sparse phase retrieval via thresholded wirtinger flow. Ann. Stat. 44 (5), pp. 2221–2251. Cited by: §2.
  • [6] E. Candes, X. Li, and M. Soltanolkotabi (2015) Phase retrieval from coded diffraction patterns. Appl. Comput. Harmon. Anal. 39 (2), pp. 277–299. Cited by: §2.
  • [7] E. Candes, X. Li, and M. Soltanolkotabi (2015) Phase retrieval via wirtinger flow: theory and algorithms. IEEE Trans. Inform. Theory 61 (4), pp. 1985–2007. Cited by: §1, §2.
  • [8] E. Candes, T. Strohmer, and V. Voroninski (2013) Phaselift: exact and stable signal recovery from magnitude measurements via convex programming. Comm. Pure Appl. Math. 66 (8), pp. 1241–1274. Cited by: §2.
  • [9] R. Chandra, Z. Zhong, J. Hontz, V. McCulloch, C. Studer, and T. Goldstein (2017) PhasePack: a phase retrieval library. Asilomar Conference on Signals, Systems, and Computers. Cited by: §4.6.
  • [10] H. Chang, Y. Lou, M.K. Ng, and T. Zeng (2016) Phase retrieval from incomplete magnitude information via total variation regularization. SIAM Journal on Scientific Computing 38 (6), pp. A3672–A3695. Cited by: §2.
  • [11] Y. Chen and E. Candes (2015) Solving random quadratic systems of equations is nearly as easy as solving linear systems. In Proc. Adv. in Neural Inf. Proc. Sys. (NeurIPS), pp. 739–747. Cited by: §2, §4.6.
  • [12] Z. Chen, G. Jagatap, S. Nayer, C. Hegde, and N. Vaswani (2018) Low rank Fourier ptychography. In Proc. IEEE Int. Conf. Acoust., Speech, and Signal Processing (ICASSP), pp. 6538–6542. Cited by: §1.
  • [13] S. Diamond, V. Sitzmann, F. Heide, and G. Wetzstein (2017) Unrolled optimization with deep priors. arXiv preprint arXiv:1705.08041. Cited by: §2.
  • [14] J. R. Fienup (1982) Phase retrieval algorithms: a comparison. Applied optics 21 (15), pp. 2758–2769. Cited by: §1, §1, §2.
  • [15] R. W. Gerchberg (1972) A practical algorithm for the determination of phase from image and diffraction plane pictures. Optik 35, pp. 237–246. Cited by: §1, §1, §2.
  • [16] K. Gregor and Y. LeCun (2010) Learning fast approximations of sparse coding. In Proceedings of the 27th International Conference on Machine Learning, pp. 399–406. Cited by: §2.
  • [17] D. Gross, F. Krahmer, and R. Kueng (2017) Improved recovery guarantees for phase retrieval from coded diffraction patterns. Appl. Comput. Harmon. Anal. 42 (1), pp. 37–64. Cited by: §2.
  • [18] M. Guizar-Sicairos and J. R. Fienup (2007) Holography with extended reference by autocorrelation linear differential operation. Optics Express 15 (26), pp. 17592–17612. Cited by: §1, §2, §4.8.
  • [19] K. Hammernik, T. Klatzer, E. Kobler, M. P. Recht, D. K. Sodickson, T. Pock, and F. Knoll (2018) Learning a variational network for reconstruction of accelerated MRI data. Magnetic Resonance in Medicine 79 (6), pp. 3055–3071. Cited by: §2.
  • [20] P. Hand, O. Leong, and V. Voroninski (2018) Phase retrieval under a generative prior. In Proc. Adv. in Neural Inf. Proc. Sys. (NeurIPS), pp. 9154–9164. Cited by: §2.
  • [21] R. Harrison (1993) Phase problem in crystallography. JOSA A 10 (5), pp. 1046–1055. Cited by: §1.
  • [22] R. Hyder, C. Hegde, and M. S. Asif (2019) Fourier phase retrieval with side information using generative prior. In Proc. Asilomar Conf. Signals, Systems, and Computers. Cited by: §1, §2.
  • [23] R. Hyder, V. S., C. Hegde, and M. S. Asif (2019) Alternating phase projected gradient descent with generative priors for solving compressive phase retrieval. In Proc. IEEE Int. Conf. Acoust., Speech, and Signal Processing (ICASSP), pp. 7705–7709. Cited by: §2.
  • [24] K. Jaganathan, S. Oymak, and B. Hassibi (2012) Recovery of sparse 1-d signals from the magnitudes of their fourier transform. In Proc. IEEE Int. Symp. Inform. Theory (ISIT), pp. 1473–1477. Cited by: §2.
  • [25] G. Jagatap, Z. Chen, C. Hegde, and N. Vaswani (2018) Sub-diffraction imaging using Fourier ptychography and structured sparsity. In Proc. IEEE Int. Conf. Acoust., Speech, and Signal Processing (ICASSP), pp. 6493–6497. Cited by: §1.
  • [26] G. Jagatap, Z. Chen, S. Nayer, C. Hegde, and N. Vaswani (2020) Sample efficient Fourier ptychography for structured data. IEEE Transactions on Computational Imaging 6, pp. 344–357. Cited by: §1.
  • [27] G. Jagatap and C. Hegde (2017) Fast, sample-efficient algorithms for structured phase retrieval. In Advances in Neural Information Processing Systems, pp. 4917–4927. Cited by: §1, §2.
  • [28] G. Jagatap and C. Hegde (2019) Algorithmic guarantees for inverse imaging with untrained network priors. In Advances in Neural Information Processing Systems, pp. 14832–14842. Cited by: §1, §2.
  • [29] U. S. Kamilov and H. Mansour (2016) Learning optimal nonlinearities for iterative thresholding algorithms. IEEE Signal Processing Letters 23 (5), pp. 747–751. Cited by: §2.
  • [30] M. Kellman, E. Bostan, M. Chen, and L. Waller (2019) Data-driven design for Fourier ptychographic microscopy. In International Conference on Computational Photography, pp. 1–8. Cited by: §2.
  • [31] M. R. Kellman, E. Bostan, N. A. Repina, and L. Waller (2019) Physics-based learned design: optimized coded-illumination for quantitative phase imaging. IEEE Transactions on Computational Imaging 5 (3), pp. 344–353. Cited by: §2.
  • [32] X. Li and V. Voroninski (2013) Sparse signal recovery from quadratic measurements via convex programming. SIAM J. on Math. Analysis 45 (5), pp. 3019–3033. Cited by: §2.
  • [33] A. Maiden and J. Rodenburg (2009) An improved ptychographical phase retrieval algorithm for diffractive imaging. Ultramicroscopy 109 (10), pp. 1256–1262. Cited by: §2.
  • [34] C. A. Metzler, P. Schniter, A. Veeraraghavan, and R. G. Baraniuk (2018) PrDeep: robust phase retrieval with a flexible deep network. In Proc. Int. Conf. Machine Learning, Cited by: §1, §2, §4.3, §4.4, §4.
  • [35] R. Millane (1990) Phase retrieval in crystallography and optics. JOSA A 7 (3), pp. 394–411. Cited by: §1, §2.
  • [36] P. Netrapalli, P. Jain, and S. Sanghavi (2013) Phase retrieval using alternating minimization. In Proc. Adv. in Neural Inf. Proc. Sys. (NeurIPS), pp. 2796–2804. Cited by: §1, §2.
  • [37] D. D. Nolte (2011) Optical interferometry for biology and medicine. Vol. 1, Springer Science & Business Media. Cited by: §1.
  • [38] H. Ohlsson, A. Yang, R. Dong, and S. Sastry (2012) CPRL–an extension of compressive sensing to the phase retrieval problem. In Proc. Adv. in Neural Inf. Proc. Sys. (NeurIPS), pp. 1367–1375. Cited by: §2.
  • [39] I. Park, R. Middleton, C. R. Coggrave, P. D. Ruiz, and J. M. Coupland (2018) Characterization of the reference wave in a compact digital holographic camera. Applied optics 57 (1), pp. A235–A241. Cited by: §2.
  • [40] Y. Rivenson, Y. Zhang, H. Günaydın, D. Teng, and A. Ozcan (2018) Phase recovery and holographic image reconstruction using deep learning in neural networks. Light: Science & Applications 7 (2), pp. 17141–17141. Cited by: §2, §4.3.
  • [41] J. M. Rodenburg (2008) Ptychography and related diffractive imaging methods. Advances in imaging and electron physics 150, pp. 87–184. Cited by: §2.
  • [42] F. Shamshad and A. Ahmed (2018) Robust compressive phase retrieval via deep generative priors. arXiv preprint arXiv:1808.05854. Cited by: §2.
  • [43] Y. Shechtman, Y. Eldar, O. Cohen, H. Chapman, J. Miao, and M. Segev (2015) Phase retrieval with application to optical imaging: a contemporary overview. IEEE Signal Processing Mag. 32 (3), pp. 87–109. Cited by: §1, §2.
  • [44] T. Tahara, X. Quan, R. Otani, Y. Takaki, and O. Matoba (2018) Digital holography and its multidimensional imaging applications: a review. Microscopy 67 (2), pp. 55–67. Cited by: §2.
  • [45] G. Wang and G. Giannakis (2016) Solving random systems of quadratic equations via truncated generalized gradient flow. In Proc. Adv. in Neural Inf. Proc. Sys. (NeurIPS), pp. 568–576. Cited by: §2.
  • [46] G. Wang, L. Zhang, G. B. Giannakis, M. Akcakaya, and J. Chen (2018) Sparse phase retrieval via truncated amplitude flow. IEEE Trans. Signal Processing 66, pp. 479–491. Cited by: §1, §2.
  • [47] G. Wang, G. Giannakis, Y. Saad, and J. Chen (2017) Solving most systems of random quadratic equations. In Advances in Neural Information Processing Systems, pp. 1867–1877. Cited by: §1.
  • [48] S. Wang, S. Fidler, and R. Urtasun (2016) Proximal deep structured models. In Advances in Neural Information Processing Systems, pp. 865–873. Cited by: §2.
  • [49] K. Wei (2015) Solving systems of phaseless equations via Kaczmarz methods: a proof of concept study. Inverse Problems 31 (12), pp. 125008. Cited by: §4.6.
  • [50] Y. Yang, J. Sun, H. Li, and Z. Xu (2016) Deep ADMM-Net for compressive sensing MRI. In Advances in Neural Information Processing Systems, pp. 10–18. Cited by: §2.
  • [51] Z. Yuan and H. Wang (2019) Phase retrieval with background information. Inverse Problems 35 (5), pp. 054003. Cited by: §2.
  • [52] H. Zhang and Y. Liang (2016) Reshaped Wirtinger flow for solving quadratic system of equations. In Proc. Adv. in Neural Inf. Proc. Sys. (NeurIPS), pp. 2622–2630. Cited by: §2.