Physics-based Learned Design: Optimized Coded-Illumination for Quantitative Phase Imaging

08/10/2018 ∙ by Michael R. Kellman, et al.

Coded-illumination based reconstruction of Quantitative Phase (QP) is generally a non-linear iterative process. Thus, using traditional techniques for experimental design (e.g. condition number optimization or spectral analysis) may not be ideal as they characterize linear measurement formation models for linear reconstructions. Deep neural networks, DNNs, are end-to-end frameworks which can efficiently represent non-linear processes and can be optimized over by training. However, they do not necessarily include knowledge of the system physics and, as a result, require an enormous number of training examples and parameters to properly learn the phase retrieval process. Here, we present a new data-driven approach to optimize the coded-illumination patterns of a light-emitting diode (LED) array microscope to maximize a given QP reconstruction algorithm's performance. We establish a generalized formulation that incorporates the available information about the physics of a measurement model as well as the non-linearities of a reconstruction algorithm into the design problem. Our proposed design method enables an efficient parameterization of the design problem, which allows us to use only a small number of training examples to properly learn a design that generalizes well in the experimental setting without retraining. We image both a well-characterized phase target and mouse fibroblast cells using coded-illumination patterns optimized for a sparsity-based phase reconstruction algorithm. We obtain QP images similar to those of Fourier Ptychographic techniques with 69 measurements using only 2 learned design measurements.




I Introduction

Fig. 1: Learning coded-illumination designs for quantitative phase imaging: (a) The LED-array microscope captures multiple intensity measurements with different coded-illumination source patterns. (b) The measurements are used to computationally reconstruct the sample’s complex-field using an iterative phase recovery algorithm. (c) An optimization procedure for learning optimal coded-illumination patterns updates the illumination design.

Quantitative Phase Imaging (QPI) enables stain-free and label-free microscopy of transparent biological samples in vitro [1, 2]. When compared with coherent methods [3, 4], QPI methods that use partially coherent light achieve higher spatial resolution, more light throughput, and reduced speckle artifacts. Phase contrast may be generated using interference [5, 6] or defocus [7, 8, 9]. More recently, coded-illumination microscopy [10, 11, 12, 13, 14, 15] has been demonstrated as an accurate and inexpensive QPI scheme. To realize coded-illumination, we replace a commercial microscope’s illumination unit with a light-emitting diode (LED) domed array (see Fig. 1) [16]. This provides a flexible hardware platform for various QPI applications including super-resolution [10, 11, 13], multi-contrast [12, 17], and 3D imaging [14, 15].

Coded-illumination microscopy uses asymmetric source patterns [18] and multiple measurements to retrieve 2D phase information. Quantitative Differential Phase Contrast [19, 20, 21, 22] (qDPC), for example, captures four measurements with rotated half-circle source patterns, from which the phase is computationally recovered using a partially coherent linearized model. The practical performance of qDPC is predominantly determined by how the phase information is encoded in (via coded-illumination patterns) and decoded from (via phase recovery) the intensity measurements.

The half-circle illumination designs of qDPC were derived analytically based on a Weak Object Approximation [23, 24, 20, 21] which linearizes the physics in order to make the inverse problem mathematically convenient. This linearized model enables one to derive a phase transfer function and analyze the spatial frequency coverage of any given source pattern [21, 22, 25, 26]; however, the non-linearity of the exact model makes it impossible to predict an optimal source design without knowing the sample’s phase a priori. In addition, these types of analysis are inherently restricted to linear reconstruction algorithms and will not necessarily result in improved accuracy when the phase is retrieved via non-linear iterative methods.

Motivated by the success of deep learning [27] for image reconstruction problems [28, 29, 30, 31, 32, 33], data-driven approaches have been adopted for learning coded-illumination patterns. For instance, researchers have used machine learning to maximize the phase contrast of each coded-illumination measurement [34], to improve accuracy on classification tasks [35], and to reconstruct phase [36]. All of these techniques learn the input-output relationship with a deep convolutional neural network (CNN) using training data. It is not straightforward to include the well-characterized system physics; hence, the CNN is required to learn both the physical measurement formation and the phase reconstruction process. This task requires training of tens to hundreds of thousands of parameters and an immense number of training examples.

Here, we introduce a new data-driven approach to optimizing the source pattern design for coded-illumination phase retrieval by directly including both the system physics and the non-linear nature of a reconstruction algorithm in the learning process. Our approach unrolls the iterations of a generic non-linear reconstruction algorithm to construct an unrolled network [37, 38, 39, 40, 41, 42]. Similar to CNNs, our unrolled network consists of several layers (one for each iteration); however, in our case each layer consists of well-specified operations to incorporate measurement formation and sparse regularization, instead of standard operations such as generic convolutions. The key aspects of our approach are:

  • incorporation of the system physics and reconstruction non-linearities in the illumination design process.

  • efficient parameterization of the unrolled network.

  • incorporation of practical constraints.

  • reduced number of training examples required.

We deploy our data-driven approach to learn improved coded-illumination patterns for phase reconstruction. Each layer of the unrolled network is parameterized by only a few variables (LED brightness values), enabling a design to be learned from a small simulated training dataset. We compare the QPI performance of our learned designs to previous work and demonstrate that our designs generalize well to the experimental setting with biological samples.

II Quantitative Phase Imaging

qDPC recovers a sample’s complex transmittance function from several coded-illumination measurements. The phase recovery optimization algorithm aims to minimize the Euclidean norm of the error between the measurements and the expected measurements formed with the current phase estimate. Using a gradient-based procedure, the phase estimate is iteratively updated until convergence. For a partially coherent source, the phase can be recovered with resolution up to twice the coherent diffraction limit. In this section, we describe the measurement formation process and phase recovery optimization.

II-A System Modelling

A thin sample’s transmission function can be approximated as a 2D complex function, o(r) = e^(−μ(r)) e^(iφ(r)), characterized by its absorption, μ(r), and phase, φ(r) = (2π/λ) Δn(r) d(r), where r are 2D spatial coordinates, λ is the wavelength of the illumination, d(r) is the physical thickness of the sample, and Δn(r) is the change in refractive index from the background. Intensity measurements, I(r), of the sample are a non-linear function of o(r), mathematically described by,

I(r) = |(s(r) ⊙ o(r)) ∗ p(r)|²,   (1)

where |·|² denotes squared absolute value, ∗ denotes convolution, ⊙ denotes elementwise multiplication, s(r) is the illumination’s complex-field at the sample plane and p(r) is the point spread function (PSF) of the microscope. The illumination from each LED is approximated as a tilted plane wave, s_l(r) = e^(i2π u_l·r), with tilt angle determined by the physical position of the LED relative to the microscope [43].
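As a concrete illustration, the measurement model of Eq. 1 can be simulated in a few lines of numpy. The grid setup, unit conventions, and pupil handling below are illustrative assumptions, not the paper’s exact implementation.

```python
import numpy as np

def simulate_measurement(phi, mu, pupil, u_led, px=0.5):
    """Sketch of Eq. (1): intensity of a thin sample under one tilted plane wave.

    phi, mu : 2D phase and absorption maps of the sample
    pupil   : coherent transfer function on the (unshifted) FFT frequency grid
    u_led   : illumination spatial frequency of the LED (cycles per unit length)
    px      : pixel size (same length units)
    """
    n = phi.shape[0]
    o = np.exp(-mu) * np.exp(1j * phi)          # o(r) = e^{-mu(r)} e^{i phi(r)}
    x = np.arange(n) * px
    X, Y = np.meshgrid(x, x)
    s = np.exp(1j * 2 * np.pi * (u_led[0] * X + u_led[1] * Y))  # tilted plane wave
    # (s . o) convolved with the PSF == multiplied by the pupil in Fourier space
    field = np.fft.ifft2(np.fft.fft2(s * o) * pupil)
    return np.abs(field) ** 2                   # I(r) = |field|^2
```

A quick sanity check of the model: with an all-pass pupil the field is unchanged, so a pure-phase object (μ = 0) yields a unit-intensity image, i.e. no phase contrast without pupil or illumination asymmetry.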

Because the measured image in Eq. 1 is non-linear with respect to the sample’s transmission function, recovering phase generally requires non-convex optimization. However, biological samples in closely index-matched fluid have a small scatter-scatter term. This means that a weak object approximation can be made, linearizing the measurement formation model such that phase recovery requires only a linear deconvolution of the measurements with their respective weak object transfer functions (WOTFs) [23, 20, 24, 22, 21]. Further, unstained biological samples are predominantly phase objects since they are only weakly absorbing (i.e. μ(r) is small). With these approximations, we can express each intensity measurement as a linear system with contributions from the background and phase contrast. In Fourier space,

Î(u) = B δ(u) + H_ph(u) Φ(u),   (2)

where ˆ· denotes Fourier transform, u are 2D spatial-frequency coordinates, B δ(u) is the measurement’s background energy concentrated at the DC, Φ(u) is the Fourier transform of the phase, and H_ph(u) is the phase WOTF. The phase WOTFs are a function of the illumination source and the pupil distribution of the microscope [21]. For a single LED the WOTF is:

H_ph^(l)(u) = i [ ((S_l P) ⋆ P)(u) − ((S_l P) ⋆ P)*(−u) ],   (3)

where ⋆ denotes the correlation operator, defined as (f ⋆ g)(u) = ∫ f*(u′) g(u′ + u) du′ for u′ in the domain of f and g, S_l(u) = δ(u − u_l) is the single-LED source distribution, and P(u) is the microscope’s pupil function.

In [21], multiple LEDs are turned on simultaneously to increase signal-to-noise (SNR) and improve phase contrast. Because the fields generated by each LED’s illumination are spatially incoherent with each other, the measurement from multiple LEDs will simply be the weighted sum of each LED’s individual measurement, where the weights correspond to the LEDs’ brightness values. The phase WOTF for illumination by multiple LEDs will also be the weighted sum of the single-LED phase WOTFs. Mathematically,

H_ph(u) = Σ_{l ∈ 𝒮} c_l H_ph^(l)(u),   (4)

where 𝒮 is the set of LEDs turned on and c_l are the LEDs’ brightness values.
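On a discrete, centered frequency grid, Eqs. 3 and 4 can be sketched as follows; the grid conventions (DC at index n//2, LED offset in pixels) and the function names are our illustrative assumptions.

```python
import numpy as np

def phase_wotf_single(P, lx, ly):
    """Single-LED phase WOTF on a centered grid (DC at index n//2).

    For a point source S_l(u) = delta(u - u_l), Eq. (3) reduces to
    i * [ P*(u_l) P(u + u_l) - P(u_l) P*(u_l - u) ]; (lx, ly) is u_l in pixels.
    """
    n0, n1 = P.shape
    c0, c1 = n0 // 2, n1 // 2
    P_l = P[c0 + ly, c1 + lx]                                  # P(u_l)
    P_plus = np.roll(P, (-ly, -lx), axis=(0, 1))               # P(u + u_l)
    P_flip = np.roll(np.flip(P, (0, 1)), (1, 1), axis=(0, 1))  # P(-u)
    P_minus = np.roll(P_flip, (ly, lx), axis=(0, 1))           # P(u_l - u)
    return 1j * (np.conj(P_l) * P_plus - P_l * np.conj(P_minus))

def phase_wotf_multi(wotfs, c):
    """Eq. (4): weighted sum of single-LED WOTFs (c holds LED brightnesses)."""
    return np.tensordot(c, wotfs, axes=1)
```

Two properties follow directly from the expression and make useful checks: the phase WOTF vanishes at DC (the background carries no phase contrast), and it is Hermitian symmetric, H_ph(−u) = H_ph*(u), as required for a real-valued intensity image.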

Following common practice [44], we discretize the 2D spatial distributions and format them as vectors (bold lower case) (e.g. h represents the transfer function’s 2D spatial-frequency distribution and φ represents the 2D spatial phase distribution). The measurements are then described in Fourier space as y = h ⊙ Fφ, with system function A = diag(h) F, where F denotes the discrete Fourier transform. (In practice, y typically refers to the so-called flattened image, where the background energy in (2) is removed via background subtraction.)

Based on this model, we define Y as the matrix whose columns are the Fourier transforms of the L single-LED measurements. Then, C is defined as the matrix of single-LED weights for each of the K measurements, and c_k is the kth column of C. The product Y c_k simulates the kth multiple-LED measurement. Similarly, we define W as the matrix whose columns are the L single-LED phase WOTFs, such that the product W c_k gives the corresponding multiple-LED phase WOTF for the kth measurement.
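In code, this incoherent multiplexing is just a matrix-vector product over the LED axis; the shapes below (F frequency samples, L LEDs, K measurements) and the names Y, C are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
F, L, K = 64, 5, 2            # frequency samples, LEDs, measurements
# Columns of Y hold the (flattened) spectra of the L single-LED measurements
Y = rng.standard_normal((F, L)) + 1j * rng.standard_normal((F, L))
C = rng.random((L, K))        # LED brightness weights, one column per measurement

y_k = Y @ C[:, 0]             # k-th multiplexed measurement spectrum
# identical to the explicit weighted sum over LEDs
assert np.allclose(y_k, sum(C[l, 0] * Y[:, l] for l in range(L)))
```

The same product with the matrix of single-LED WOTFs, W @ C[:, k], yields the corresponding multiplexed transfer function, which is what makes the design differentiable in C.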

II-B Phase Recovery

Phase recovery using the forward model in Sec. II-A can be formulated as a regularized linear inverse problem,

φ̂ = arg min_φ Σ_{k=1}^{K} ½ ‖y_k − A_k φ‖₂² + R(φ),   (5)

where φ̂ is the recovered phase, K is the number of measurements acquired, y_k is the Fourier transform of the kth measurement, A_k is the system function for the kth measurement and R(·) is a user-chosen regularizer. We solve this optimization problem efficiently using the accelerated proximal gradient descent (APGD) algorithm by iteratively applying an acceleration update, a gradient update and a proximal update [45, 46]. The algorithm is detailed in Alg. 1, where α is the gradient step size, N is the number of iterations, z and w are intermediate variables, μ_t is the acceleration parameter derived by the recursion μ_t = (β_{t−1} − 1)/β_t with β_t = (1 + √(1 + 4β_{t−1}²))/2 [46], and prox_R is the proximal operator corresponding to the user-chosen regularizer R [45].

Fig. 2: Unrolled physics-based network: Feed-forward schematic for the unrolled accelerated proximal gradient descent (APGD) network of N iterations (dark blue box). The network takes intensity measurements, parameterized by the coded-illumination design, as input and outputs the reconstructed phase. Finally, the output is compared with the ground truth phase using a user-chosen loss function (pink box). The inset into a single iteration (light blue box) shows each iteration’s three steps: acceleration update, gradient update, and proximal update.
1: procedure APGD({y_k}, φ^(0), α, N)
2:     φ^(−1) ← φ^(0)
3:     for t = 1 … N do
4:         z^(t) ← φ^(t−1) + μ_t (φ^(t−1) − φ^(t−2))      ▷ acceleration update
5:         w^(t) ← z^(t) − α Σ_k A_kᴴ (A_k z^(t) − y_k)    ▷ gradient update
6:         φ^(t) ← prox_R(w^(t))                           ▷ proximal update
7:     end for
8:     return φ^(N)
9: end procedure
Algorithm 1 Accelerated Proximal Gradient Descent (APGD) for Phase Recovery
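For concreteness, below is a minimal, generic APGD implementation in the style of Alg. 1, demonstrated on a toy least-squares-plus-ℓ1 problem. The FISTA-style momentum recursion and the toy problem are illustrative assumptions, not the paper’s phase-retrieval operators.

```python
import numpy as np

def apgd(grad_f, prox_r, x0, alpha, n_iter):
    """Accelerated proximal gradient descent, mirroring the three updates of Alg. 1."""
    x_prev, x = x0.copy(), x0.copy()
    t_prev = 1.0
    for _ in range(n_iter):
        t = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t_prev ** 2))
        mu = (t_prev - 1.0) / t
        z = x + mu * (x - x_prev)        # acceleration update
        w = z - alpha * grad_f(z)        # gradient update
        x_prev, x = x, prox_r(w)         # proximal update
        t_prev = t
    return x

# Toy example: min 0.5*||A x - y||^2 + tau*||x||_1 with the soft-thresholding prox
A = np.array([[2.0, 0.0], [0.0, 1.0]])
y = np.array([4.0, 2.0])
grad_f = lambda v: A.T @ (A @ v - y)
prox_r = lambda w: np.sign(w) * np.maximum(np.abs(w) - 0.01, 0.0)  # alpha*tau = 0.01
x_hat = apgd(grad_f, prox_r, np.zeros(2), alpha=0.2, n_iter=200)
```

Because the problem is separable, the minimizer is known in closed form (1.9875 and 1.95 here), which makes the sketch easy to verify; swapping in the multi-measurement data term and a TV prox recovers the phase-retrieval setting.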

III Physics-Based Learned Design

Given the phase recovery algorithm in Sec. II-B, we now describe our main contribution of learning the coded-illumination designs for a given reconstruction algorithm and training set.

III-A Unrolled Physics-based Network

Traditionally, DNNs contain many layers of weighted linear mixtures and non-linear activation functions [27]. Here, we consider specific linear functions which capture the system physics of measurement formation and specific non-linear activation functions which promote sparsity. Starting from Alg. 1, we treat each iteration as a layer such that when unrolled they form a network of N layers (Fig. 2). Each layer contains a module for each of the iterative algorithm’s updates (i.e. an acceleration module, a gradient module (incorporates system physics), and a proximal module (incorporates sparsity)). The regularization and step size parameters specified for Alg. 1 are fixed. The network’s inputs are the coded-illumination measurements and the network’s output is the reconstructed phase, φ^(N). The design parameters of the network, which will be learned, govern the relative brightness of the LEDs and are incorporated in the measurement formation and the system WOTFs.

III-B Learning Objective

Our learning objective is to minimize the phase reconstruction error of the training data over the space of possible LED configurations, subject to constraints that enforce physical feasibility and eliminate degenerate and trivial solutions:

C* = arg min_C Σ_{n=1}^{T} ℓ( φ̂_n(C), φ_n )   (8)

s.t. c_k ≥ 0 ∀k (non-negativity) (9)
     1ᵀ c_k = 1 ∀k (scale) (10)
     m_k ⊙ c_k = 0 ∀k (geometric) (11)

Here, {(Y_n, φ_n)} are the T training pairs, for which Y_n is a matrix of the Fourier transforms of single-LED measurements for the nth sample with optical phase φ_n, and φ̂_n(C) is the phase reconstructed by Alg. 1 from the multiplexed measurements Y_n C. ⊙ is the elementwise product operator, m_k is a geometric constraint mask for the kth measurement, and 0 is the null vector.

The non-negativity constraint (Eq. 9) prevents non-physical solutions by enforcing the brightness of each LED to be greater than or equal to zero. This is enforced by projecting the parameters onto the set of non-negative real numbers. The scale constraint (Eq. 10) enforces that each coded-illumination design must have weights with sum equal to 1, in order to eliminate arbitrary scalings of the same design. This is enforced by scaling the parameters for each measurement such that their sum is one. The geometric constraint (Eq. 11) enforces that the coded-illumination designs do not use conjugate-symmetric LED pairs to illuminate the sample within the same measurement, since these would also result in degenerate solutions (e.g. two symmetric LEDs produce opposite phase contrast measurements that would cancel each other out). To prevent this, we force the source patterns for each measurement to reside within only one of the major semi-circle sets (e.g. top, bottom, left, right). This constraint is enforced by setting the LED brightnesses outside the allowed semi-circle to zero.
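The three projections described above compose naturally in the stated order; a minimal sketch (the function name and boolean-mask convention are ours):

```python
import numpy as np

def project_design(c, mask):
    """Project one measurement's LED weights onto the design constraints,
    applied in the order used in the text: non-negativity, geometric, scale."""
    c = np.maximum(c, 0.0)        # non-negativity: brightness >= 0 (Eq. 9)
    c = np.where(mask, c, 0.0)    # geometric: zero LEDs outside the allowed semi-circle (Eq. 11)
    s = c.sum()
    return c / s if s > 0 else c  # scale: weights sum to one (Eq. 10)
```

Note that applying the projections sequentially is a practical heuristic rather than an exact projection onto the intersection of the three sets, but the chosen order guarantees the output satisfies all three constraints.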

We solve Eq. 8 iteratively via accelerated projected gradient descent (Alg. 2). At each iteration, the coded-illumination design for each measurement is updated with the analytical gradient, projected onto the constraints (denoted by P) and updated again with a contribution from the previous iteration (weighted by the acceleration parameter μ_i). P enforces the constraints in the following order: non-negativity, geometric, and scale.

1: procedure PBLD(C^(0), {(Y_n, φ_n)}, η, M)
2:     for i = 1 … M do                            ▷ Gradient descent loop
3:         for n = 1 … T do                        ▷ Training data loop
4:             g_n ← BP(C^(i−1), Y_n, φ_n)         ▷ per-example gradient (Alg. 3)
5:             g ← g + g_n                          ▷ accumulate batch gradient
6:         end for
7:         V^(i) ← P(C^(i−1) − η g)                ▷ projected gradient update
8:         C^(i) ← V^(i) + μ_i (V^(i) − V^(i−1))   ▷ acceleration update
9:     end for
10:    return C^(M)
11: end procedure
Algorithm 2 Physics-based Learned Design Algorithm

III-C Gradient Update

The gradient of the loss function (Eq. 8) with respect to the design parameters has contributions at every layer of the unrolled network through both the measurement terms, Y c_k, and the phase WOTF terms, W c_k, for each measurement k. Here, we outline our algorithm for updating the coded-illumination design weights via a two-step procedure: backpropagating the error from layer-to-layer and computing each layer’s gradient contribution. For simplicity, we outline the gradient update for only a single training example, as the gradient for all the training examples is the sum of their individual gradients.

Unlike pure gradient descent, where each iteration’s estimate only depends on the previous one’s, accelerated methods like Alg. 1 linearly combine the previous two iterations’ estimates to improve convergence. As a consequence, backpropagating error from layer-to-layer requires contributions from two successive layers. Specifically, we compute the error at all layers with the recursive relation,

q^(t) = (∂φ^(t+1)/∂φ^(t))ᵀ q^(t+1) + (∂φ^(t+2)/∂φ^(t))ᵀ q^(t+2),

initialized with q^(N) = ∂ℓ/∂φ^(N), where each partial gradient constitutes a single step in Alg. 1 (fully derived in the supplement).

With the backpropagated error at each layer, we compute the gradient of the loss function with respect to c_k as,

∂ℓ/∂c_k = Σ_{t=1}^{N} (∂φ^(t)/∂c_k)ᵀ q^(t),

for which,

∂φ^(t)/∂c_k = (∂φ^(t)/∂w^(t)) (∂w^(t)/∂c_k).

Here, ∂φ^(t)/∂w^(t) backpropagates the error through the proximal operator and the other partials with respect to c_k relate the backpropagated error at each layer to the changes in c_k. Derivations of these partial gradients are included in the supplementary material. In Alg. 3, we unite these two steps to form a recursive algorithm which efficiently computes the analytic gradient for a single training example. Alternatively, general purpose auto-differentiation included in learning libraries (e.g. PyTorch, TensorFlow) can be used to perform the gradient updates.
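To make the two-step structure concrete, the toy example below backpropagates through an unrolled (unaccelerated, identity-prox) reconstruction with a single scalar "design" parameter h, and checks the analytic gradient against finite differences. It is a stand-in for Alg. 3 under these simplifying assumptions, not the paper’s full derivation.

```python
def unrolled_grad(h, y, target, alpha=0.1, n_layers=8, x0=0.0):
    """Loss and its gradient w.r.t. a scalar design parameter h, obtained by
    backpropagating through an unrolled gradient-descent reconstruction of
    the scalar model y = h * x (prox = identity, no acceleration)."""
    # forward pass: unroll n_layers gradient steps of f(x) = 0.5*(h*x - y)^2
    xs = [x0]
    for _ in range(n_layers):
        xs.append(xs[-1] - alpha * h * (h * xs[-1] - y))
    loss = (xs[-1] - target) ** 2
    # backward pass: layer-to-layer error plus each layer's direct contribution
    q = 2.0 * (xs[-1] - target)                   # error at the last layer
    grad = 0.0
    for t in range(n_layers - 1, -1, -1):
        grad += q * (-alpha * (2.0 * h * xs[t] - y))  # direct dependence on h
        q *= 1.0 - alpha * h ** 2                     # backpropagate to layer t
    return loss, grad
```

The accelerated case adds the second recursion term shown above (error flowing back two layers), and the multi-measurement case replaces the scalar h with the design weights c_k entering through Y c_k and W c_k.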

1: procedure BP(C, Y, φ)                        ▷ gradient for one training example
2:     for t = N … 1 do                          ▷ reverse pass over the unrolled layers
3:         backpropagate q^(t) through the proximal update
4:         backpropagate q^(t) through the gradient update
5:         backpropagate q^(t) through the acceleration update
6:         accumulate layer t’s contribution to ∂ℓ/∂C
7:     end for
8:     return ∂ℓ/∂C
9: end procedure
Algorithm 3 Gradient Update for Single Training Example
Fig. 3: Coded-illumination designs and their corresponding phase weak object transfer functions (WOTFs) for: (a) Traditional qDPC and (b) learned designs for the case where 4, 3, or 2 measurements are allowed for each phase reconstruction. The illumination source patterns are in the upper left corners, with gray semi-circles denoting where the LEDs are constrained to be “off”.

IV Results

Our proposed method learns the coded-illumination design for a given reconstruction and training set (Fig. 3b), yet up to this point we have not detailed specific parameters of our phase reconstruction. In our results, we set the parameters of our reconstruction algorithm (Alg. 1) to have a fixed CPU time by fixing the number of iterations and the step size (see supplement for parameter analysis). In addition, the regularization term, R(·), has been defined generally (e.g. ℓ1 penalty, total variation (TV) penalty [47], BM3D [48]). Here, we choose to enforce TV-based sparsity:

R(φ) = τ ‖D φ‖₁,   (17)

where τ is set to trade off the TV cost with the data consistency cost and D = [D_xᵀ D_yᵀ]ᵀ stacks the first-order difference operators along the image dimensions. We efficiently implement the proximal operator of Eq. 17 in closed form via the parallel proximal method [42, 49, 50] (details in supplement).
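The workhorse inside any such closed-form TV proximal step is elementwise soft-thresholding, the exact proximal operator of a weighted ℓ1 norm. The demonstration on first-order differences below is illustrative of the prior’s effect, not the paper’s full parallel-proximal implementation.

```python
import numpy as np

def soft_threshold(v, tau):
    """prox_{tau*||.||_1}(v): elementwise shrinkage toward zero by tau."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

# Shrinking the first-order differences D*phi suppresses small variations
# (noise) while preserving large jumps (edges) -- the intent of the TV prior.
phi = np.array([0.0, 0.05, 1.0, 1.02, 1.0])
d = np.diff(phi)                    # D*phi = [0.05, 0.95, 0.02, -0.02]
d_shrunk = soft_threshold(d, 0.1)   # -> [0.0, 0.85, 0.0, 0.0]
```

Larger τ kills more of the small differences, which is exactly the TV-versus-data-consistency trade-off described above.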

IV-A Learning

Fig. 4: Phase reconstruction results using simulated measurements with different coded-illumination designs. We compare results from: traditional qDPC (half-circles), annular illumination, condition number optimization, A-optimal design, and our proposed physics-based learned designs. We show results for the cases of (a) four, (b) three, and (c) two measurements allowed for each phase reconstruction. Absolute error maps are shown below each reconstruction.

To train our coded-illumination design parameters using Alg. 2, we generate a dataset of 100 examples (90 for training, 10 for testing). Each example contains ground truth phase from a small region of a larger image and simulated single-LED measurements (using Eq. 1). The LEDs are uniformly spaced within a circle such that each single-LED intensity measurement is a brightfield measurement. The phase WOTFs and simulated training measurements are generated using the physical parameters of the experimental system described in Sec. IV-B. To train, we use the ℓ2 cost between reconstructed phase and ground truth phase as our loss function and approximate the full gradient of Eq. 8 with a batch gradient from random batches of the training pairs at each iteration. We use a fixed learning rate (training and testing convergence curves are provided in the supplement). The training is performed on a multi-core CPU (dual-socket Intel Xeon E5 processor @ 2.1GHz) and batch updates are computed in parallel, with each training example on a single core. Each batch update takes ~6 seconds; 200 updates are performed, resulting in a total training time of ~20 minutes.

Table I: PSNR Results: Average and standard deviation PSNR (dB) of phase reconstructions from the simulated testing examples using different illumination schemes and different numbers of measurements (format: mean ± standard deviation).

# Meas. | Random Illumination | Traditional qDPC | Annular Illumination | Cond. Number Optimization | A-optimal Design | Physics-based Learned Design
4 | 12.30 ± 2.12 | 15.67 ± 2.19 | 20.40 ± 2.09 | 20.37 ± 2.41 | 17.94 ± 2.54 | 28.46 ± 2.50
3 | 12.33 ± 2.12 | 15.28 ± 2.18 | 20.44 ± 2.26 | 19.33 ± 2.03 | 18.05 ± 2.59 | 28.04 ± 2.59
2 | 12.25 ± 2.12 | 14.87 ± 2.23 | 20.21 ± 2.24 | 17.19 ± 2.28 | 18.08 ± 2.64 | 23.73 ± 2.18

Fig. 5: USAF phase target reconstructions: Experimental comparison between phase results with (a) Fourier Ptychography (FP) using 69 images, (b) traditional qDPC and (c) learned designs, for the case of 4, 3, and 2 measurements. Error maps show the difference from the FP reconstruction. (d) Cross-sections show that phase from our learned designs (long-dashed red) is closer to that of FP (solid blue) than traditional qDPC (short-dashed green).
Fig. 6: 3T3 mouse fibroblast cells reconstructions: Experimental comparison between phase results with (a) Fourier Ptychography (FP) using 69 measurements, (b) traditional qDPC and (c) learned designs, for the case of 4, 3, and 2 measurements. Error maps show the difference from the FP reconstruction. (d) Cross-sections show that phase from our learned designs (long-dashed red) is closer to that of FP (solid blue) than traditional qDPC (short-dashed green).

Traditional qDPC uses 4 measurements to adequately cover frequency space. Our learned designs are more efficient and may require fewer measurements; hence, we show learned designs for the cases of 4, 3 and 2 measurements. The designs and their corresponding phase WOTFs are shown in Fig. 3.

Comparing our learned designs with previous work, Fig. 4 shows the phase reconstruction for a single simulated test example using 4, 3 and 2 measurements. The ground truth phase is compared with the phase reconstructed using traditional qDPC designs [21], annular illumination designs [21], condition number optimized designs [51], A-optimal designs [52], and our physics-based learned designs. Table I reports the peak SNR (PSNR) statistics (mean and standard deviation) for the phase reconstructions evaluated on our set of 10 testing examples. Our learned designs give significant improvement, recovering both the high and low frequencies more accurately.

IV-B Experimental Validation

To demonstrate that our learned designs generalize well in the experimental setting, we implement our method on an LED array microscope. A commercial Nikon TE300 microscope is equipped with a custom quasi-Dome [16] illumination system (581 programmable RGB LEDs) and a PCO.edge 5.5 monochrome camera. We image two samples: a USAF phase target (Benchmark Technologies) and fixed 3T3 mouse fibroblast cells (prepared as detailed in the supplement). In order to validate our method, we compare results against phase experimentally estimated via pupil-corrected Fourier Ptychography (FP) [43, 53, 13] with equivalent resolution. FP is expected to have good accuracy, since it uses significantly more measurements (69 single-LED measurements) and a non-linear reconstruction process.

Using the USAF target, we compare phase reconstructions from FP with traditional qDPC and our learned design measurements (Fig. 5). Traditional qDPC reconstructions consistently under-estimate the phase values. However, phase reconstructions using our learned design measurements are similar to phase estimated with FP. As the number of measurements is reduced, the performance quality of the reconstruction using traditional qDPC degrades, while the reconstruction using the learned design remains accurate.

To demonstrate our method with biological samples, we repeated the experiments with fixed 3T3 mouse fibroblast cells. Figure 6 shows that phase reconstructions from traditional qDPC again consistently under-estimate phase values, while phase reconstructions using learned design measurements match the phase estimated with FP well.

V Discussion

Our proposed experimental design method efficiently learns the coded-illumination designs by incorporating both the system physics and the non-linear nature of iterative phase recovery. Learned designs with only 2 measurements can efficiently reconstruct phase with quality similar to Fourier Ptychography (69 measurements) and better than qDPC (4 measurements), giving an improvement in temporal resolution by a factor of 2 over traditional qDPC with far fewer measurements than FP. Additionally, we demonstrate (Table I) that the performance of our designs on a set of testing examples is superior to previously-proposed coded-illumination designs. Visually, our learned design reconstructions closely resemble the ground truth phase, with both low-frequency and high-frequency information accurately recovered.

By parameterizing our learning problem with only a few weights per measurement, our method can efficiently learn an experimental design with a small simulated dataset. This enables fast training and reduces computing requirements significantly. Obtaining large experimental datasets for training may be difficult in microscopy, so it is important that our method can be trained on simulated data only. Experimental results in Sec. IV-B show similar quality to simulated results, with both using the designs learned from simulated data only.

Finally, phase recovery with the learned designs’ measurements is trained with a given number of reconstruction iterations (e.g. determined by a CPU budget). This makes our method particularly well-suited for real-time processing. qDPC can also be implemented in real-time, but limiting the compute time for the inverse problem (by restricting the number of iterations) limits convergence and causes low-frequency artifacts. Our learned designs incorporate the number of iterations (and hence processing time) into the design process, producing high-quality phase reconstructions within a reasonable compute time.

VI Outlook

Our method is general to the problem of experimental design. Similar to QPI, many fields (e.g. magnetic resonance imaging (MRI), fluorescence microscopy) use physics-based non-linear iterative reconstruction techniques to achieve state-of-the-art performance. With the correct model parameterization and physically-relevant constraints, our method could be applied to learn optimal designs for these applications (e.g. undersampling patterns for compressed sensing MRI [54], PSFs for fluorescence microscopy [55]).

Requirements for applying our method are simple: the reconstruction algorithm’s updates must be differentiable (e.g. gradient update and proximal update) so that analytic gradients of the learning loss can be computed with respect to the design parameters. Of practical importance, the proximal operator of the regularizer should be chosen so that it has a closed form. While this is not a strict requirement, if the operator itself requires an additional iterative optimization, error will have to be backpropagated through an excessive number of iterations. Here, we choose to penalize anisotropic TV, whose proximal operator can be approximated in closed form [50]. Further, including an acceleration update improves the convergence of gradient-based reconstructions. As a result, the unrolled network can be constructed using fewer layers than its unaccelerated counterpart. This will reduce both computation time and training requirements.

VII Conclusion

We have presented a general framework for incorporating the non-linearities of regularized reconstruction and known system physics to learn optimal experimental design. Here, we have applied this method to learn coded-illumination source designs for quantitative phase recovery. Our coded-illumination designs can improve the temporal resolution of the acquisition and enable real-time processing, while maintaining high accuracy. We demonstrated here that our learned designs achieve high-quality reconstructions experimentally without the need for retraining.

Funding Information

This work was supported by STROBE: A National Science Foundation Science & Technology Center under Grant No. DMR 1548924 and by the Gordon and Betty Moore Foundation’s Data-Driven Discovery Initiative through Grant GBMF4562 to Laura Waller (UC Berkeley). Laura Waller is a Chan Zuckerberg Biohub investigator. Michael R. Kellman is additionally supported by the National Science Foundation’s Graduate Research Fellowship under Grant No. DGE 1106400. Emrah Bostan’s research is supported by the Swiss National Science Foundation (SNSF) under grant P2ELP2 172278.


The authors would like to thank Professor Michael Lustig for his guidance and advice.


  • [1] G. Popescu, Quantitative Phase Imaging of Cells and Tissues.   McGraw Hill Professional, Mar 2011.
  • [2] M. Mir, B. Bhaduri, R. Wang, R. Zhu, and G. Popescu, Quantitative Phase Imaging.   Elsevier Amsterdam, The Netherlands, Jul. 2012.
  • [3] B. Rappaz, B. Breton, E. Shaffer, and G. Turcatti, “Digital holographic microscopy: A quantitative label-free microscopy technique for phenotypic screening,” Combinatorial Chemistry & High Throughput Screening, vol. 17, no. 1, pp. 80–88, January 2014.
  • [4] E. Cuche, P. Marquet, and C. Depeursinge, “Simultaneous amplitude-contrast and quantitative phase-contrast microscopy by numerical reconstruction of Fresnel off-axis holograms,” Applied Optics, vol. 38, no. 34, pp. 6994–7001, Dec. 1999.
  • [5] B. Bhaduri, H. V. Pham, M. Mir, and G. Popescu, “Diffraction phase microscopy with white light,” Optics Letters, vol. 37, no. 6, pp. 1094–1096, Mar. 2012.
  • [6] Z. Wang, L. Millet, M. Mir, H. Ding, S. Unarunotai, J. Rogers, M. U. Gillette, and G. Popescu, “Spatial light interference microscopy (SLIM),” Optics Express, vol. 19, no. 2, pp. 1016–1026, Jan. 2011.
  • [7] T. E. Gureyev, A. Roberts, and K. A. Nugent, “Partially coherent fields, the transport-of-intensity equation, and phase uniqueness,” Journal of the Optical Society of America A, vol. 12, no. 9, pp. 1942–1946, Sep. 1995.
  • [8] N. Streibl, “Phase imaging by the transport equations of intensity,” Optics Communications, vol. 49, no. 1, pp. 6–10, Feb 1984.
  • [9] L. Waller, L. Tian, and G. Barbastathis, “Transport of intensity phase-amplitude imaging with higher order intensity derivatives,” Optics Express, vol. 18, no. 12, pp. 12 552–12 561, Jun. 2010.
  • [10] L. Tian, Z. Liu, L.-H. Yeh, M. Chen, J. Zhong, and L. Waller, “Computational illumination for high-speed in vitro Fourier ptychographic microscopy,” Optica, vol. 2, no. 10, pp. 904–908, Oct. 2015.
  • [11] G. Zheng, R. Horstmeyer, and C. Yang, “Wide-field, high-resolution Fourier ptychographic microscopy,” Nature Photonics, vol. 7, no. 9, pp. 739–745, Jul. 2013.
  • [12] G. Zheng, C. Kolner, and C. Yang, “Microscopy refocusing and dark-field imaging by using a simple LED array,” Optics Letters, vol. 36, no. 20, pp. 3987–3989, Oct. 2011.
  • [13] L. Tian, X. Li, K. Ramchandran, and L. Waller, “Multiplexed coded illumination for Fourier Ptychography with an LED array microscope,” Biomedical Optics Express, vol. 5, no. 7, pp. 1–14, Jun. 2014.
  • [14] L. Tian and L. Waller, “3D intensity and phase imaging from light field measurements in an LED array microscope,” Optica, vol. 2, no. 2, pp. 104–111, Feb. 2015.
  • [15] R. Ling, W. Tahir, H.-Y. Lin, H. Lee, and L. Tian, “High-throughput intensity diffraction tomography with a computational microscope,” Biomedical Optics Express, vol. 9, no. 5, pp. 2130–2141, Jan. 2018.
  • [16] Z. Phillips, R. Eckert, and L. Waller, “Quasi-dome: A self-calibrated high-NA LED illuminator for Fourier ptychography,” in Imaging Systems and Applications, Jun. 2017, paper IW4E.5.
  • [17] Z. Liu, L. Tian, S. Liu, and L. Waller, “Real-time brightfield, darkfield, and phase contrast imaging in a light-emitting diode array microscope,” Journal of Biomedical Optics, vol. 19, no. 10, p. 106002, Oct. 2014.
  • [18] B. Kachar, “Asymmetric illumination contrast: a method of image formation for video light microscopy,” Science, vol. 227, no. 4688, pp. 766–768, Feb. 1985.
  • [19] D. K. Hamilton and C. J. R. Sheppard, “Differential phase contrast in scanning optical microscopy,” Journal of Microscopy, vol. 133, no. 1, pp. 27–39, Jan. 1984.
  • [20] S. B. Mehta and C. J. R. Sheppard, “Quantitative phase-gradient imaging at high resolution with asymmetric illumination-based differential phase contrast,” Optics Letters, vol. 34, no. 13, pp. 1924–1926, Jul. 2009.
  • [21] L. Tian and L. Waller, “Quantitative differential phase contrast imaging in an LED array microscope,” Optics Express, vol. 23, no. 9, pp. 11394–11403, May 2015.
  • [22] R. A. Claus, P. P. Naulleau, A. R. Neureuther, and L. Waller, “Quantitative phase retrieval with arbitrary pupil and illumination,” Optics Express, vol. 23, no. 20, pp. 26672–26682, Oct. 2015.
  • [23] D. K. Hamilton, C. J. R. Sheppard, and T. Wilson, “Improved imaging of phase gradients in scanning optical microscopy,” Journal of Microscopy, vol. 135, no. 3, pp. 275–286, Sep. 1984.
  • [24] N. Streibl, “Three-dimensional imaging by a microscope,” Journal of the Optical Society of America A, vol. 2, no. 2, pp. 121–127, Feb. 1985.
  • [25] J. Li, Q. Chen, J. Zhang, Y. Zhang, L. Lu, and C. Zuo, “Efficient quantitative phase microscopy using programmable annular LED illumination,” Biomedical Optics Express, vol. 8, no. 10, pp. 4687–4705, Oct. 2017.
  • [26] Y.-Z. Lin, K.-Y. Huang, and Y. Luo, “Quantitative differential phase contrast imaging at high resolution with radially asymmetric illumination,” Optics Letters, vol. 43, no. 12, pp. 2973–2976, Jun. 2018.
  • [27] Y. LeCun, Y. Bengio, and G. Hinton, “Deep learning,” Nature, vol. 521, no. 7553, pp. 436–444, May 2015.
  • [28] K. H. Jin, M. T. McCann, E. Froustey, and M. Unser, “Deep convolutional neural network for inverse problems in imaging,” IEEE Transactions on Image Processing, vol. 26, no. 9, pp. 4509–4522, Jun. 2017.
  • [29] S. Wang, Z. Su, L. Ying, X. Peng, S. Zhu, F. Liang, D. Feng, and D. Liang, “Accelerating magnetic resonance imaging via deep learning,” in 2016 IEEE 13th International Symposium on Biomedical Imaging (ISBI), Apr. 2016, pp. 514–517.
  • [30] Y. Rivenson, Y. Zhang, H. Günaydın, D. Teng, and A. Ozcan, “Phase recovery and holographic image reconstruction using deep learning in neural networks,” Light: Science & Applications, vol. 7, no. 2, p. 17141, Feb. 2018.
  • [31] A. Sinha, J. Lee, S. Li, and G. Barbastathis, “Lensless computational imaging through deep learning,” Optica, vol. 4, no. 9, pp. 1117–1125, Sep. 2017.
  • [32] Y. Rivenson, Z. Göröcs, H. Günaydin, Y. Zhang, H. Wang, and A. Ozcan, “Deep learning microscopy,” Optica, vol. 4, no. 11, pp. 1437–1443, Nov. 2017.
  • [33] T. Nguyen, Y. Xue, Y. Li, L. Tian, and G. Nehmetallah, “Deep learning approach for Fourier ptychography microscopy,” arXiv preprint arXiv:1805.00334, pp. 1–15, Sep. 2018.
  • [34] B. Diederich, R. Wartmann, H. Schadwinkel, and R. Heintzmann, “Using machine-learning to optimize phase contrast in a low-cost cellphone microscope,” PLoS ONE, vol. 13, no. 3, pp. 1–20, Mar. 2018.
  • [35] R. Horstmeyer, R. Chen, B. Kappes, and B. Judkewitz, “Convolutional neural networks that teach microscopes how to image,” arXiv preprint arXiv:1709.07223, pp. 1–13, Sep. 2017.
  • [36] A. Robey and V. Ganapati, “Optimal physical preprocessing for example-based super-resolution,” Optics Express, vol. 26, no. 24, pp. 31333–31350, Nov. 2018.
  • [37] K. Gregor and Y. LeCun, “Learning fast approximations of sparse coding,” in Proceedings of the 27th International Conference on International Conference on Machine Learning, Jun. 2010, pp. 399–406.
  • [38] K. Hammernik, T. Klatzer, E. Kobler, M. P. Recht, D. K. Sodickson, T. Pock, and F. Knoll, “Learning a variational network for reconstruction of accelerated MRI data,” Magnetic Resonance in Medicine, vol. 79, no. 6, pp. 3055–3071, Nov. 2017.
  • [39] S. Diamond, V. Sitzmann, F. Heide, and G. Wetzstein, “Unrolled optimization with deep priors,” arXiv preprint arXiv:1705.08041, pp. 1–11, May 2017.
  • [40] J. Sun, H. Li, Z. Xu et al., “Deep ADMM-Net for compressive sensing MRI,” in Advances in Neural Information Processing Systems, 2016, pp. 10–18.
  • [41] U. Kamilov and H. Mansour, “Learning optimal nonlinearities for iterative thresholding algorithms,” arXiv preprint arXiv:1512.04754, pp. 1–9, Dec. 2015.
  • [42] E. Bostan, U. S. Kamilov, and L. Waller, “Learning-based image reconstruction via parallel proximal algorithm,” IEEE Signal Processing Letters, vol. 25, no. 7, pp. 989–993, May 2018.
  • [43] G. Zheng, R. Horstmeyer, and C. Yang, “Wide-field, high-resolution Fourier ptychographic microscopy,” Nature Photonics, vol. 7, no. 9, pp. 739–745, Jul. 2013.
  • [44] E. Bostan, U. S. Kamilov, M. Nilchian, and M. Unser, “Sparse stochastic processes and discretization of linear inverse problems,” IEEE Transactions on Image Processing, vol. 22, no. 7, pp. 2699–2710, Jul. 2013.
  • [45] N. Parikh and S. Boyd, “Proximal algorithms,” Foundations and Trends® in Optimization, vol. 1, no. 3, pp. 127–239, Aug. 2014.
  • [46] A. Beck and M. Teboulle, “A fast iterative shrinkage-thresholding algorithm for linear inverse problems,” SIAM Journal on Imaging Sciences, vol. 2, no. 1, pp. 183–202, Jan. 2009.
  • [47] S. Osher, M. Burger, D. Goldfarb, J. Xu, and W. Yin, “An iterative regularization method for total variation-based image restoration,” Multiscale Modeling & Simulation, vol. 4, no. 2, pp. 460–489, 2005.
  • [48] K. Dabov, A. Foi, V. Katkovnik, and K. Egiazarian, “Image denoising by sparse 3-D transform-domain collaborative filtering,” IEEE Transactions on Image Processing, vol. 16, no. 8, pp. 2080–2095, Aug. 2007.
  • [49] P. L. Combettes and J.-C. Pesquet, “Proximal splitting methods in signal processing,” in Fixed-point algorithms for inverse problems in science and engineering, 2011, pp. 185–212.
  • [50] U. S. Kamilov, “A parallel proximal algorithm for anisotropic total variation minimization,” IEEE Transactions on Image Processing, vol. 26, no. 2, pp. 539–548, Dec. 2016.
  • [51] P. Marechal and J. Ye, “Optimizing condition numbers,” SIAM Journal on Optimization, vol. 20, no. 2, pp. 935–947, 2009.
  • [52] C. S. Wong and J. C. Masaro, “A-optimal design matrices,” Discrete Mathematics, vol. 50, pp. 295–318, 1984.
  • [53] X. Ou, G. Zheng, and C. Yang, “Embedded pupil function recovery for Fourier ptychographic microscopy,” Optics Express, vol. 22, no. 5, pp. 4960–4972, Mar. 2014.
  • [54] M. Lustig, D. Donoho, and J. M. Pauly, “Sparse MRI: The application of compressed sensing for rapid MR imaging,” Magnetic Resonance in Medicine, vol. 58, no. 6, pp. 1182–1195, Dec. 2007.
  • [55] S. R. P. Pavani, M. A. Thompson, J. S. Biteen, S. J. Lord, N. Liu, R. J. Twieg, R. Piestun, and W. E. Moerner, “Three-dimensional, single-molecule fluorescence imaging beyond the diffraction limit by using a double-helix point spread function,” Proceedings of the National Academy of Sciences, vol. 106, no. 9, pp. 2995–2999, Mar. 2009.