I Introduction
Quantitative Phase Imaging (QPI) enables stain-free and label-free microscopy of transparent biological samples in vitro [1, 2]. When compared with coherent methods [3, 4], QPI methods that use partially coherent light achieve higher spatial resolution, more light throughput, and reduced speckle artifacts. Phase contrast may be generated using interference [5, 6] or defocus [7, 8, 9]. More recently, coded-illumination microscopy [10, 11, 12, 13, 14, 15] has been demonstrated as an accurate and inexpensive QPI scheme. To realize coded illumination, we replace a commercial microscope's illumination unit with a light-emitting diode (LED) dome array (see Fig. 1) [16]. This provides a flexible hardware platform for various QPI applications including super-resolution [10, 11, 13], multi-contrast [12, 17], and 3D imaging [14, 15].
Coded-illumination microscopy uses asymmetric source patterns [18] and multiple measurements to retrieve 2D phase information. Quantitative Differential Phase Contrast (qDPC) [19, 20, 21, 22], for example, captures four measurements with rotated half-circle source patterns, from which the phase is computationally recovered using a partially coherent linearized model. The practical performance of qDPC is predominantly determined by how the phase information is encoded in (via coded-illumination patterns) and decoded from (via phase recovery) the intensity measurements.
The half-circle illumination designs of qDPC were derived analytically based on a Weak Object Approximation [23, 24, 20, 21], which linearizes the physics in order to make the inverse problem mathematically convenient. This linearized model enables one to derive a phase transfer function and analyze the spatial-frequency coverage of any given source pattern [21, 22, 25, 26]; however, the nonlinearity of the exact model makes it impossible to predict an optimal source design without knowing the sample's phase a priori. In addition, these types of analysis are inherently restricted to linear reconstruction algorithms and will not necessarily result in improved accuracy when the phase is retrieved via nonlinear iterative methods.
Motivated by the success of deep learning [27] for image reconstruction problems [28, 29, 30, 31, 32, 33], data-driven approaches have been adopted for learning coded-illumination patterns. For instance, researchers have used machine learning to maximize the phase contrast of each coded-illumination measurement [34], to improve accuracy on classification tasks [35], and to reconstruct phase [36]. All of these techniques learn the input-output relationship with a deep convolutional neural network (CNN) using training data. It is not straightforward to include the well-characterized system physics; hence, the CNN is required to learn both the physical measurement formation and the phase reconstruction process. This task requires training tens to hundreds of thousands of parameters and an immense number of training examples.
Here, we introduce a new data-driven approach to optimizing the source pattern design for coded-illumination phase retrieval by directly including both the system physics and the nonlinear nature of a reconstruction algorithm in the learning process. Our approach unrolls the iterations of a generic nonlinear reconstruction algorithm to construct an unrolled network [37, 38, 39, 40, 41, 42]. Similar to CNNs, our unrolled network consists of several layers (one for each iteration); however, in our case each layer consists of well-specified operations to incorporate measurement formation and sparse regularization, instead of standard operations such as generic convolutions. The key aspects of our approach are:

- incorporation of the system physics and reconstruction nonlinearities in the illumination design process;
- efficient parameterization of the unrolled network;
- incorporation of practical constraints;
- reduced number of training examples required.
We deploy our data-driven approach to learn improved coded-illumination patterns for phase reconstruction. Each layer of the unrolled network is parameterized by only a few variables (LED brightness values), enabling an efficient use of training data (only 100 simulated training examples). We compare the QPI performance of our learned designs to previous work and demonstrate that our designs generalize well to the experimental setting with biological samples.
II Quantitative Phase Imaging
qDPC recovers a sample's complex transmittance function from several coded-illumination measurements. The phase recovery optimization algorithm aims to minimize the Euclidean norm of the error between the actual measurements and the expected measurements formed with the current phase estimate. Using a gradient-based procedure, the phase estimate is iteratively updated until convergence. For a partially coherent source, the phase can be recovered with resolution up to twice the coherent diffraction limit. In this section, we describe the measurement formation process and the phase recovery optimization.
II-A System Modelling
A thin sample's transmission function can be approximated as a 2D complex function, $t(\mathbf{r}) = e^{-\mu(\mathbf{r}) + i\phi(\mathbf{r})}$, characterized by its absorption, $\mu(\mathbf{r})$, and phase, $\phi(\mathbf{r}) = \frac{2\pi}{\lambda}\Delta n(\mathbf{r})\,\Delta z$, where $\mathbf{r} = (x, y)$ are 2D spatial coordinates, $\lambda$ is the wavelength of the illumination, $\Delta z$ is the physical thickness of the sample, and $\Delta n(\mathbf{r})$ is the change in refractive index from the background. Intensity measurements, $I_l(\mathbf{r})$, of the sample are a nonlinear function of $t(\mathbf{r})$, mathematically described by,

$I_l(\mathbf{r}) = \left| \left( S_l(\mathbf{r}) \odot t(\mathbf{r}) \right) * p(\mathbf{r}) \right|^2,$  (1)

where $|\cdot|^2$ denotes the squared absolute value, $*$ denotes convolution, $\odot$ denotes element-wise multiplication, $S_l(\mathbf{r})$ is the illumination's complex field at the sample plane and $p(\mathbf{r})$ is the point spread function (PSF) of the microscope. The illumination from each LED is approximated as a tilted plane wave, $S_l(\mathbf{r}) = e^{i 2\pi \mathbf{u}_l \cdot \mathbf{r}}$, with tilt angle determined by the physical position of the LED relative to the microscope [43].
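As a concrete illustration of Eq. 1, the measurement formation for one LED can be simulated with FFTs. This is a minimal sketch, not the paper's implementation: the discrete grid, units, and the names `simulate_measurement`, `u_l`, and `pupil` are illustrative assumptions.

```python
import numpy as np

def simulate_measurement(t, u_l, pupil):
    """Simulate one single-LED intensity measurement (sketch of Eq. 1).

    t     : complex sample transmission function, shape (N, N)
    u_l   : LED tilt as a 2D spatial-frequency offset, in FFT-grid units
    pupil : pupil distribution P(u) sampled on the (centered) FFT grid
    """
    N = t.shape[0]
    y, x = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
    # Tilted plane-wave illumination S_l(r) = exp(i 2*pi u_l . r)
    S = np.exp(1j * 2 * np.pi * (u_l[0] * y + u_l[1] * x) / N)
    # Field at the camera: (S * t) convolved with the PSF, via Fourier space
    field = np.fft.ifft2(np.fft.fft2(S * t) * np.fft.ifftshift(pupil))
    return np.abs(field) ** 2  # intensity |.|^2
```

For a pure-phase object and an all-pass pupil, the simulated intensity is featureless (≈1 everywhere), which is exactly the "transparent sample" behavior that motivates phase contrast.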
Because the measured image in Eq. 1 is nonlinear with respect to the sample's transmission function, recovering phase generally requires non-convex optimization. However, biological samples in closely index-matched fluid have a small scatter-scatter term. This means that a weak object approximation can be made, linearizing the measurement formation model such that phase recovery requires only a linear deconvolution of the measurements with their respective weak object transfer functions (WOTFs) [23, 20, 24, 22, 21]. Further, unstained biological samples are predominantly phase objects since they are only weakly absorbing (i.e. $\mu(\mathbf{r})$ is small). With these approximations, we can express each intensity measurement as a linear system with contributions from the background and phase contrast. In Fourier space,
$\tilde{I}_l(\mathbf{u}) = B\,\delta(\mathbf{u}) + H_l(\mathbf{u})\,\tilde{\phi}(\mathbf{u}),$  (2)

where $\tilde{(\cdot)}$ denotes the Fourier transform, $\mathbf{u}$ are 2D spatial-frequency coordinates, $B$ is the measurement's background energy concentrated at the DC and $H_l(\mathbf{u})$ is the phase WOTF. The phase WOTFs are a function of the illumination source and the pupil distribution, $P(\mathbf{u})$, of the microscope [21]. For a single LED, with source distribution $\tilde{S}_l(\mathbf{u}) = \delta(\mathbf{u} - \mathbf{u}_l)$, the WOTF is:

$H_l(\mathbf{u}) = i\lambda\left[\big(\tilde{S}_l P \star P\big)(\mathbf{u}) - \big(\tilde{S}_l P \star P\big)^{*}(-\mathbf{u})\right],$  (3)

where $\star$ is the correlation operator, defined as $(f \star g)(\mathbf{u}) = \int f^{*}(\mathbf{u}')\, g(\mathbf{u}' + \mathbf{u})\, \mathrm{d}\mathbf{u}'$ for $\mathbf{u}'$ in the domain of $f$ and $g$.
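Under the weak-object model, phase recovery reduces to a linear deconvolution of the background-subtracted measurements with the WOTFs. A minimal Tikhonov-regularized least-squares sketch (the regularization weight and the name `wotf_deconvolve` are illustrative assumptions, not the paper's exact inversion):

```python
import numpy as np

def wotf_deconvolve(I_tilde_list, H_list, reg=1e-2):
    """Least-squares deconvolution of phase from several background-
    subtracted Fourier-space measurements under the weak-object model.

    I_tilde_list : Fourier-space measurements I~_k(u), DC removed
    H_list       : matching phase WOTFs H_k(u)
    reg          : Tikhonov regularization weight (illustrative choice)
    """
    # Closed-form regularized least-squares solution, per frequency
    num = sum(np.conj(H) * I for H, I in zip(H_list, I_tilde_list))
    den = sum(np.abs(H) ** 2 for H in H_list) + reg
    return np.fft.ifft2(num / den).real
```

Combining several measurements in the denominator is what lets complementary source patterns fill each other's transfer-function nulls.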
In [21], multiple LEDs are turned on simultaneously to increase the signal-to-noise ratio (SNR) and improve phase contrast. Because the fields generated by each LED's illumination are spatially incoherent with each other, the measurement from multiple LEDs will simply be the weighted sum of each LED's individual measurement, where the weights correspond to the LEDs' brightness values. The phase WOTF for illumination by multiple LEDs will also be the weighted sum of the single-LED phase WOTFs. Mathematically,
$I_{\mathcal{S}}(\mathbf{r}) = \sum_{l \in \mathcal{S}} c_l\, I_l(\mathbf{r}),$  (4)

$H_{\mathcal{S}}(\mathbf{u}) = \sum_{l \in \mathcal{S}} c_l\, H_l(\mathbf{u}),$  (5)

where $\mathcal{S}$ is the set of LEDs turned on and $\{c_l\}$ are the LEDs' brightness values.
Following common practice [44], we discretize the 2D spatial distributions and format them as vectors (bold lower case), e.g. $\mathbf{h}$ represents a transfer function's 2D spatial-frequency distribution and $\boldsymbol{\phi}$ represents the 2D spatial phase distribution. The measurements¹ are described in Fourier space as $\tilde{\mathbf{y}} = \mathbf{A}\boldsymbol{\phi}$ with system function $\mathbf{A}$. Based on this model, we define $\tilde{\mathbf{Y}}$ as the matrix whose columns are the Fourier transforms of the single-LED measurements. Then, $\mathbf{C}$ is defined as the matrix of single-LED weights for each of the $K$ measurements, and $\mathbf{c}_k$ is the $k$-th column of $\mathbf{C}$. The product $\tilde{\mathbf{Y}}\mathbf{c}_k$ simulates the $k$-th multiple-LED measurement. Similarly, we define $\mathbf{H}$ as the matrix whose columns are the single-LED phase WOTFs, such that the product $\mathbf{H}\mathbf{c}_k$ gives the corresponding multiple-LED phase WOTF for the $k$-th measurement.

¹In practice, $\tilde{\mathbf{y}}$ typically refers to the so-called flattened image, where the background energy in Eq. 2 is removed via background subtraction.
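The matrix bookkeeping above can be sketched numerically: with single-LED quantities stacked along columns, one matrix product forms a multiplexed measurement and its matching WOTF. All sizes and the random stand-in values below are purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n_led, K = 64, 5, 3        # toy sizes: pixels, LEDs, measurements

# Columns of Y: Fourier transforms of single-LED measurements (stand-ins).
# Columns of H: single-LED phase WOTFs (stand-ins).
Y = rng.standard_normal((m, n_led)) + 1j * rng.standard_normal((m, n_led))
H = rng.standard_normal((m, n_led)) + 1j * rng.standard_normal((m, n_led))

# Columns of C: LED brightness weights per measurement, normalized so
# each design sums to one (the scale constraint used later in Eq. 10).
C = rng.random((n_led, K))
C /= C.sum(axis=0, keepdims=True)

Yc = Y @ C   # k-th column simulates the k-th multiple-LED measurement
Hc = H @ C   # k-th column is the matching multiple-LED phase WOTF
```

Because the model is linear in the LED weights, the matrix product is exactly the incoherent weighted sum of Eqs. 4-5.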
II-B Phase Recovery
Phase recovery using the forward model in Sec. II-A can be formulated as a regularized linear inverse problem,

$\hat{\boldsymbol{\phi}} = \arg\min_{\boldsymbol{\phi}}\; \mathcal{D}(\boldsymbol{\phi}) + \mathcal{R}(\boldsymbol{\phi}),$  (6)

$\mathcal{D}(\boldsymbol{\phi}) = \frac{1}{2}\sum_{k=1}^{K} \left\|\tilde{\mathbf{y}}_k - \mathbf{A}_k\boldsymbol{\phi}\right\|_2^2,$  (7)

where $\hat{\boldsymbol{\phi}}$ is the recovered phase, $K$ is the number of measurements acquired, $\tilde{\mathbf{y}}_k$ is the Fourier transform of the $k$-th measurement, $\mathbf{A}_k$ is the linear system function from Sec. II-A for the $k$-th measurement, and $\mathcal{R}$ is a user-chosen regularizer. We solve this optimization problem efficiently using the accelerated proximal gradient descent (APGD) algorithm by iteratively applying an acceleration update, a gradient update and a proximal update [45, 46]. The algorithm is detailed in Alg. 1, where $\alpha$ is the gradient step size, $N$ is the number of iterations, $\boldsymbol{\phi}^{(n)}$ and $\mathbf{z}^{(n)}$ are intermediate variables, $\beta^{(n)}$ is the acceleration parameter derived by the recursion $t^{(n+1)} = \big(1 + \sqrt{1 + 4\,(t^{(n)})^2}\big)/2$, $\beta^{(n)} = (t^{(n)} - 1)/t^{(n+1)}$ [46], and $\mathrm{prox}_{\mathcal{R}}$ is the proximal operator corresponding to the user-chosen regularizer [45].
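The APGD iteration (acceleration, gradient, and proximal updates) can be sketched for a toy real-valued linear model. This is a minimal FISTA-style sketch, with a soft-thresholding ℓ1 prox standing in for the TV prox used later in the paper; the names and defaults are illustrative assumptions.

```python
import numpy as np

def soft_threshold(x, tau):
    """Proximal operator of tau * ||x||_1 (stand-in for the TV prox)."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def apgd(A, y, alpha, tau, n_iter):
    """Accelerated proximal gradient descent (FISTA-style) for
    min_x 0.5 * ||y - A x||_2^2 + tau * ||x||_1."""
    x = np.zeros(A.shape[1])
    z = x.copy()
    t = 1.0
    for _ in range(n_iter):
        grad = A.T @ (A @ z - y)                               # gradient update
        x_new = soft_threshold(z - alpha * grad, alpha * tau)  # proximal update
        t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))       # t recursion
        z = x_new + ((t - 1.0) / t_new) * (x_new - x)          # acceleration
        x, t = x_new, t_new
    return x
```

For an identity forward model the minimizer is the soft-thresholded data, which makes a convenient sanity check of the iteration.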
III Physics-Based Learned Design
Given the phase recovery algorithm in Sec. II-B, we now describe our main contribution: learning coded-illumination designs for a given reconstruction algorithm and training set.
III-A Unrolled Physics-Based Network
Traditionally, DNNs contain many layers of weighted linear mixtures and nonlinear activation functions [27]. Here, we consider specific linear functions which capture the system physics of measurement formation and specific nonlinear activation functions which promote sparsity. Starting from Alg. 1, we treat each iteration as a layer such that, when unrolled, they form a network of $N$ layers, denoted $\mathcal{N}$ (Fig. 2). Each layer of $\mathcal{N}$ contains a module for each of the iterative algorithm's updates: an acceleration module, a gradient module (which incorporates the system physics), and a proximal module (which incorporates sparsity). The regularization and step-size parameters specified for Alg. 1 are fixed. The network's inputs comprise the single-LED measurements, $\tilde{\mathbf{Y}}$, and the network's output is the recovered phase, $\hat{\boldsymbol{\phi}}$. The design parameters of the network, which will be learned, govern the relative brightness of the LEDs and are incorporated in the measurement formation and the system WOTFs.

III-B Learning Objective
Our learning objective is to minimize the phase reconstruction error of the training data over the space of possible LED configurations, subject to constraints that enforce physical feasibility and eliminate degenerate and trivial solutions:
$\mathbf{C}^\star = \arg\min_{\mathbf{C}} \sum_{t=1}^{T} \mathcal{L}\big(\mathbf{C};\, \boldsymbol{\phi}_t^\star,\, \tilde{\mathbf{Y}}_t\big)$  (8)

s.t. $\mathbf{c}_k \ge \mathbf{0} \;\; \forall k$  (non-negativity)  (9)

$\mathbf{1}^\top \mathbf{c}_k = 1 \;\; \forall k$  (scale)  (10)

$\mathbf{m}_k \odot \mathbf{c}_k = \mathbf{0} \;\; \forall k$  (geometric)  (11)

where,

$\mathcal{L}\big(\mathbf{C};\, \boldsymbol{\phi}_t^\star,\, \tilde{\mathbf{Y}}_t\big) = \frac{1}{2}\left\|\hat{\boldsymbol{\phi}}_t - \boldsymbol{\phi}_t^\star\right\|_2^2$  (12)

$\hat{\boldsymbol{\phi}}_t = \mathcal{N}\big(\mathbf{C};\, \tilde{\mathbf{Y}}_t\big)$  (13)
Here, $(\boldsymbol{\phi}_t^\star, \tilde{\mathbf{Y}}_t)$ are training pairs for which $\tilde{\mathbf{Y}}_t$ is a matrix of the Fourier transforms of single-LED measurements for the sample with optical phase $\boldsymbol{\phi}_t^\star$. $\odot$ is the element-wise product operator, $\mathbf{m}_k$ is a geometric constraint mask for the $k$-th measurement, and $\mathbf{0}$ is the null vector.
The non-negativity constraint (Eq. 9) prevents non-physical solutions by requiring the brightness of each LED to be greater than or equal to zero; it is enforced by projecting the parameters onto the set of non-negative real numbers. The scale constraint (Eq. 10) requires each coded-illumination design's weights to sum to 1, in order to eliminate arbitrary scalings of the same design; it is enforced by scaling the parameters for each measurement such that their sum is one. The geometric constraint (Eq. 11) enforces that the coded-illumination designs do not use conjugate-symmetric LED pairs to illuminate the sample within the same measurement, since these would also result in degenerate solutions (e.g. two symmetric LEDs produce opposite phase-contrast measurements that would cancel each other out). To prevent this, we force the source patterns for each measurement to reside within only one of the major semicircle sets (e.g. top, bottom, left, right). This constraint is enforced by setting the LED brightnesses outside the allowed semicircle to zero.
We solve Eq. 8 iteratively via accelerated projected gradient descent (Alg. 2). At each iteration, the coded-illumination design for each measurement is updated with the analytical gradient, projected onto the constraints (denoted by $\mathcal{P}$), and updated again with a contribution from the previous iteration (weighted by the acceleration parameter). $\mathcal{P}$ enforces the constraints in the following order: non-negativity, geometric, and scale.
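The constraint projection can be sketched as a composition of three simple operations applied in the stated order (non-negativity, geometric, scale); the function name and mask convention below are assumptions.

```python
import numpy as np

def project_design(c, mask):
    """Project one measurement's LED weights onto the constraint sets,
    in the order used above: non-negativity, geometric, then scale.

    c    : LED brightness vector for one measurement
    mask : boolean array, True where LEDs are disallowed (geometric mask)
    """
    c = np.maximum(c, 0.0)        # non-negativity: clip to >= 0
    c = np.where(mask, 0.0, c)    # geometric: zero the disallowed semicircle
    s = c.sum()
    if s > 0:                     # scale: weights sum to one
        c = c / s
    return c
```

Applying the scale step last guarantees the output simultaneously satisfies all three constraints, which would not hold for other orderings.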
III-C Gradient Update
The gradient of the loss function (Eq. 8) with respect to the design parameters has contributions at every layer of the unrolled network through both the measurement terms, $\tilde{\mathbf{Y}}\mathbf{c}_k$, and the phase WOTF terms, $\mathbf{H}\mathbf{c}_k$, for each measurement $k$. Here, we outline our algorithm for updating the coded-illumination design weights via a two-step procedure: backpropagating the error from layer to layer and computing each layer's gradient contribution. For simplicity, we outline the gradient update for only a single training example, $(\boldsymbol{\phi}^\star, \tilde{\mathbf{Y}})$, as the gradient for all the training examples is the sum of their individual gradients.
Unlike pure gradient descent, where each iteration's estimate depends only on the previous one's, accelerated methods like Alg. 1 linearly combine the previous two iterations' estimates to improve convergence. As a consequence, backpropagating error from layer to layer requires contributions from two successive layers. Specifically, we compute the error at all layers with the recursive relation,

$\dfrac{\partial\mathcal{L}}{\partial\boldsymbol{\phi}^{(n)}} = \left(\dfrac{\partial\boldsymbol{\phi}^{(n+1)}}{\partial\boldsymbol{\phi}^{(n)}}\right)^{\top}\dfrac{\partial\mathcal{L}}{\partial\boldsymbol{\phi}^{(n+1)}} + \left(\dfrac{\partial\boldsymbol{\phi}^{(n+2)}}{\partial\boldsymbol{\phi}^{(n)}}\right)^{\top}\dfrac{\partial\mathcal{L}}{\partial\boldsymbol{\phi}^{(n+2)}},$  (14)

where each partial gradient constitutes a single step in Alg. 1 (fully derived in the supplement).
With the backpropagated error at each layer, we compute the gradient of the loss function with respect to $\mathbf{c}_k$ as,

$\dfrac{\partial\mathcal{L}}{\partial\mathbf{c}_k} = \sum_{n=1}^{N} \dfrac{\partial\mathcal{L}^{(n)}}{\partial\mathbf{c}_k},$  (15)

for which,

$\dfrac{\partial\mathcal{L}^{(n)}}{\partial\mathbf{c}_k} = \left(\dfrac{\partial\boldsymbol{\phi}^{(n)}}{\partial\mathbf{c}_k}\right)^{\top}\dfrac{\partial\mathcal{L}}{\partial\boldsymbol{\phi}^{(n)}}.$  (16)

Here, $\partial\mathcal{L}/\partial\boldsymbol{\phi}^{(n)}$ backpropagates the error through the proximal operator, and the partials with respect to $\mathbf{c}_k$ relate the backpropagated error at each layer to changes in $\mathbf{c}_k$. Derivations of these partial gradients are included in the supplementary material. In Alg. 3, we unite these two steps to form a recursive algorithm which efficiently computes the analytic gradient for a single training example. Alternatively, general-purpose automatic differentiation included in learning libraries (e.g. PyTorch, TensorFlow) can be used to perform the gradient updates.
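The autodiff alternative can be sketched in PyTorch: parameterize the LED weights, unroll a few gradient-descent layers of a toy real-valued model (unaccelerated and prox-free for brevity, unlike Alg. 1), and let autograd backpropagate the loss to the weights. All sizes and the toy model are illustrative assumptions.

```python
import torch

torch.manual_seed(0)
m, n_led, n_layers, alpha = 16, 5, 4, 0.05

# Toy real-valued stand-ins for the single-LED measurement/WOTF matrices.
Y = torch.randn(m, n_led)
H = torch.randn(m, n_led)
phi_star = torch.randn(m)                 # ground-truth phase, one example

c = torch.rand(n_led, requires_grad=True) # LED weights to be learned

y = Y @ c                                 # simulated multiplexed measurement
h = H @ c                                 # matching multiplexed WOTF
phi = torch.zeros(m)
for _ in range(n_layers):                 # unrolled gradient-descent layers
    grad = h * (h * phi - y)              # grad of 0.5*||y - diag(h) phi||^2
    phi = phi - alpha * grad
loss = 0.5 * torch.sum((phi - phi_star) ** 2)
loss.backward()                           # autodiff through the unrolled net
```

After `backward()`, `c.grad` holds the same quantity that Eqs. 14-16 compute analytically, without hand-deriving the per-layer partials.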
IV Results
Our proposed method learns the coded-illumination design for a given reconstruction and training set (Fig. 3b), yet up to this point we have not detailed specific parameters of our phase reconstruction. In our results, we set the parameters of our reconstruction algorithm (Alg. 1) to have a fixed CPU time by fixing the number of iterations, $N$, and the step size, $\alpha$ (see supplement for parameter analysis). In addition, the regularization term, $\mathcal{R}$, has been defined generally (e.g. an $\ell_1$ penalty, a total variation (TV) penalty [47], BM3D [48]). Here, we choose $\mathcal{R}$ to enforce TV-based sparsity:

$\mathcal{R}(\boldsymbol{\phi}) = \mu \sum_{j=1}^{2} \left\|\mathbf{D}_j \boldsymbol{\phi}\right\|_1,$  (17)

where $\mu$ is set to trade off the TV cost against the data-consistency cost and $\mathbf{D}_j$ is the first-order difference operator along the $j$-th image dimension. We efficiently implement the proximal operator of Eq. 17 in closed form via the parallel proximal method [42, 49, 50] (details in supplement).
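For reference, the anisotropic TV penalty of Eq. 17 evaluates as a sum of absolute first-order differences; a sketch with forward differences (the boundary handling and function name are illustrative choices):

```python
import numpy as np

def anisotropic_tv(phi, mu):
    """Anisotropic TV penalty R(phi) = mu * sum_j ||D_j phi||_1,
    using first-order forward differences along each image dimension."""
    dx = np.diff(phi, axis=1)   # horizontal first-order differences
    dy = np.diff(phi, axis=0)   # vertical first-order differences
    return mu * (np.abs(dx).sum() + np.abs(dy).sum())
```

Because the penalty separates into per-difference absolute values, its proximal step reduces to soft-thresholding of the differences, which is what makes a closed-form parallel proximal implementation possible.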
IV-A Learning
To train our coded-illumination design parameters using Alg. 2, we generate a dataset of 100 examples (90 for training, 10 for testing). Each example contains ground-truth phase from a small region of a larger image and simulated single-LED measurements (using Eq. 1). The LEDs are uniformly spaced within a circle such that each single-LED intensity measurement is a brightfield measurement. The phase WOTFs and the training-data measurements are simulated with the physical parameters of our system. To train, we use the $\ell_2$ cost between reconstructed phase and ground-truth phase as our loss function and approximate the full gradient of Eq. 8 with a batch gradient from random batches of the training pairs at each iteration, using a fixed learning rate (training and testing convergence curves are provided in the supplement). The training is performed on a multi-core CPU (dual-socket Intel Xeon E5 processor @ 2.1 GHz), and batch updates are computed in parallel, with each training example on a single core. Each batch update takes 6 seconds; 200 updates are performed, resulting in a total training time of 20 minutes.
TABLE I. PSNR results: mean ± standard deviation of the PSNR (dB) of phase reconstructions from the simulated testing examples, using different illumination schemes and different numbers of measurements.

# Meas. | Random Illumination | Traditional qDPC | Annular Illumination | Cond. Number Optimization | A-optimal Design | Physics-based Learned Design
   4    |    12.30 ± 2.12     |   15.67 ± 2.19   |    20.40 ± 2.09      |       20.37 ± 2.41        |   17.94 ± 2.54   |        28.46 ± 2.50
   3    |    12.33 ± 2.12     |   15.28 ± 2.18   |    20.44 ± 2.26      |       19.33 ± 2.03        |   18.05 ± 2.59   |        28.04 ± 2.59
   2    |    12.25 ± 2.12     |   14.87 ± 2.23   |    20.21 ± 2.24      |       17.19 ± 2.28        |   18.08 ± 2.64   |        23.73 ± 2.18

Traditional qDPC uses 4 measurements to adequately cover frequency space. Our learned designs are more efficient and may require fewer measurements; hence, we show learned designs for the cases of 4, 3 and 2 measurements. The designs and their corresponding phase WOTFs are shown in Fig. 3.
Comparing our learned designs with previous work, Fig. 4 shows the phase reconstruction for a single simulated test example using 4, 3 and 2 measurements. The ground-truth phase is compared with the phase reconstructed using traditional qDPC designs [21], annular illumination designs [25], condition-number-optimized designs [51], A-optimal designs [52], and our physics-based learned designs. Table I reports the peak SNR (PSNR) statistics (mean and standard deviation) for the phase reconstructions evaluated on our set of testing examples. Our learned designs give significant improvement, recovering both the high and the low spatial frequencies more accurately.
IV-B Experimental Validation
To demonstrate that our learned designs generalize well in the experimental setting, we implement our method on an LED array microscope. A commercial Nikon TE300 microscope is equipped with a custom quasi-dome [16] illumination system (581 programmable RGB LEDs) and a PCO.edge 5.5 monochrome camera (2560 × 2160 pixels, 6.5 μm pixel pitch, 16 bit). We image two samples: a USAF phase target (Benchmark Technologies) and fixed 3T3 mouse fibroblast cells (prepared as detailed in the supplement). To validate our method, we compare results against phase experimentally estimated via pupil-corrected Fourier Ptychography (FP) [43, 53, 13] with equivalent resolution. FP is expected to have good accuracy, since it uses significantly more measurements (69 single-LED measurements) and a nonlinear reconstruction process.
Using the USAF target, we compare phase reconstructions from FP with those from traditional qDPC and from our learned-design measurements (Fig. 5). Traditional qDPC reconstructions consistently underestimate the phase values, whereas phase reconstructions using our learned-design measurements are similar to the phase estimated with FP. As the number of measurements is reduced, the quality of the traditional qDPC reconstruction degrades, while the reconstruction using the learned design remains accurate.
To demonstrate our method on biological samples, we repeated the experiments with the fixed 3T3 mouse fibroblast cells. Figure 6 shows that phase reconstructions from traditional qDPC again consistently underestimate phase values, while phase reconstructions using learned-design measurements match the phase estimated with FP well.
V Discussion
Our proposed experimental design method efficiently learns the coded-illumination designs by incorporating both the system physics and the nonlinear nature of iterative phase recovery. Learned designs with only 2 measurements can efficiently reconstruct phase with quality similar to Fourier Ptychography (69 measurements) and better than qDPC (4 measurements), improving temporal resolution by a factor of 2 over traditional qDPC while requiring far fewer measurements than FP. Additionally, we demonstrate (Table I) that the performance of our designs on a set of testing examples is superior to previously-proposed coded-illumination designs. Visually, our learned-design reconstructions closely resemble the ground-truth phase, with both low-frequency and high-frequency information accurately recovered.
By parameterizing our learning problem with only a few weights per measurement, our method can efficiently learn an experimental design with a small simulated dataset. This enables fast training and reduces computing requirements significantly. Obtaining large experimental datasets for training may be difficult in microscopy, so it is important that our method can be trained on simulated data only. Experimental results in Sec. IVB show similar quality to simulated results, with both using the designs learned from simulated data only.
Finally, phase recovery with the learned designs' measurements is trained with a given number of reconstruction iterations (e.g. determined by a CPU budget). This makes our method particularly well-suited for real-time processing. qDPC can also be implemented in real time, but limiting the compute time for the inverse problem (by restricting the number of iterations) limits convergence and causes low-frequency artifacts. Our learned designs incorporate the number of iterations (and hence processing time) into the design process, producing high-quality phase reconstructions within a reasonable compute time.
VI Outlook
Our method is general to the problem of experimental design. Similar to QPI, many fields (e.g. magnetic resonance imaging (MRI), fluorescence microscopy) use physics-based nonlinear iterative reconstruction techniques to achieve state-of-the-art performance. With the correct model parameterization and physically-relevant constraints, our method could be applied to learn optimal designs for these applications (e.g. under-sampling patterns for compressed sensing MRI [54], PSFs for fluorescence microscopy [55]).
Requirements for applying our method are simple: the reconstruction algorithm’s updates must be differentiable (e.g. gradient update and proximal update) so that analytic gradients of the learning loss can be computed with respect to the design parameters. Of practical importance, the proximal operator of the regularizer should be chosen so that it has a closed form. While this is not a strict requirement, if the operator itself requires an additional iterative optimization, error will have to be backpropagated through an excessive number of iterations. Here, we choose to penalize anisotropic TV, whose proximal operator can be approximated in closed form [50]. Further, including an acceleration update improves the convergence of gradientbased reconstructions. As a result, the unrolled network can be constructed using fewer layers than its unaccelerated counterpart. This will reduce both computation time and training requirements.
VII Conclusion
We have presented a general framework for incorporating the nonlinearities of regularized reconstruction and known system physics to learn optimal experimental design. Here, we have applied this method to learn coded-illumination source designs for quantitative phase recovery. Our coded-illumination designs can improve the temporal resolution of the acquisition and enable real-time processing, while maintaining high accuracy. We demonstrated that our learned designs achieve high-quality reconstructions experimentally without the need for retraining.
Funding Information
This work was supported by STROBE: A National Science Foundation Science & Technology Center under Grant No. DMR 1548924 and by the Gordon and Betty Moore Foundation’s DataDriven Discovery Initiative through Grant GBMF4562 to Laura Waller (UC Berkeley). Laura Waller is a Chan Zuckerberg Biohub investigator. Michael R. Kellman is additionally supported by the National Science Foundation’s Graduate Research Fellowship under Grant No. DGE 1106400. Emrah Bostan’s research is supported by the Swiss National Science Foundation (SNSF) under grant P2ELP2 172278.
Acknowledgment
The authors would like to thank Professor Michael Lustig for his guidance and advice.
References
 [1] G. Popescu, Quantitative Phase Imaging of Cells and Tissues. McGraw Hill Professional, Mar 2011.
 [2] M. Mir, B. Bhaduri, R. Wang, R. Zhu, and G. Popescu, Quantitative Phase Imaging. Elsevier Amsterdam, The Netherlands, Jul. 2012.
 [3] B. Rappaz, B. Breton, E. Shaffer, and G. Turcatti, “Digital holographic microscopy: A quantitative labelfree microscopy technique for phenotypic screening,” Combinatorial Chemistry & High Throughput Screening, vol. 17, no. 1, pp. 80–88, January 2014.
 [4] E. Cuche, P. Marquet, and C. Depeursinge, “Simultaneous amplitudecontrast and quantitative phasecontrast microscopy by numerical reconstruction of Fresnel offaxis holograms,” Applied Optics, vol. 38, no. 34, pp. 6994–7001, Dec. 1999.
 [5] B. Bhaduri, H. V. Pham, M. Mir, and G. Popescu, “Diffraction phase microscopy with white light,” Optics Letters, vol. 37, no. 6, pp. 1094–1096, Mar. 2012.
 [6] Z. Wang, L. Millet, M. Mir, H. Ding, S. Unarunotai, J. Rogers, M. U. Gillette, and G. Popescu, “Spatial light interference microscopy (SLIM),” Optics Express, vol. 19, no. 2, pp. 1016–1026, Jan. 2011.
 [7] T. E. Gureyev, A. Roberts, and K. A. Nugent, “Partially coherent fields, the transportofintensity equation, and phase uniqueness,” Journal of the Optical Society of America A, vol. 12, no. 9, pp. 1942–1946, Sep. 1995.
 [8] N. Streibl, “Phase imaging by the transport equations of intensity,” Optics Communications, vol. 49, no. 1, pp. 6–10, Feb 1984.
 [9] L. Waller, L. Tian, and G. Barbastathis, “Transport of intensity phaseamplitude imaging with higher order intensity derivatives,” Optics Express, vol. 18, no. 12, pp. 12 552–12 561, Jun. 2010.
 [10] L. Tian, Z. Liu, L.H. Yeh, M. Chen, J. Zhong, and L. Waller, “Computational illumination for highspeed in vitro Fourier ptychographic microscopy,” Optica, vol. 2, no. 10, pp. 904–908, Oct. 2015.
 [11] G. Zheng, R. Horstmeyer, and C. Yang, “Widefield, highresolution Fourier ptychographic microscopy,” Nature Photonics, vol. 7, no. 9, pp. 739–745, Jul. 2013.
 [12] G. Zheng, C. Kolner, and C. Yang, “Microscopy refocusing and darkfield imaging by using a simple LED array,” Optics Letters, vol. 36, no. 20, pp. 3987–3989, Oct. 2011.
 [13] L. Tian, X. Li, K. Ramchandran, and L. Waller, “Multiplexed coded illumination for Fourier Ptychography with an LED array microscope,” Biomedical Optics Express, vol. 5, no. 7, pp. 1–14, Jun. 2014.
 [14] L. Tian and L. Waller, “3D intensity and phase imaging from light field measurements in an LED array microscope,” Optica, vol. 2, no. 2, pp. 104–111, Feb. 2015.
 [15] R. Ling, W. Tahir, H.Y. Lin, H. Lee, and L. Tian, “Highthroughput intensity diffraction tomography with a computational microscope,” Biomedical Optics Express, vol. 9, no. 5, pp. 2130–2141, Jan. 2018.
 [16] Z. Phillips, R. Eckert, and L. Waller, “Quasidome: A selfcalibrated highNA LED illuminator for Fourier ptychography,” in Imaging Systems and Applications, Jun. 2017, pp. IW4E–5.
 [17] Z. Liu, L. Tian, S. Liu, and L. Waller, “Realtime brightfield, darkfield, and phase contrast imaging in a lightemitting diode array microscope,” Journal of Biomedical Optics, vol. 19, no. 10, p. 106002, Oct. 2014.
 [18] B. Kachar, “Asymmetric illumination contrast: a method of image formation for video light microscopy,” Science, vol. 227, no. 4688, pp. 766–768, Feb. 1985.
 [19] D. K. Hamilton and C. J. R. Sheppard, “Differential phase contrast in scanning optical microscopy,” Journal of Microscopy, vol. 133, no. 1, pp. 27–39, Jan. 1984.
 [20] S. B. Mehta and C. J. R. Sheppard, “Quantitative phasegradient imaging at high resolution with asymmetric illuminationbased differential phase contrast,” Optics Letters, vol. 34, no. 13, pp. 1924–1926, Jul. 2009.
 [21] L. Tian and L. Waller, “Quantitative differential phase contrast imaging in an LED array microscope,” Optics Express, vol. 23, no. 9, pp. 11 394–11 403, May 2015.
 [22] R. A. Claus, P. P. Naulleau, A. R. Neureuther, and L. Waller, “Quantitative phase retrieval with arbitrary pupil and illumination,” Optics Express, vol. 23, no. 20, pp. 26 672–26 682, Oct. 2015.
 [23] D. K. Hamilton, C. J. R. Sheppard, and T. Wilson, “Improved imaging of phase gradients in scanning optical microscopy,” Journal of Microscopy, vol. 135, no. 3, pp. 275–286, Sep. 1984.
 [24] N. Streibl, “Threedimensional imaging by a microscope,” Journal of the Optical Society of America A, vol. 2, no. 2, pp. 121–127, Feb. 1985.
 [25] J. Li, Q. Chen, J. Zhang, Y. Zhang, L. Lu, and C. Zuo, “Efficient quantitative phase microscopy using programmable annular LED illumination,” Biomedical Optics Express, vol. 8, no. 10, pp. 4687–4705, Oct. 2017.
 [26] Y.Z. Lin, K.Y. Huang, and Y. Luo, “Quantitative differential phase contrast imaging at high resolution with radially asymmetric illumination,” Optics Letters, vol. 43, no. 12, pp. 2973–2976, Jun. 2018.
 [27] Y. LeCun, Y. Bengio, and G. Hinton, “Deep learning,” Nature, vol. 521, no. 7553, pp. 436–444, May 2015.
 [28] K. H. Jin, M. T. McCann, E. Froustey, and M. Unser, “Deep convolutional neural network for inverse problems in imaging,” IEEE Transactions on Image Processing, vol. 26, no. 9, pp. 4509–4522, Jun. 2017.
 [29] S. Wang, Z. Su, L. Ying, X. Peng, S. Zhu, F. Liang, D. Feng, and D. Liang, “Accelerating magnetic resonance imaging via deep learning,” in 2016 IEEE 13th International Symposium on Biomedical Imaging (ISBI), Apr. 2016, pp. 514–517.
 [30] Y. Rivenson, Y. Zhang, H. Günaydın, D. Teng, and A. Ozcan, “Phase recovery and holographic image reconstruction using deep learning in neural networks,” Light: Science & Applications, vol. 7, no. 2, p. 17141, Feb. 2018.
 [31] A. Sinha, J. Lee, S. Li, and G. Barbastathis, “Lensless computational imaging through deep learning,” Optica, vol. 4, no. 9, pp. 1117–1125, Sep. 2017.
 [32] Y. Rivenson, Z. Göröcs, H. Günaydin, Y. Zhang, H. Wang, and A. Ozcan, “Deep learning microscopy,” Optica, vol. 4, no. 11, pp. 1437–1443, Nov. 2017.
 [33] T. Nguyen, Y. Xue, Y. Li, L. Tian, and G. Nehmetallah, “Deep learning approach for Fourier ptychography microscopy,” arXiv preprint arXiv:1805.00334, pp. 1–15, Sep. 2018.
 [34] B. Diederich, R. Wartmann, H. Schadwinkel, and R. Heintzmann, “Using machinelearning to optimize phase contrast in a lowcost cellphone microscope,” PLoS ONE, vol. 13, no. 3, pp. 1–20, Mar. 2018.
 [35] R. Horstmeyer, R. Chen, B. Kappes, and B. Judkewitz, “Convolutional neural networks that teach microscopes how to image,” arXiv preprint arXiv:1709.07223, pp. 1–13, Sep. 2017.
 [36] A. Robey and V. Ganapati, “Optimal physical preprocessing for examplebased superresolution,” Optics Express, vol. 26, no. 24, pp. 31 333–31 350, Nov. 2018.
 [37] K. Gregor and Y. LeCun, “Learning fast approximations of sparse coding,” in Proceedings of the 27th International Conference on International Conference on Machine Learning, Jun. 2010, pp. 399–406.
 [38] K. Hammernik, T. Klatzer, E. Kobler, M. P. Recht, D. K. Sodickson, T. Pock, and F. Knoll, “Learning a variational network for reconstruction of accelerated MRI data,” Magnetic Resonance in Medicine, vol. 79, no. 6, pp. 3055–3071, Nov. 2017.
 [39] S. Diamond, V. Sitzmann, F. Heide, and G. Wetzstein, “Unrolled optimization with deep priors,” arXiv preprint arXiv:1705.08041, pp. 1–11, May 2017.
 [40] J. Sun, H. Li, Z. Xu et al., “Deep ADMMNet for compressive sensing MRI,” in Advances in Neural Information Processing Systems, 2016, pp. 10–18.
 [41] U. Kamilov and H. Mansour, “Learning optimal nonlinearities for iterative thresholding algorithms,” arXiv preprint arXiv:1512.04754, pp. 1–9, Dec. 2015.
 [42] E. Bostan, U. S. Kamilov, and L. Waller, “Learningbased image reconstruction via parallel proximal algorithm,” IEEE Signal Processing Letters, vol. 25, no. 7, pp. 989–993, May 2018.
 [43] G. Zheng, R. Horstmeyer, and C. Yang, “Widefield, highresolution Fourier ptychographic microscopy,” Nature Photonics, vol. 7, no. 9, pp. 739–745, Jul. 2013.
 [44] E. Bostan, U. S. Kamilov, M. Nilchian, and M. Unser, “Sparse stochastic processes and discretization of linear inverse problems,” IEEE Transactions on Image Processing, vol. 22, no. 7, pp. 2699–2710, Jul. 2013.
 [45] N. Parikh and S. Boyd, “Proximal algorithms,” Foundations and Trends® in Optimization, vol. 1, no. 3, pp. 127–239, Aug. 2014.
 [46] A. Beck and M. Teboulle, “A fast iterative shrinkagethresholding algorithm for linear inverse problems,” SIAM Journal on Imaging Sciences, vol. 2, no. 1, pp. 183–202, Jan. 2009.
 [47] S. Osher, M. Burger, D. Goldfarb, J. Xu, and W. Yin, “An iterative regularization method for total variationbased image restoration,” Multiscale Modeling & Simulation, vol. 4, no. 2, pp. 460–489, 2005.
 [48] K. Dabov, A. Foi, V. Katkovnik, and K. Egiazarian, “Image denoising by sparse 3D transformdomain collaborative filtering,” IEEE Transactions on Image Processing, vol. 16, no. 8, pp. 2080–2095, Aug. 2007.
 [49] P. L. Combettes and J.C. Pesquet, “Proximal splitting methods in signal processing,” in Fixedpoint algorithms for inverse problems in science and engineering, 2011, pp. 185–212.
 [50] U. S. Kamilov, “A parallel proximal algorithm for anisotropic total variation minimization,” IEEE Transactions on Image Processing, vol. 26, no. 2, pp. 539–548, Dec. 2016.
 [51] P. Marechal and J. Ye, “Optimizing condition numbers,” SIAM Journal on Optimization, vol. 20, no. 2, pp. 935–947, 2009.
 [52] C. S. Wong and J. C. Masaro, “Aoptimal design matrices,” Discrete Mathematics, vol. 50, pp. 295–318, 1984.
 [53] X. Ou, G. Zheng, and C. Yang, “Embedded pupil function recovery for Fourier ptychographic microscopy,” Optics Express, vol. 22, no. 5, pp. 4960–4972, Mar. 2014.
 [54] M. Lustig, D. Donoho, and J. M. Pauly, “Sparse MRI: The application of compressed sensing for rapid MR imaging,” Magnetic Resonance in Medicine, vol. 58, no. 6, pp. 1182–1195, Dec. 2007.
 [55] S. R. P. Pavani, M. A. Thompson, J. S. Biteen, S. J. Lord, N. Liu, R. J. Twieg, R. Piestun, and W. E. Moerner, “Threedimensional, singlemolecule fluorescence imaging beyond the diffraction limit by using a doublehelix point spread function,” Proceedings of the National Academy of Sciences, vol. 106, no. 9, pp. 2995–2999, Mar. 2009.