1 Introduction
The Abbe diffraction limit was considered to be the fundamental limit for spatial resolution of an optical microscope for more than a hundred years. In the last decade, novel techniques have circumvented this limit in order to achieve superresolution
[1, 2, 3, 4, 5, 6, 7]. Structured illumination microscopy (SIM) [1, 2, 3, 4], for example, uses illumination by multiple structured patterns to down-modulate high spatial frequency information of the object into the low-frequency region, which can then pass through the bandwidth of the microscope’s optical transfer function (OTF) and be captured by the sensor. The reconstruction algorithm for SIM combines a demodulation process, which brings the high spatial frequency information back to its original position, with aperture synthesis, which extends the support of the effective OTF. Various structured patterns have been used to realize SIM: periodic gratings [1, 2, 3, 4], a single focal spot (confocal microscope) [8, 9], multifocal spots [10, 11, 12, 13] and random speckles [14, 15, 16, 17, 18, 19, 20, 13, 21, 22]. When the illumination patterns themselves are diffraction-limited, linear SIM is restricted to twice the bandwidth of a widefield microscope [4], allowing up to 2× resolution enhancement (metrics explained in Sec. 3). In practice, structured illumination systems are sensitive to aberrations and experimental errors. To avoid reconstruction artifacts that degrade resolution, the patterns that are projected onto the sample must be known accurately. Periodic grating patterns can be parameterized by their contrast, period and phase angle, which may be estimated in post-processing [23, 24, 25, 26]. For multifocal patterns, the location of each focal spot is required [10]. For random speckle patterns, the relative shifts of the patterns are needed [18, 19]. Even with careful calibration and high-quality optics, distortions caused by the sample may degrade the result.
To alleviate some of the experimental challenges, blind SIM was proposed, enabling SIM reconstruction without many priors [16, 27, 17, 28, 21, 22]. The only assumption is that the sum of all illumination patterns is uniform. Optimization-based algorithms have been adopted, including iterative least squares with positivity and equality constraints [16, 27, 21], joint support recovery [17] and sparsity constraints [22]. However, these algorithms are sensitive to parameter tuning and may show low contrast in reconstructing high spatial frequencies [16]. Another algorithm, speckle superresolution optical fluctuation imaging (SSOFI), realizes SOFI [29] by first projecting random speckle patterns onto the object, and then using the statistical properties of the speckle patterns as a prior to reconstruct a high-resolution image [20]. SSOFI is experimentally simple and robust; however, it only achieves a ~1.6× resolution enhancement instead of the 2× of conventional SIM techniques (as compared to a widefield microscope).
In this paper, we propose a new reconstruction algorithm for SIM that is applicable to any illumination patterns. Our method, termed pattern estimation structured illumination microscopy with a statistical prior (PESIMS), is as robust and insensitive to parameter tuning as SSOFI, and achieves better resolution enhancement (up to 2×). Like blind SIM, the patterns need not be known (except for a requirement on the covariance of the patterns). We demonstrate our method using simulated and experimental results with both speckle and multifocal patterns. We discuss pattern design strategies to reduce the amount of data required and demonstrate an extension that uses pixel reassignment [30, 31, 32, 33, 34] to improve the reconstruction quality.
2 Theory and Method
Our algorithm takes in a SIM dataset consisting of multiple images captured under different structured illumination patterns (e.g. random speckles, multifocal spots). We reconstruct the superresolved image in two parts. The first part is an iterative optimization procedure for estimating each illumination pattern based on an approximated object. The second part reconstructs the highresolution image using the estimated patterns and the measured images, along with a statistical prior. Before introducing these two parts, we start by defining the SIM forward model.
2.1 Forward model of structured illumination microscopy
A representative experimental setup is shown in Fig. 1. A digital micromirror device (DMD) spatial light modulator (SLM) is used to project patterns onto the object through an objective lens. The measured intensity I_j(r) for the jth captured image is the product of the object’s fluorescence distribution S(r) with the illumination pattern p_j(r), where r denotes the lateral position coordinates. This product is then convolved with the system’s incoherent detection-side point spread function (PSF), h_d(r):
I_j(r) = [S(r) p_j(r)] ⊗ h_d(r),  (1)

where ⊗ denotes 2D convolution.
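As a concrete illustration of the forward model in Eq. (1), a single SIM measurement can be simulated with a few FFTs. The Gaussian PSF, grid size and speckle statistics below are illustrative assumptions, not the experimental values (the paper's implementation is in MATLAB; this sketch uses Python/NumPy):

```python
import numpy as np

def gaussian_psf(n, fwhm):
    """Isotropic Gaussian stand-in for the incoherent detection PSF h_d."""
    sigma = fwhm / 2.355
    x = np.arange(n) - n // 2
    xx, yy = np.meshgrid(x, x)
    h = np.exp(-(xx ** 2 + yy ** 2) / (2 * sigma ** 2))
    return h / h.sum()

def forward_model(obj, pattern, psf):
    """One SIM measurement, Eq. (1)-style: multiply the object by the
    illumination pattern, then blur with the detection PSF
    (circular convolution via FFT)."""
    otf = np.fft.fft2(np.fft.ifftshift(psf))
    return np.real(np.fft.ifft2(np.fft.fft2(obj * pattern) * otf))

# toy example: a random speckle-like pattern illuminating a point object
n = 64
obj = np.zeros((n, n))
obj[32, 32] = 1.0
pattern = np.random.rand(n, n)
psf = gaussian_psf(n, fwhm=4.0)
img = forward_model(obj, pattern, psf)
```

Because the PSF is normalized to unit sum, the circular convolution conserves the total intensity of the pattern-object product.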
2.2 Part 1: Pattern estimation
The first part of our inverse algorithm is to estimate the illumination patterns. To do so, we start with a low-resolution approximation of the object. Then, we use this object and our measured images to iteratively estimate the patterns (see Fig. 2).
Part 1a: Approximate widefield image
If we already knew the object, it would be straightforward to estimate the pattern for each measured image by dividing the object out of each of the measurements. However, the object is unknown. Hence, we start by making a rough estimate of it. We first take the mean of all the measured images:
⟨I_j(r)⟩_j = [S(r) ⟨p_j(r)⟩_j] ⊗ h_d(r) ≈ c [S ⊗ h_d](r),  (2)

where ⟨·⟩_j is the mean operation with respect to the pattern index j, and c = ⟨p_j(r)⟩_j is approximately a constant over the entire field of view. The resulting image is equivalent to the low-resolution widefield image I_WF(r) if the sum of all illumination patterns is approximately uniform.
Part 1b: Deconvolve widefield image
Since the widefield image represents the convolution of the object with the PSF, we can perform a deconvolution operation to estimate the low-resolution object:

Ŝ(r) = F⁻¹{ F{I_WF}(u) H_d*(u) / (|H_d(u)|² + α) }(r),  (3)

where F and F⁻¹ denote the Fourier transform and its inverse, respectively, I_WF(r) = ⟨I_j(r)⟩_j is the widefield image from Eq. (2), H_d(u) = F{h_d}(u) denotes the Fourier transform of the detection PSF (the OTF), u are the lateral spatial frequency coordinates and α is a small Tikhonov regularization constant. Note that this object estimate has diffraction-limited resolution and will be used only for estimating the illumination patterns.

Part 1c: Pattern estimation
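Parts 1a-1b amount to averaging the image stack and applying a Tikhonov-regularized inverse filter. A minimal sketch follows, assuming a Gaussian PSF and circular (FFT) convolution; the function name `tikhonov_deconvolve` and its `eps` parameter are illustrative, not the paper's code:

```python
import numpy as np

def tikhonov_deconvolve(image, psf, eps=1e-3):
    """Tikhonov/Wiener-style deconvolution, Eq. (3)-style: divide by the OTF
    with a small regularizer eps to avoid noise amplification where the OTF
    is weak."""
    otf = np.fft.fft2(np.fft.ifftshift(psf))
    est_f = np.fft.fft2(image) * np.conj(otf) / (np.abs(otf) ** 2 + eps)
    return np.real(np.fft.ifft2(est_f))

# toy check: blur a point source, then deconvolve it back
n = 64
x = np.arange(n) - n // 2
xx, yy = np.meshgrid(x, x)
psf = np.exp(-(xx ** 2 + yy ** 2) / 8.0)
psf /= psf.sum()
obj = np.zeros((n, n))
obj[40, 20] = 1.0
blurred = np.real(np.fft.ifft2(np.fft.fft2(obj)
                               * np.fft.fft2(np.fft.ifftshift(psf))))
recovered = tikhonov_deconvolve(blurred, psf, eps=1e-4)
```

The recovered point is sharper (higher peak) than the blurred one, but still band-limited: frequencies where |H_d|² is much smaller than `eps` remain suppressed.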
We then use the low-resolution object estimate to recover each of the illumination patterns. Since each image is simply the blurred product of the illumination and the object, we could divide each image by the estimated object to get the pattern. However, we instead solve the problem as an optimization procedure in order to impose the correct Fourier support constraint and avoid reconstruction artifacts. The jth pattern estimate p̂_j is the solution to the following problem:
p̂_j = argmin_{p_j} f(p_j) + 1_Ω(p_j),  with  f(p_j) = || I_j(r) − [Ŝ(r) p_j(r)] ⊗ h_d(r) ||₂²,  (4)

where Ω is the set of patterns whose spatial frequency content lies within the illumination OTF (cutoff 2NA/λ_ex) and λ_ex is the wavelength of the excitation light. The first term of the cost function, f(p_j), in Eq. (4) is the least-squares error (residual) between the measured intensity and the predicted intensity based on our current estimate. The second term enforces a frequency support constraint for the illumination pattern via an indicator function 1_Ω(·). This is important for reducing artifacts in the pattern estimation, because a naive division between the measured image and the estimated object would create errors outside of this frequency support. In our epi-illumination geometry, the constraint is that the frequency content of each illumination pattern be confined within the OTF defined by the objective’s NA.
We implement a proximal gradient descent algorithm [35], summarized in Subroutine 1. Proximal gradient descent is designed to solve convex optimization problems like ours that have two cost function terms: one being a differentiable cost function term (e.g. the residual) and the other being a constraint or regularization term (usually nondifferentiable). When the constraint is defined by an indicator function, as in Eq. (4), the method is also known as a projected gradient method.
To implement, we first compute the gradient of the differentiable cost function term with respect to p_j:

∇f(p_j^(k))(r) = 2 Ŝ(r) [ h_d(−r) ⊗ ( [Ŝ p_j^(k)] ⊗ h_d − I_j ) ](r),  (5)

where the superscript (k) denotes evaluation of the gradient using the pattern at the kth iteration.
We define the projection operation to force the information outside of the OTF to be zero at each iteration. To reduce high-frequency artifacts, the following soft-edge filter is used:

proj(p)(r) = F⁻¹{ F{p}(u) |H_i(u)|² / (|H_i(u)|² + δ) }(r),  (6)

where H_i(u) is the Fourier transform of the system’s illumination-side PSF h_i(r), and δ determines the amount of high-frequency information that is suppressed in the pattern estimation step. We repeat this process of updates and projections until convergence (typically 50 iterations to estimate each pattern).
The convergence speed for proximal gradient descent is on the order of O(1/k) [35], indicating that the gap between the current and optimal cost function values is inversely proportional to the number of iterations k. To accelerate convergence, one extra step is conducted in Subroutine 1 to include the information of the previous estimate [36, 37]. The convergence rate for this accelerated proximal gradient method, O(1/k²) [37], is significantly faster than that of the normal proximal gradient method.
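The accelerated projected-gradient iteration of Subroutine 1 can be sketched as follows. The hard frequency-support mask, the step-size bound, and the Gaussian OTF are simplifying assumptions of this sketch (the paper uses a soft-edge filter, Eq. (6), and about 50 iterations per pattern); `estimate_pattern` is an illustrative name:

```python
import numpy as np

def estimate_pattern(I, S_est, psf, n_iter=50):
    """FISTA-style accelerated projected gradient sketch of Part 1c:
    minimize ||(S_est * p) conv h_d - I||^2 over p, projecting each iterate
    onto the Fourier support of the (assumed Gaussian) OTF."""
    otf = np.fft.fft2(np.fft.ifftshift(psf))
    mask = np.abs(otf) > 1e-3 * np.abs(otf).max()   # hard support mask (sketch)

    conv = lambda x: np.real(np.fft.ifft2(np.fft.fft2(x) * otf))
    project = lambda x: np.real(np.fft.ifft2(np.fft.fft2(x) * mask))

    # 1/L step size from the Lipschitz bound, assuming S_est > 0
    step = 1.0 / (2 * S_est.max() ** 2 * np.abs(otf).max() ** 2)

    p = np.zeros_like(I)
    z, t = p.copy(), 1.0
    for _ in range(n_iter):
        # gradient of the residual term; h_d assumed symmetric, so conv^T = conv
        grad = 2 * S_est * conv(conv(S_est * z) - I)
        p_new = project(z - step * grad)
        t_new = (1 + np.sqrt(1 + 4 * t ** 2)) / 2
        z = p_new + ((t - 1) / t_new) * (p_new - p)   # Nesterov momentum
        p, t = p_new, t_new
    return p
```

When the measurement is consistent with a support-feasible pattern, the data residual decays at the accelerated O(1/k²) rate discussed above.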
2.3 Part 2: SIM with a statistical prior
Once we have recovered the illumination patterns, the second part of the algorithm is to reconstruct a high-resolution image from the measured dataset and the estimated patterns. We call this part of the algorithm Structured Illumination Microscopy with a Statistical prior (SIMS), summarized in Fig. 3. There are four steps, explained below. We also describe how the statistical prior is used and why this procedure gives better resolution.
Part 2a: Calculate the pattern-intensity covariance
Consider the case where the pattern p_j(r) is a random variable at each position r, and the measured intensity I_j(r) is also a random variable at each position. The jth image is thus the jth sample function for these random variables (one event out of the sample space). Covariance is a measure of how much two random variables change together. Since the intensity is the blurred version of the product between the random patterns and the deterministic object (Eq. (1)), the covariance between the pattern and the intensity shows high similarity wherever the object has signal, and thus allows us to find the object underneath the random-pattern illumination [38, 39, 40, 14, 41]. We calculate this covariance image as

C(r) = Cov_j[p_j(r), I_j(r)] = ∫ h_d(r − r₁) Cov_j[p_j(r), p_j(r₁)] S(r₁) dr₁,  (7)

where Cov_j[A, B] = ⟨(A − ⟨A⟩_j)(B − ⟨B⟩_j)⟩_j, ⟨·⟩_j denotes the mean over the pattern index j, and r₁ is an integration variable over the sample plane.
Regardless of which illumination pattern is imposed, the covariance image always gives an estimate of the object. However, the resolution of the reconstructed object may differ for different pattern statistics. We can quantify this by taking a closer look at the expression on the right-hand side of Eq. (7). The product of the detection PSF and the covariance between p_j(r) and p_j(r₁) acts as the PSF of the covariance image, which thus determines resolution. If the patterns are perfectly spatially correlated, the pattern-pattern covariance is a constant, and the pattern-intensity covariance image is a normal widefield image with PSF h_d(r). If the patterns are perfectly spatially uncorrelated, the pattern-pattern covariance is proportional to δ(r − r₁), which, for a constant variance, results in the PSF being a delta function and the object being reconstructed with perfect resolution. In practice, this is not achievable, since the illumination is bandlimited and thus cannot be perfectly uncorrelated. In the general case, to find the resolution (PSF) of the covariance image, we need to calculate the spatial covariance of the patterns, which is the subject of Part 2b, below.
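For sampled data, the covariance image of Eq. (7) reduces to a pixel-wise sample covariance across the stack. A minimal sketch (axis 0 of each stack indexes the pattern number j; the helper name is illustrative):

```python
import numpy as np

def covariance_image(patterns, images):
    """Part 2a sketch: pixel-wise sample covariance between the pattern stack
    and the intensity stack, C(r) = <(p_j(r) - <p>)(I_j(r) - <I>)>_j."""
    dp = patterns - patterns.mean(axis=0)
    dI = images - images.mean(axis=0)
    return (dp * dI).mean(axis=0)
```

With pixel-wise uncorrelated patterns, the covariance image of a point object collapses back to a point, illustrating the resolution gain discussed above.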
Part 2b: Calculate the pattern-pattern covariance
To calculate the spatial covariance of the projected patterns, we first consider the pattern formation model. In our experiments, for example, we use a DMD to create random patterns at the sample plane. Assuming that the projected DMD pattern is sparse enough to avoid interference cross-terms, we can express our pattern under the incoherent model as

p_j(r) = M_j(r) ⊗ h_i(r),  (8)

where M_j(r) is the jth pattern on the DMD (mapped to sample-plane coordinates) and h_i(r) is the system’s illumination-side PSF. With this model, the pattern-pattern covariance is

Cov_j[p_j(r), p_j(r₁)] = ∫∫ h_i(r − r₂) h_i(r₁ − r₃) Cov_j[M_j(r₂), M_j(r₃)] dr₂ dr₃ = a ∫ h_i(r − r₂) h_i(r₁ − r₂) dr₂,  (9)

where we have used an assumption that the DMD pattern values at positions r₂ and r₃ are perfectly uncorrelated:

Cov_j[M_j(r₂), M_j(r₃)] = κ Var_j[M_j(r₂)] δ(r₂ − r₃),  (10)

with κ being a constant that maintains unit consistency. This assumption is valid because the effective DMD pixel size is small compared to the FWHM of the optical system’s PSF, and we can control M_j to create an uncorrelated pattern. In the experiment, each position of M_j is an independent and identically distributed random variable. When the number of patterns is large enough, the variance approaches the same constant for all positions. We can then combine κ and the variance into a single constant a.
Ideally, we can assume h_i ≈ h_d when λ_ex ≈ λ_em, where λ_em is the wavelength of the fluorescent emission (detection) light, and theoretically calculate the pattern-pattern covariance. We can also estimate it by numerically evaluating Eq. (9) using our estimated patterns, which accounts for possible aberrations in the illumination optics.
Part 2c: PSF deconvolution of the covariance image
The pattern-pattern covariance derived in Part 2b determines the PSF of the pattern-intensity covariance calculated in Part 2a. Hence, we can plug the pattern-pattern covariance into Eq. (7) and write the covariance image as

C(r) = a S(r) ⊗ [ h_d(r) (h_i ⋆ h_i)(r) ],  (11)

where ⋆ denotes correlation, so that (h_i ⋆ h_i)(r) = ∫ h_i(r₂) h_i(r₂ + r) dr₂. Importantly, the effective PSF for this covariance image is now h_eff(r) = h_d(r)(h_i ⋆ h_i)(r), and the corresponding effective OTF is H_eff(u) = H_d(u) ⊗ |H_i(u)|². Since both H_d and |H_i|² have approximately the same Fourier support as the widefield OTF, the convolution between them covers around twice the support of the widefield OTF, as in conventional SIM. Given the effective PSF, we implement a standard deconvolution to improve contrast at high spatial frequencies:

Ŝ_SIMS(r) = F⁻¹{ F{C}(u) H_eff*(u) / (|H_eff(u)|² + β) }(r),  (12)

where H_eff(u) = F{h_eff}(u) and β is a small regularization parameter.
Part 2d: Shading correction operation
When the number of images is not large enough to give uniform variance of the patterns at each pixel (the constant a from Eq. (9)), low-frequency shading artifacts will occur. Even if we assume the mean of the patterns to be flat in Eq. (2), the variance can still be nonuniform. These artifacts can be seen in the deconvolved covariance image in Fig. 3. To resolve this, we can estimate and correct for the variance across the image using our previously estimated projected patterns. Since the projected pattern is the blurred version of the pattern on the DMD, by ignoring the high-frequency component of the DMD pattern, we can approximate the variance map from the estimated patterns as

v(r) = ⟨( p̂_j(r) − ⟨p̂_j(r)⟩_j )² ⟩_j.  (13)

We divide out the spatially-varying variance in Eq. (11) from the deconvolved SIMS image,

Ŝ_final(r) = Ŝ_SIMS(r) / ( v(r) + γ ),  (14)

where γ is a regularizer and Ŝ_SIMS is the output from our SIMS reconstruction (Part 2c). The result of this step is our final reconstruction of the high-resolution object.
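Parts 2c-2d can be sketched as a Tikhonov deconvolution with the effective PSF, followed by a regularized division by the variance map and a positivity clip. The regularized-division form, the parameter names (`beta`, `gamma`) and the Gaussian effective PSF are assumptions of this sketch, not the paper's exact implementation:

```python
import numpy as np

def sims_deconvolve_and_correct(cov_img, eff_psf, var_map, beta=1e-3, gamma=1e-2):
    """Parts 2c-2d sketch: Tikhonov-deconvolve the covariance image with the
    effective PSF, then divide out the (regularized) spatially varying pattern
    variance to correct low-frequency shading."""
    otf = np.fft.fft2(np.fft.ifftshift(eff_psf))
    dec = np.real(np.fft.ifft2(np.fft.fft2(cov_img) * np.conj(otf)
                               / (np.abs(otf) ** 2 + beta)))
    corrected = dec / (var_map + gamma)        # shading correction (sketch)
    return np.clip(corrected, 0.0, None)       # fluorescence is nonnegative
```

With a uniform variance map the correction step is a harmless global rescaling; it only matters when the pattern statistics are nonuniform across the field of view.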
2.4 Parameter Tuning and Algorithm Runtime
Our SIMS algorithm involves four regularization parameters, introduced in Eq. (3), Eq. (6), Eq. (12), and Eq. (14), respectively. Each is decoupled from the others and acts similarly to a typical Tikhonov regularizer, so tuning may be done independently. Generally, we want the regularizers to be as small as possible, while still avoiding noise amplification.
The heuristic procedure to tune the regularization parameters is summarized as follows. First, we check whether the widefield images are well deconvolved, by finding the smallest deconvolution regularizer that gives the image with the best resolution but without obvious noise amplification. We then check the deconvolved covariance image by tuning the SIMS regularizer and the smooth-edge filter regularizer using the same principle. Finally, we check the final reconstruction by using the smallest shading correction regularizer that provides sufficient shading correction without evident noise amplification. Additionally, the negative values in all of the deconvolved images are set to zero, since the fluorescent density is always positive.

The algorithm is implemented in MATLAB and run on a computer with an Intel i7 2.8 GHz CPU and 16 GB DDR3 RAM under the OS X operating system. To reconstruct an image from 400 measurements, this computer takes about 200 seconds. The bottleneck of the algorithm is the pattern estimation step; estimating each pattern takes around 0.5 seconds.
3 Results
3.1 Definition of resolution
Before introducing and comparing any SIM algorithms, we first define the resolution criterion considered in this paper. Resolution of a microscopic image is usually defined by measuring the minimal resolvable distance between two points. Consider a widefield image with detected wavelength λ and numerical aperture NA; the Abbe resolution criterion is then λ/(2NA), the full width at half maximum (FWHM) of the widefield PSF. As two points get closer to each other, the contrast between them decreases. At the separation set by Abbe’s limit, two infinitely small points observed under a widefield microscope give an overlapping two-point image with a dip at the center whose contrast equals 0.01. Hence, the Abbe resolution criterion can be thought of as setting the minimum acceptable contrast between two points at 0.01. We therefore define the resolution of a microscope or a reconstruction algorithm by measuring the smallest resolvable fine features that have a contrast of at least 0.01.
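To make the contrast criterion concrete, the dip contrast of a two-point image can be computed for a model PSF. Note that the 1D Gaussian PSF used here gives a contrast of about 0.03 at the FWHM separation, not the 0.01 of the true widefield PSF, so this function is purely illustrative of the definition:

```python
import numpy as np

def two_point_contrast(separation, sigma):
    """Dip contrast between two Gaussian-blurred points separated by
    `separation` (PSF modeled as a 1D Gaussian with std `sigma`):
    contrast = (peak - dip) / (peak + dip); 0 when the pair is unresolved."""
    x = np.linspace(-10 * sigma, 10 * sigma, 4001)   # grid includes x = 0
    profile = (np.exp(-(x - separation / 2) ** 2 / (2 * sigma ** 2))
               + np.exp(-(x + separation / 2) ** 2 / (2 * sigma ** 2)))
    dip, peak = profile[len(x) // 2], profile.max()
    return (peak - dip) / (peak + dip)
```

The contrast grows monotonically with separation once the two points are resolved, which is what lets a minimal-contrast threshold define a resolution limit.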
3.2 Comparison of algorithms
Given this definition of resolution, we quantify the resolution of various algorithms in Fig. 4. The Siemens star test target (a spoke pattern defined in polar coordinates) has varying spatial frequencies along its radius. The resolution of the different imaging methods is quantified by reading off the minimal resolved period at which the contrast reaches 0.01. The effective modulation transfer function (MTF) of each method is shown in Fig. 4b, measured as the contrast of the reconstructed Siemens star image at different radii.
Our simulations use a SIM dataset with random patterns, so that we may compare against the previously proposed reconstruction algorithms of blind SIM [16] and SSOFI [20]. We create speckle-illuminated images from shifted random patterns on the DMD, scanned with sub-PSF-FWHM step sizes in the x and y directions. In each pattern, only a sparse fraction of the DMD pixels is turned on. This noise-free situation allows us to compare the ideal achievable resolution for the different algorithms.
Figure 4a shows the widefield, deconvolved widefield, confocal, and deconvolved confocal images of the Siemens star, as compared to blind SIM [16], SSOFI [20] and our algorithm. At the bottom, we show the measured effective MTF for each algorithm. Visually, SSOFI [20] gives the fewest artifacts.
Table 1. Minimal resolved period (feature period at which the contrast drops to 0.01) and resolution enhancement relative to the widefield image.

                   Widefield   Deconvolved widefield   Confocal   Deconvolved confocal
Minimal period     1.035       0.844                   0.681      0.428
Enhancement        1           1.23                    1.52       2.42

                   Blind SIM   SSOFI   PESIMS   PESIMS-PR
Minimal period     0.563       0.619   0.551    0.517
Enhancement        1.84        1.67    1.88     2.00
To compare resolution, we use our definition of the minimal resolved separation at which the contrast drops to 0.01, and summarize the results in Table 1. The enhancement metric gives the ratio of resolution improvement over widefield imaging. SSOFI resolves features down to 1.67× smaller than the widefield microscope, which is close to the 1.6× claimed in [20], and blind SIM achieves a 1.84× improvement but with lower contrast at high frequencies, which is consistent with [16]. Our PESIMS and PESIMS-PR (PESIMS with the pixel reassignment algorithm [30, 31, 32, 34, 33] described in Appendix B) algorithms give better resolution than the other blind methods, resolving features down to 1.88× and 2.00× smaller than widefield, respectively, close to the limit set by the deconvolved confocal image. Hence, our method performs the best of the blind algorithms.
Ideally, if we knew all the patterns and our spatial modulation covered the full Fourier bandwidth of the objective, we could reconstruct out to twice the widefield cutoff in Fourier space, achieving an enhancement of about 2.4×, as in the case of the deconvolved confocal image or periodic SIM with known patterns. The blind algorithms, however, deal with an ill-posed problem (solving for more unknown images than are measured) that can only become well-posed through appropriate constraints. If the prior for one of these algorithms is not accurate enough, it may effectively solve a different problem even when the problem becomes well-posed. This is why algorithms with different prior assumptions give different resolution performance for the same dataset, as we saw in Table 1.
4 Experimental Results
Our experimental setup is shown in Fig. 1. A laser beam (Thorlabs, CPS532, 4.5 mW) is expanded to impinge on a reflective DMD spatial light modulator (DLP® Discovery 4100, 0.7″ XGA, 1024×768 pixels, pixel size 13.6 µm). The DMD generates a sequence of sparse random patterns (only a small fraction of the DMD pixels turned on). These random illumination patterns are projected onto the object (with a demagnification of 60) through a 4f system composed of a 200 mm convex lens and an objective lens (Nikon CFI). The resulting fluorescent light is then collected with another 4f system formed by the same objective and a 400 mm convex lens. A dichroic mirror blocks the reflected illumination light (as in a typical epi-illumination setup). The images are taken with an sCMOS camera (PCO.edge 5.5, 2560×2160 pixels, pixel size 6.5 µm). Patterns are shifted on a grid in the x and y directions with a sub-PSF-FWHM step size, collecting one image at each step. Our test object is carboxylate-modified red fluorescent beads (excitation wavelength 580 nm, emission wavelength 605 nm) with a mean diameter of 210 nm (F8810, Life Technologies).
Reconstruction results are shown in Fig. 5, demonstrating improved resolution using our PESIMS algorithm, as compared to standard widefield or deconvolved widefield images.
To quantitatively analyze the experimental results, we measure the resolved feature size in the reconstructed image and compare it to our theory. As shown in the cutline in Fig. 5, two fluorescent beads separated by 328 nm are clearly resolved by our method, while they are unresolvable in both the widefield and deconvolved widefield images. The contrast of this two-Gaussian profile shows that the two Gaussians are separated by 1.16× their FWHM, so the FWHM of the reconstructed beads is around 283 nm. Modeling the bead as a Gaussian with a FWHM of 140 nm (for the 210 nm diameter beads), we can deconvolve the bead shape out of the reconstruction, giving a FWHM of the effective PSF of 240 nm, which is below the diffraction limit.
Our algorithm can be used on other types of SIM datasets, as long as the pattern-pattern covariance gives a point-like function at the center. As an example, we tested our algorithm on a dataset from a previous method, Multispot SIM (MSIM) [10]. In MSIM, the patterns are a shifting grid of diffraction-limited spots. Since the previous MSIM implementation assumes known patterns, a calibration step captured an extra dataset with a uniform fluorescence sample in order to measure the patterns directly. Our algorithm ignores this calibration data, yet accurately reconstructs both the object and the patterns (see Fig. 6). The MSIM result using the calibration data is shown for comparison. The sample is microtubules stained with Alexa Fluor 488 in a fixed cell observed under a TIRF objective. Our PESIMS-PR reconstruction gives a similar result to the known-pattern MSIM reconstruction.
5 Conclusion
We have proposed a robust algorithm that can give a 2× resolution improvement over widefield fluorescence imaging using a SIM dataset, without knowing the imposed patterns. Our algorithm first estimates each illumination pattern from a low-resolution approximate object and the measured intensities by solving a constrained convex optimization problem. We then synthesize a high-resolution image by calculating the covariance between the estimated patterns and the measured intensity images, followed by a deconvolution and shading correction to arrive at the final reconstruction. We quantified the resolution limits of our algorithm by the reconstructed contrast of a simulated Siemens star target. In simulations, we showed that our algorithm gives better resolution than previously proposed blind algorithms [16, 20]. Experimentally, we demonstrated this improvement on both random speckle illumination and multispot scanned illumination.
Acknowledgment
The authors thank Prof. Michael Lustig, Jonathan Tamir, Michael Chen and HsiouYuan Liu for helpful discussions. This research is funded by the Gordon and Betty Moore Foundation’s DataDriven Discovery Initiative through Grant GBMF4562 to Laura Waller (UC Berkeley).
Appendix A: Reducing the number of images by multispot scanning
In this paper, we used 400 random speckle illumination patterns to reconstruct the image, far more than the 9-image requirement of conventional SIM [4]. This large number of images was required for high-quality reconstructions because the average and variance of the illumination patterns must be sufficiently flat in order to avoid shading variations. Recall that we want the variance map in Eq. (13) to be close to a constant, which requires the variance of the random patterns to be uniform. When the number of images goes down, this statistical assumption no longer holds. We use the shading correction algorithm (Sec. 2.3) to fix this problem by estimating the nonuniform variance, but it is still only an estimate. Hence, as the degree of variance nonuniformity increases (that is, as the number of images decreases), the shading correction algorithm incurs errors.
Figure 7 shows simulations demonstrating the effect of reducing the number of images. We use the same random patterns as in Sec. 2.3 and shift them by step sizes of 0.6× the FWHM of the PSF. As we decrease the number of images from 400 to 36, the reconstruction becomes worse, due to shading errors. The shading map is shown in the bottom row of Fig. 7. We can see that the artifacts occur in the regions where the shading map is dim and rapidly changing. Without knowing the patterns a priori, it is not possible to fully correct these shading effects.
Since we know that the artifacts that appear with too few images are due to a nonuniform shading map, we can attempt to design patterns whose shading map is uniform with a minimal number of images. Consider the contribution from a single pattern: the squared pattern is similar to the original pattern but with sharper bright spots. The ensemble average sums up all these bright spots after shifting the pattern around. For a shifted random pattern, we must capture many images in order for the summation of the bright spots to give a uniform map. One efficient way to make such a sum of bright spots uniform is to use a periodic multispot pattern (see Fig. 7) [10, 11, 13]. The period of this multispot pattern is designed to be 6 shifting step sizes, so 6×6 = 36 scanning steps give a uniform shading map. The reconstruction, also shown in Fig. 7, is almost as good as the one illuminated with the full set of shifted random patterns.
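The uniformity argument can be checked directly: scanning a periodic spot grid over one full unit cell touches every pixel exactly once, so the mean and second-moment maps are perfectly flat after only period² images. The binary one-pixel-spot pattern below is a simplified stand-in for the projected (blurred) multispot pattern; `multispot_pattern` is an illustrative helper:

```python
import numpy as np

def multispot_pattern(n, period, shift):
    """Binary multispot DMD pattern: one 'on' pixel per period x period unit
    cell, offset by the (row, col) scan shift."""
    p = np.zeros((n, n))
    p[shift[0]::period, shift[1]::period] = 1.0
    return p

# scanning the spot grid over one full period x period cell hits every DMD
# pixel exactly once, so the mean (and second-moment) map is perfectly flat
n, period = 36, 6
stack = np.stack([multispot_pattern(n, period, (i, j))
                  for i in range(period) for j in range(period)])
mean_map = stack.mean(axis=0)
```

A shifted random pattern needs far more images before the accumulated bright spots average out to a comparably flat map, which is the data-efficiency argument for multispot scanning.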
Experimentally, we see similar trends in image reconstruction quality for the different illumination strategies (see the bottom row of Fig. 7). Random pattern illumination of fluorescent beads with many patterns and multispot illumination with far fewer patterns give very similar results, and shading artifacts become prominent as the number of random patterns is reduced. Note that we use the same algorithm for both the random and multispot illuminated datasets, because the PSFs of the pattern-intensity covariance images are the same in both cases.
To show that the PSF of the pattern-intensity covariance image is the same for random and multispot illumination, we derive the pattern-pattern covariance as we did in Part 2b of Sec. 2.3. To do so, we need the covariance of the patterns on the DMD, which we plug into Eq. (9) to get the pattern-pattern covariance. For the multispot case, we can express the pattern on the DMD and its zero-mean version as
M_j(r) = Σ_{m,n} δ(r − r_j − (mT, nT)),  M̃_j(r) = M_j(r) − ⟨M_j(r)⟩_j,  (15)

where m and n are integers, T is the period of the pattern and r_j is the scan shift of the jth pattern. Then, we can calculate the covariance of the pattern on the DMD as

Cov_j[M_j(r₂), M_j(r₃)] = κ′ Σ_{m,n} δ(r₂ − r₃ − (mT, nT)),  (16)

where κ′ is a constant that enforces unit consistency. Plugging this into Eq. (9), we can then calculate the pattern-pattern covariance as

Cov_j[p_j(r), p_j(r₁)] = a′ Σ_{m,n} (h_i ⋆ h_i)(r − r₁ − (mT, nT)).  (17)

Although the pattern-pattern covariance is now a periodic replica of (h_i ⋆ h_i), the PSF of the covariance image depends only on the product of h_d and the pattern-pattern covariance, as derived in Eq. (7). If the period of the multispot pattern is large compared to the width of h_d, only the central replica contributes, and the PSF is still h_d(r)(h_i ⋆ h_i)(r), the same as in the case of random pattern illumination.
Appendix B: Enhanced SNR via pixel reassignment
In this section, we first discuss the similarity between SIMS and confocal microscopy. This leads to an extension of our method that incorporates the pixel reassignment procedure proposed in [30, 31, 32, 34, 33]. In computing the covariance of the shifted pattern and the intensity, some information about the object is still left over. Pixel reassignment incorporates it in a straightforward fashion, giving better SNR in the final reconstruction.
In Sec. 2.3 of our SIMS procedure, we first calculate the covariance image C(r). The PSF of this covariance image is determined by imposing our statistical prior on the pattern-pattern covariance. The effect is similar to the illumination PSF of confocal microscopy [9]. Looking at Eq. (11), our covariance image, with PSF h_d(r)(h_i ⋆ h_i)(r), is the same as a confocal image taken with illumination PSF (h_i ⋆ h_i)(r) and detection PSF h_d(r).
From the same SIM dataset, we can further use shifted versions of the patterns and correlate them with the intensity to compute a series of shifted covariance images

C_s(r) = Cov_j[ p_j(r − s), I_j(r) ].  (18)

The PSF of the shifted covariance image is the product of h_d(r) and a shifted copy of (h_i ⋆ h_i), whose center is approximately at s/2. This image is the same as the image taken under a confocal microscope with a shifted pinhole. This implies that by shifting the patterns around and correlating with the intensity, we get the equivalent of many 2D confocal images taken with the pinhole at different positions. This is the same dataset as described for the image scanning microscope, where the single-pixel detector and pinhole of the confocal system are replaced with a CCD [32, 33]. Though these images are not centered, they still contain information about the same object. Pixel reassignment was proposed in [31, 32, 34, 33] as a way to incorporate this 4D information to produce a 2D image with better SNR.
Since the 2D images from shifted patterns are approximately shifted versions of the one at s = 0, we can shift their information back to the center region and sum all of these images to enhance the SNR, forming a pixel-reassigned (PR) image as

C_PR(r) = Σ_s C_s(r + s/2).  (19)

This synthesized image has the pixel-reassigned PSF h_PR(r) = Σ_s h_d(r + s/2)(h_i ⋆ h_i)(r − s/2). Figure 8(a) shows a comparison between the SIMS PSF, h_d(r)(h_i ⋆ h_i)(r), and the PSF of SIMS with pixel reassignment, both in real space and in Fourier space (assuming h_i = h_d). In real space, the PSF after pixel reassignment looks broader than the one without. However, the OTF with pixel reassignment has larger values in the high-frequency region, where noise severely degrades the image resolution. Thus, we get better SNR by summing up all the available information, yielding an OTF that better handles noise in the high-frequency region. Since we know the PSF, h_PR(r), and the shading map of this pixel-reassigned image, we can again apply the deconvolution and shading correction operations described in Sec. 2.3 to get a PESIMS-PR reconstruction.
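The recentering-and-summing step of Eq. (19) can be sketched for integer shifts as follows. `pixel_reassign` is an illustrative helper; real pattern shifts are generally sub-pixel and would require interpolation rather than `np.roll`:

```python
import numpy as np

def pixel_reassign(shifted_cov, shifts):
    """Pixel-reassignment sketch, Eq. (19)-style: a covariance image computed
    with pattern shift s has its effective PSF displaced by roughly s/2, so
    each image is rolled back by s/2 (integer shifts only) and summed."""
    out = np.zeros_like(shifted_cov[0])
    for img, (sy, sx) in zip(shifted_cov, shifts):
        out += np.roll(img, (-(sy // 2), -(sx // 2)), axis=(0, 1))
    return out
```

Summing the realigned copies averages independent noise while keeping the signal coherent, which is the SNR benefit described above.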
Figure 8(b) compares the reconstruction results for fluorescent beads using multispot illumination with and without the pixel reassignment algorithm. Pixel reassignment gives sharper contrast when two beads are close to each other and helps clean up some background deconvolution errors. A cutline plot of the fluorescent beads in Fig. 8(b) shows that the FWHM of the reconstructed bead from SIMS (300.3 nm) is larger than from SIMS with pixel reassignment (254 nm), indicating better resolution.
References
 [1] W. Lukosz, “Optical systems with resolving powers exceeding the classical limit. II,” J. Opt. Soc. Am. 57, 932–941 (1967).
 [2] M. A. A. Neil, R. Juškaitis, and T. Wilson, “Method of obtaining optical sectioning by using structured light in a conventional microscope,” Opt. Lett. 22, 1905–1907 (1997).
 [3] R. Heintzmann and C. Cremer, “Laterally modulated excitation microscopy: improvement of resolution by using a diffraction grating,” Proc. SPIE 3568, 185–196 (1999).
 [4] M. G. Gustafsson, “Surpassing the lateral resolution limit by a factor of two using structured illumination microscopy,” J. Microsc. 198, 82–87 (2000).
 [5] E. Betzig, G. H. Patterson, R. Sougrat, O. W. Lindwasser, S. Olenych, J. S. Bonifacino, M. W. Davidson, J. Lippincott-Schwartz, and H. F. Hess, “Imaging intracellular fluorescent proteins at nanometer resolution,” Science 313, 1642–1645 (2006).
 [6] M. J. Rust, M. Bates, and X. Zhuang, “Sub-diffraction-limit imaging by stochastic optical reconstruction microscopy (STORM),” Nat. Methods 3, 793–795 (2006).
 [7] S. W. Hell and J. Wichmann, “Breaking the diffraction resolution limit by stimulated emission: stimulated-emission-depletion fluorescence microscopy,” Opt. Lett. 19, 780–782 (1994).
 [8] T. Wilson and C. J. R. Sheppard, Theory and Practice of Scanning Optical Microscopy (Academic Press, 1984).
 [9] C. J. R. Sheppard, “Scanning confocal microscopy,” in Encyclopedia of Optical Engineering, 2525–2544 (2003).
 [10] A. G. York, S. H. Parekh, D. D. Nogare, R. S. Fischer, K. Temprine, M. Mione, A. B. Chitnis, C. A. Combs, and H. Shroff, “Resolution doubling in live, multicellular organisms via multifocal structured illumination microscopy,” Nat. Methods 9, 749–754 (2012).
 [11] A. G. York, P. Chandris, D. D. Nogare, J. Head, P. Wawrzusin, R. S. Fischer, A. Chitnis, and H. Shroff, “Instant super-resolution imaging in live cells and embryos via analog image processing,” Nat. Methods 10, 1122–1126 (2013).
 [12] F. Ströhl and C. F. Kaminski, “A joint Richardson-Lucy deconvolution algorithm for the reconstruction of multifocal structured illumination microscopy data,” Methods Appl. Fluoresc. 3, 014002 (2015).
 [13] N. Chakrova, R. Heintzmann, B. Rieger, and S. Stallinga, “Studying different illumination patterns for resolution improvement in fluorescence microscopy,” Opt. Express 23, 31367–31383 (2015).
 [14] J. García, Z. Zalevsky, and D. Fixler, “Synthetic aperture superresolution by speckle pattern projection,” Opt. Express 13, 6075–6078 (2005).
 [15] D. Sylman, V. Micó, J. García, and Z. Zalevsky, “Random angular coding for superresolved imaging,” Appl. Opt. 49, 4874–4882 (2010).
 [16] E. Mudry, K. Belkebir, J. Girard, J. Savatier, E. L. Moal, C. Nicoletti, M. Allain, and A. Sentenac, “Structured illumination microscopy using unknown speckle patterns,” Nat. Photon. 6, 312–315 (2012).
 [17] J. Min, J. Jang, D. Keum, S.-W. Ryu, C. Choi, K.-H. Jeong, and J. C. Ye, “Fluorescent microscopy beyond diffraction limits using speckle illumination and joint support recovery,” Sci. Rep. 3, 2075 (2013).
 [18] S. Dong, P. Nanda, R. Shiradkar, K. Guo, and G. Zheng, “High-resolution fluorescence imaging via pattern-illuminated Fourier ptychography,” Opt. Express 22, 20856–20870 (2014).
 [19] H. Yilmaz, E. G. van Putten, J. Bertolotti, A. Lagendijk, W. L. Vos, and A. P. Mosk, “Speckle correlation resolution enhancement of wide-field fluorescence imaging,” Optica 2, 424–429 (2015).
 [20] M. Kim, C. Park, C. Rodriguez, Y. Park, and Y.-H. Cho, “Super-resolution imaging with optical fluctuation using speckle patterns illumination,” Sci. Rep. 5, 16525 (2015).
 [21] A. Negash, S. Labouesse, N. Sandeau, M. Allain, H. Giovannini, J. Idier, R. Heintzmann, P. C. Chaumet, K. Belkebir, and A. Sentenac, “Improving the axial and lateral resolution of threedimensional fluorescence microscopy using random speckle illuminations,” J. Opt. Soc. Am. A 33, 1089–1094 (2016).
 [22] S. Labouesse, M. Allain, J. Idier, S. Bourguignon, A. Negash, P. Liu, and A. Sentenac, “Joint reconstruction strategy for structured illumination microscopy with unknown illuminations,” arXiv:1607.01980 (2016).
 [23] S. A. Shroff, J. R. Fienup, and D. R. Williams, “Phase-shift estimation in sinusoidally illuminated images for lateral superresolution,” J. Opt. Soc. Am. A 26, 413–424 (2009).
 [24] S. A. Shroff, J. R. Fienup, and D. R. Williams, “Lateral superresolution using a posteriori phase shift estimation for a moving object: experimental results,” J. Opt. Soc. Am. A 27, 1770–1782 (2010).
 [25] K. Wicker, O. Mandula, G. Best, R. Fiolka, and R. Heintzmann, “Phase optimisation for structured illumination microscopy,” Opt. Express 21, 2032–2049 (2013).
 [26] K. Wicker, “Non-iterative determination of pattern phase in structured illumination microscopy using autocorrelations in Fourier space,” Opt. Express 21, 24692–24701 (2013).
 [27] R. Ayuk, H. Giovannini, A. Jost, E. Mudry, J. Girard, T. Mangeat, N. Sandeau, R. Heintzmann, K. Wicker, K. Belkebir, and A. Sentenac, “Structured illumination fluorescence microscopy with distorted excitations using a filtered blind-SIM algorithm,” Opt. Lett. 38, 4723–4726 (2013).
 [28] A. Jost, E. Tolstik, P. Feldmann, K. Wicker, A. Sentenac, and R. Heintzmann, “Optical sectioning and high resolution in single-slice structured illumination microscopy by thick slice blind-SIM reconstruction,” PLoS ONE 10, e0132174 (2015).
 [29] T. Dertinger, R. Colyer, G. Iyer, S. Weiss, and J. Enderlein, “Fast, background-free, 3D super-resolution optical fluctuation imaging (SOFI),” PNAS 106, 22287–22292 (2009).
 [30] I. J. Cox, C. J. R. Sheppard, and T. Wilson, “Improvement in resolution by nearly confocal microscopy,” Appl. Opt. 21, 778–781 (1982).
 [31] C. J. R. Sheppard, “Superresolution in confocal imaging,” Optik 80, 53–54 (1988).
 [32] C. B. Müller and J. Enderlein, “Image scanning microscopy,” Phys. Rev. Lett. 104, 198101 (2010).
 [33] C. J. R. Sheppard, S. B. Mehta, and R. Heintzmann, “Superresolution by image scanning microscopy using pixel reassignment,” Opt. Lett. 38, 2889–2892 (2013).
 [34] S. Roth, C. J. R. Sheppard, K. Wicker, and R. Heintzmann, “Optical photon reassignment microscopy (OPRA),” Opt. Nanoscopy 2, 5 (2013).
 [35] N. Parikh and S. Boyd, “Proximal algorithms,” Found. Trends Optim. 1, 123–231 (2013).
 [36] Y. Nesterov, “A method for solving the convex programming problem with convergence rate $O(1/k^2)$,” Dokl. Akad. Nauk SSSR 269, 543–547 (1983).
 [37] A. Beck and M. Teboulle, “A fast iterative shrinkage-thresholding algorithm for linear inverse problems,” SIAM J. Imaging Sci. 2, 183–202 (2009).
 [38] T. Tanaami, S. Otsuki, N. Tomosada, Y. Kosugi, M. Shimizu, and H. Ishida, “High-speed 1-frame/ms scanning confocal microscope with a microlens and Nipkow disks,” Appl. Opt. 41, 4704–4708 (2002).
 [39] J. G. Walker, “Non-scanning confocal fluorescence microscopy using speckle illumination,” Opt. Commun. 189, 221–226 (2001).
 [40] S.-H. Jiang and J. G. Walker, “Experimental confirmation of non-scanning fluorescence confocal microscopy using speckle illumination,” Opt. Commun. 238, 1–12 (2004).
 [41] R. Heintzmann and P. A. Benedetti, “High-resolution image reconstruction in fluorescence microscopy with patterned excitation,” Appl. Opt. 45, 5037–5045 (2006).