Imaging with SPADs and DMDs: Seeing through Diffraction-Photons

by Ibrahim Alsolami, et al.

This paper addresses the problem of imaging in the presence of diffraction-photons. Diffraction-photons arise from the low contrast-ratio of DMDs (∼ 1000:1), and severely degrade the quality of images captured by SPAD-based systems. Herein, a joint illumination/deconvolution scheme is designed to overcome diffraction-photons, enabling the acquisition of intensity and depth images. Additionally, a proof-of-concept experiment is conducted to demonstrate the viability of the designed scheme. It is shown that by co-designing the illumination and deconvolution phases of imaging, one can substantially overcome diffraction-photons.



I Introduction

Today, imaging with single-photon avalanche diodes (SPADs) is rapidly gaining attention. The high sensitivity and temporal resolution of SPADs are finding favor in a variety of applications, ranging from light-in-flight recording [1] to imaging around corners [2].

Traditionally, photon detection has relied on photomultiplier tubes (PMTs). PMTs have a fast response-time and provide a high amplification gain, yet they are fairly large vacuum tubes and thus not well-suited for cameras.

Nowadays, SPADs are increasingly becoming the solid-state option for photon detection. Low dark counts ( Hz) and high quantum efficiencies (up to  [3]) are among the attractive features of SPADs.

SPAD cameras are, however, still in their infancy. Currently, commercially available SPAD cameras have low pixel resolutions ( pixels). To increase the spatial resolution of single-pixel SPAD cameras, raster-scanning is typically used, whereby pixel values of a scene are sequentially acquired.

Raster-scanning is commonly achieved by means of galvanometric mirrors, whereby light is steered to positions of a scene mechanically via a pair of mirrors. Such mechanical systems, however, are cumbersome.

In contrast to single-point illumination in galvanometric systems, digital micro-mirror devices (DMDs) can project optical patterns. A DMD is a spatial light modulator (SLM), comprising a matrix of micro-mirrors integrated with a semiconductor chip. When light shines on a DMD, high-resolution ( pixels) [4] optical patterns are projected; in our experiment, we use these optical patterns to illuminate a scene.

A challenging drawback of DMD-based projection systems, compared with their galvanometric counterpart, is the presence of scattered light. Due to scattered light, scene locations that would ideally be masked by DMD mirrors in the off-state, are illuminated; this imperfection hinders the collection of quality images.

The degree to which a DMD system rejects light is quantified by the contrast-ratio. The full-on/full-off contrast-ratio is defined as the ratio of the luminance measured when all DMD mirrors are in the on-state (a fully illuminated screen) to the luminance measured when all mirrors are in the off-state. The higher the contrast-ratio, the lower the impact of scattered light on image quality. We shall refer to photons originating from a DMD’s scattered light as diffraction-photons (Fig. 1).

Further challenges: increasing the illumination power may not always be a feasible option for enhancing image quality, for instance when imaging sensitive biological samples [5]. Moreover, it is desirable to operate an imaging system in a gate-less manner, so as to reduce the number of measurements.

This work aims to overcome diffraction-photons and provide an alternative to raster-scanning. Using a SPAD, a DMD, and a co-designed illumination/deconvolution scheme, we experimentally demonstrate a system capable of acquiring intensity and depth images in an environment tainted by diffraction-photons.

Our contributions are the following:

  • Design: we design a scheme, tailored for SPAD/DMD-based imaging systems, for capturing intensity and depth images. (Sections IV and V)

  • Theory: we formulate image-formation models for our proposed scheme and describe techniques to recover images from them. (Section V)

  • Experiment: we experimentally demonstrate the viability of our imaging system. (Section VII)

The idea of this paper can be summarized as follows:

Project overlapping illumination blocks to overcome diffraction-photons, and let deconvolution algorithms untangle pixel values.

The remainder of this paper is organized into seven sections: In Section II, we introduce some preliminaries. In Section III, we review prior work. In Section IV, we describe a method for scene illumination, followed by our image-formation models in Section V. In Section VI, we describe details of our experimental setup. In Section VII, we present and discuss our experimental results. In Section VIII, we draw conclusions from our findings and outline future research directions.

Fig. 1: Simplified sketch of diffraction-photons emanating from a DMD.

II Preliminaries

In this section, we introduce some principles to serve as a starting point for subsequent sections. This section is in part based on [6].

Each pixel, , of the scene has a depth and reflectivity , and is illuminated times by a laser pulse with a waveform . The rate of photons impinging on a SPAD is given by:


where is the speed of light and is the average photon rate due to ambient light.

The quantum efficiency of the SPAD lowers the photon rate by a factor of , and dark counts, , of the SPAD are added:


A time-correlated single photon counting (TCSPC) system quantizes the observation interval, , into time-bins of duration . Accordingly, the average number of photon counts in a time-bin (per illumination) is:


where  is the index of a time-bin, .

After  illuminations, the number of photons, , detected in a time-bin draws values from a Poisson distribution:


A measure of signal fidelity is the signal-to-background ratio (SBR) and is defined as: SBR, where and are the mean number of signal and noise photons, respectively. Moreover, the time a SPAD needs to reset to its initial state after an avalanche event occurs is called deadtime. During this period of time, arriving photons will go undetected.
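As a concrete illustration of this measurement model, the following Python sketch draws Poisson-distributed photon counts per time-bin and computes the SBR. All parameter values here are toy assumptions, not the paper's actual settings:

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed toy parameters (not the paper's actual values).
n_bins = 64          # time-bins per observation interval
n_pulses = 1000      # number of laser illuminations
tof_bin = 20         # time-bin holding the true time-of-flight
signal_rate = 0.02   # mean signal photons at the ToF bin, per pulse
noise_rate = 0.001   # mean ambient + dark photons per bin, per pulse

# Mean photon counts per time-bin after n_pulses illuminations:
# background everywhere, plus the pulse energy at the ToF bin.
mean_counts = np.full(n_bins, noise_rate * n_pulses)
mean_counts[tof_bin] += signal_rate * n_pulses

# Detected counts are Poisson distributed around these means.
counts = rng.poisson(mean_counts)

# Signal-to-background ratio: mean signal photons over mean noise photons.
sbr = (signal_rate * n_pulses) / (noise_rate * n_pulses * n_bins)
print(f"SBR = {sbr:.3f}, ToF estimate = bin {int(np.argmax(counts))}")
```

With a reasonable SBR, the time-bin with the largest count coincides with the true ToF, which is the intuition behind the depth estimation used later in the paper.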

III Prior Work

Based on the method of acquisition employed, the literature can be categorized into the following classes:

Raster-Scanning:   In raster-scanning, pixel values are sequentially acquired. Typically, a pair of galvanometric mirrors or a motorized translation stage is used to obtain a scene’s spatial content [7, 8, 9, 10].

Our imaging system and systems in this category require the same number of measurements to reconstruct an intensity and depth image. The main issues encountered when using mechanical scanning systems such as galvanometric mirrors or motorized translation stages, however, are that they are substantially large, require high power levels, and, in the case of galvanometric mirrors, suffer from geometric distortion [11]; in our experiment, we employ a DMD to avoid such practical issues.

Time-Gated Compressive Sensing:  In [12, 13, 14], 3D imaging systems based on compressive sensing were demonstrated. Using range gating, time-slices of a scene are obtained, from which depth images are reconstructed.

While such systems can be used to reduce the impact of diffraction-photons, a full time sweep is required to construct a 3D image, as gating is needed to distinguish objects that fall within the same range interval.

What sets our imaging system apart from others in this category is that it is gate-less, requiring fewer measurements to reconstruct a 3D image: the number of measurements in our imaging system is equal to the number of pixels, , as opposed to  for gated compressed sensing, where  is a selected constant for a given image sparsity/compressibility (typically in the range between 2 and 4) and  is the number of time-bins ().

Gate-Less Compressive Sensing: Solutions to circumvent the need for time-gating have been proposed. In [15], an imaging system able to construct a depth image in a gate-less manner is demonstrated. Prior to recovering a scene’s spatial information, parametric deconvolution is first used to estimate the set of depths present in the scene. Once the set of depths is determined, a compressive sensing approach is used to recover the scene’s spatial content.

A major obstacle to adopting the aforementioned system is the presence of deadtime-interference. To give an example: an object in the background of a scene may not be visible, as photons reflected from it may go undetected due to the deadtime initiated by preceding photons reflected from foreground objects. A way to resolve this obstacle is to use a photodiode operating in linear mode (analogue) [16], as opposed to Geiger mode; this, however, entails a loss in photon-detection sensitivity.

In our experiment we employ a SPAD (Geiger mode), enabling single-photon detection sensitivity. We also use block illumination (Section IV), which ameliorates deadtime-interference, allowing our imaging system to operate in a gate-less manner.

Epipolar Imaging: A relatively new method of image acquisition is epipolar imaging [17, 18, 19]. In epipolar imaging, an illumination sheet sweeps through a scene, and an array of pixels on an epipolar plane of illumination detects incoming light. This method of imaging can potentially overcome diffraction-photons as it limits indirect light. To realize such a system, however, some hardware is needed to select an epipolar plane, such as controllable mirrors. Our imaging system is free of said hardware, rendering it simpler to operate.

IV Illumination

Spatial resolution can be traded off in return for higher signal power. At low illumination levels, raster-scanning (Fig. 1(a)) can yield inaccurate estimates of intensity and depth images due to the presence of diffraction-photons and a low SBR. To improve signal power, one can scan large patches of the scene; this, however, comes at the expense of a loss in spatial resolution (Fig. 1(b)).

A simple yet effective approach to boost signal power while maintaining a given spatial resolution, is to use an illumination window (for example, of size ) that scans the scene in steps of one pixel (Fig. 1(c) provides an illustration). This approach improves signal power as more photons are collected, and simultaneously retains the native resolution. Additionally, deadtime-interference is alleviated as the optical power is concentrated within an illumination block, where depth values of a scene are notably correlated.

A by-product, however, of using overlapping scanning blocks (Fig. 1(c)) is a blur artifact, as photons from adjacent pixels are collected. In Section V, we overcome this challenge via a deconvolution technique designed for the problem at hand.

A key advantage of the proposed scheme is that it provides (via optimization) a means to relax the trade-off between spatial resolution and signal power.
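The aggregate effect of an overlapping illumination window scanning in one-pixel steps can be sketched as convolving the scene with a box-shaped point-spread function, since each measurement sums the photons of all pixels inside the block. The sizes below are toy assumptions:

```python
import numpy as np

# Assumed toy sizes; the window size in the experiment differs.
w = 3                       # w x w illumination window
scene = np.zeros((8, 8))
scene[3:5, 3:5] = 1.0       # a small bright patch

kernel = np.ones((w, w))    # the block point-spread function
pad = w // 2
padded = np.pad(scene, pad)
blurred = np.empty_like(scene)
for i in range(scene.shape[0]):
    for j in range(scene.shape[1]):
        # Each measurement collects photons from the whole block.
        blurred[i, j] = (padded[i:i + w, j:j + w] * kernel).sum()

# Signal power per measurement grows with the block area (w**2 pixels),
# while the native resolution is kept because the step size is one pixel.
print(blurred.max(), scene.max())
```

The blur introduced by this summation is precisely what the deconvolution stage of Section V is designed to undo.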

Fig. 2: (a) Raster-scanning in a pixel-by-pixel manner. (b) Raster-scanning large patches of the scene. (c) Overlap scanning. In this illustration, an illumination window of size scans the scene in steps of one pixel.

V Image Formation

Using the illumination scheme (Fig. 1(c)) presented in Section IV, we now describe approaches to recover intensity and depth images from photon-arrival events. We first formulate the image reconstruction process as a discrete inverse problem and then solve it by a convex optimization algorithm.

V-A Intensity Image

The goal here is to reconstruct an intensity image, , with the aid of the proposed illumination scheme (see Fig. 1(c)). The input/output relationship, in the absence of noise, can be described as follows:


here denotes spatial convolution, is the point-spread function that describes the proposed illumination scheme (Fig. 1(c)), is the latent intensity image, and is the observed number of photon-counts at pixel .


Let  denote the vectorized representation of the image, column-wise stacked. Likewise, let  denote the observation vector, where  is the total number of photon-counts at the  pixel.

The two-dimensional convolution in Eq. 5 can be expressed as matrix-vector multiplication:


here, is the DMD reflection coefficient, which accounts for the contrast-ratio. When a pixel of the scene is within an illumination block (see Fig. 1(c)), takes on a value of one—full reflection. Other pixels of the scene are, however, moderately illuminated due to the low contrast-ratio of the DMD: when a DMD mirror is in the “off” state, a fraction of the optical power illuminates pixels of the scene; we denote this fraction by . More formally:

here, denotes the number of rows of the image, and is the length of an illumination window of size (Fig. 1(c) provides an illustration). Additionally, let “” denote the smallest positive integer “” such that .

The following four steps are performed to recover  (Eq. 6):

  1. Variance-stabilizing transformation:

    in Eq. 6 is a Poisson-distributed random vector; to use an -norm apt for Gaussian noise, we apply an Anscombe transform to convert the signal-dependent Poisson noise of  to (approximately) additive Gaussian noise with a constant variance [20]:


    The Anscombe transformation, however, breaks the commonly used linear model of Eq. 6, as  is a non-linear operator. We circumvent this barrier by first denoising the blurred image (Eq. 9), applying an inverse transform (Eq. 10), and then recovering the latent image (Eq. 11).

  2. Denoising:

    The aim here is to denoise a blurred image prior to deconvolving it:


    here , and is a denoised yet blurred image. A constraint here is added as the minimum value of Eq. 8 is , which occurs when .

  3. Inverse transformation:

    Algebraic inversion of Eq. 8 produces a bias; we therefore use a maximum likelihood (ML) transformation [21]:


    here, can be thought of as the denoised version of (Eq. 6). In this step a lookup table is used for mapping values of to .

  4. Deconvolution:

    We now wish to solve the following system of equations . Matrix is ill-conditioned. As such, we consider a regularized least-squares approach to recover our latent image:


We solve Eqs. 9 and 11 by the alternating direction method of multipliers (ADMM) algorithm. Details of the ADMM algorithm are provided in the Appendix.

Limitation: A limitation of the described procedure (steps 1–4) is that a low or high value of  in Eq. 9 may diminish the quality of the recovered image in step 4 (Eq. 11), and thus one must carefully select . In our experimental results (Section VII), the value of  is chosen by inspection: the value that produces the best result is selected.
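The four-step recovery pipeline can be sketched in Python as follows. This is a minimal 1D stand-in under several stated assumptions: a 3-tap circulant blur replaces the 2D block PSF, the denoising of step 2 is omitted for brevity, a simple algebraic Anscombe inverse replaces the ML lookup table of [21], and plain Tikhonov regularization stands in for the paper's regularizer:

```python
import numpy as np

def anscombe(x):
    # Step 1: variance-stabilizing transform for Poisson data.
    return 2.0 * np.sqrt(x + 3.0 / 8.0)

def inverse_anscombe(y):
    # Step 3: simple algebraic inverse; the paper instead uses the
    # ML (exact unbiased) inverse of [21] via a lookup table.
    return (y / 2.0) ** 2 - 3.0 / 8.0

rng = np.random.default_rng(1)
x_true = np.array([0., 0., 5., 9., 5., 0., 0., 0.])
n = x_true.size

# Circulant 3-tap blur matrix A so that the clean data is A @ x (Eq. 6 analogue).
A = np.zeros((n, n))
for i in range(n):
    for k in (-1, 0, 1):
        A[i, (i + k) % n] += 1.0
y = rng.poisson(A @ x_true).astype(float)   # Poisson photon counts

# Steps 1 and 3: stabilize, (denoising would go here), invert.
y_hat = inverse_anscombe(anscombe(y))

# Step 4: regularized least squares (Tikhonov stand-in), closed form.
lam = 0.1
x_rec = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ y_hat)
print(np.round(x_rec, 1))
```

The paper solves steps 2 and 4 with ADMM and a more structured regularizer; the closed-form solve above is only meant to show how the stabilized, inverted data feeds the deconvolution.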

V-B Depth Image

The aim here is to obtain a depth image, , from photon-arrival events. In the absence of noise, the input/output relationship can be expressed as follows:


here, two convolution processes take place:

  1. is a 2D spatial convolution over , describing the convolution of the scene with the proposed overlapping illumination blocks (Fig. 1(c) provides an example).

  2. is a 1D temporal convolution over , describing the convolution of the scene with the waveform of the illumination pulse, .

Each pixel, , of the scene has a depth, , value and a corresponding time-of-flight (ToF), , which can be represented by the following impulse response:


where  is a Dirac function (Fig. 2(a)) and  is the speed of light. Likewise, in the continuous spatial and temporal domains, the function in Eq. 12 is zero except at the ToF. Additionally, let  denote the number of photons at a given space-time point .

Fig. 3: (a) Convolution of illumination waveform with a depth point of a scene. (b) An illustrative example of the formation of and . Matrix is the result of convolving the depth at each pixel with the illumination waveform. Each row of is for a given pixel, and vector is the column of .

Discretization: The continuous volume containing the scene is discretized into a voxel grid: The DMD discretizes the scene spatially into pixels, and the TCSPC module discretizes the scene temporally into time-bins. In this voxel grid, a discrete depth image is represented by a vector .

To keep the notation light, the discrete version of in this voxel domain is denoted by —akin to Eq. 13 but now for a complete image—and is given by:


here, is the time-bin, is the speed of light, is the depth at pixel , is the number of time-bins, is the total number of pixels, and is a Kronecker delta function defined as:

Each column of matrix in Eq. 14 represents a time-slice of the voxel grid at a particular time-bin, , and each row is allotted to a single pixel of the scene. Additionally, matrix is a binary matrix, and the summation of entries along each row of has a value of one—as each pixel can only have a single depth.
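A minimal sketch of this binary depth matrix, with toy sizes and an assumed time-bin width:

```python
import numpy as np

# Assumed toy sizes: n pixels, m time-bins; ToF bins chosen arbitrarily.
n_pixels, n_bins = 4, 10
c = 3e8                                   # speed of light (m/s)
bin_duration = 1e-9                       # assumed time-bin width (s)
tof_bins = np.array([2, 2, 5, 7])         # ToF time-bin of each pixel

# Eq. 14's binary matrix: row p has a single one at pixel p's ToF bin,
# since each pixel can only have a single depth.
D = np.zeros((n_pixels, n_bins))
D[np.arange(n_pixels), tof_bins] = 1.0

assert np.all(D.sum(axis=1) == 1)          # one depth per pixel
depths = c * tof_bins * bin_duration / 2   # z = c * t / 2 (round trip)
print(depths)
```

Each column of `D` is a time-slice of the voxel grid, matching the per-time-bin deconvolution performed later.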

The convolution operators in Eq. 12 can be represented as a matrix multiplication:




Here, each row of matrix  is a time-histogram of the illumination waveform, and is a circular shift of its preceding row. When a row of  is multiplied by matrix , the illumination waveform is shifted to the ToF (a detailed example is provided in Fig. 2(a)).
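This circular-shift structure can be sketched as follows; the pulse histogram and sizes below are assumptions:

```python
import numpy as np

# Each row of the pulse matrix W is a circular shift of its predecessor,
# so multiplying the binary depth matrix by W places the waveform at
# each pixel's ToF (Eq. 15 analogue). Pulse shape and sizes are toy values.
n_bins = 10
pulse = np.array([4., 2., 1.] + [0.] * (n_bins - 3))   # assumed histogram
W = np.stack([np.roll(pulse, k) for k in range(n_bins)])

tof_bins = np.array([2, 5])                # two toy pixels
D = np.zeros((2, n_bins))
D[np.arange(2), tof_bins] = 1.0            # one depth per pixel

Y = D @ W                                   # noiseless observed histograms
print(np.argmax(Y, axis=1))                 # peaks land at the ToF bins
```

Selecting a row of `W` with a one-hot row of `D` is exactly a delay of the pulse, which is why cross-correlation later recovers the ToF.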

In Eq. 15, matrix is the observed time-histogram at the detector (Fig. 4 provides an illustration). Each row of is a time-histogram, resulting from convolving the scene with both an illumination window and pulse waveform:


Optimization: The inverse problem of recovering (Eq. 15) can be formulated by the following optimization problem:


where  is a regularizer weighted by . Eq. 18 is a combinatorial optimization problem, and is challenging to solve within a reasonable time (the search space is ).


A tractable method to infer depth from Eq. 18 can be attained by the following three steps:

  1. Spatial deconvolution:

    In Eq. 18, let , and let the  column of  and  be denoted as  and , respectively. Figs. 3 and 4 provide illustrations. Both  and  are -dimensional vectors:  and , where .

    The optimization problem for each time-slice (Fig. 2(b)) of the voxel grid is:


    solving this optimization problem for each  yields matrix . We solve this convex optimization problem with an ADMM algorithm. Details of the ADMM algorithm are provided in the Appendix.

    Fig. 4: An illustrative example of the formation of and . Matrix is the observation matrix, and is the result of convolving the scene with: 1) the proposed overlapping illumination blocks 2) the illumination waveform. Each row of is the observed time-histogram at a given measurement. Vector is the column of , and represents a time-slice of the observed histograms at time-bin .
  2. Intermediate filtering:

    Prior to temporally deconvolving our image, we apply an -order median filter. Experimentation showed this filter to significantly improve our results.

    Let denote the row of matrix . The -order median filter of is:


    from which we obtain .

  3. Temporal deconvolution:

    Using from the previous step, we can now determine directly in a per-pixel manner. Vector contains the received signal for pixel , which can be thought of as a delayed and scaled version of the transmitted pulse. The ToF (or signal delay) for each pixel is independently obtained by cross-correlating with the waveform of the transmitted pulse, , and finding the maximum as follows:


    for simplicity of notation,  denotes the  element of vector . Using Eq. 21, the depth at pixel  is , from which we obtain .
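The temporal deconvolution of step 3 can be sketched per pixel as follows, with an assumed Gaussian pulse shape and time-bin width:

```python
import numpy as np

# The received histogram of a pixel is a delayed copy of the pulse;
# circular cross-correlation recovers the delay. Pulse shape, bin
# width, and sizes here are toy assumptions.
n_bins = 32
pulse = np.exp(-0.5 * ((np.arange(n_bins) - 3) / 1.5) ** 2)

true_delay = 11
received = np.roll(pulse, true_delay)      # noiseless received signal

# Cross-correlate with the transmitted pulse; the maximizing lag is
# the ToF expressed in time-bins.
scores = [np.dot(received, np.roll(pulse, k)) for k in range(n_bins)]
tof_bin = int(np.argmax(scores))

c, bin_duration = 3e8, 64e-12               # assumed 64 ps time-bins
depth = c * tof_bin * bin_duration / 2      # z = c * t / 2
print(tof_bin, depth)
```

Because the correlation peak is invariant to the pulse's amplitude scaling, the same estimator works for pixels of differing reflectivity.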

VI Hardware and Experimental Setup

Fig. 5 shows a schematic of the experimental setup. The illumination source is a laser diode (LDH-P-C-650, PicoQuant) with a central wavelength of nm. The laser is controlled by a laser-driver (PDL 828 Sepia II, PicoQuant), and emits pulses with a duration of ps (FWHM) and repetition rate of MHz.

An optical beam expander enlarges the laser beam by a factor of . The DMD (in a DLP 4500 projector, Texas Instruments) spatially modulates the incoming light and projects it towards the scene. The contrast-ratio of the DMD is .

Photons reflected by surfaces of the scene are detected by a SPAD (PDM m Series, Micro Photon Devices) with a  ps (FWHM) temporal resolution. The SPAD has a  quantum efficiency,  ns deadtime, an active-area diameter of  m, and a dark-count rate of  Hz. Photons detected by the SPAD are time-stamped, relative to a sent laser pulse, with a resolution of  ps by a TCSPC module (PicoHarp 300, PicoQuant).

The scene is spatially sampled by the DMD into pixels, forming an image of size rows by columns. The observation interval is divided by the TCSPC module into time-bins, each with a duration of ps. A total of laser pulses are transmitted for each measurement. The integration time for each measurement is ms.

The scene consists of a ball ( cm radius) placed in front of a screen. The ball is  cm away from the SPAD, and the horizontal distance between the SPAD and the background screen is  cm. Table I provides a summary of the experimental parameters.

Fig. 5: Simplified schematic of experimental setup.
Parameter Value

Average dark-count rate of SPAD  Hz
Center wavelength of laser nm
Contrast-ratio of DMD
Deadtime of SPAD  ns
Diameter of active area of SPAD m
Integration time per measurement ms
Laser pulse duration ps
Number of transmitted laser pulses per measurement
Number of time-bins
Quantum efficiency of SPAD
Repetition rate of laser pulses MHz
Timing resolution of SPAD ps
Timing resolution of TCSPC module ps

Non-extensible deadtime [22]. Full width at half maximum.
TABLE I: Experimental parameters

VII Results and Discussion

In this section we present and discuss results of a proof-of-concept experiment. The goal is to demonstrate the ability of the designed illumination/deconvolution scheme (Section IV and V) to overcome diffraction-photons.

Fig. 6 displays histograms of our experimental data, and the statistics of our measurements are as follows: the average number of noise (ambient + dark) photons was measured and found to be  photons/pixel. The average number of diffraction-photons with noise was measured and found to be  photons/pixel. For consistency, we have taken  samples for all measurements presented in this section; this value is equal to the number of pixels of the sought image.

The average number of photons (signal + noise + diffraction) per pixel for raster-scanning, and illumination blocks of size , and , were measured and found to be: , , and photons/pixel, respectively.

Fig. 6: Histograms of experimental data. Supplementary details: for raster-scanning and the proposed scheme, the measured photons/pixel comprise signal, noise and diffraction photons.

Figs. 6(b) and 7(a) display data gathered from the experiment for the observation vector (Eq. 6) and matrix (Eq. 17), respectively. Figs. 6(d) and 7(b) present the performance of the proposed scheme.

A rich body of research exists on 3D structure and reflectivity imaging with photon-counting detectors. However, this is the first paper to address the issue of diffraction-photons in such systems. We therefore believe that, in addition to presenting the performance of our scheme, it would be beneficial to compare it with commonly used benchmarks ([6] and [7]).

In Figs. 7 and 8, it can be seen that the proposed scheme reveals a latent image amidst a bath of diffraction-photons ( photons/pixel, Fig. 6). Methods based on traditional raster-scanning [7, 6], however, are adversely affected by diffraction-photons, since the average number of signal photons per pixel for raster-scanning is well below the level of diffraction-photons; this results in a loss of image quality.

The proposed scheme has improved performance as it takes a co-designed illumination and deconvolution approach to the image-retrieval problem, which boosts the average number of signal photons collected per pixel (Fig. 6) and maintains the native image resolution as the relatively large illumination blocks overlap (Fig. 2). For our experiment, an illumination window size of  produced the best results: this size strikes a workable balance between the signal photons collected and the blur introduced.

It is important to point out, however, that the performance improvement of the proposed scheme over [6] and [7] should be viewed with some caution, because [6] and [7] are designed for imaging environments without diffraction-photons; it is therefore expected that they produce lower image quality than our scheme in this setting.

(a) Scene
(b) Experimental data. A display of observation vectors , Eq. 6.

Supplementary details: for a consistent comparison, images shown here have the same photon-count to grayscale map (i.e., a black pixel represents the minimum photon-count among all the four images. Similarly, a white pixel represents the maximum photon-count among all the four images. Photon-counts between the maximum and minimum are mapped to a gray color). For the proposed scheme, the illumination window size () is varied from to . All images are gamma-corrected, .
(c) Experimental data. Images of Fig. 6(b) with an independent grayscale map.

Supplementary details: to enhance, to some extent, the visibility of the images shown in Fig. 6(b), each image here has an independent grayscale map (i.e., the minimum and maximum photon-counts of each individual image are mapped to a black and white pixel, respectively. Photon-counts between the maximum and minimum are mapped to a gray color). All images are gamma-corrected, .
(d) The end-results. Denoised/deconvolved images.

Supplementary details: the input data for both Ref. [6] and [7] is obtained from the image Raster-Scanning displayed in Fig. 6(c). For Ref. [7], the input data is the number of transmitted pulses until the first photon arrives. For Ref. [6], the input noisy image is the intensity image labeled Raster-Scanning in Fig. 6(c). For the proposed scheme, the input noisy/blurred images for , , and , are their corresponding images shown in Fig. 6(c). All images here have an independent grayscale map and are gamma-corrected, .
Fig. 7: Intensity images.
First photon of raster-scanning.
Ref. [7]
All photons of raster-scanning.
Ref. [6]
Fig. 8: (a) Raw data: a visualization of experimental data (first photon of raster-scanning and matrix , Eq. 17). (b) Depth images. Supplementary details: each column of  is reshaped to form a  image, so as to visualize  in a 3D space (). In the experiment, we vary the illumination window size from  to  for the proposed scheme. The opacity of points in the first figure of Fig. 7(a) is considerably higher than that of the figures below it, so as to enhance visibility.

VIII Conclusion

This study set out to overcome diffraction-photons. We have described a method for acquiring intensity and depth images in an environment tainted by diffraction-photons ( photons/pixel).

A proof-of-concept experiment demonstrates the viability of the designed scheme. It is shown that at a low DMD contrast-ratio (), diffraction-photons can be overcome via a joint illumination and deconvolution scheme, enabling acquisition of intensity and depth images.

The scheme works as the number of signal photons collected is boosted by overlapping sizeable illumination blocks. The overlapping blocks mix pixel values, which are subsequently untangled by deconvolution algorithms.

The central conclusion that can be drawn from this study is that the designed scheme offers a means to relax the trade-off between spatial resolution and signal power—this is mainly achieved by convex optimization techniques.

A promising avenue for future research is to mathematically determine the optimal illumination-block size for a given DMD contrast-ratio and noise level. So far, the optimal block size has been determined by experimental testing.

To conclude, we believe that the findings of this paper will be of special interest to researchers in the field of ToF imaging, as the paper addresses a new practical challenge.

Appendix A Alternating Direction Method of Multipliers

In this section we shall denote the shrinkage operator (element-wise soft thresholding) by:

Additionally, we denote the projections of  onto sets  and  by  and , respectively.
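A minimal sketch of the shrinkage operator (element-wise soft thresholding), the proximal step used for l1-type terms in ADMM:

```python
import numpy as np

def shrink(x, tau):
    # Element-wise soft thresholding: shrinks each magnitude by tau
    # and zeroes entries with |x| <= tau.
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

print(shrink(np.array([-2.0, -0.3, 0.0, 0.5, 3.0]), 0.5))
```

Inside each ADMM iteration, this operator is applied to the auxiliary variable before the dual update.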

Initialize , , , , set and
choose , ,
1 repeat
         //dual updates
until the stopping criterion is satisfied.
Algorithm 1 ADMM algorithm for minimizing Eq. 9
Initialize , , , , set and
choose , ,
1 repeat
         //dual updates
until the stopping criterion is satisfied.
Algorithm 2 ADMM algorithm for minimizing Eq. 11
Initialize , , , , set and
choose , ,
1 repeat
         //dual updates
until the stopping criterion is satisfied.
Algorithm 3 ADMM algorithm for minimizing Eq. 19

Appendix B Derivative Matrices

In this section we describe the derivative matrices used throughout the paper. Let be defined as follows:

where  and  are the first- and second-order derivative matrices, respectively, and the constant  controls the strength of the second derivative. Here, the first and second derivative matrices are defined as:


where  are convolution matrices for the first and second derivatives, respectively, and their subscripts specify the direction in which a derivative operation is performed.

The rationale for including a second derivative in our regularizer is that it encourages the recovery of image curvature, rendering deblurred images more natural-looking.
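A minimal sketch of such derivative matrices, here built as circulant finite-difference operators in one direction; the circular boundary handling and the weight value are assumptions:

```python
import numpy as np

def first_diff(n):
    # Circulant first-order forward-difference matrix (one direction).
    D = -np.eye(n)
    D += np.eye(n, k=1)
    D[-1, 0] = 1.0            # circular boundary (an assumption)
    return D

n = 6
D1 = first_diff(n)
D2 = D1 @ D1                  # second derivative as a repeated first
beta = 0.5                    # assumed weight of the second derivative

# Stacked regularization operator combining both derivative orders,
# as in Appendix B; beta controls the curvature term's strength.
L = np.vstack([D1, beta * D2])
x = np.arange(n, dtype=float)
print(D1 @ x)                 # constant slope except at the circular seam
```

Applying `L` inside a least-squares regularizer penalizes both slope and curvature, which is what encourages the more natural-looking deblurred images described above.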


Acknowledgments

The authors wish to express their gratitude to Congli Wang, Muxingzi Li and Qilin Sun for their technical assistance and fruitful discussions. This research was made possible by KAUST baseline funding.


  • [1] G. Gariepy, N. Krstajić, R. Henderson, C. Li, R. R. Thomson, G. S. Buller, B. Heshmat, R. Raskar, J. Leach, and D. Faccio, “Single-Photon Sensitive Light-in-Flight Imaging,” Nature communications, vol. 6, p. 6021, 2015.
  • [2] M. O’Toole, D. B. Lindell, and G. Wetzstein, “Confocal Non-Line-of-Sight Imaging Based on the Light-Cone Transform,” Nature, 2018.
  • [3] PicoQuant, “PDM Series - Single Photon Avalanche Diodes,” Datasheet, September 2015.
  • [4] Texas Instruments, “DLP4500 (0.45 WXGA DMD),” DLPS028C datasheet, April 2013 [Revised Feb. 2018].
  • [5] P. Hinterdorfer and A. Van Oijen, Handbook of Single-Molecule Biophysics.   Springer Science & Business Media, 2009, pp. 73.
  • [6] D. Shin, A. Kirmani, V. K. Goyal, and J. H. Shapiro, “Photon-Efficient Computational 3-D and Reflectivity Imaging with Single-Photon Detectors,” IEEE Transactions on Computational Imaging, vol. 1, no. 2, pp. 112–125, 2015.
  • [7] A. Kirmani, D. Venkatraman, D. Shin, A. Colaço, F. N. Wong, J. H. Shapiro, and V. K. Goyal, “First-Photon Imaging,” Science, vol. 343, no. 6166, pp. 58–61, 2014.
  • [8] M. O’Toole, F. Heide, D. B. Lindell, K. Zang, S. Diamond, and G. Wetzstein, “Reconstructing Transient Images from Single-Photon Sensors,” in Proc. IEEE CVPR, 2017, pp. 2289–2297.
  • [9] W. He, Z. Feng, J. Lin, S. Shen, Q. Chen, G. Gu, B. Zhou, and P. Zhang, “Adaptive Depth Imaging with Single-Photon Detectors,” IEEE Photonics Journal, vol. 9, no. 2, pp. 1–12, 2017.
  • [10] D. Shin, F. Xu, D. Venkatraman, R. Lussana, F. Villa, F. Zappa, V. K. Goyal, F. N. Wong, and J. H. Shapiro, “Photon-Efficient Imaging with a Single-Photon Camera,” Nature communications, vol. 7, p. 12046, 2016.
  • [11] Thorlabs, Inc, “GVS011 and GVS012 Large Mirror Diameter Scanning Galvo Systems,” User Guide, HA0220T [Revised 7 July 2011].
  • [12] G. A. Howland, P. Zerom, R. W. Boyd, and J. C. Howell, “Compressive Sensing LIDAR for 3D Imaging,” in Lasers and Electro-Optics (CLEO), 2011 Conference on.   IEEE, 2011, pp. 1–2.
  • [13] L. Li, L. Wu, X. Wang, and E. Dang, “Gated Viewing Laser Imaging with Compressive Sensing,” Applied optics, vol. 51, no. 14, pp. 2706–2712, 2012.
  • [14] L. Li, W. Xiao, and W. Jian, “Three-Dimensional Imaging Reconstruction Algorithm of Gated-Viewing Laser Imaging with Compressive Sensing,” Applied optics, vol. 53, no. 33, pp. 7992–7997, 2014.
  • [15] A. Colaço, A. Kirmani, G. A. Howland, J. C. Howell, and V. K. Goyal, “Compressive Depth map Acquisition using a Single Photon-Counting Detector: Parametric Signal Processing meets Sparsity,” in Computer Vision and Pattern Recognition (CVPR), 2012 IEEE Conference on.   IEEE, 2012, pp. 96–102.
  • [16] M.-J. Sun, M. P. Edgar, G. M. Gibson, B. Sun, N. Radwell, R. Lamb, and M. J. Padgett, “Single-Pixel Three-Dimensional Imaging with Time-Based Depth Resolution,” Nature communications, vol. 7, p. 12010, 2016.
  • [17] S. Achar, J. R. Bartels, W. L. Whittaker, K. N. Kutulakos, and S. G. Narasimhan, “Epipolar Time-of-Flight Imaging,” ACM Transactions on Graphics (ToG), vol. 36, no. 4, p. 37, 2017.
  • [18] M. O’Toole, S. Achar, S. G. Narasimhan, and K. N. Kutulakos, “Homogeneous Codes for Energy-Efficient Illumination and Imaging,” ACM Transactions on Graphics (ToG), vol. 34, no. 4, p. 35, 2015.
  • [19] M. O’Toole, J. Mather, and K. N. Kutulakos, “3D Shape and Indirect Appearance by Structured Light Transport,” in Computer Vision and Pattern Recognition (CVPR), 2014 IEEE Conference on.   IEEE, 2014, pp. 3246–3253.
  • [20] F. J. Anscombe, “The Transformation of Poisson, Binomial and Negative-Binomial Data,” Biometrika, vol. 35, no. 3/4, pp. 246–254, 1948.
  • [21] M. Mäkitalo and A. Foi, “Optimal Inversion of the Anscombe Transformation in Low-Count Poisson Image Denoising,” IEEE transactions on Image Processing, vol. 20, no. 1, pp. 99–109, 2011.
  • [22] M. L. Larsen and A. B. Kostinski, “Simple Dead-Time Corrections for Discrete Time Series of Non-Poisson Data,” Measurement Science and Technology, vol. 20, no. 9, p. 095101, 2009.
  • [23] Z. T. Harmany, R. F. Marcia, and R. M. Willett, “This is SPIRAL-TAP: Sparse Poisson Intensity Reconstruction Algorithms-Theory and Practice,” IEEE Transactions on Image Processing, vol. 21, no. 3, pp. 1084–1096, 2012 (Note: the MATLAB codes of Ref. [6] and [7] are based on the code of this journal. We therefore include this journal in our reference list).