Lensless Imaging with Compressive Ultrafast Sensing

by Guy Satat, et al.

Lensless imaging is an important and challenging problem. One notable solution to lensless imaging is a single pixel camera which benefits from ideas central to compressive sampling. However, traditional single pixel cameras require many illumination patterns, resulting in a long acquisition process. Here we present a method for lensless imaging based on compressive ultrafast sensing. Each sensor acquisition is encoded with a different illumination pattern and produces a time series where time is a function of the photon's origin in the scene. Currently available hardware with picosecond time resolution enables time tagging photons as they arrive at an omnidirectional sensor. This allows lensless imaging with significantly fewer patterns compared to regular single pixel imaging. To that end, we develop a framework for designing lensless imaging systems that use ultrafast detectors. We provide an algorithm for ideal sensor placement and an algorithm for optimized active illumination patterns. We show that efficient lensless imaging is possible with ultrafast measurement and compressive sensing. This paves the way for novel imaging architectures and remote sensing in extreme situations where imaging with a lens is not possible.




I Introduction

Traditional imaging is based on lenses that map the scene plane to the sensor plane. In this physics-based approach the imaging quality depends on parameters such as lens quality, numerical aperture, density of the sensor array, and pixel size. Recently this approach has been challenged by modern signal processing techniques. Fundamentally the goal is to transfer most of the burden of imaging from high quality hardware to computation. This is known as computational imaging, in which the measurement encodes the target features; these are later computationally decoded to produce the desired image. Furthermore, the end goal is to completely eliminate the need for high quality lenses, which are heavy, bulky, and expensive.

One of the key workhorses in computational imaging is compressive sensing (CS) [1, 2]. For example, CS enabled the single pixel camera [3], which demonstrated imaging with a single pixel that captured scene information encoded with a spatial light modulator (SLM). The pixel measurement is a set of consecutive readings, with different SLM patterns. The scene is then recovered using compressive deconvolution.

Broadly, traditional imaging and the single pixel camera represent two extremes: traditional cameras use a pure hardware approach, whereas single pixel cameras minimize the requirement for high quality hardware using modern signal processing. There are many trade-offs between the two approaches. One notable difference is the overall acquisition time: the physics-based approach acquires in one shot (i.e. all the sensing is done in parallel), while the single pixel camera and its variants require hundreds of consecutive acquisitions, which translates into a substantially longer overall acquisition time.

Recently, time-resolved sensors have enabled new imaging capabilities. Here we consider a time-resolved system with pulsed active illumination combined with a sensor with time resolution on the order of picoseconds. Picosecond time resolution allows distinguishing between photons that arrive from different parts of the target. The sensor provides more information per acquisition (compared to a regular pixel), and so fewer masks are needed. Moreover, the time-resolved sensor is characterized by a measurement matrix that enables us to optimize the active illumination patterns and reduce the required number of masks even further.

Currently available time-resolved sensors allow a wide range of potential implementations. For example, streak cameras provide picosecond or even sub-picosecond time resolution [4]; however, they suffer from poor sensitivity. Alternatively, Single Photon Avalanche Diodes (SPADs) are compatible with standard CMOS technology [5] and allow time tagging with resolutions on the order of tens of picoseconds. These devices are available as a single pixel or in pixel arrays.

In this paper we present a method that leverages both time-resolved sensing and compressive sensing. The method enables lensless imaging for reflectance recovery with fewer illumination patterns compared to traditional single pixel cameras. This relaxed requirement translates to a shorter overall acquisition time. The presented framework provides guidelines and decision tools for designing time-resolved lensless imaging systems. In this framework the traditional single pixel camera is one extreme design point which minimizes cost with simple hardware but requires many illumination patterns (long acquisition time). Better hardware reduces the acquisition time with fewer illumination patterns at the cost of complexity. We provide a sensitivity analysis of reconstruction quality to changes in various system parameters. Simulations with system parameters chosen based on currently available hardware indicate that substantially fewer illumination patterns are needed compared to traditional single pixel cameras.

I-A Contributions

The contributions presented here can be summarized as:

  1. Computational imaging framework for lensless imaging with compressive time-resolved measurement,

  2. Analysis of a time-resolved sensor as an imaging pixel,

  3. Algorithm for ideal sensor placement in a defined region,

  4. Algorithm for optimized illumination patterns.

I-B Related Works

I-B1 Compressive Sensing for Imaging

Compressive sensing has inspired many novel imaging modalities. Examples include: ultra-spectral imaging [6], subwavelength imaging [7], wavefront sensing [8], holography [9], imaging through scattering media [10], terahertz imaging [11], and ultrafast imaging [12, 13].

I-B2 Single Pixel Imaging

One of the most notable applications of compressive sensing in imaging is the single pixel camera [3]. This was later extended to general imaging with masks [14]. We refer the interested reader to an introduction on imaging with compressive sampling in [15].

Other communities have also discussed the use of indirect measurements for imaging. In the physics community the concept of using a single pixel (bucket) detector to perform imaging is known as ghost imaging and was initially thought of as a quantum phenomenon [16]. It was later realized that computational techniques can achieve similar results [17]. Ghost imaging was also incorporated with compressive sensing [18, 19]. In the computational imaging community this is known as dual photography [20].

Single pixel imaging extends to multiple sensors. For example, multiple sensors were used for 3D reconstruction of a scene by using stereo reconstruction [21]. Multiple sensors were also incorporated with optical filters to create color images [22].

In this work we suggest using a time-resolved sensor instead of a regular bucket detector for lensless imaging.

I-B3 Time-Resolved Sensing for Imaging

Time-resolved sensing has been mostly used to recover scene geometry. This is known as LIDAR [23]. LIDAR was demonstrated with a compressive single pixel approach [24, 25]. Time-resolved sensing has also been suggested to recover scene reflectance [26, 27] for lensless imaging, but without the use of structured illumination and compressive sensing. Other examples of time-resolved sensing include non-line of sight imaging, for example imaging around a corner [28] and through scattering [29, 30]. Imaging around corners was also demonstrated with low cost time-of-flight sensors, using back propagation [31] and sparsity priors [32].

In this paper we use compressive deconvolution with time-resolved sensing for lensless imaging to recover target reflectance.

I-C Limitations

The main limitations of using our suggested approach are:

  • We assume a linear imaging model (linear modeling in imaging is common, for example [3, 14]).

  • Our current implementation assumes a planar scene. We note that our approach can naturally extend to piecewise planar scenes and leave this extension to a future study.

  • Time-resolved sensing requires an active pulsed illumination source and a time-resolved sensor. These can be expensive and complicated to set up. However, as we demonstrate here, they provide a different set of trade-offs for lensless imaging, primarily reduced acquisition time.

II Compressive Ultrafast Sensing

Fig. 1: Lensless imaging with compressive ultrafast sensing. a) Illumination: a time pulsed source is wavefront modulated and illuminates a target with unknown reflectance. b) Measurement: an omnidirectional ultrafast sensor (or sensors) measures the time dependent response of the scene. A physics-based operator maps scene pixels to the time-resolved measurement.

Our goal is to develop a framework for compressive imaging with time-resolved sensing. Fig. 1 shows the system overview. A target with N pixels, represented by a reflectance vector x, is illuminated by a wavefront produced by spatially modulating a time pulsed source. Light reflected from the scene is measured by an omnidirectional ultrafast sensor with time resolution Δt positioned on the sensor plane. The time-resolved measurement is denoted by y, where the number of time bins in the measurement is T. Better time resolution (smaller Δt) increases T. H is the measurement matrix defined by the space to time mapping that is enforced by special relativity. In the case when the time resolution is very poor, H is just a single row (T = 1), and the process is reduced to the regular single pixel camera case.

We consider K sensors (k = 1, ..., K) with T time samples and M illumination patterns (i = 1, ..., M), so the time-resolved measurement of the k-th sensor, for a target illuminated by the i-th illumination pattern, is defined by:

y_{k,i} = H_k Q_i x,

where H_k is the T × N light transport matrix of the k-th sensor and Q_i encodes the i-th illumination pattern. Concatenating all the measurement vectors y_{k,i} results in the total measurement vector y of length KMT, such that the total measurement process is:

y = A x,    (1)

where A is a KMT × N matrix which defines the total measurement operator.
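The stacking above can be sketched in a few lines. This is our own illustration: the helper name `build_measurement_matrix`, and representing the per-sensor matrices and patterns as Python lists, are assumptions for the sketch, not part of the original system:

```python
import numpy as np

def build_measurement_matrix(H_list, patterns):
    """Stack per-sensor light transport matrices (each T x N) with M
    illumination patterns (each a length-N vector forming the diagonal
    of Q_i) into the total KMT x N measurement operator A."""
    # H @ diag(q) scales the columns of H by the pattern values,
    # i.e. the row-broadcast product H * q.
    return np.vstack([H * q for q in patterns for H in H_list])
```

With K = 2 sensors, M = 3 patterns, and T = 5 time bins on an N = 4 pixel scene, the resulting operator has KMT = 30 rows, and y = A x concatenates every (pattern, sensor) acquisition.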

Here we invert the system defined in Eq. 1 using a compressive sensing approach. To that end, we analyze and physically modify A to make the inversion robust. In the remainder of this paper we analyze and optimize the following fundamental components of A:

  • The physics-based time-resolved light transport matrix H_k. H_k is a mapping from the spatial coordinates of the scene domain to the time-resolved measurement of the k-th sensor. Section III derives a physical model of H_k and discusses its structure and properties. H_k can be modified by changing the sensor time resolution and its position in the sensor’s plane.

  • The combination of multiple sensors. Multiple sensors can be placed in the sensor plane, such that each sensor corresponds to a different time-resolved light transport matrix H_k. Section IV presents an algorithm for optimized sensor placement in the sensor’s plane.

  • The illumination (probing) matrix Q_i. This matrix is similar to the sensing matrix in the single pixel camera, which was realized there with an SLM. In our analysis we assume the modulation is performed on the illumination side (but note that modulating the illumination is equivalent to modulating the incoming wavefront to the sensor). The structured illumination modulates the illumination amplitude on different pixels in the target. Section V presents an algorithm for optimized illumination patterns for compressive ultrafast imaging.

The inversion of Eq. 1 is robust if there is little linear dependence among the columns of A (so that it has sufficient numerical rank). This is evaluated by the mutual coherence [33], which is a measure for the worst similarity of the matrix columns and is defined by:

μ(A) = max_{i≠j} |a_i^T a_j| / ( ||a_i||_2 ||a_j||_2 ),    (2)

where a_i denotes the i-th column of A. From here on, as suggested in [34], we will use an alternative way to target the mutual coherence which is computationally tractable and defined by:

μ_F(A) = || Ã^T Ã − I_N ||_F,    (3)

where I_N is the identity matrix of size N × N, Ã is A with columns normalized to unity, and ||·||_F is the Frobenius norm. This definition directly targets the restricted isometry property (RIP) [34], which provides guarantees for using compressive sensing. In the remainder of the paper we optimize different parts of A using this measure of coherence as a cost objective.
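Both coherence measures are direct to compute; a minimal sketch (the function names are our own):

```python
import numpy as np

def mutual_coherence(A):
    """Worst-case normalized inner product between distinct columns."""
    An = A / np.linalg.norm(A, axis=0, keepdims=True)  # unit-norm columns
    G = np.abs(An.T @ An)
    np.fill_diagonal(G, 0.0)                           # ignore self-similarity
    return G.max()

def coherence_proxy(A):
    """Frobenius-norm distance of the normalized Gram matrix from the
    identity; the tractable, RIP-targeting objective of Eq. 3."""
    An = A / np.linalg.norm(A, axis=0, keepdims=True)
    return np.linalg.norm(An.T @ An - np.eye(A.shape[1]))
```

A matrix with orthonormal columns scores zero under both measures. The proxy sums all pairwise column similarities rather than taking only the worst one, which is what makes it smooth and easy to optimize.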

Fig. 2: Schematic of the light cone for the case of a planar stationary target. a) Scene geometry: the target plane and the sensor plane are separated by a distance d. Three detectors are positioned at different positions in the detector plane. b) The light-like part of the light cone emanating from the target point marked with a red ’X’ defines the measurement times of the different detectors. Due to the geometry, the light cone arrives at the detectors at different times: it is first measured by detector I, which is closest to the source, followed by detectors II and III.

III Time-Resolved Light Transport

We start by developing a generic light transport model for time-resolved imaging. The finite speed of light governs information propagation, and provides geometrical constraints which will be used in the image formation model. These are conveniently described in a Minkowski space with the space-time four-vector (x, y, z, ct). If we consider a point source at position r_0 pulsing at time t_0, and a sensor at position r_s measuring at time t_s, then the space-time interval between the source and the sensor is defined by:

Δs^2 = c^2 (t_s − t_0)^2 − ||r_s − r_0||^2,    (4)

where c is the speed of light. Enforcing causality and light-like behavior requires Δs^2 = 0, which defines the light cone. Fig. 2 shows a schematic of the light cone and demonstrates how the same event is measured in various positions at different times.

Thus, the time-resolved measurement of a sensor at position r_s and time t_s of a general three-dimensional time dependent scene f(r_0, t_0) is the integral over all points on the manifold defined by Δs^2 = 0:

I(r_s, t_s) = ∫_{Δs^2 = 0} f(r_0, t_0) / ||r_s − r_0||^2 dr_0,    (5)

where the 1 / ||r_s − r_0||^2 term accounts for intensity drop off.

Fig. 3: Analysis of a one-dimensional world. a) Geometry: the target is a black line with a white patch at a distance d from the time-resolved sensor. b) The time-resolved measurement produced by the sensor. The signal start time corresponds to the patch distance, and the time duration to the patch width. c) The measurement matrix H, generated from Eq. 7, for a representative target distance and sensor time resolution.

Next, we assume a planar scene at a distance d from the sensor plane and a stationary target, so that the reflectance is fixed in time and the pulse time can be taken as t_0 = 0 without loss of generality. x is a discretized, lexicographically ordered representation of the target reflectance map f. We use the circular symmetry of the light cone, so that the measurement at time t integrates the reflectance over a circle centered at the sensor’s projection onto the scene plane:

I(t) = 1/(ct)^2 ∮_{||r|| = ρ(t)} f(r) dr,    (6)

with ρ(t) = sqrt( (ct)^2 − d^2 ). The intensity drop off is written as a function of time since the total path length satisfies r = ct. Fig. 2 shows a schematic of the light cone for this case.

The sensor’s finite time resolution Δt corresponds to the sampling of I(t); the sampled measurement is denoted by y. A sensor positioned at a given location will produce a measurement y = H x, where H is defined by the kernel in Eq. 6. H is a mapping from a two-dimensional spatial space to a time measurement (dependent on the detector position). The kernel maps rings with varying thicknesses from the scene plane to specific time bins in the measurement. The next subsection discusses the properties of this kernel.

III-A One-Dimensional Analysis

It is interesting to analyze H in a planar world, where the target is a one-dimensional line at a distance d from a sensor placed at the origin (Fig. 3). In that case Eq. 6 simplifies to a sum over the two points at equal time of flight:

I(t) = [ f(x(t)) + f(−x(t)) ] / (ct)^2,  with  x(t) = sqrt( (ct)^2 − d^2 ).    (7)

Fig. 3c shows an example of the corresponding H matrix. This simple example demonstrates the key properties of the time-resolved measurement: 1) It is a nonlinear mapping of space to time. 2) The mapping is not unique (two opposite space points are mapped to the same time slot). 3) Spatial points that are close to the sensor are undersampled (adjacent pixels are mapped to the same time slot). 4) Spatial points that are far from the sensor are oversampled, but their signal is weaker. These properties affect imaging parameters as described next:
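The structure in Fig. 3c can be reproduced with a small discretization sketch. The one-way time of flight t(x) = sqrt(x^2 + d^2)/c, the binning scheme, and all parameter values below are illustrative assumptions, not the paper's exact implementation:

```python
import numpy as np

C = 3e8  # speed of light [m/s]

def h_matrix_1d(n_pix=64, extent=1.0, d=1.0, dt=20e-12):
    """Discretized 1D time-resolved light transport matrix (cf. Eq. 7).
    A pixel at position x on the target line is assumed to reach the
    sensor after t = sqrt(x^2 + d^2)/c; arrivals are binned with
    resolution dt and weighted by a 1/r^2 intensity drop off."""
    xs = np.linspace(-extent, extent, n_pix)   # pixel positions on the line
    r = np.sqrt(xs**2 + d**2)                  # path length to the sensor
    t = r / C
    bins = np.floor((t - t.min()) / dt).astype(int)
    H = np.zeros((bins.max() + 1, n_pix))
    H[bins, np.arange(n_pix)] = 1.0 / r**2     # one time bin per pixel
    return H
```

In the resulting matrix the columns for x and −x are identical (property 2), and several central pixels share the first time bin (property 3, undersampling near the sensor).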

III-A1 Resolution limit

The minimum recoverable spatial resolution is defined by the closest point to the sensor, at distance d (time of arrival d/c), and the point that corresponds to the next time slot, with time of arrival d/c + Δt, which results in:

Δx = sqrt( 2 d c Δt + (c Δt)^2 ).    (8)

Fig. 4 shows a few cross sections of Eq. 8 for relevant distances d and time resolutions Δt. Better time resolution is required for further scenes (for the same recoverable resolution).

Fig. 4: Recoverable resolution with time-resolved sensing. a) Recoverable resolution for various scene distances d as a function of the sensor time resolution Δt. b) Recoverable resolution for various sensor time resolutions Δt as a function of the target distance d.
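Assuming the one-way closest-point geometry, the limit evaluates to Δx = sqrt(2dcΔt + (cΔt)^2); a small calculator for it (the function name and example values are our own):

```python
import numpy as np

C = 3e8  # speed of light [m/s]

def recoverable_resolution(d, dt):
    """Minimum recoverable feature size at the closest scene point for
    target distance d [m] and sensor time resolution dt [s] (cf. Eq. 8)."""
    return np.sqrt(2.0 * d * C * dt + (C * dt) ** 2)
```

For example, a 1 ps sensor imaging a scene 1 m away resolves features of roughly 2.4 cm; both a larger distance and a coarser time resolution worsen the limit, matching the trends in Fig. 4.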

III-A2 Signal to noise ratio and dynamic range limitation

The closest point to the sensor defines the measurement gain needed to avoid the saturation intensity, while the furthest measurable point from the sensor should still produce a measurement above the noise floor.

Since the signal to noise ratio (SNR) is proportional to the sensor gain, closer scenes will have smaller coverage areas.

The combined effect of these phenomena is demonstrated in Fig. 5. In this example, we consider a ‘half plane’ target whose reflectance is a sinusoid with additive white Gaussian noise. For this simple demonstration, we use the Moore-Penrose pseudoinverse to invert the system. The inversion shows that close to the origin the reconstruction suffers from an undersampled measurement; this area is not sensitive to the measurement noise, and looks identical with zero noise. The noise has an obvious effect on the reconstruction further from the origin.

III-B Analysis of a planar scene

All the properties presented in the previous subsection extend to the case of a planar scene. Eq. 6 shows that the measurement process integrates over circles centered around the sensor. Due to the finite time resolution, the circles are mapped to rings. The rings are thinner for further points: the n-th time sample integrates over the ring between the radii corresponding to times t_0 + nΔt and t_0 + (n+1)Δt, where t_0 is the time of arrival from the closest point. Fig. 6 shows the ring structure for a few cases of time resolution and target distance.

Lastly, Eq. 6 provides the structure of the H matrix, and guidelines for the effects of changing the sensor time resolution and position on the measurement matrix. Naturally, better time resolution will reduce the mutual coherence. An alternative to improved time resolution, which might be technically challenging, is to add more sensors, as discussed next.

Fig. 5: Effects of averaging and noise on time-resolved measurement. a) The target is a sinusoid on the positive half plane at a fixed distance from a sensor with finite time resolution and additive measurement noise. b) Inverting the system using the Moore-Penrose pseudoinverse demonstrates the undersampled measurement close to the sensor, and the sensitivity to noise further away from the sensor.
Fig. 6: Measurement of a planar scene for various sensor time resolutions Δt and target distances d. The color represents the time sample index of the first few samples. As the time resolution worsens or the target is further away, the rings become thicker. The images show a subset of the scene area.

IV Optimized Time-Resolved Sensor Array

Using multiple sensors is a natural extension to the single pixel camera. In the case where the sensors are time sensitive, their positioning affects the measurement matrix, and so it can be optimized. Here we derive an algorithm for sensor placement in an array in order to reduce the mutual coherence of the measurement matrix. To simplify the array structure we constrain the sensors to a single plane and to an allowed physical area. The algorithm accepts two parameters, the number of sensors K and the allowed physical area S, and provides the ideal positions under these constraints.

Starting with Eq. 6, the goal is to maximize the difference between the measurements of different sensors. This is achieved by choosing sensor positions that are furthest apart (to minimize the overlap of the rings shown in Fig. 6).

More precisely, the goal is to select K positions p_1, ..., p_K within an area S such that the minimum distance between the sensors is maximized. This can be achieved by solving:

{p_k} = argmax_{p_1, ..., p_K ∈ S} min_{i≠j} ||p_i − p_j||.    (9)
Eq. 9 can be solved by a grid search for a small number of sensors. A more general solution is to relax the problem and follow the equivalent of a Max-Lloyd quantizer [35]. The steps are as follows:

  1. Initialize random positions in the allowed area

  2. Repeat until convergence:

    • Calculate the Voronoi diagram of the point set.

    • Move each sensor position to the center of its cell.
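The relaxation above can be sketched by approximating each Voronoi cell with a dense uniform sample of the allowed square and moving every sensor to the centroid of its cell (a k-means-style discretization; the function name and parameter values are our own assumptions):

```python
import numpy as np

def place_sensors(num_sensors, side, iters=200, samples=40000, seed=0):
    """Approximate Max-Lloyd relaxation for sensor placement inside a
    square of the given side length."""
    rng = np.random.default_rng(seed)
    pos = rng.uniform(0, side, size=(num_sensors, 2))   # step 1: random init
    cloud = rng.uniform(0, side, size=(samples, 2))     # stand-in for the area
    for _ in range(iters):                              # step 2: iterate
        # Assign each sample point to its nearest sensor (discrete Voronoi cells).
        d2 = ((cloud[:, None, :] - pos[None, :, :]) ** 2).sum(-1)
        labels = d2.argmin(axis=1)
        new_pos = pos.copy()
        for k in range(num_sensors):
            cell = cloud[labels == k]
            if len(cell):
                new_pos[k] = cell.mean(axis=0)          # move to cell centroid
        if np.abs(new_pos - pos).max() < 1e-9:          # converged
            break
        pos = new_pos
    return pos
```

For a handful of sensors in a unit square the iteration spreads the sensors toward well-separated cell centers, mirroring the max-min-distance objective of Eq. 9.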

This positioning algorithm is evaluated for various system parameters by assessing the effect of the sensor time resolution, number of sensors and array size (square of varying sizes) on the coherence cost objective (Eq. 3). Fig. 7 shows these results. Several key features of the system appear in this analysis: 1) Improving time resolution reduces the number of required sensors non-linearly. 2) It is always beneficial to improve the sensors’ time resolution. 3) The sensor area defines a maximum number of useful sensors, beyond which there is no significant decrease in the mutual coherence (increasing the array size linearly reduces the mutual coherence for a fixed number of sensors). 4) It is possible to easily trade off between different aspects of the system’s hardware by traveling on the contours. For example, a decrease in the sensor time resolution can be balanced by adding more sensors. This can be useful for realizing an imaging system as sensors with lower time resolution are less expensive and easier to manufacture. The next section provides an alternative to improving hardware by adding structured illumination.

Fig. 7: Effect of the number of sensors K, their time resolution Δt, and the array size constraint on the mutual coherence evaluated with Eq. 3. The target size, pixel count, and distance from the sensor plane are fixed. a) Mutual coherence contours for a varying number of sensors and their time resolution (for a fixed array size). b) Similar to (a) with a varying array size constraint (for a fixed time resolution).

V Optimized Active Illumination for Time-Resolved Sensor Array

We now make the leap to compressive sensing. The previous sections discussed single sensor considerations and sensor placement in an array. This section covers ideal active illumination patterns. We assume the illumination wavefront is amplitude-modulated; this can be physically achieved with an SLM or a liquid crystal display (LCD).

When considering different illumination patterns, Hadamard patterns and random patterns sampled from a Bernoulli distribution are normally chosen. Instead, we suggest patterns that directly aim to minimize the mutual coherence of the measurement matrix. The mathematical patterns may have negative values, which can be realized by taking a measurement with an “all on” pattern and subtracting it from the other measurements (due to the linearity of the system).
In order to optimize the illumination patterns, we follow the proposal in [34] and learn the sensing matrix (illumination patterns) in order to directly minimize the coherence of the measurement matrix. For example, this concept has been reduced to practice in [36].

The crux of the idea is that given a number of allowed illumination patterns M, we choose the set of illumination patterns that minimizes the mutual coherence of the matrix A. Since the illumination matrix performs a pixel-wise modulation of the target, it is a diagonal matrix with the pattern values on the diagonal, Q_i = diag(q_i), where q_i is a vector containing the i-th pattern values. Taking a closer look at Eq. 1, we stack all the sensor matrices into H = [H_1; ...; H_K] such that:

A = [H Q_1; H Q_2; ...; H Q_M].    (10)

Based on Eq. 3, the ideal patterns are the solution to:

{q_i} = argmin_{q_1, ..., q_M} || Ã^T Ã − I_N ||_F,    (11)

where Ã is A with columns normalized to unity.
This can be solved with standard constrained optimization solvers. Appendix A provides the derivation for the cost function and its gradient.
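A toy version of the pattern search can be run with a generic solver. This is only a sketch of the idea behind Eq. 11 under our own notation: a single stand-in light transport matrix `H`, and finite-difference optimization in place of the analytic gradient of Appendix A:

```python
import numpy as np
from scipy.optimize import minimize

def coherence_cost(A):
    """Eq. 3: ||A~^T A~ - I||_F with A~ = A with unit-norm columns."""
    An = A / np.linalg.norm(A, axis=0, keepdims=True)
    return np.linalg.norm(An.T @ An - np.eye(A.shape[1]))

def optimize_patterns(H, n_patterns, seed=0):
    """Search for illumination patterns q_i (diagonals of Q_i) minimizing
    the coherence of A = [H diag(q_1); ...; H diag(q_M)]."""
    T, N = H.shape
    rng = np.random.default_rng(seed)
    q0 = rng.standard_normal(n_patterns * N)   # random starting patterns

    def cost(flat_q):
        Q = flat_q.reshape(n_patterns, N)
        A = np.vstack([H * q for q in Q])      # H @ diag(q) == H * q
        return coherence_cost(A)

    res = minimize(cost, q0, method="L-BFGS-B", options={"maxiter": 200})
    return res.x.reshape(n_patterns, N), cost(q0), res.fun
```

Even this generic solver lowers the coherence of a random starting set on small problems; the analytic gradient derived in Appendix A is what makes the same search practical at realistic problem sizes.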

Fig. 8: Effect of the number of illumination patterns M, sensor time resolution Δt, and number of sensors K on the mutual coherence evaluated with Eq. 3. The target size, pixel count, and distance from the sensor plane are fixed, and the sensor area is a fixed square. a) Mutual coherence contours for a varying number of illumination patterns and sensor time resolution (for a fixed number of sensors). b) Similar to (a) with a varying number of sensors (for a fixed time resolution).

Fig. 8 shows the change in mutual coherence in simulations while varying the number of allowed illumination patterns, the sensor time resolution, and the number of sensors. As predicted by CS theory, increasing the number of patterns has a strong effect on the mutual coherence. This strong effect allows one to easily relax the hardware requirements when needed. However, as more patterns are allowed, the measurement depends increasingly on the sensors’ parameters. This demonstrates the synergy between compressive sensing and time-resolved sensing. In this case, traveling on mutual coherence contours allows one to trade off system complexity (cost, size, power) against acquisition time (which increases when more patterns are required).

Fig. 9a shows several examples of the patterns computed by solving Eq. 11. Fig. 9b demonstrates the value of the optimized patterns compared to Hadamard patterns and random patterns sampled from Gaussian and Bernoulli distributions. For very few illumination patterns (below ten) all choices are comparable. When more illumination patterns are allowed, the optimized patterns perform better, reducing the mutual coherence faster than the other approaches. As predicted, the performance of the Hadamard, Gaussian and Bernoulli patterns is nearly identical.

Fig. 9: The value of optimized active illumination patterns. The patterns are optimized for a fixed target size and distance from the sensor plane; the measurement is simulated with a fixed number of sensors and time resolution. a) Examples of several computed patterns. b) Comparison of different active illumination methods and their effect on the mutual coherence for a varying number of patterns M. The optimized patterns outperform Hadamard patterns and random patterns sampled from Gaussian and Bernoulli distributions.
Fig. 10: Effect of system parameters on reconstruction quality. Various design points (different numbers of sensors K and time resolutions Δt) are simulated. The number of optimized illumination patterns M is set as the minimal number of patterns required to achieve a target reconstruction quality in SSIM and PSNR. The target used is the cameraman image (see Fig. 11, right). a) Trends for various numbers of detectors as a function of the time resolution Δt. b) Trends for different detector time resolutions as a function of the number of detectors.

VI Numerical Results

Fig. 11: Imaging with compressive ultrafast sensing for different targets. a) The target image. b) Results with a regular single pixel camera for two different numbers of patterns. c) Results with compressive ultrafast sensing for four design points with different time resolutions and numbers of sensors. All reconstructions are evaluated with SSIM and PSNR. The results demonstrate the strong dependency on time resolution; the best design point shows perfect reconstruction on all targets based on SSIM. White Gaussian noise was added to all measurements at a fixed SNR.

This section demonstrates target reconstruction using the above analysis. The target has N pixels and is placed at a fixed distance from the detector plane; the detector array is limited to a square area. The detector placement method used is described in Section IV and the illumination patterns are computed using the algorithm suggested in Section V. The measurement operator is simulated as described in Section III to produce the total measurement vector. White Gaussian noise is added to the total measurement vector to produce a fixed measurement SNR. The targets simulated here are natural scenes (sparse in the gradient domain). To invert Eq. 1 we use TVAL3 [37] (with a TVL2 model and the same regularization parameter for all targets). The reconstruction quality is evaluated with both the Peak Signal to Noise Ratio (PSNR, higher is better, a pointwise comparison) and the Structural Similarity index (SSIM, ranging in [0, 1] with higher being better, which takes into account the spatial structure of the image [38]).
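The noise model used in the simulations can be reproduced with a short helper that scales white Gaussian noise to a requested SNR (the function name and the dB convention are our own choices):

```python
import numpy as np

def add_awgn(y, snr_db, rng=None):
    """Add white Gaussian noise so the measurement has the given SNR in dB."""
    rng = np.random.default_rng() if rng is None else rng
    p_signal = np.mean(y ** 2)                      # empirical signal power
    p_noise = p_signal / (10.0 ** (snr_db / 10.0))  # target noise power
    return y + rng.normal(0.0, np.sqrt(p_noise), size=y.shape)
```

Applying this to the total measurement vector before inversion reproduces the fixed-SNR setting of the experiments.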

So far, the discussion focused on reducing the mutual coherence of the measurement matrix A. Fig. 10 demonstrates the effect on the full reconstruction process. The target used is the cameraman image (Fig. 11, right). The goal is to find the minimal number of illumination patterns that produces a reconstruction quality above fixed SSIM and PSNR thresholds. This is repeated for various numbers of detectors with different time resolutions. The trends demonstrate a linear relationship between the number of illumination patterns and the detector time resolution needed for a specified reconstruction quality. Another notable effect is the significant gain in the transition from one to two detectors, followed by a diminishing gain for additional detectors. This gain decreases as the detector time resolution improves. These trends can be used to trade off design constraints. For example, for the specified reconstruction quality the user can choose a single high time resolution detector with 80 patterns. The same acquisition time can be maintained with two detectors of lower time resolution. Alternatively, two detectors with the higher time resolution require only 40 patterns (shorter acquisition time) for equal reconstruction quality.

Finally, we compare the suggested design framework to a traditional (non-time aware) single pixel camera. This is simulated with a measurement matrix consisting of a single row of ones. The illumination patterns are sampled from a Bernoulli distribution in a similar way to the original single pixel camera experiments [3]. Fig. 11 shows the results for three different targets. Reconstructions with a traditional single pixel camera are shown in Fig. 11b for two different numbers of patterns. Four different design points of compressive ultrafast imaging are demonstrated in Fig. 11c, all with the same number of patterns (such that the acquisition time is equal). Several results are worth noting:

  • Reconstruction with the best design point (highest time resolution) achieves perfect quality based on SSIM for all targets.

  • Reconstruction with the time-resolved design points outperforms the traditional single pixel camera approach with fewer illumination patterns, demonstrating the potential gain of this approach.

  • A traditional single pixel reconstruction with the same number of patterns (same acquisition time as the compressive ultrafast imaging design points discussed) fails to recover the scene information.

  • There is a significant gain in performance when improving the sensor time resolution.

VII Discussion

Section V analyzed only wavefront amplitude modulation. There are many other ways to use coded active illumination in order to minimize the measurement coherence. For example, we assumed the wavefront is just a pulse in time, but coding can be performed in the time domain as well. This would cause different pixels on the target image to be illuminated at different times. Physical implementation of such delays is possible with, for example, tilted illumination and fiber bundles (notice that while a phase SLM induces varying time delays on the wavefront, these time scales are shorter than the resolution of current time-resolved sensors). Analysis of such an implementation requires detailed care with the interplay between the light transport and illumination matrices (since the light transport matrix becomes time-dependent); we leave this analysis to a future study.

The forward model (Eq. 5) assumes the wave nature of light is negligible. This assumption is valid if: 1) Diffraction is negligible: the spatial features of the scene are significantly larger than the illumination wavelength. 2) Interference is negligible: the coherence length of the illumination source is significantly smaller than the geometrical features. For pulsed lasers the coherence length is inversely proportional to the pulse bandwidth; this usually results in very short coherence lengths.
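The second condition is easy to check with a back-of-the-envelope calculation. The sketch below assumes a transform-limited Gaussian pulse (time-bandwidth product of roughly 0.44) and a typical ultrafast pulse duration; both numbers are illustrative assumptions, not values from the paper.

```python
# Back-of-the-envelope coherence-length check, assuming a transform-limited
# Gaussian pulse (time-bandwidth product ~0.44) of typical ultrafast duration.
c = 3e8            # speed of light [m/s]
tau = 100e-15      # pulse duration (FWHM) [s], an illustrative value

delta_nu = 0.44 / tau          # spectral bandwidth [Hz]
L_c = c / delta_nu             # coherence length [m]

print(f"coherence length ~ {L_c * 1e6:.0f} um")   # prints "coherence length ~ 68 um"
```

Tens of micrometers is far below typical scene feature sizes, which is why interference can safely be neglected in the forward model.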

The suggested approach provides a framework for lensless imaging with compressive ultrafast sensing. This framework provides the user with design tools for situations in which lensless imaging is essential. It allows the user to effectively balance available resources — an important tool since the hardware requirements can be substantial (a pulsed source with structured illumination and time-resolved sensors). We note that time-resolved sensors are becoming more accessible with the recent advances in CMOS-based SPAD devices (e.g. [5]). Another limitation of our approach is the requirement of a known geometry. Interestingly, the approach suggested in [24, 25] requires similar hardware to recover scene geometry without reflectance, hence it might be possible to fuse the two approaches in the future.

VIII Conclusion

We demonstrated a novel compressive imaging architecture that uses ultrafast sensors with active illumination for lensless imaging. We discussed analysis tools for hardware design, as well as algorithms for ideal sensor placement and illumination patterns which directly target the RIP for robust inversion with compressive deconvolution. The presented approach allows single pixel lensless imaging with dramatically better acquisition times compared to previous results. This enables novel lensless single pixel imaging in challenging environments. The approach and analysis presented here open new avenues for other areas with potential tight coupling between novel sensors and compressive sensing algorithms.

Appendix A Illumination Pattern Optimization Algorithm

Here we provide a derivation for calculating the cost function in Eq. 11 and its gradient. Starting with the cost function to minimize:


Define such that its -th row is ( is an matrix). Our goal is to find which minimizes .

We start by writing: , and so:


since . Next, we define where is a diagonal matrix with the inverses of the column norms:


This allows us to write:


where is a diagonal matrix with the -th row of on the diagonal. Next we note that:




which can be simplified to:




Lastly, we define:


such that:


which allows us to write:


where is a diagonal matrix with the diagonal entries of , and denotes element-wise multiplication. Finally:


We note that (Eq. 19) is a constant matrix for the illumination pattern optimization and can be calculated a priori. Eqs. 20 and 23 provide the final expression for .

We now develop an expression for the gradient of the cost function, using the chain rule for matrices:



Starting with the first term, if :


and, if :


where is the Kronecker delta. The second term in Eq. 24 is given by [39]:


Combining Eqs. 24 through 27 we get:


where and are the terms in Eqs. 25 and 26 respectively. After some algebra we get:


Lastly, we define:


where (a matrix with all ones except for zeros on the diagonal), which allows us to write the final gradient as:


where is an matrix with all ones.
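A closed-form gradient like the one derived above is easy to get wrong in implementation, so it is worth validating against finite differences. The sketch below is an illustrative stand-in, not the paper's exact expressions: it assumes the cost is the squared Frobenius distance between the column-normalized Gram matrix of the effective sensing matrix (written here as `T @ Phi`, with both names hypothetical) and the identity, in the spirit of the derivation above, and computes a numerical gradient one could compare a closed-form gradient against.

```python
import numpy as np

def coherence_cost(T, Phi):
    """Squared Frobenius distance between the column-normalized Gram
    matrix of the effective sensing matrix A = T @ Phi and the identity
    (an assumed stand-in for the coherence cost derived above)."""
    A = T @ Phi
    D = np.diag(1.0 / np.linalg.norm(A, axis=0))   # inverse column norms
    G = D @ A.T @ A @ D                            # normalized Gram matrix
    return np.sum((G - np.eye(G.shape[0])) ** 2)

def numerical_grad(T, Phi, eps=1e-6):
    """Central finite-difference gradient of the cost w.r.t. Phi,
    for sanity-checking a closed-form gradient."""
    g = np.zeros_like(Phi)
    for idx in np.ndindex(Phi.shape):
        Pp, Pm = Phi.copy(), Phi.copy()
        Pp[idx] += eps
        Pm[idx] -= eps
        g[idx] = (coherence_cost(T, Pp) - coherence_cost(T, Pm)) / (2 * eps)
    return g

# A tiny gradient-descent step along the numerical gradient should
# decrease the cost, confirming it is a valid descent direction.
rng = np.random.default_rng(1)
T, Phi = rng.random((3, 5)), rng.random((5, 4))
g = numerical_grad(T, Phi)
print(coherence_cost(T, Phi - 1e-4 * g) < coherence_cost(T, Phi))   # True
```

In practice the closed-form gradient from Eqs. 24–27 would replace the finite-difference loop, which scales poorly with the pattern dimensions; the numerical version is useful only as a correctness check.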


  • [1] D. Donoho, “Compressed sensing,” IEEE Trans. Inf. Theory, vol. 52, no. 4, pp. 1289–1306, Apr. 2006.
  • [2] E. J. Candes and T. Tao, “Near-optimal signal recovery from random projections: universal encoding strategies?” IEEE Trans. Inf. Theory, vol. 52, no. 12, pp. 5406–5425, Dec. 2006.
  • [3] M. Duarte, M. Davenport, D. Takhar, J. Laska, K. Kelly, and R. Baraniuk, “Single-pixel imaging via compressive sampling,” IEEE Sig. Proc. Mag., vol. 25, no. 2, pp. 83–91, Mar. 2008.
  • [4] K. Scheidt, “Review of streak cameras for accelerators: features, applications and results,” in Proc. of EPAC, 2000.
  • [5] J. A. Richardson, L. A. Grant, and R. K. Henderson, “Low dark count single-photon avalanche diode structure compatible with standard nanometer scale CMOS technology,” IEEE Photon. Technol. Lett., vol. 21, no. 14, pp. 1020–1022, Jul. 2009.
  • [6] I. August, Y. Oiknine, M. AbuLeil, I. Abdulhalim, and A. Stern, “Miniature compressive ultra-spectral imaging system utilizing a single liquid crystal phase retarder,” Sci. Rep., vol. 6, p. 23524, Mar. 2016.
  • [7] A. Szameit, Y. Shechtman, E. Osherovich, E. Bullkich, P. Sidorenko, H. Dana, S. Steiner, E. B. Kley, S. Gazit, T. Cohen-Hyams, S. Shoham, M. Zibulevsky, I. Yavneh, Y. C. Eldar, O. Cohen, and M. Segev, “Sparsity-based single-shot subwavelength coherent diffractive imaging,” Nat. Mater., vol. 11, no. 5, pp. 455–459, Apr. 2012.
  • [8] J. Polans, R. P. McNabb, J. A. Izatt, and S. Farsiu, “Compressed wavefront sensing,” Opt. Lett., vol. 39, no. 5, p. 1189, Mar. 2014.
  • [9] D. J. Brady, K. Choi, D. L. Marks, R. Horisaki, and S. Lim, “Compressive holography,” Opt. Express, vol. 17, no. 15, p. 13040, Jul. 2009.
  • [10] A. Liutkus, D. Martina, S. Popoff, G. Chardon, O. Katz, G. Lerosey, S. Gigan, L. Daudet, and I. Carron, “Imaging with nature: compressive imaging using a multiply scattering medium,” Sci. Rep., vol. 4, p. 489, Jul. 2014.
  • [11] C. M. Watts, D. Shrekenhamer, J. Montoya, G. Lipworth, J. Hunt, T. Sleasman, S. Krishna, D. R. Smith, and W. J. Padilla, “Terahertz compressive imaging with metamaterial spatial light modulators,” Nat. Photon., vol. 8, no. 8, pp. 605–609, Jun. 2014.
  • [12] B. T. Bosworth, J. R. Stroud, D. N. Tran, T. D. Tran, S. Chin, and M. A. Foster, “High-speed compressed sensing measurement using spectrally-encoded ultrafast laser pulses,” in IEEE Information Sciences and Systems (CISS), 2015.
  • [13] L. Gao, J. Liang, C. Li, and L. V. Wang, “Single-shot compressed ultrafast photography at one hundred billion frames per second,” Nature, vol. 516, no. 7529, pp. 74–77, 2014.
  • [14] S. Bahmani and J. Romberg, “Compressive deconvolution in random mask imaging,” IEEE Trans. Comput. Imag., vol. 1, no. 4, pp. 236–246, Dec. 2015.
  • [15] J. Romberg, “Imaging via compressive sampling,” IEEE Sig. Proc. Mag., vol. 25, no. 2, pp. 14–20, Mar. 2008.
  • [16] T. B. Pittman, Y. H. Shih, D. V. Strekalov, and A. V. Sergienko, “Optical imaging by means of two-photon quantum entanglement,” Phys. Rev. A, vol. 52, no. 5, pp. R3429–R3432, Nov. 1995.
  • [17] J. Shapiro, “Computational ghost imaging,” Phys. Rev. A, vol. 78, no. 6, p. 061802, Dec. 2008.
  • [18] O. Katz, Y. Bromberg, and Y. Silberberg, “Compressive ghost imaging,” Appl. Phys. Lett., vol. 95, no. 13, p. 131110, 2009.
  • [19] V. Katkovnik and J. Astola, “Compressive sensing computational ghost imaging,” J. Opt. Soc. Am. A Opt. Image Sci. Vis., vol. 29, no. 8, pp. 1556–1567, Aug. 2012.
  • [20] P. Sen, B. Chen, G. Garg, S. R. Marschner, M. Horowitz, M. Levoy, and H. P. A. Lensch, “Dual photography,” ACM Trans. Graph., vol. 24, no. 3, pp. 745–755, Jul. 2005.
  • [21] B. Sun, M. P. Edgar, R. Bowman, L. E. Vittert, S. Welsh, A. Bowman, and M. J. Padgett, “3D computational imaging with single-pixel detectors.” Science, vol. 340, no. 6134, pp. 844–847, May 2013.
  • [22] S. S. Welsh, M. P. Edgar, R. Bowman, P. Jonathan, B. Sun, and M. J. Padgett, “Fast full-color computational imaging with single-pixel detectors,” Opt. Express, vol. 21, no. 20, p. 23068, Oct. 2013.
  • [23] B. Schwarz, “Lidar: mapping the world in 3D,” Nat. Photon., vol. 4, no. 7, pp. 429–430, Jul. 2010.
  • [24] A. Kirmani, A. Colaço, F. N. C. Wong, and V. K. Goyal, “Exploiting sparsity in time-of-flight range acquisition using a single time-resolved sensor,” Opt. Express, vol. 19, no. 22, p. 21485, Oct. 2011.
  • [25] A. Colaço, A. Kirmani, G. A. Howland, J. C. Howell, and V. K. Goyal, “Compressive depth map acquisition using a single photon-counting detector: parametric signal processing meets sparsity,” in IEEE Computer Vision and Pattern Recognition (CVPR), 2012.
  • [26] D. Wu, G. Wetzstein, C. Barsi, T. Willwacher, Q. Dai, and R. Raskar, “Ultra-fast lensless computational imaging through 5D frequency analysis of time-resolved light transport,” Int. J. Comput. Vis., vol. 110, no. 2, pp. 128–140, Nov. 2014.
  • [27] A. Kirmani, H. Jeelani, V. Montazerhodjat, and V. K. Goyal, “Diffuse imaging: creating optical images with unfocused time-resolved illumination and sensing,” IEEE Sig. Proc. Lett., vol. 19, no. 1, pp. 31–34, Jan. 2012.
  • [28] A. Velten, T. Willwacher, O. Gupta, A. Veeraraghavan, M. G. Bawendi, and R. Raskar, “Recovering three-dimensional shape around a corner using ultrafast time-of-flight imaging,” Nat. Commun., vol. 3, p. 745, Jan. 2012.
  • [29] G. Satat, B. Heshmat, C. Barsi, D. Raviv, O. Chen, M. G. Bawendi, and R. Raskar, “Locating and classifying fluorescent tags behind turbid layers using time-resolved inversion,” Nat. Commun., vol. 6, 2015.
  • [30] G. Satat, B. Heshmat, D. Raviv, and R. Raskar, “All photons imaging through volumetric scattering,” Sci. Rep., vol. 6, 2016.
  • [31] A. Kadambi, H. Zhao, B. Shi, and R. Raskar, “Occluded imaging with time-of-flight sensors,” ACM Trans. Graph., vol. 35, no. 2, p. 15, 2016.
  • [32] F. Heide, L. Xiao, W. Heidrich, and M. B. Hullin, “Diffuse mirrors: 3D reconstruction from diffuse indirect illumination using inexpensive time-of-flight sensors,” in IEEE Computer Vision and Pattern Recognition (CVPR), 2014.
  • [33] M. Elad, “Optimized projections for compressed sensing,” IEEE Trans. Sig. Proc., vol. 55, no. 12, pp. 5695–5702, Dec. 2007.
  • [34] J. M. Duarte-Carvajalino and G. Sapiro, “Learning to sense sparse signals: simultaneous sensing matrix and sparsifying dictionary optimization,” IEEE Trans. Image Process., vol. 18, no. 7, pp. 1395–1408, Jul. 2009.
  • [35] G. Peyré and L. D. Cohen, “Geodesic remeshing using front propagation,” Int. J. Comput. Vis., vol. 69, no. 1, pp. 145–156, Aug. 2006.
  • [36] K. Marwah, G. Wetzstein, Y. Bando, and R. Raskar, “Compressive light field photography using overcomplete dictionaries and optimized projections,” ACM Trans. Graph., vol. 32, no. 4, p. 1, Jul. 2013.
  • [37] C. Li, W. Yin, and Y. Zhang, “User’s guide for TVAL3: TV minimization by augmented Lagrangian and alternating direction algorithms,” CAAM Report, 2009.
  • [38] Z. Wang, A. Bovik, H. Sheikh, and E. Simoncelli, “Image quality assessment: from error visibility to structural similarity,” IEEE Trans. Image Process., vol. 13, no. 4, pp. 600–612, Apr. 2004.
  • [39] K. B. Petersen, M. S. Pedersen, and Others, “The matrix cookbook,” Technical University of Denmark, vol. 7, 2008.