I. Introduction
Traditional imaging is based on lenses that map the scene plane to the sensor plane. In this physics-based approach the imaging quality depends on parameters such as lens quality, numerical aperture, density of the sensor array and pixel size. Recently it has been challenged by modern signal processing techniques. Fundamentally the goal is to transfer most of the burden of imaging from high quality hardware to computation. This is known as computational imaging, in which the measurement encodes the target features; these are later computationally decoded to produce the desired image. Furthermore, the end goal is to completely eliminate the need for high quality lenses, which are heavy, bulky, and expensive.
One of the key workhorses in computational imaging is compressive sensing (CS) [1, 2]. For example, CS enabled the single pixel camera [3], which demonstrated imaging with a single pixel that captured scene information encoded with a spatial light modulator (SLM). The pixel measurement is a set of consecutive readings, with different SLM patterns. The scene is then recovered using compressive deconvolution.
Broadly, traditional imaging and the single pixel camera represent two extremes: traditional cameras use a pure hardware approach, whereas single pixel cameras minimize the requirement for high quality hardware using modern signal processing. There are many tradeoffs between the two approaches. One notable difference is the overall acquisition time: the physics-based approach is done in one shot (i.e. all the sensing is done in parallel). The single pixel camera and its variants require hundreds of consecutive acquisitions, which translates into a substantially longer overall acquisition time.
Recently, time-resolved sensors have enabled new imaging capabilities. Here we consider a time-resolved system with pulsed active illumination combined with a sensor with time resolution on the order of picoseconds. Picosecond time resolution allows distinguishing between photons that arrive from different parts of the target. The sensor provides more information per acquisition (compared to a regular pixel), and so fewer masks are needed. Moreover, the time-resolved sensor is characterized by a measurement matrix that enables us to optimize the active illumination patterns and reduce the required number of masks even further.
Currently available time-resolved sensors allow a wide range of potential implementations. For example, streak cameras provide picosecond or even sub-picosecond time resolution [4]; however, they suffer from poor sensitivity. Alternatively, Single Photon Avalanche Diodes (SPADs) are compatible with standard CMOS technology [5] and allow time tagging with resolutions on the order of tens of picoseconds. These devices are available as single pixels or in pixel arrays.
In this paper we present a method that leverages both time-resolved sensing and compressive sensing. The method enables lensless imaging for reflectance recovery with fewer illumination patterns compared to traditional single pixel cameras. This relaxed requirement translates to a shorter overall acquisition time. The presented framework provides guidelines and decision tools for designing time-resolved lensless imaging systems. In this framework the traditional single pixel camera is one extreme design point, which minimizes cost with simple hardware but requires many illumination patterns (long acquisition time). Better hardware reduces the acquisition time with fewer illumination patterns, at the cost of complexity. We provide a sensitivity analysis of reconstruction quality to changes in various system parameters. Simulations with system parameters chosen based on currently available hardware indicate substantial potential savings in the number of illumination patterns compared to traditional single pixel cameras.
I-A. Contributions
The contributions presented here can be summarized as:

Computational imaging framework for lensless imaging with compressive time-resolved measurement,

Analysis of a time-resolved sensor as an imaging pixel,

Algorithm for ideal sensor placement in a defined region,

Algorithm for optimized illumination patterns.
I-B. Related Works
I-B1. Compressive Sensing for Imaging
I-B2. Single Pixel Imaging
One of the most notable applications of compressive sensing in imaging is the single pixel camera [3]. This was later extended to general imaging with masks [14]. We refer the interested reader to an introduction on imaging with compressive sampling in [15].
Other communities have also discussed the use of indirect measurements for imaging. In the physics community the concept of using a single pixel (bucket) detector to perform imaging is known as ghost imaging and was initially thought of as a quantum phenomenon [16]. It was later realized that computational techniques can achieve similar results [17]. Ghost imaging was also incorporated with compressive sensing [18, 19]. In the computational imaging community this is known as dual photography [20].
Single pixel imaging extends to multiple sensors. For example, multiple sensors were used for 3D reconstruction of a scene by using stereo reconstruction [21]. Multiple sensors were also incorporated with optical filters to create color images [22].
In this work we suggest using a time-resolved sensor instead of a regular bucket detector for lensless imaging.
I-B3. Time-Resolved Sensing for Imaging
Time-resolved sensing has mostly been used to recover scene geometry. This is known as LIDAR [23]. LIDAR has been demonstrated with a compressive single pixel approach [24, 25]. Time-resolved sensing has also been suggested for recovering scene reflectance [26, 27] in lensless imaging, but without the use of structured illumination and compressive sensing. Other examples of time-resolved sensing include non-line-of-sight imaging, for example imaging around a corner [28] and through scattering [29, 30]. Imaging around corners has also been demonstrated with low cost time-of-flight sensors, using back propagation [31] and sparsity priors [32].
In this paper we use compressive deconvolution with time-resolved sensing for lensless imaging to recover target reflectance.
I-C. Limitations
The main limitations of using our suggested approach are:

Our current implementation assumes a planar scene. We note that our approach can naturally extend to piecewise planar scenes and leave this extension to a future study.

Time-resolved sensing requires an active pulsed illumination source and a time-resolved sensor. These can be expensive and complicated to set up. However, as we demonstrate here, they provide a different set of tradeoffs for lensless imaging, primarily reduced acquisition time.
II. Compressive Ultrafast Sensing
Our goal is to develop a framework for compressive imaging with time-resolved sensing. Fig. 1 shows the system overview. A target $x \in \mathbb{R}^N$ with $N$ pixels is illuminated by a wavefront produced by spatially modulating a time-pulsed source. Light reflected from the scene is measured by an omnidirectional ultrafast sensor with time resolution $\Delta t$, positioned on the sensor plane. The time-resolved measurement is denoted by $y \in \mathbb{R}^T$, where $T$ is the number of time bins in the measurement. Better time resolution (smaller $\Delta t$) increases $T$. $C$ is the measurement matrix defined by the space-to-time mapping that is enforced by special relativity. In the case when the time resolution is very poor, $C$ is just a single row ($T = 1$), and the process reduces to the regular single pixel camera case.
We consider $K$ sensors with $T$ time samples each and $M$ illumination patterns, so that the time-resolved measurement of the $k$-th sensor, for a target illuminated by the $l$-th illumination pattern, is defined by $y_{k,l} = C_k\, \mathrm{diag}(p_l)\, x$. Concatenating all measurement vectors results in the total measurement vector $y \in \mathbb{R}^{KMT}$, such that the total measurement process is:

(1)  $y = H x$

where $H$ is a $KMT \times N$ matrix which defines the total measurement operator.
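For concreteness, the stacking in Eq. 1 can be sketched as follows. This is a minimal sketch, not the paper's implementation: the function name and the toy dimensions are ours, and each sensor's transport matrix is a random stand-in.

```python
import numpy as np

def measurement_operator(C_list, patterns):
    """Stack per-sensor transport matrices C_k (T x N) with diagonal
    illumination matrices diag(p_l) into the total operator H (Eq. 1).
    One T-row block per (pattern, sensor) pair; N columns (scene pixels)."""
    blocks = []
    for p in patterns:                    # p: length-N illumination pattern
        for C in C_list:                  # C: T x N time-resolved transport
            blocks.append(C @ np.diag(p))
    return np.vstack(blocks)

# toy example: K = 2 sensors, M = 3 patterns, N = 16 pixels, T = 8 time bins
rng = np.random.default_rng(0)
C_list = [rng.random((8, 16)) for _ in range(2)]
patterns = [rng.random(16) for _ in range(3)]
H = measurement_operator(C_list, patterns)
assert H.shape == (2 * 3 * 8, 16)         # K * M * T rows, N columns
```

The block ordering (patterns outer, sensors inner) is an arbitrary choice; any fixed ordering gives an equivalent operator up to a row permutation.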
Here we invert the system defined in Eq. 1 using a compressive sensing approach. To that end, we analyze and physically modify $H$ to make the inversion robust. In the remainder of this paper we analyze and optimize the following fundamental components of $H$:

Physics-based time-resolved light transport matrix $C_k$. $C_k$ maps the spatial coordinates of the scene domain to the time-resolved measurement ($C_k \in \mathbb{R}^{T \times N}$). Section III derives a physical model of $C_k$ and discusses its structure and properties. $C_k$ can be modified by changing the sensor time resolution and position in the sensor’s plane.

Combination of multiple sensors. Multiple sensors can be placed in the sensor plane, such that each sensor corresponds to a different time-resolved light transport matrix $C_k$. Section IV presents an algorithm for optimized sensor placement in the sensor’s plane.

Illumination (probing) matrix $\mathrm{diag}(p_l)$. This matrix is similar to the sensing matrix in the single pixel camera case, which was realized there with an SLM. In our analysis we assume the modulation is performed on the illumination side (but note that modulating the illumination is equivalent to modulating the incoming wavefront to the sensor). The structured illumination modulates the illumination amplitude on different pixels of the target. Section V presents an algorithm for optimized illumination patterns for compressive ultrafast imaging.
The inversion of Eq. 1 is robust if there is little linear dependence among the columns of $H$ (so that it has sufficient numerical rank). This is evaluated by the mutual coherence [33], a measure of the worst-case similarity between matrix columns, defined by:

(2)  $\mu(H) = \max_{i \neq j} \dfrac{\left| h_i^T h_j \right|}{\|h_i\|_2\, \|h_j\|_2}$
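For moderately sized matrices, the worst-case coherence of Eq. 2 can be evaluated directly. A minimal sketch (the matrix here is a random stand-in for the measurement operator):

```python
import numpy as np

def mutual_coherence(H):
    """Worst-case normalized inner product between distinct columns (Eq. 2)."""
    G = H / np.linalg.norm(H, axis=0)   # normalize columns to unit norm
    gram = np.abs(G.T @ G)
    np.fill_diagonal(gram, 0.0)         # exclude the trivial self-similarity
    return gram.max()

rng = np.random.default_rng(1)
H = rng.standard_normal((64, 32))
mu = mutual_coherence(H)
assert 0.0 < mu <= 1.0                  # coherence of unit-norm columns
assert mutual_coherence(np.eye(4)) == 0.0   # orthonormal columns: zero coherence
```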
From here on, as suggested in [34], we use an alternative, computationally tractable surrogate for the mutual coherence, defined by:

(3)  $\bar{\mu}(H) = \left\| \bar{H}^T \bar{H} - I_N \right\|_F$

where $I_N$ is the identity matrix of size $N \times N$, $\bar{H}$ is $H$ with columns normalized to unity, and $\|\cdot\|_F$ is the Frobenius norm. This definition directly targets the restricted isometry property (RIP) [34], which provides guarantees for using compressive sensing. In the remainder of the paper we optimize different parts of $H$ using this measure of coherence as a cost objective.

III. Time-Resolved Light Transport
We start by developing a generic light transport model for time-resolved imaging. The finite speed of light governs information propagation, and provides geometrical constraints which will be used in the image formation model. These are conveniently described in a Minkowski space with the spacetime four-vector $(ct, x, y, z)$. If we consider a point source at position $\mathbf{r}_p$ pulsing at time $\tau$, and a sensor at position $\mathbf{r}_s$ measuring at time $t$, then the spacetime interval between the source and the sensor is defined by:

(4)  $\Delta s^2 = c^2 (t - \tau)^2 - \|\mathbf{r}_s - \mathbf{r}_p\|^2$

where $c$ is the speed of light. Enforcing causality and light-like behavior requires $\Delta s^2 = 0$, which defines the light cone. Fig. 2 shows a schematic of the light cone and demonstrates how the same event is measured at various positions at different times.
Thus, the time-resolved measurement of a sensor at position $\mathbf{r}_s$ and time $t$, for a general three-dimensional time-dependent scene $f(\mathbf{r}, \tau)$, is the integral over all points on the manifold defined by $\Delta s^2 = 0$:

(5)  $y(\mathbf{r}_s, t) = \displaystyle\int \frac{f(\mathbf{r}, \tau)}{c^2 (t - \tau)^2}\, \delta\!\left( c^2 (t - \tau)^2 - \|\mathbf{r}_s - \mathbf{r}\|^2 \right) d\mathbf{r}\, d\tau$

where the $1/\left(c(t-\tau)\right)^2$ term accounts for intensity drop-off.
Next, we assume a planar scene at distance $z_0$ from the sensor plane, a sensor at the origin of the sensor plane, and a stationary target, so that the pulse time can be set to $\tau = 0$ without loss of generality. $x$ is a discretized, lexicographically ordered representation of the target reflectance map $f$. We use the circular symmetry of the light cone so that:

(6)  $y(t) = \dfrac{1}{(ct)^2} \displaystyle\oint_{\|\mathbf{r}\| = r(t)} f(\mathbf{r})\, dl$

with $r(t) = \sqrt{c^2 t^2 - z_0^2}$. The intensity drop-off is written as a function of time since $\|\mathbf{r}_s - \mathbf{r}\| = ct$ on the light cone. Fig. 2 shows a schematic of the light cone for this case.
The sensor’s finite time resolution $\Delta t$ corresponds to the sampling of $y(t)$. A sensor positioned at location $\mathbf{r}_s$ will produce a measurement $y = C x$, where $C$ is defined by the kernel in Eq. 6. $C$ is a mapping from a two-dimensional spatial domain to a time measurement (dependent on the detector position). The kernel maps rings with varying thicknesses from the scene plane to specific time bins in the measurement. The next subsection discusses the properties of this kernel.
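The discretization of this kernel into a transport matrix can be sketched as follows. This is our illustrative reconstruction, not the paper's code: the nearest-bin quantization, the toy grid, and the $1/t^2$-style drop-off weighting are stated assumptions.

```python
import numpy as np

C_LIGHT = 3e8  # speed of light [m/s]

def transport_matrix(sensor_xy, grid_xy, z, dt, T):
    """Discretized kernel for a planar scene at distance z: pixel n lands in
    the time bin of its travel time sqrt(z^2 + r^2)/c (binned relative to
    the earliest possible arrival), weighted by a 1/t^2 drop-off."""
    r2 = np.sum((grid_xy - sensor_xy) ** 2, axis=1)   # in-plane distance^2
    t = np.sqrt(z ** 2 + r2) / C_LIGHT                # arrival time per pixel
    bins = np.floor((t - z / C_LIGHT) / dt).astype(int)
    C = np.zeros((T, len(r2)))
    valid = bins < T                                  # drop arrivals past the window
    C[bins[valid], np.flatnonzero(valid)] = 1.0 / t[valid] ** 2
    return C

# toy 8x8 scene of side 1 m, placed 1 m away, 10 ps time bins
xs = np.linspace(-0.5, 0.5, 8)
grid = np.array([(x, y) for x in xs for y in xs])
C = transport_matrix(np.array([0.0, 0.0]), grid, z=1.0, dt=10e-12, T=80)
assert C.shape == (80, 64)
assert np.all(C.sum(axis=0) > 0)   # every pixel maps to some ring/bin
```

Each row of `C` collects the pixels of one ring; finer `dt` spreads the pixels over more rows.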
III-A. One-Dimensional Analysis
It is interesting to analyze $C$ in a planar world, with a one-dimensional scene at distance $z_0$ and the sensor at the origin (Fig. 3). In that case Eq. 6 simplifies to:

(7)  $y(t) = \dfrac{1}{(ct)^2}\left[ f\left(x(t)\right) + f\left(-x(t)\right) \right], \qquad x(t) = \sqrt{c^2 t^2 - z_0^2}$
Fig. 3c shows an example of the corresponding matrix $C$. This simple example demonstrates the key properties of the time-resolved measurement: 1) It is a nonlinear mapping of space to time. 2) The mapping is not unique (two opposite space points are mapped to the same time slot). 3) Spatial points that are close to the sensor are undersampled (adjacent pixels are mapped to the same time slot). 4) Spatial points that are far from the sensor are oversampled, but they produce a weaker signal. These properties affect imaging parameters as described next:
III-A1. Resolution limit
The minimum recoverable spatial resolution is defined by the closest point to the sensor, at distance $z_0$ (arrival time $z_0/c$), and the point that corresponds to the next time slot, at path length $z_0 + c\Delta t$, which results in:

(8)  $\Delta x = \sqrt{(z_0 + c\Delta t)^2 - z_0^2} = \sqrt{2 z_0 c \Delta t + (c\Delta t)^2}$
Fig. 4 shows a few cross sections of Eq. 8 for relevant distances $z_0$ and time resolutions $\Delta t$. Better time resolution is required for farther scenes (for the same recoverable resolution).
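Under the geometry above (closest point at distance $z_0$, next time slot at path length $z_0 + c\Delta t$), the resolution limit is easy to tabulate. A sketch; the closed form is our reconstruction of Eq. 8 from that geometry:

```python
import numpy as np

C_LIGHT = 3e8  # speed of light [m/s]

def resolution_limit(z, dt):
    """In-plane offset whose arrival falls one time bin after the closest
    point at distance z: Delta_x = sqrt((z + c*dt)^2 - z^2)."""
    return np.sqrt((z + C_LIGHT * dt) ** 2 - z ** 2)

# finer time bins and closer scenes both shrink the recoverable resolution
assert resolution_limit(1.0, 10e-12) < resolution_limit(2.0, 10e-12)
assert resolution_limit(1.0, 1e-12) < resolution_limit(1.0, 10e-12)
```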
III-A2. Signal to noise ratio and dynamic range limitation
The closest point to the sensor defines the measurement gain, which must be set low enough to avoid saturation on the strongest (closest) return. The furthest measurable point from the sensor is then the one whose weaker return still lies above the noise floor. Since the signal to noise ratio (SNR) is proportional to the sensor gain, closer scenes will have smaller coverage areas.
The combined effect of these phenomena is demonstrated in Fig. 5. In this example, we consider a ‘half plane’ target whose measurement is corrupted by additive white Gaussian noise. For this simple demonstration, we use the Moore–Penrose pseudoinverse to invert the system. The inversion shows that close to the origin the reconstruction suffers from an undersampled measurement; this area is not sensitive to the measurement noise, and looks identical with zero noise. The noise has an obvious effect on the reconstruction farther from the origin.
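A few lines reproduce the flavor of this demonstration. This is a sketch with assumed parameters (64 pixels at 1 cm pitch, a 10 cm sensor offset, 50 ps bins, a binary kernel with the drop-off omitted); the paper's exact values and noise level differ.

```python
import numpy as np

C_LIGHT = 3e8  # speed of light [m/s]

# 1D world: scene pixels along a line, sensor offset z0 from the line
N, dt, pitch, z0 = 64, 50e-12, 0.01, 0.1
x = (np.arange(N) + 0.5) * pitch                # pixel positions on the line [m]
t = np.sqrt(z0 ** 2 + x ** 2) / C_LIGHT         # nonlinear space-to-time mapping
bins = np.floor((t - z0 / C_LIGHT) / dt).astype(int)
C = np.zeros((bins.max() + 1, N))
C[bins, np.arange(N)] = 1.0                     # binary kernel (drop-off omitted)

rng = np.random.default_rng(2)
f = rng.random(N)                               # target reflectance
y = C @ f + 0.01 * rng.standard_normal(C.shape[0])
f_hat = np.linalg.pinv(C) @ y                   # Moore-Penrose inversion

# near pixels share a time bin (undersampling): the pseudoinverse spreads
# each bin's energy evenly across the pixels that map to it
assert bins[0] == bins[1]
assert f_hat.shape == (N,)
```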
III-B. Analysis of a planar scene
All the properties presented in the previous subsection extend to the case of a planar scene. Eq. 6 shows that the measurement process integrates over circles centered around the sensor. Due to the finite time resolution, the circles are mapped to rings. The rings are thinner for farther points: the $k$-th ring has outer radius $\sqrt{c^2 (t_0 + k\Delta t)^2 - z_0^2}$, where $k$ is the time sample number and $t_0$ is the time of arrival from the closest point. Fig. 6 shows the ring structure for a few cases of time resolution and target distance.
Lastly, Eq. 6 provides the structure of the matrix $C$, and guidelines for the effects of changing the sensor time resolution and position on the measurement matrix $H$. Naturally, better time resolution will reduce the mutual coherence. An alternative to improved time resolution, which might be technically challenging, is to add more sensors, as discussed next.
IV. Optimized Time-Resolved Sensor Array
Using multiple sensors is a natural extension of the single pixel camera. When the sensors are time sensitive, their positioning affects the measurement matrix, and so it can be optimized. Here we derive an algorithm for placing sensors in an array so as to reduce the mutual coherence of $H$. To simplify the array structure, we constrain the sensors to a single plane and to an allowed physical area. The algorithm accepts two parameters, the number of sensors $K$ and the allowed physical area $A$, and provides the ideal positions under these constraints.
Starting with Eq. 6, the goal is to maximize the difference between the transport matrices $C_i$ and $C_j$ of different sensors. This is achieved by choosing sensor positions that are furthest apart (to minimize overlap of the rings, as shown in Fig. 6). More precisely, the goal is to select $K$ positions $\{\mathbf{r}_k\}$ within the area $A$ such that the minimal distance between sensors is maximized:

(9)  $\{\mathbf{r}_k^{*}\} = \arg\max_{\{\mathbf{r}_k\} \subset A}\; \min_{i \neq j} \|\mathbf{r}_i - \mathbf{r}_j\|$
Eq. 9 can be solved by a grid search for a small number of sensors. A more general solution is to relax the problem and follow the equivalent of a Max-Lloyd quantizer [35]. The steps are as follows:

Initialize random positions in the allowed area

Repeat until convergence:

Calculate the Voronoi diagram of the point set.

Move each sensor position to the center of its cell.
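The iteration above can be sketched as follows. This is an illustrative implementation, not the paper's: the Voronoi cells are approximated on a discrete grid rather than computed exactly, and `lloyd_placement` and its parameters are our names.

```python
import numpy as np

def lloyd_placement(K, side, iters=50, grid_res=100, seed=0):
    """Relaxed sensor placement in a side x side square: alternate between a
    (grid-sampled) Voronoi assignment and moving each sensor to the centroid
    of its cell, in the spirit of the Max-Lloyd quantizer [35]."""
    rng = np.random.default_rng(seed)
    pts = rng.random((K, 2)) * side                       # 1) random init
    g = (np.arange(grid_res) + 0.5) / grid_res * side
    gx, gy = np.meshgrid(g, g)
    grid = np.column_stack([gx.ravel(), gy.ravel()])
    for _ in range(iters):                                # 2) repeat
        d2 = ((grid[:, None, :] - pts[None, :, :]) ** 2).sum(-1)
        owner = d2.argmin(axis=1)                         # Voronoi assignment
        for k in range(K):
            cell = grid[owner == k]
            if len(cell):                                 # non-empty cell
                pts[k] = cell.mean(axis=0)                # move to centroid
    return pts

pts = lloyd_placement(4, side=1.0)
assert pts.shape == (4, 2)
assert np.all((pts >= 0) & (pts <= 1))                    # stays in the square
```

For four sensors in a unit square the iteration settles into a well-spread configuration, which is the behavior Eq. 9 asks for.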

This positioning algorithm is evaluated for various system parameters by assessing the effect of the sensor time resolution, the number of sensors, and the array size (squares of varying sizes) on the coherence cost objective (Eq. 3). Fig. 7 shows these results. Several key features of the system appear in this analysis: 1) Improving the time resolution reduces the number of required sensors nonlinearly. 2) It is always beneficial to improve the sensors’ time resolution. 3) The sensor area defines a maximum number of useful sensors, beyond which there is no significant decrease in the mutual coherence (increasing the array size linearly reduces the mutual coherence for a fixed number of sensors). 4) It is possible to trade off different aspects of the system’s hardware by traveling along the contours. For example, a decrease in the sensor time resolution can be balanced by adding more sensors. This can be useful when realizing an imaging system, as sensors with lower time resolution are less expensive and easier to manufacture. The next section provides an alternative to improving hardware by adding structured illumination.
V. Optimized Active Illumination for Time-Resolved Sensor Array
We now make the leap to compressive sensing. Previous sections discussed single sensor considerations and sensor placement in an array. This section covers ideal active illumination patterns. We assume the illumination wavefront is amplitude-modulated; this can be physically achieved with an SLM or a liquid crystal display (LCD).
When considering different illumination patterns, Hadamard patterns and random patterns sampled from a Bernoulli distribution are normally chosen. Instead, we suggest patterns that directly aim to minimize the mutual coherence of the measurement matrix. The mathematical patterns may have negative values, which can be realized by taking a measurement with an “all on” pattern and subtracting it from the other measurements (due to the linearity of the system) [14]. In order to optimize the illumination patterns, we follow the proposal in [34] and learn the sensing matrix (illumination patterns) so as to directly minimize the coherence of the measurement matrix. This concept has been reduced to practice, for example, in [36].
The crux of the idea is that, given a number of allowed illumination patterns $M$, we choose the set of patterns that minimizes the mutual coherence of the matrix $H$. Since the illumination matrix performs pixel-wise modulation of the target, it is a diagonal matrix with the pattern values on the diagonal, $\mathrm{diag}(p_l)$, where $p_l$ is a vector containing the $l$-th pattern values. Taking a closer look at Eq. 1, we stack all the sensor matrices into $C = [C_1^T, \dots, C_K^T]^T$ such that:

(10)  $H = \begin{bmatrix} C\, \mathrm{diag}(p_1) \\ \vdots \\ C\, \mathrm{diag}(p_M) \end{bmatrix}$
Based on Eq. 3, the ideal patterns are the solution to:

(11)  $\{p_l^{*}\}_{l=1}^{M} = \arg\min_{\{p_l\}} \left\| \bar{H}^T \bar{H} - I_N \right\|_F^2$
This can be solved with standard constrained optimization solvers. Appendix A provides the derivation for the cost function and its gradient.
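A minimal numerical version of this optimization can be sketched as follows. It is a sketch, not the paper's solver: the transport matrix is a random stand-in, the analytic gradient of Appendix A is replaced by a finite-difference one, and plain gradient descent with backtracking stands in for a standard constrained solver.

```python
import numpy as np

def coherence_cost(p_flat, C, M, N):
    """Squared Frobenius coherence (Eq. 3) of H = [C diag(p_1); ...; C diag(p_M)]."""
    G = np.vstack([C * p for p in p_flat.reshape(M, N)])  # C * p == C @ diag(p)
    G = G / np.linalg.norm(G, axis=0)                     # unit-norm columns
    return np.sum((G.T @ G - np.eye(N)) ** 2)

def fd_grad(f, x, eps=1e-6):
    """Central finite-difference gradient (stand-in for Appendix A's formula)."""
    g = np.zeros_like(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = eps
        g[i] = (f(x + e) - f(x - e)) / (2 * eps)
    return g

rng = np.random.default_rng(3)
T, N, M = 12, 16, 3
C = rng.random((T, N))                         # stand-in transport matrix
p = rng.standard_normal(M * N)                 # random initial patterns
f = lambda q: coherence_cost(q, C, M, N)
f0 = f(p)
for _ in range(30):                            # descent with backtracking
    g, step = fd_grad(f, p), 1.0
    while step > 1e-10 and f(p - step * g) >= f(p):
        step *= 0.5
    cand = p - step * g
    if f(cand) < f(p):
        p = cand
assert f(p) <= f0                              # optimized patterns do no worse
```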
Fig. 8 shows the change in mutual coherence in simulations while varying the number of allowed illumination patterns, the sensor time resolution, and the number of sensors. As predicted by CS theory, increasing the number of patterns has a strong effect on the mutual coherence. This strong effect makes it easy to relax the hardware requirements when needed. However, as more patterns are allowed, the dependence on the sensors’ parameters grows. This demonstrates the synergy between compressive sensing and time-resolved sensing. In this case, traveling along mutual coherence contours allows one to trade off system complexity (cost, size, power) against acquisition time (which increases when more patterns are required).
Fig. 9a shows several examples of the patterns computed by solving Eq. 11. Fig. 9b demonstrates the value of the optimized patterns compared to Hadamard patterns and random patterns sampled from Gaussian and Bernoulli distributions. For very few illumination patterns (below ten) all choices are comparable. When more illumination patterns are allowed, the optimized patterns perform better, reducing the mutual coherence faster than the other approaches. As predicted, the performance of Hadamard, Gaussian and Bernoulli patterns is nearly identical.
VI. Numerical Results
This section demonstrates target reconstruction using the above analysis. The target is a planar scene of $N$ pixels placed at a fixed distance from the detector plane, and the detector array is limited to a square area. Detectors are placed with the method described in section IV, and the illumination patterns are computed using the algorithm suggested in section V. The measurement operator is simulated as described in section III to produce the total measurement vector. White Gaussian noise is added to the total measurement vector at a fixed measurement SNR. The targets simulated here are natural scenes (sparse in the gradient domain). To invert Eq. 1 we use TVAL3 [37] (with TVL2 and the same regularization parameter for all targets). The reconstruction quality is evaluated with both Peak Signal to Noise Ratio (PSNR — higher is better, a pointwise comparison) and the Structural Similarity index (SSIM — higher is better with a maximum of 1, which takes into account the spatial structure of the image [38]).
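For reference, the PSNR metric is straightforward to compute (a minimal sketch; SSIM requires the windowed statistics of [38] and is omitted here):

```python
import numpy as np

def psnr(ref, rec, peak=1.0):
    """Peak signal-to-noise ratio in dB (pointwise comparison) for images
    scaled to [0, peak]."""
    mse = np.mean((ref - rec) ** 2)
    return 10 * np.log10(peak ** 2 / mse)

ref = np.full((8, 8), 0.5)
assert abs(psnr(ref, ref + 0.01) - 40.0) < 1e-6      # MSE 1e-4 -> 40 dB
assert psnr(ref, ref + 0.001) > psnr(ref, ref + 0.01)  # smaller error, higher PSNR
```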
So far, the discussion focused on reducing the mutual coherence of the measurement matrix $H$. Fig. 10 demonstrates the effect on the full reconstruction process. The target used is the cameraman image (Fig. 11, right). The goal is to find the minimal number of illumination patterns that produces a reconstruction quality above fixed SSIM and PSNR thresholds. This is repeated for various numbers of detectors with different time resolutions. The trends demonstrate a linear relationship between the number of illumination patterns and the detector time resolution needed for a specified reconstruction quality. Another notable effect is the significant gain in the transition from one to two detectors, followed by a diminishing gain for additional detectors. This gain decreases as the detector time resolution improves. These trends can be used to trade off design constraints. For example, for the specified reconstruction quality the user can choose one detector with a fine time resolution and 80 patterns; the same acquisition time can be maintained with two detectors of coarser time resolution. Alternatively, two detectors with the finer time resolution require only 40 patterns (shorter acquisition time) for equal reconstruction quality.
Finally, we compare the suggested design framework to a traditional (non-time-aware) single pixel camera. This is simulated with a measurement matrix containing just a single row of ones per sensor. The illumination patterns are sampled from a Bernoulli random distribution, in a similar way to the original single pixel camera experiments [3]. Fig. 11 shows the results for three different targets. Reconstructions with a traditional single pixel camera are shown in Fig. 11b for two different numbers of patterns. Four different design points of compressive ultrafast imaging, spanning different numbers of sensors and time resolutions, are demonstrated in Fig. 11c, all with the same number of patterns (such that the acquisition time is equal). Several results are worth noting:

Reconstruction at the stronger design points achieves perfect quality based on SSIM for all targets.

Reconstruction at these design points outperforms the traditional single pixel camera approach with fewer illumination patterns, demonstrating the potential gain of this approach.

A traditional single pixel reconstruction with the same number of patterns (i.e. the same acquisition time as the compressive ultrafast imaging design points discussed) fails to recover the scene information.

There is a significant gain in performance when improving the sensor time resolution.
VII. Discussion
Section V analyzed only wavefront amplitude modulation. There are many other ways to use coded active illumination to minimize the measurement coherence. For example, we assumed the wavefront is just a pulse in time, but coding can also be performed in the time domain. This would cause different pixels on the target image to be illuminated at different times. Physical implementation of such delays is possible with, for example, tilted illumination and fiber bundles (notice that while a phase SLM induces varying time delays on the wavefront, these time scales are shorter than the resolution of current time-resolved sensors). Analysis of such an implementation requires careful treatment of the interplay between the transport and illumination matrices (since the transport matrix becomes time-dependent); we leave this analysis to a future study.
The forward model (Eq. 5) assumes the wave nature of light is negligible. This assumption is valid if: 1) Diffraction is negligible: the scene’s spatial features are significantly larger than the illumination wavelength. 2) Interference is negligible: the coherence length of the illumination source is significantly smaller than the geometrical features. For pulsed lasers the coherence length is inversely proportional to the pulse bandwidth, which usually results in very short coherence lengths.
The suggested approach provides a framework for lensless imaging with compressive ultrafast sensing. This framework provides the user with design tools for situations in which lensless imaging is essential. It allows the user to effectively balance available resources — an important tool since the hardware requirements can be substantial (pulsed source with structured illumination and timeresolved sensors). We note that timeresolved sensors are becoming more accessible with the recent advances in CMOS based SPAD devices (e.g. [5]). Another limitation of our approach is the requirement of known geometry. Interestingly, the approach suggested in [24, 25] requires similar hardware to recover scene geometry without reflectance, hence it might be possible to fuse the two approaches in the future.
VIII. Conclusion
We demonstrated a novel compressive imaging architecture that uses ultrafast sensors with active illumination for lensless imaging. We discussed analysis tools for hardware design, as well as algorithms for ideal sensor placement and illumination patterns that directly target the RIP for robust inversion with compressive deconvolution. The presented approach allows lensless single pixel imaging with dramatically better acquisition times compared to previous results. This enables novel lensless single pixel imaging in challenging environments. The approach and analysis presented here open new avenues for other areas with potential tight coupling between novel sensors and compressive sensing algorithms.
Appendix A Illumination Pattern Optimization Algorithm
Here we provide a derivation for calculating the cost function in Eq. 11 and its gradient. Starting with the cost function to minimize:

(12)  $\bar{\mu}^2(P) = \left\| \bar{H}^T \bar{H} - I_N \right\|_F^2$
Define $P$ such that its $l$-th row is $p_l^T$ ($P$ is an $M \times N$ matrix). Our goal is to find the $P$ which minimizes $\bar{\mu}^2$.
We start by expanding $\bar{H}^T \bar{H}$, and so:
(13) 
Next, we define $\bar{H} = H D$, where $D$ is a diagonal matrix with the inverses of the column norms of $H$:
(14) 
This allows us to write:
(15) 
where $\mathrm{diag}(p_l)$ is a diagonal matrix with the $l$-th row of $P$ on the diagonal. Next we note that:
(16) 
and:
(17) 
which can be simplified to:
(18) 
with
(19) 
Lastly, we define:
(20) 
such that:
(21) 
which allows us to write:
(22) 
where $\mathrm{diag}(\cdot)$ denotes a diagonal matrix built from the diagonal entries of its argument, and $\circ$ denotes element-wise multiplication. Finally:
(23) 
We note that the matrix in Eq. 19 is constant with respect to the illumination pattern optimization and can be calculated a priori. Eqs. 20 and 23 provide the final expression for the cost function.
We now develop an expression for the gradient of the cost function, using the chain rule for matrices [39]:

(24) 

Starting with the first term, when the indices coincide:
(25) 
and, when they differ:
(26) 
where $\delta_{ij}$ is the Kronecker delta. The second term in Eq. 24 is given by [39]:
(27) 
Combining Eqs. 24 through 27 we get:
(28) 
where the two bracketed terms are those in Eqs. 25 and 26, respectively. After some algebra we get:
(29)  
Lastly, we define:
(30) 
where the auxiliary matrix has all ones except for zeros on the diagonal, which allows us to write the final gradient as:
(31) 
where $\mathbf{1}$ is an all-ones matrix.
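In practice, a derived gradient such as Eq. 31 is worth verifying numerically before plugging it into a solver. A generic central-difference check, demonstrated on a small stand-in coherence cost (the matrix `B`, the dimensions, and `fd_gradient` are illustrative, not from the derivation):

```python
import numpy as np

def fd_gradient(f, x, eps=1e-6):
    """Central finite-difference gradient; a sanity check for analytic
    gradients such as Eq. 31."""
    g = np.zeros_like(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = eps
        g[i] = (f(x + e) - f(x - e)) / (2 * eps)
    return g

# stand-in coherence cost with M = 2 patterns over a random 10 x 6 matrix
rng = np.random.default_rng(4)
B = rng.random((10, 6))

def cost(p_flat):
    G = np.vstack([B * p for p in p_flat.reshape(2, 6)])
    G = G / np.linalg.norm(G, axis=0)
    return np.sum((G.T @ G - np.eye(6)) ** 2)

p = rng.standard_normal(12)
g = fd_gradient(cost, p)
d = rng.standard_normal(12)
d /= np.linalg.norm(d)
h = 1e-6
directional = (cost(p + h * d) - cost(p - h * d)) / (2 * h)
assert abs(directional - g @ d) < 1e-4   # gradient consistent with the slope
```

The same check, with `cost` replaced by an implementation of Eq. 12 and `fd_gradient` compared against Eq. 31, flags sign or indexing mistakes in the derivation immediately.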
References
 [1] D. Donoho, “Compressed sensing,” IEEE Trans. Inf. Theory, vol. 52, no. 4, pp. 1289–1306, Apr. 2006.
 [2] E. J. Candes and T. Tao, “Near-optimal signal recovery from random projections: universal encoding strategies?” IEEE Trans. Inf. Theory, vol. 52, no. 12, pp. 5406–5425, Dec. 2006.
 [3] M. Duarte, M. Davenport, D. Takhar, J. Laska, K. Kelly, and R. Baraniuk, “Single-pixel imaging via compressive sampling,” IEEE Sig. Proc. Mag., vol. 25, no. 2, pp. 83–91, Mar. 2008.
 [4] K. Scheidt, “Review of streak cameras for accelerators: features, applications and results,” in Proc. of EPAC, 2000.
 [5] J. A. Richardson, L. A. Grant, and R. K. Henderson, “Low dark count single-photon avalanche diode structure compatible with standard nanometer scale CMOS technology,” IEEE Photon. Technol. Lett., vol. 21, no. 14, pp. 1020–1022, Jul. 2009.
 [6] I. August, Y. Oiknine, M. Abu-Leil, I. Abdulhalim, and A. Stern, “Miniature compressive ultra-spectral imaging system utilizing a single liquid crystal phase retarder,” Sci. Rep., vol. 6, p. 23524, Mar. 2016.
 [7] A. Szameit, Y. Shechtman, E. Osherovich, E. Bullkich, P. Sidorenko, H. Dana, S. Steiner, E. B. Kley, S. Gazit, T. Cohen-Hyams, S. Shoham, M. Zibulevsky, I. Yavneh, Y. C. Eldar, O. Cohen, and M. Segev, “Sparsity-based single-shot subwavelength coherent diffractive imaging,” Nat. Mater., vol. 11, no. 5, pp. 455–459, Apr. 2012.
 [8] J. Polans, R. P. McNabb, J. A. Izatt, and S. Farsiu, “Compressed wavefront sensing,” Opt. Lett., vol. 39, no. 5, p. 1189, Mar. 2014.
 [9] D. J. Brady, K. Choi, D. L. Marks, R. Horisaki, and S. Lim, “Compressive holography,” Opt. Express, vol. 17, no. 15, p. 13040, Jul. 2009.
 [10] A. Liutkus, D. Martina, S. Popoff, G. Chardon, O. Katz, G. Lerosey, S. Gigan, L. Daudet, and I. Carron, “Imaging with nature: compressive imaging using a multiply scattering medium,” Sci. Rep., vol. 4, p. 489, Jul. 2014.
 [11] C. M. Watts, D. Shrekenhamer, J. Montoya, G. Lipworth, J. Hunt, T. Sleasman, S. Krishna, D. R. Smith, and W. J. Padilla, “Terahertz compressive imaging with metamaterial spatial light modulators,” Nat. Photon., vol. 8, no. 8, pp. 605–609, Jun. 2014.
 [12] B. T. Bosworth, J. R. Stroud, D. N. Tran, T. D. Tran, S. Chin, and M. A. Foster, “High-speed compressed sensing measurement using spectrally-encoded ultrafast laser pulses,” in IEEE Information Sciences and Systems (CISS), 2015.
 [13] L. Gao, J. Liang, C. Li, and L. V. Wang, “Single-shot compressed ultrafast photography at one hundred billion frames per second,” Nature, vol. 516, no. 7529, pp. 74–77, 2014.
 [14] S. Bahmani and J. Romberg, “Compressive deconvolution in random mask imaging,” IEEE Trans. Comput. Imag., vol. 1, no. 4, pp. 236–246, Dec. 2015.
 [15] J. Romberg, “Imaging via compressive sampling,” IEEE Sig. Proc. Mag., vol. 25, no. 2, pp. 14–20, Mar. 2008.
 [16] T. B. Pittman, Y. H. Shih, D. V. Strekalov, and A. V. Sergienko, “Optical imaging by means of two-photon quantum entanglement,” Phys. Rev. A, vol. 52, no. 5, pp. R3429–R3432, Nov. 1995.
 [17] J. Shapiro, “Computational ghost imaging,” Phys. Rev. A, vol. 78, no. 6, p. 061802, Dec. 2008.
 [18] O. Katz, Y. Bromberg, and Y. Silberberg, “Compressive ghost imaging,” Appl. Phys. Lett., vol. 95, no. 13, p. 131110, 2009.
 [19] V. Katkovnik and J. Astola, “Compressive sensing computational ghost imaging,” J. Opt. Soc. Am. A Opt. Image Sci. Vis., vol. 29, no. 8, pp. 1556–1567, Aug. 2012.
 [20] P. Sen, B. Chen, G. Garg, S. R. Marschner, M. Horowitz, M. Levoy, and H. P. A. Lensch, “Dual photography,” ACM Trans. Graph., vol. 24, no. 3, pp. 745–755, Jul. 2005.
 [21] B. Sun, M. P. Edgar, R. Bowman, L. E. Vittert, S. Welsh, A. Bowman, and M. J. Padgett, “3D computational imaging with single-pixel detectors,” Science, vol. 340, no. 6134, pp. 844–847, May 2013.
 [22] S. S. Welsh, M. P. Edgar, R. Bowman, P. Jonathan, B. Sun, and M. J. Padgett, “Fast full-color computational imaging with single-pixel detectors,” Opt. Express, vol. 21, no. 20, p. 23068, Oct. 2013.
 [23] B. Schwarz, “Lidar: mapping the world in 3D,” Nat. Photon., vol. 4, no. 7, pp. 429–430, Jul. 2010.
 [24] A. Kirmani, A. Colaço, F. N. C. Wong, and V. K. Goyal, “Exploiting sparsity in time-of-flight range acquisition using a single time-resolved sensor,” Opt. Express, vol. 19, no. 22, p. 21485, Oct. 2011.

 [25] A. Colaço, A. Kirmani, G. A. Howland, J. C. Howell, and V. K. Goyal, “Compressive depth map acquisition using a single photon-counting detector: parametric signal processing meets sparsity,” in IEEE Computer Vision and Pattern Recognition (CVPR), 2012.
 [26] D. Wu, G. Wetzstein, C. Barsi, T. Willwacher, Q. Dai, and R. Raskar, “Ultrafast lensless computational imaging through 5D frequency analysis of time-resolved light transport,” Int. J. Comput. Vis., vol. 110, no. 2, pp. 128–140, Nov. 2014.
 [27] A. Kirmani, H. Jeelani, V. Montazerhodjat, and V. K. Goyal, “Diffuse imaging: creating optical images with unfocused time-resolved illumination and sensing,” IEEE Sig. Proc. Lett., vol. 19, no. 1, pp. 31–34, Jan. 2012.
 [28] A. Velten, T. Willwacher, O. Gupta, A. Veeraraghavan, M. G. Bawendi, and R. Raskar, “Recovering three-dimensional shape around a corner using ultrafast time-of-flight imaging,” Nat. Commun., vol. 3, p. 745, Jan. 2012.

 [29] G. Satat, B. Heshmat, C. Barsi, D. Raviv, O. Chen, M. G. Bawendi, and R. Raskar, “Locating and classifying fluorescent tags behind turbid layers using time-resolved inversion,” Nat. Commun., vol. 6, 2015.
 [30] G. Satat, B. Heshmat, D. Raviv, and R. Raskar, “All photons imaging through volumetric scattering,” Sci. Rep., vol. 6, 2016.
 [31] A. Kadambi, H. Zhao, B. Shi, and R. Raskar, “Occluded imaging with time-of-flight sensors,” ACM Trans. Graph., vol. 35, no. 2, p. 15, 2016.
 [32] F. Heide, L. Xiao, W. Heidrich, and M. B. Hullin, “Diffuse mirrors: 3D reconstruction from diffuse indirect illumination using inexpensive time-of-flight sensors,” in IEEE Computer Vision and Pattern Recognition (CVPR), 2014.
 [33] M. Elad, “Optimized projections for compressed sensing,” IEEE Trans. Sig. Proc., vol. 55, no. 12, pp. 5695–5702, Dec. 2007.
 [34] J. M. Duarte-Carvajalino and G. Sapiro, “Learning to sense sparse signals: simultaneous sensing matrix and sparsifying dictionary optimization,” IEEE Trans. Image Process., vol. 18, no. 7, pp. 1395–1408, Jul. 2009.
 [35] G. Peyré and L. D. Cohen, “Geodesic remeshing using front propagation,” Int. J. Comput. Vis., vol. 69, no. 1, pp. 145–156, Aug. 2006.
 [36] K. Marwah, G. Wetzstein, Y. Bando, and R. Raskar, “Compressive light field photography using overcomplete dictionaries and optimized projections,” ACM Trans. Graph., vol. 32, no. 4, p. 1, Jul. 2013.
 [37] C. Li, W. Yin, and Y. Zhang, “User’s guide for TVAL3: TV minimization by augmented Lagrangian and alternating direction algorithms,” CAAM Report, 2009.
 [38] Z. Wang, A. Bovik, H. Sheikh, and E. Simoncelli, “Image quality assessment: from error visibility to structural similarity,” IEEE Trans. Image Process., vol. 13, no. 4, pp. 600–612, Apr. 2004.
 [39] K. B. Petersen, M. S. Pedersen, and Others, “The matrix cookbook,” Technical University of Denmark, vol. 7, 2008.