Compressive Holographic Video

10/27/2016 · by Zihao Wang, et al.

Compressed sensing has been discussed separately in the spatial and temporal domains. Compressive holography has been introduced as a method that allows 3D tomographic reconstruction at different depths from a single 2D image. Coded exposure is a temporal compressed sensing method for high-speed video acquisition. In this work, we combine compressive holography and coded exposure techniques and extend the discussion to 4D reconstruction in space and time from a single coded capture. In our prototype, digital in-line holography is used to image macroscopic, fast-moving objects, and pixel-wise temporal modulation is implemented with a digital micromirror device. We demonstrate 10× temporal super resolution with recovery of multiple depths from a single image. Two examples are presented: recording subtle vibrations and tracking small particles within 5 ms.


References

  • [1] E. Candès, J. Romberg, and T. Tao, "Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information," IEEE Trans. Inform. Theory 52(2), 489–509 (2006).
  • [2] E. Candès and T. Tao, "Near-optimal signal recovery from random projections: universal encoding strategies?," IEEE Trans. Inform. Theory 52(12), 5406–5425 (2006).
  • [3] D. L. Donoho, "Compressed sensing," IEEE Trans. Inform. Theory 52(4), 1289–1306 (2006).
  • [4] M. Lustig, D. L. Donoho, J. M. Santos, and J. M. Pauly, "Compressed sensing MRI," IEEE Signal Processing Magazine 25(2), 72–82 (2008).
  • [5] L. Gan, "Block compressed sensing of natural images," International Conference on Digital Signal Processing, 403–406 (2007).
  • [6] D. Brady, K. Choi, D. Marks, R. Horisaki, and S. Lim, "Compressive holography," Opt. Express 17(15), 13040–13049 (2009).
  • [7] D. Gabor, "A new microscopic principle," Nature 161(4098), 777–778 (1948).
  • [8] P. Memmolo, L. Miccio, M. Paturzo, G. Di Caprio, G. Coppola, P. A. Netti, and P. Ferraro, "Recent advances in holographic 3D particle tracking," Adv. Opt. Photon. 7(4), 713–755 (2015).
  • [9] L. Xu, X. Peng, J. Miao, and A. K. Asundi, "Studies of digital microscopic holography with applications to microstructure testing," Appl. Opt. 40(28), 5046–5051 (2001).
  • [10] T. Su, L. Xue, and A. Ozcan, "High-throughput lensfree 3D tracking of human sperms reveals rare statistics of helical trajectories," Proceedings of the National Academy of Sciences 109(40), 16018–16022 (2012).
  • [11] Q. Lü, Y. Chen, R. Yuan, B. Ge, Y. Gao, and Y. Zhang, "Trajectory and velocity measurement of a particle in spray by digital holography," Appl. Opt. 48, 7000–7007 (2009).
  • [12] L. Dixon, F. C. Cheong, and D. G. Grier, "Holographic deconvolution microscopy for high-resolution particle tracking," Opt. Express 19, 16410–16417 (2011).
  • [13] M. J. Saxton and K. Jacobson, "Single-particle tracking: applications to membrane dynamics," Annual Review of Biophysics and Biomolecular Structure 26(1), 373–399 (1997).
  • [14] J. Katz and J. Sheng, "Applications of holography in fluid mechanics and particle dynamics," Annual Review of Fluid Mechanics 42, 531–555 (2010).
  • [15] L. Tian, N. Loomis, J. A. Domínguez-Caballero, and G. Barbastathis, "Quantitative measurement of size and three-dimensional position of fast-moving bubbles in air-water mixture flows using digital holography," Appl. Opt. 49(9), 1549–1554 (2010).
  • [16] W. Xu, M. H. Jericho, H. J. Kreuzer, and I. A. Meinertzhagen, "Tracking particles in four dimensions with in-line holographic microscopy," Opt. Lett. 28(3), 164–166 (2003).
  • [17] B. J. Nilsson and T. E. Carlsson, "Simultaneous measurement of shape and deformation using digital light-in-flight recording by holography," Opt. Eng. 39, 244–253 (2000).
  • [18] M. K. Kim, Digital Holographic Microscopy (Springer, 2011).
  • [19] J. Garcia-Sucerquia, W. Xu, S. K. Jericho, P. Klages, M. H. Jericho, and H. J. Kreuzer, "Digital in-line holographic microscopy," Appl. Opt. 45(5), 836–850 (2006).
  • [20] W. Xu, M. H. Jericho, I. A. Meinertzhagen, and H. J. Kreuzer, "Digital in-line holography for biological applications," Proceedings of the National Academy of Sciences 98(20), 11301–11305 (2001).
  • [21] X. Yu, J. Hong, C. Liu, and M. K. Kim, "Review of digital holographic microscopy for three-dimensional profiling and tracking," Optical Engineering 53(11), 112306 (2014).
  • [22] N. Salah, G. Godard, D. Lebrun, P. Paranthoën, D. Allano, and S. Coëtmellec, "Application of multiple exposure digital in-line holography to particle tracking in a Bénard–von Kármán vortex flow," Measurement Science and Technology 19(7), 074001 (2008).
  • [23] X. Yu, J. Hong, C. Liu, and M. K. Kim, "Review of digital holographic microscopy for three dimensional profiling and tracking," Opt. Eng. 53, 112306 (2014).
  • [24] S. Lim, D. L. Marks, and D. J. Brady, "Sampling and processing for compressive holography," Appl. Opt. 50, H75–H86 (2011).
  • [25] Y. Rivenson, A. Stern, and B. Javidi, "Compressive Fresnel holography," Journal of Display Technology 6(10), 506–509 (2010).
  • [26] Y. Liu, L. Tian, J. W. Lee, H. Y. Huang, M. S. Triantafyllou, and G. Barbastathis, "Scanning-free compressive holography for object localization with subpixel accuracy," Opt. Lett. 37(16), 3357–3359 (2012).
  • [27] J. Song, C. L. Swisher, H. Im, S. Jeong, D. Pathania, Y. Iwamoto, M. Pivovarov, R. Weissleder, and H. Lee, "Sparsity-based pixel super resolution for lens-free digital in-line holography," Scientific Reports 6 (2016).
  • [28] J. Hahn, S. Lim, K. Choi, R. Horisaki, and D. J. Brady, "Video-rate compressive holographic microscopic tomography," Opt. Express 19, 7289–7298 (2011).
  • [29] M. M. Marim, M. Atlan, E. Angelini, and J.-C. Olivo-Martin, "Compressed sensing with off-axis frequency-shifting holography," Opt. Lett. 35, 871–873 (2010).
  • [30] C. F. Cull, D. A. Wikner, J. N. Mait, M. Mattheiss, and D. J. Brady, "Millimeter-wave compressive holography," Appl. Opt. 49, E67–E82 (2010).
  • [31] R. Horisaki, Y. Ogura, M. Aino, and J. Tanida, "Single-shot phase imaging with a coded aperture," Opt. Lett. 39(22), 6466–6469 (2014).
  • [32] R. Egami, R. Horisaki, L. Tian, and J. Tanida, "Relaxation of mask design for single-shot phase imaging with a coded aperture," Appl. Opt. 55(8), 1830–1837 (2016).
  • [33] R. Raskar, A. Agrawal, and J. Tumblin, "Coded exposure photography: motion deblurring using fluttered shutter," ACM Transactions on Graphics (TOG) 25(3), 795–804 (2006).
  • [34] G. Bub, M. Tecza, M. Helmes, P. Lee, and P. Kohl, "Temporal pixel multiplexing for simultaneous high-speed, high-resolution imaging," Nature Methods 7, 209–211 (2010).
  • [35] M. Gupta, A. Agrawal, A. Veeraraghavan, and S. G. Narasimhan, "Flexible voxels for motion-aware videography," European Conference on Computer Vision, 100–114 (2010).
  • [36] D. Reddy, A. Veeraraghavan, and R. Chellappa, "P2C2: Programmable pixel compressive camera for high speed imaging," IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 329–336 (2011).
  • [37] D. Liu, J. Gu, Y. Hitomi, M. Gupta, T. Mitsunaga, and S. K. Nayar, "Efficient space-time sampling with pixel-wise coded exposure for high-speed imaging," IEEE Transactions on Pattern Analysis and Machine Intelligence 36(2), 248–260 (2014).
  • [38] R. Koller, L. Schmid, N. Matsuda, T. Niederberger, L. Spinoulas, O. Cossairt, G. Schuster, and A. K. Katsaggelos, "High spatio-temporal resolution video with compressed sensing," Opt. Express 23(12), 15992–16007 (2015).
  • [39] P. Llull, X. Liao, X. Yuan, J. Yang, D. Kittle, L. Carin, G. Sapiro, and D. J. Brady, "Coded aperture compressive temporal imaging," Opt. Express 21, 10526–10545 (2013).
  • [40] J. M. Bioucas-Dias and M. A. Figueiredo, "A new TwIST: two-step iterative shrinkage/thresholding algorithms for image restoration," IEEE Transactions on Image Processing 16, 2992–3004 (2007).
  • [41] "DLP LightCrafter™ 4500," http://www.ti.com/lsds/ti/dlp/advanced-light-control/microarray-greater-than-1million-lightcrafter4500.page. Accessed: 2016-06-01.
  • [42] C. P. McElhinney, J. B. McDonald, A. Castro, Y. Frauel, B. Javidi, and T. J. Naughton, "Depth-independent segmentation of macroscopic three-dimensional objects encoded in single perspectives of digital holograms," Opt. Lett. 32, 1229–1231 (2007).

1 Introduction

In recent years there has been great interest in exploiting the redundant nature of signals. The redundancy of acquired signals provides the opportunity to sample data compressively. Candès et al. [1, 2] and Donoho [3] showed that signals can be reconstructed with high fidelity, with high probability, from few random measurements, provided that they are sparse or compressible in some transform basis. Since then, the theory of compressed sensing (CS) has been widely applied to computational imaging. Lustig et al. [4] described the natural fit of CS to magnetic resonance imaging (MRI). Gan [5] proposed a block compressed sensing method for natural images, applicable to low-power, low-resolution imaging devices. Brady et al. [6] showed that holography can be viewed as a simple spatial encoder for CS and demonstrated 3D tomography from 2D holographic data.

Gabor's invention of holography in 1948 [7] provided an effective method for recording and reconstructing a 3D light field from a captured 2D hologram. The use of a CCD camera to digitally record holographic interference patterns has made digital holography (DH) an emerging technology with a variety of imaging applications, such as particle imaging and tracking in biomedical microscopy [8, 9, 10, 11, 12] and physical process profiling and measurement [13, 14, 15, 16, 17]. Digital Gabor/in-line holography (DIH) is a simple, lensless, yet effective setup for capturing holograms. The simplicity of DIH is balanced by the requirement that objects be small enough to avoid occluding the reference beam significantly [18]. Extensive discussion and application of DIH has focused on microscopic imaging, i.e., on small, fast-moving objects [19, 20, 21]. The tracking of fast movements usually entails multiple exposures [8, 10, 15, 16, 22, 23]. Temporal resolution is usually limited to the 10-100 millisecond range, and little research has been conducted on temporal compression. In recent years, however, CS has proved a useful tool for increasing the spatial information encoded in DH [6, 24]. Rivenson et al. [25] discussed the application of CS to digital Fresnel holography. Liu et al. [26] and Song et al. [27] improved subpixel accuracy for object localization and enhanced spatial resolution (super-resolution). Furthermore, CS theory has proven successful for recovering scenes in holographic microscopic tomography [28], off-axis frequency-shifting holography [29], and millimeter-wave holography [30]. Coded apertures have also been used together with CS to provide robust solutions for snapshot phase retrieval [31, 32]. In view of recent research in CS and DH, several natural questions arise: Can we extend coded apertures to coded exposure? Can we exploit the unused pixels in exchange for increased temporal resolution? Since holography is naturally suited to recovering depth information, a further research question is whether 4D space-time information can be extracted from 2D data within the CS framework.

Similar discussions have been initiated in the incoherent imaging regime. Leveraging multiplexing schemes in the temporal domain, e.g. coded exposure, has been demonstrated as an effective hardware strategy for exploiting spatiotemporal trade-offs in modern cameras. High-speed sensors usually require high light sensitivity and large bandwidth due to their limited on-board memory. In 2006, Raskar et al. [33] pioneered the concept of coded exposure with the flutter shutter camera for motion deblurring. That technique requires knowledge of motion magnitude/direction and cannot handle general scenes exhibiting complex motion. Bub et al. [34] designed an amphibious (both microscopic and macroscopic) high-speed imaging system using a DMD (digital micromirror device) for temporal pixel multiplexing. Gupta et al. [35] showed how per-pixel temporal modulation allows a flexible post-capture spatiotemporal resolution trade-off. Reddy et al. [36] used sparse representations (spatial) and brightness constancy (temporal) to preserve spatial resolution while achieving higher temporal resolution. Liu et al. [37] used an over-complete dictionary to sparsely represent time-varying scenes. Koller et al. [38] discussed several mask patterns and proposed a translational photomask to encode scene movements, extending the work of [39]. These methods have proved successful for reconstructing fast-moving scenes by combining cheap low-frame-rate cameras with fast spatio-temporal modulating elements. While all of these techniques enable high-speed reconstruction of 2D motion, incorporating holographic capture offers the potential to extend these capabilities to 3D motion. Moreover, in many holography setups, the energy from each scene point is distributed across the entire detector, so each pixel contains partial information about the entire scene. This offers the potential for improved performance relative to incoherent architectures.

Our work exploits both spatial and temporal redundancy in natural scenes and generalizes to a 4D (3D position plus time) system model. We show that by combining digital holography and coded exposure techniques within a CS framework, it is feasible to reconstruct a 4D moving scene from a single 2D hologram. We demonstrate temporal super resolution together with optical sectioning at roughly 1 cm depth resolution. As a test case, we focus on macroscopic scenes exhibiting fast motion of small objects (vibrating hairs, small particles, etc.).

2 Generalized system model

Digital Gabor holography requires no separation of the reference beam and the object beam. The object is illuminated by a single beam, and the portion that is not scattered by the object serves as the reference beam. This concept leads to a simple experimental setup but restricts the object size so that the reference beam is not excessively disturbed. In this case, the imaging process is a recording of the diffraction pattern of a 2D aperture.

2.1 Diffraction theory

We first model diffraction through a 2D aperture. According to the Fresnel–Kirchhoff diffraction formula [18], the field at an observation point a distance z behind the aperture can be written as

E(x, y; z) = \frac{1}{i\lambda} \iint E_0(x', y') \frac{e^{ikr}}{r} \, dx' \, dy' \approx \frac{1}{i\lambda z} \iint E_0(x', y') e^{ikr} \, dx' \, dy',   (1)

where r = \sqrt{(x - x')^2 + (y - y')^2 + z^2} denotes the distance from (x', y', 0) at the input plane, with input field E_0, to (x, y, z) at the output plane, and k = 2\pi/\lambda. In the second line of Eq. (1) we make the further approximation r \approx z in the denominator, but not in the exponent. The integral then becomes a convolution,

E(x, y; z) = E_0(x, y) * h_z(x, y),   (2)

with the kernel

h_z(x, y) = \frac{1}{i\lambda z} \exp\!\left(ik\sqrt{x^2 + y^2 + z^2}\right).   (3)

The kernel h_z is also referred to as the point spread function (PSF). Since the propagation is along the z-axis, the form of the kernel is determined by the propagation distance z.
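The kernel of Eq. (3) and the convolution of Eq. (2) can be sketched numerically. The following is a minimal illustration (our sketch, not the authors' code), assuming NumPy, a square sampling grid, and FFT-based circular convolution; the function names are ours:

```python
import numpy as np

def propagation_kernel(nx, ny, pitch, z, wavelength):
    """Free-space PSF h_z(x, y) = exp(i k r) / (i lambda z), with
    r = sqrt(x^2 + y^2 + z^2), sampled on an nx-by-ny grid (Eq. (3))."""
    k = 2 * np.pi / wavelength
    x = (np.arange(nx) - nx // 2) * pitch
    y = (np.arange(ny) - ny // 2) * pitch
    X, Y = np.meshgrid(x, y, indexing="ij")
    r = np.sqrt(X**2 + Y**2 + z**2)
    return np.exp(1j * k * r) / (1j * wavelength * z)

def propagate(field, pitch, z, wavelength):
    """Convolve the input field with h_z via the FFT (Eq. (2))."""
    h = propagation_kernel(*field.shape, pitch, z, wavelength)
    # pitch**2 approximates the area element of the continuous integral
    H = np.fft.fft2(np.fft.ifftshift(h)) * pitch**2
    return np.fft.ifft2(np.fft.fft2(field) * H)
```

Back-propagation (numerical refocusing) follows by convolving with the conjugate kernel, i.e. propagating with the phase reversed.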

Figure 1: 4D holographic model. O(x, y, z_j, t): slice of the 4D field at the j-th depth plane, a distance z_j from the sensor; M(x, y, t): temporal coded mask located at z = z_m; I(x, y): captured image, an integral over t. The sensor is located at z = 0.

2.2 4D model

We now extend our analysis to a 4-dimensional model. As illustrated in Fig. 1, consider a 4D field E(x, y, z, t) which propagates along the positive z-direction. Along the propagation path, a high-speed coded mask M(x, y, t) is located at z = z_m. A sensor is placed on the sensing plane z = 0. In one frame, the sensor captures the intensity of the field during an exposure time T. The volume can be discretized into N_z planes, with the furthest plane at a distance z_{N_z} from the observation plane at z = 0. In Gabor holography, the object beam and the reference beam overlap with each other. This requires the objects to be sparse so that the occlusion of the reference beam is negligible. Under this assumption, the field is the sum of the object field and the constant reference field E_r. Thus, the field at z = 0 is

E(x, y, t) = E_r + \sum_{j=1}^{N_z} O(x, y, z_j, t) * h_{z_j}(x, y),   (4)

where h_{z_j} denotes the convolutional kernel of Eq. (3) for distance z_j and O denotes the object field.

At the sensing plane, during one exposure time T, the sensed image can be expressed as an integral of the intensity of the masked field,

I(x, y) = \int_0^T \left| M(x, y, t) \, E(x, y, t) \right|^2 dt.   (5)

Equation (5) describes the continuous form of the sensing process. However, the mask is operated in a discrete form at high frame rates. Suppose that for each sensor frame the coded mask changes N_t times at equal intervals \Delta t during T. Then the discretized form of Eq. (5) is

I(x, y) = \sum_{k=1}^{N_t} M_k(x, y) \left| E_k(x, y) \right|^2 \Delta t,   (6)

where we denote M_k(x, y) = M(x, y, k\Delta t) and E_k as the transformed field at the capture plane for each time frame k (the binary mask satisfies M_k^2 = M_k).

The captured intensity term can then be expanded as |E_k|^2 = |E_r|^2 + 2\,\mathrm{Re}\{E_r^* E_{o,k}\} + |E_{o,k}|^2, where E_{o,k} is the object term of Eq. (4) at frame k. In [6], Brady et al. neglected the nonlinearity imposed by the squared magnitude and considered the two terms |E_{o,k}|^2 and |E_r|^2 (often referred to as the nonlinear and zero-order/DC terms) as noise in the measurement model, showing that they can be suppressed algorithmically by a CS reconstruction algorithm. In this work, we follow the same approach, and the measured intensity can be expressed as

I(x, y) = \sum_{k=1}^{N_t} M_k(x, y) \, 2\,\mathrm{Re}\{E_r^* E_{o,k}(x, y)\} \, \Delta t + e(x, y),   (7)

where e combines |E_r|^2 and |E_{o,k}|^2 into a single term considered as error. If we assume the object to be sparse, the term |E_{o,k}|^2 is negligible. Thus, the error term can be approximately treated as the constant background |E_r|^2. In experiments, we approximate this error term by recording a background image and subtracting it from the scene image before reconstruction.

Now assume that the sensor pixels have the same dimensions as the mask pixels, so the unknown field has spatial dimensions N_x \times N_y, depth dimension N_z, and temporal dimension N_t. Further, if we represent the convolutional operations in Eq. (6) as Toeplitz matrices, we obtain the compact form

y = A x + e,   (8)

where the notation and dimensions of the introduced variables are summarized in Table 1 and A describes the complete forward model (masking and propagation).
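As a concrete sketch of the forward operator A in Eq. (8), the code below propagates each depth slice, keeps only the linearized interference term of Eq. (7), masks each sub-frame, and sums. This is our illustration, not the authors' implementation; it assumes NumPy, a unit-amplitude reference (E_r = 1), and absorbs \Delta t into the field units:

```python
import numpy as np

def forward_model(x, masks, kernels):
    """Linearized forward model of Eq. (8):
    y = sum_k M_k * 2 Re{ sum_j x[:, :, j, k] conv h_{z_j} }.
    x: (nx, ny, nz, nt) complex object field; masks: (nx, ny, nt) binary;
    kernels: list of nz precomputed transfer functions (FFTs of h_{z_j})."""
    nx, ny, nz, nt = x.shape
    y = np.zeros((nx, ny))
    for k in range(nt):
        field = np.zeros((nx, ny), complex)
        for j in range(nz):
            # FFT-based convolution with the depth-j propagation kernel
            field += np.fft.ifft2(np.fft.fft2(x[:, :, j, k]) * kernels[j])
        # linearized interference term with unit reference, masked per frame
        y += masks[:, :, k] * 2 * field.real
    return y
```

In practice A need never be formed explicitly; iterative solvers only require this forward operation and its adjoint.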

In order to reconstruct the 4D volume, an optimization problem is formed as

\hat{x} = \arg\min_x \, \| y - A x \|_2^2 + \tau \, \Phi(x),   (9)

where \tau is a regularization parameter and \Phi is a regularizer on the unknown 4D field x.

In this work, we employ total variation (TV) as the regularization function, defined as

\Phi(x) = \sum_n \sqrt{(D_x x)_n^2 + (D_y x)_n^2 + (D_z x)_n^2 + (D_t x)_n^2},   (10)

where x is the vectorized version of the unknown 4D object field and D_x, D_y, D_z, D_t denote finite-difference operators along the four dimensions. Eq. (10) is a generalized 4D TV regularizer. However, the choice of regularizer may vary with the purpose of the reconstruction and/or the properties of the scene. In our experiments, a 3D TV (x, y, z) is used for resolving depths (Fig. 3), and TV on the temporal domain is included for recovering subtle movement (Fig. 5). Also note that independent regularization parameters may be chosen for the spatial (x, y, z) and time (t) dimensions. We used the two-step iterative shrinkage/thresholding (TwIST) algorithm [40] for reconstruction.
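The generalized 4D TV of Eq. (10) and the objective of Eq. (9) can be evaluated as below. This is a sketch under our own discretization choice (forward differences with replicated boundaries), not the authors' implementation:

```python
import numpy as np

def tv4d(x, eps=1e-12):
    """Generalized 4D total variation (Eq. (10)): sum over voxels of
    sqrt(dx^2 + dy^2 + dz^2 + dt^2), using forward differences with
    replicated boundaries along each of the four axes."""
    grads = [np.diff(x, axis=a, append=np.take(x, [-1], axis=a))
             for a in range(x.ndim)]
    # eps keeps the square root differentiable at zero gradient
    return float(np.sum(np.sqrt(sum(g**2 for g in grads) + eps)))

def objective(x, y, A, tau):
    """Data fidelity plus TV regularization, as in Eq. (9);
    A is a callable implementing the forward model."""
    return float(np.sum((y - A(x))**2)) + tau * tv4d(x)
```

A TwIST-style solver alternates gradient steps on the data term with a denoising (proximal) step on this regularizer.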

Table 1: Analysis of all the variables appearing in Eq. (8).

3 Experimental

3.1 Setup

Fig. 2 shows a schematic of the experimental setup. The illumination is produced by a diode laser with a wavelength of 532 nm. The input beam is attenuated by an ND filter and expanded and collimated by a collimating lens set (plano-convex lenses; the ND filter is omitted in Fig. 2). A digital micromirror device (DMD) is used to perform pixel-wise temporal modulation of the light field. For our experiments, we used the DLP® LightCrafter™ 4500 from Texas Instruments Inc. The light engine includes a 0.45-inch DMD with over 1 million mirrors, each with a 7.6 μm pitch, arranged in 912 columns by 1140 rows in a diamond pixel array geometry [41]. The DMD is placed about 70 mm away from the objects. An objective lens (OL, a single lens) is placed in front of the CMOS sensor and aligned with the DMD so that it images the DMD plane onto the sensor. The lens introduces a quadratic phase factor inside the integral of Eq. (1); with the sensor placed at the image plane of the OL, this phase cancels the quadratic phase accumulated between the DMD and the sensor. In this way, the propagation from the DMD to the sensor in Eq. (8) reduces to the identity matrix. We used a monochromatic CMOS sensor. The key factor is the synchronization between the DMD and the sensor: each DMD pattern is displayed for the device's minimum pattern time with a somewhat shorter effective pattern exposure, and after N patterns are projected, a trigger signal is sent to the camera, which controls the shutter and results in a single exposure.

Figure 2: Schematic of the experimental setup. (PG: pulse generator; DL: diode laser; CL: collimating lens; DMD: digital micromirror device; OL: objective lens.) A trigger signal generated by the DMD is sent to the camera to control the exposure; the camera is triggered once every N mask patterns.

3.2 Subsampling holograms

We start our experiments by examining the reconstruction performance of subsampled holograms. Recovery of a 3D object field from a 2D hologram has been proposed in previous work [6]. The recovery can be treated as inference of high-dimensional data from undersampled measurements. Fig. 3 shows the experimental results of 3D recovery with pixel-wise subsampling. For this experiment, we captured two static hairs from craft fur (see Fig. 5a) placed 7.1 cm and 10.1 cm away from the sensor. Fig. 3(a) shows the captured image. To preprocess the captured hologram, we first capture an image on the sensor with no object placed in the field of view; we refer to this as the background image. Note that this captured image corresponds to the error term e in Eq. (7). We then subtract the background image from the hologram, down-sample it, and crop the central ROI around the object. Fig. 3(b) shows the captured image of one pattern from the DMD. Each pattern randomly selects 10% of the entire image. To avoid aliasing artifacts caused by the diamond-shaped sampling patterns on the DMD, we group adjacent pixels on the DMD into a single superpixel [41]. In our reconstructions, to form the matrix A of Eq. (8), we capture images of the mask with no object present; these captured images are divided by the background image to remove the effect of beam non-uniformity. Fig. 3(c) shows the subsampled hologram. Figs. 3(d) and (e) compare reconstructions from the full hologram and the subsampled hologram. The image was reconstructed into a 3D volume with a depth range from 65 mm to 108 mm. Shown are the images reconstructed at the depth planes corresponding to the locations of the two hairs. In order to quantify the performance in terms of depth resolution, we used the block variance [42] of the edge pixels of the cross section through the two hairs. Higher variance implies higher contrast and thus higher resolution. The block variance was computed within the windows highlighted in blue and red in Figs. 3(d) and (e). Fig. 3(f) shows the normalized variance versus depth from the sensor. Two principal peaks are observed, which can be identified as the focus distances of the two furs. The first peak has a strong signal in all four curves because the object located there is larger than the other one. As can be seen, using only 10% of the data deteriorates the resolution of both the BP and CS reconstructions, and in the 10% BP reconstruction the second object is even harder to track because of the imprint of the mask pattern. This can also be observed in the left panel of (e), where the back-propagation of the mask severely affects the objects. The variance falls off quickly away from the peaks in the CS reconstructions, which reflects the denoising effect as well as the optical sectioning power of CS. In the 10% reconstruction, the intermediate volume between the two objects is not denoised as well as in the 100% case. This shows that greater subsampling factors reduce the effective depth resolution.

Figure 3: Subsampling holograms (background subtracted). (a) Hologram of two static furs 7.1 cm and 10.1 cm away from the sensor. (b) DMD mask, 10%, uniformly random (background divided). (c) Subsampled hologram. (d) Comparison of reconstructions from the back-propagation (BP) method and the compressed sensing (CS) method using the full hologram. (e) Comparison of BP and CS using the 10% subsampled hologram. (f) Normalized variance vs. distance in the z direction. Blue series: BP; red series: CS; solid curve: 100% hologram; dashed curve: 10% hologram.
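The pixel-wise random subsampling with superpixel grouping described above can be sketched as follows. This is our illustration; the 2×2 default superpixel size is an assumption, not a value stated in the text:

```python
import numpy as np

def random_mask(nx, ny, rate, superpixel=2, seed=0):
    """Binary mask selecting approximately `rate` of the pixels, with
    adjacent pixels grouped into superpixel-by-superpixel blocks that
    toggle together (to avoid aliasing from the DMD's diamond geometry)."""
    rng = np.random.default_rng(seed)
    # draw one Bernoulli sample per superpixel block ...
    coarse = rng.random((nx // superpixel, ny // superpixel)) < rate
    # ... and replicate it over the block
    return np.kron(coarse, np.ones((superpixel, superpixel), dtype=bool))
```

Multiplying a captured hologram by such a mask simulates the 10% subsampled capture of Fig. 3(c).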

3.3 Temporal multiplexing

In the previous section, we analyzed the effect of subsampling on reconstruction performance for compressive holography. Here we show how to utilize the excess pixel bandwidth of the sensor to increase temporal resolution. A simulation was carried out in order to quantitatively analyze our imaging system (Fig. 4). As shown in Fig. 4(a), two layers of objects (peranema at different scales) were used as a test case. The pixel pitch was set so that the whole scene was approximately the same size as the DMD. The first object was placed a fixed distance from the sensor; the other object was placed a variable distance further away. A spatiotemporal subsampling mask was displayed on the DMD: when N_t time frames are required, each frame has 1/N_t of the pixels randomly selected and displayed, so that the summation of the N_t frames is the full-resolution scene image. In simulation, we omitted the propagation between the DMD and the sensor. For reconstruction, we compared back-propagation and compressed sensing. In order to obtain a better reconstruction, we inserted 4 intermediate planes between the two objects. The results are shown in Figs. 4(b) and (c). In (b), the peak signal-to-noise ratio, PSNR = 10 log_{10}(MAX^2 / MSE), where MSE is the mean-squared error between the reconstruction and the input object field, was used to measure the reconstruction performance. The PSNR is computed on the 4D volume, which can also be treated as an average over multiple time frames; higher PSNR indicates higher reconstruction fidelity. We picked one point from (b), marked with a red circle, to show in (c) the visual meaning of the PSNR values. Lower subsampling rates cause worse reconstruction performance, and PSNR also decreases as the object spacing decreases.

Figure 4: Simulated reconstruction performance for compressive holographic video. (a) Simulation setup; (b) joint (spatial and temporal) resolution, PSNR in dB. Solid lines represent CS results and dashed lines represent BP results. "100%": full resolution; "50%": temporal increase of 2; "20%": temporal increase of 5; "10%": temporal increase of 10. (c) Reconstruction results for the case marked with a red circle in (b).
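The complementary spatiotemporal masks used in the simulation (each pixel active in exactly one of the N_t sub-frames, so the masks sum to the full aperture) and the PSNR metric can be sketched as below (our code; function names are ours):

```python
import numpy as np

def complementary_masks(nx, ny, nt, seed=0):
    """Assign each pixel to exactly one of nt sub-frames; the returned
    (nx, ny, nt) boolean stack sums to 1 at every pixel."""
    rng = np.random.default_rng(seed)
    labels = rng.integers(0, nt, size=(nx, ny))
    return np.stack([labels == k for k in range(nt)], axis=-1)

def psnr(recon, truth):
    """Peak signal-to-noise ratio in dB: 10 log10(MAX^2 / MSE),
    computed over the whole volume."""
    mse = np.mean(np.abs(recon - truth) ** 2)
    return 10.0 * np.log10(np.abs(truth).max() ** 2 / mse)
```

With nt = 10, each sub-frame carries 10% of the pixels, matching the "10%" curves of Fig. 4(b).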

3.4 Spatiotemporal recovery for fast-moving objects

We present two illustrative examples aimed at observing subtle vibrations and at tracking small but fast-moving particles.

Fig. 5 shows a reconstruction result demonstrating a 10× increase in temporal resolution. The captured image contains several strands of hair blown by an air conditioner. From a single captured image, we reconstruct 2 depth slices and 10 frames of video. In the case of small lateral movement, i.e., vibration, it is feasible to apply total variation in the time domain. For ease of comparison, 3 time frames (3rd, 6th, 9th) are shown for both back-propagation and compressed sensing. In terms of depth, our CS result shows well-separated objects at the different depth layers, while the back-propagation method fails to achieve optical sectioning. The movement of the object is also recovered in our CS result.

Figure 5: Reconstruction results from a single image (moving hairs). 10 video frames and two depth slices are reconstructed from a single captured hologram. Due to space constraints, 3 video frames (3rd, 6th, 9th) at the two depths are presented. (Supplemental videos included.)

Fig. 6 shows another reconstruction result, for several falling flakes of glitter. The glitter flakes in Fig. 6(a) were about 1 mm in size and were dropped in a range of 60 mm to 80 mm away from the sensor while being blown by an air conditioner. (b) shows the single captured image. (c) shows the preprocessed image, with the background image subtracted. In this case, the glitter flakes were moving at high speed, and there was no overlap between two consecutive frames for the same flake, so each frame was recovered independently. (d) shows a reconstruction map of 2 depths and 4 time frames. The downward and leftward motion of two glitter flakes can be observed. A refocusing method similar to that of [42] was used: we scanned the reconstructed image with a window and computed the normalized variance to obtain the in-focus depth. If the normalized variance of a pixel at defocused depths was higher than 0.5, that pixel was rejected as background/noise. Adjacent pixels with similar variance profiles were treated as a single particle. (e) shows the normalized variance for two particles, tracked at the two locations indicated by the arrows in (d). The overall tracking results are shown in Figs. 6(f) and (g): 7 particles are detected with 4D motion within 5 ms. In (f), the temporal transition is represented by arrows in 10 different colors. (g) shows a velocity chart of the 7 particles. The velocity of each particle was computed as v_k = (p_{k+1} - p_k)/\Delta t, where p_k denotes the 3D location at the k-th time frame. The velocities of the particles range from 0.7 m/s to 5.5 m/s.

Figure 6: Reconstruction results from a single image (falling glitter). (a) Glitter flakes; (b) captured image; (c) normalized image; (d) reconstruction map, 2 depths and 4 out of 10 frames shown; (e) normalized variance plot for 2 particles; (f) 4D particle tracking; (g) velocity plot over the capture time.
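The per-particle velocity estimate described above can be sketched as follows (our code; the frame interval Δt and the tracked 3D positions are inputs):

```python
import numpy as np

def speeds(track, dt):
    """Finite-difference velocity v_k = (p_{k+1} - p_k) / dt for a single
    particle track of 3D positions with shape (nt, 3); returns the nt - 1
    speeds |v_k| in position-units per second."""
    v = np.diff(np.asarray(track, float), axis=0) / dt
    return np.linalg.norm(v, axis=1)
```

For 10 frames spanning 5 ms, dt = 5e-4 s, and a displacement of 0.5 mm per frame corresponds to a speed of 1 m/s.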

4 Conclusion

We have demonstrated two illustrative cases in which 4D spatio-temporal data is recovered from a single captured 2D hologram. In the case of vibrating hairs, 2 depth layers and 10 video frames were recovered. In the case of falling glitter flakes, a 4D volume was reconstructed to track the motion of small particles. The prototype shows that it is possible to simultaneously exceed the capture rate of the imager and recover multiple depths with reasonable depth resolution. The coded-exposure technique enables high-speed imaging with a simple low-frame-rate camera, and digital in-line holography brings the capability of 3D tomographic imaging with a simple experimental setup. Our compressive holographic video technique is also closely related to phase retrieval problems commonly faced in holographic microscopy: our space-time subsampling can be viewed as a sequence of coded apertures applied to a spatiotemporally varying optical field. In the future we plan to explore the connections between our CS reconstruction approach and the methods introduced in [31].

Acknowledgement

The authors are grateful for constructive discussions with Dr. Roarke Horstmeyer and Donghun Ryu. This work was funded in part by NSF CAREER grant IIS-1453192, ONR grant 1(GG010550)//N00014-14-1-0741, and ONR grant #N00014-15-1-2735.