Sampling Without Time: Recovering Echoes of Light via Temporal Phase Retrieval

01/27/2017 · Ayush Bhandari et al. · MIT

This paper considers the problem of sampling and reconstruction of a continuous-time sparse signal without assuming knowledge of the sampling instants or the sampling rate. This topic has its roots in the problem of recovering multiple echoes of light from their low-pass filtered, auto-correlated, time-domain measurements. Our work is closely related to the topic of sparse phase retrieval, and in this context, we discuss the advantage of phase-free measurements. While this problem is ill-posed, cues based on physical constraints allow for its appropriate regularization. We validate our theory with experiments based on customized, optical time-of-flight imaging sensors. What singles out our approach is that our sensing method allows for temporal phase retrieval, as opposed to the usual case of spatial phase retrieval. Preliminary experiments and results demonstrate a compelling capability of our phase-retrieval based imaging device.


I Introduction

During a recent visit to meet a collaborator, the author (who happens to be an avid photographer) saw a stark reflection of a local monument on the window panes of the Harvard science center. This is shown in Fig. 1. This is a commonplace phenomenon at the macro scale as well as the micro scale (for example, in microscopy). At the heart of this problem is a fundamental limitation: conventional imaging sensors are agnostic to time information. Alternatively stated, the image–formation process is insensitive to the potentially distinct times that the photons can spend traveling between their sources and the detector.

To elaborate, consider the Gedankenexperiment version of Fig. 1 described in Fig. 2 (a). Let us assume that a light source is co-located with the imaging sensor (such as a digital camera). The reflection from the semi-reflective sheet (such as a window pane) at distance d₁ arrives at the sensor at time t₁ = 2d₁/c, while the mannequin at distance d₂ is only observable on or after t₂ = 2d₂/c. Here, c is the speed of light. While we typically interpret images as a two-dimensional, spatial representation of the scene, let us, for now, consider the photograph in Fig. 1 along the time dimension. For the pixel x, this time-aware image is shown in Fig. 2 (b). Mathematically, our time-aware image can be written as a 2-sparse signal (and as a K-sparse signal in general),

i(t, x) = Γ₁ i₁(x) δ(t − t₁) + Γ₂ i₂(x) δ(t − t₂),    (1)

where {i_k} are the constituent image intensities, {Γ_k} are the reflection coefficients, x is the 2D spatial coordinate, and δ is the Dirac distribution.

Fig. 1: Reflection from semi-reflective surfaces. The Memorial Church can be seen imprinted on the glass facade of the Harvard University Science Center.

A conventional imaging sensor produces images by marginalizing time information, resulting in the 2D photograph,

i(x) = ∫ i(t, x) dt = Γ₁ i₁(x) + Γ₂ i₂(x).    (2)
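As a toy numerical illustration of (2), the marginalization over time collapses the per-echo structure into a single number per pixel; all values below are hypothetical, chosen only for illustration:

```python
import numpy as np

# Hypothetical 2-sparse, time-aware pixel (eq. (1)): two echoes with
# reflection coefficients, constituent intensities, and arrival times.
gammas = np.array([0.7, 0.3])        # reflection coefficients
intensities = np.array([0.9, 0.5])   # constituent image intensities
delays = np.array([4e-9, 9e-9])      # arrival times in seconds

# A conventional sensor marginalizes time (eq. (2)): the recorded pixel is
# the sum of the weighted spikes; the per-echo structure is lost.
conventional_pixel = float(np.sum(gammas * intensities))
```

Recovering the individual terms Γ_k i_k from such a collapsed measurement is precisely the ill-posed problem addressed below.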

Recovering {i_k(x)} given i(x), or, more generally, the echoes of light,

i(t, x) = ∑_{k=1}^{K} Γ_k i_k(x) δ(t − t_k),    (3)

is an ill-posed problem. Each year, a number of papers [3, 4, 5, 6, 7, 8] attempt to address this issue by using regularization and/or measurement-acquisition diversity based on image statistics, polarization, shift, motion, color, or scene features. Unlike previous works, here, we ask the question: Can we directly estimate the intensities {Γ_k i_k} in (1)? In practice, sampling (3) would require exorbitant sampling rates, which is certainly not an option. Also, we are interested in the intensities only, as opposed to [9], where the delays are estimated as well; note that t_k is a non-linear argument in (3). As a result, our goal is to recover the intensities of a sparse signal. For this purpose, we explore the idea of sampling without time: a method for sampling a sparse signal which does not assume knowledge of the sampling instants or the sampling rate. In this context, our contributions are twofold:

  1. For the general case of K echoes of light, our work relies on estimating the constituent images from the filtered, auto-correlated, time-resolved measurements. This is the distinguishing feature of our approach and is fundamentally different from methods proposed in the literature, which are solely based on spatio-angular information (cf. [3, 4, 5, 6, 7, 8] and references therein).

  2. As will be apparent shortly, our work is intimately linked with the problem of (sparse) phase retrieval (cf. [10, 11, 12, 13, 14] and references therein). Our “sampling without time” architecture leads to an interesting measurement system which is based on time-of-flight imaging sensors [1] and suggests that on the order of K² intensity-only measurements suffice for the estimation of K echoes of light. To the best of our knowledge, neither such a measurement device nor such measurement bounds have been studied in the context of image source separation [3, 4, 5, 6, 7].

Fig. 2: Time-aware imaging. (a) Exemplary setting for imaging through semi-reflective media. (b) Corresponding time-aware image at pixel x.

For the sake of simplicity, we will drop the dependence of Γ_k and i_k on the spatial coordinate x. This is particularly well suited to our case because we do not rely on any cross-spatial information or priors. Also, scaling factors are assumed to be absorbed in Γ_k.

Significance of Phase-free or Amplitude-only Measurements: It is worth emphasizing that resolving spikes from a superposition of low-pass filtered echoes is a problem that frequently occurs in many other disciplines. This is a prototype model used in the study of multi-layered or multi-path models. Some examples include seismic imaging [15], time-delay estimation [16], channel estimation [17], optical tomography [18], ultrasound imaging [19], computational imaging [1, 20, 21, 22, 23], and source localization [24]. Almost all of these variations rely on amplitude and phase information, or on amplitude and time-delay information. However, recording amplitude-only data can be advantageous for several reasons. Consider a commonplace example based on pulse-echo ranging. Let p(t) be the emitted pulse. Then, the backscattered signal reads r(t) = Γ p(t − t₀). In this setting, on-chip estimation of the phase or the time delay t₀ [22],

  1. can either be computationally expensive or challenging, since t₀ is a non-linear parameter in (3), and hence in r(t).

  2. requires more measurements. For instance, fewer samples suffice for amplitude-only estimation than for estimating the amplitude–phase pair [2]. This aspect of phase estimation is an important bottleneck, as the frame rate of an imaging device is inversely proportional to the number of time-domain measurements.

  3. is prone to errors. In many applications, multiple-frequency measurements are acquired [22, 25, 2] under the assumption that phase and frequency are linearly proportional. However, this is not the case in practice. The usual remedy is to oversample [26, 27].

In all such cases, one may adopt our methodology of working with intensity-only measurements.

II Image–Formation Model for ToF Sensors

Optical ToF sensors are active devices that capture 3D scene information. We adopt the generic image–formation model used in [1], which is common to all ToF modalities such as lidar/radar/sonar, optical coherence tomography, terahertz, ultrasound, and seismic imaging. In its full generality, and dropping the dependency on the spatial coordinate x for convenience, one can first formalize this ToF acquisition model as the chain:

p(t) ⟶ [ SRF h ] ⟶ r(t) ⟶ [ IRF ] ⟶ m(t) ⟶ [ t = t_n ] ⟶ y[n],    (4)

where,

  1. p(t) is a T-periodic probing function which is used to illuminate the scene. This may be a sinusoidal waveform, or even a periodized spline, Gaussian, or Gabor pulse, for instance.

  2. h is the scene response function (SRF). This may be a filter, a shift-invariant function h(t), or a partial differential equation modeling some physical phenomenon [25].

  3. r(t) = (p ∗ h)(t) is the reflected signal resulting from the interaction between the probing function and the SRF.

  4. m(t) is the continuous-time signal resulting from the interaction between the reflected signal and the instrument response function (IRF) of the sensor, which characterizes its electro-optical transfer function.

  5. y[n] is a set of discrete measurements of the form y[n] = m(t)|_{t = t_n}.

  6. φ = p ∗ p̄ is the cyclic auto-correlation of p, where ∗ and the overbar denote the convolution and time-reversal operators, respectively.

The interplay between p, h, and the IRF results in variations on the theme of ToF imaging [1]. In this work, we will focus on an optical ToF setting. Accordingly:

  1. The probing function p corresponds to a time-localized pulse.

  2. The SRF, accounting for the echoes of light, is a K-sparse filter,

    h(t) = ∑_{k=1}^{K} Γ_k δ(t − t_k).    (5)

  3. The IRF is fixed by design so that the sensor records the cyclic auto-correlation of the reflected signal. This implements the so-called homodyne, lock-in sensor [1].

Due to this specific shift-invariant structure of the SRF and IRF, we have m(t) = (φ ∗ a)(t), where a = h ∗ h̄ is the auto-correlation of the SRF. Finally, the measurements read,

y[n] = (φ ∗ a)(t)|_{t = t_n} = ∑_{k=1}^{K} ∑_{l=1}^{K} Γ_k Γ_l φ(t_n − (t_k − t_l)).    (6)
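A numerical sanity check of this measurement model (with an assumed periodized Gaussian pulse and hypothetical echo locations) confirms that the recorded waveform is the symmetric, low-pass filtered auto-correlation of the SRF:

```python
import numpy as np

N = 512                       # grid points over one period (illustrative)
t = np.arange(N)

# Assumed periodized Gaussian probing pulse p(t).
p = np.exp(-0.5 * ((t - N // 2) / 8.0) ** 2)

# Hypothetical 2-sparse SRF h(t) = G1*delta(t - t1) + G2*delta(t - t2).
h = np.zeros(N)
h[100], h[180] = 0.8, 0.4

# In the Fourier domain, phi * (h (*) h_bar) has spectrum |P|^2 * |H|^2,
# so the measured waveform m of eq. (6) can be synthesized by an inverse FFT.
spectrum = (np.abs(np.fft.fft(p)) ** 2) * (np.abs(np.fft.fft(h)) ** 2)
m = np.fft.ifft(spectrum).real

# m is symmetric (phase-free), i.e. m(t) = m(-t) modulo the period.
```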

III Sampling Echoes of Light

III-A Bandlimited Approximation of Probing Function

Due to physical constraints inherent to all electro-optical systems, it is reasonable to approximate the probing signal as a bandlimited function [25, 9]. We use a (2M+1)-term Fourier series with basis functions e^{j m ω₀ t}, ω₀ = 2π/T, to approximate p with,

p_M(t) = ∑_{|m| ≤ M} ĉ_m e^{j m ω₀ t},    (7)

where the ĉ_m are the Fourier coefficients. Note that there is no need to compute p explicitly, as we only require the knowledge of φ (cf. (6)). In Fig. 3, we plot the calibrated p and its Fourier coefficients; here, the period T is on the order of nanoseconds. We also plot the approximation p_M together with the measured φ and its corresponding approximation.
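The truncation in (7) can be prototyped with an FFT. The pulse below is an assumed periodized-Gaussian stand-in for the calibrated probing function, and the cutoff M is illustrative:

```python
import numpy as np

N = 1024
t = np.arange(N)
# Assumed periodized-Gaussian stand-in for the calibrated probing function p(t).
p = np.exp(-0.5 * ((t - N // 2) / 20.0) ** 2)

def bandlimited_approx(x, M):
    """Keep the Fourier coefficients with |m| <= M, cf. (7)."""
    X = np.fft.fft(x)
    keep = np.zeros_like(X)
    keep[: M + 1] = X[: M + 1]   # m = 0, ..., M
    keep[-M:] = X[-M:]           # m = -M, ..., -1
    return np.fft.ifft(keep).real

p_M = bandlimited_approx(p, 40)
rel_err = np.linalg.norm(p - p_M) / np.linalg.norm(p)
# A modest M already captures the pulse to within a small relative error.
```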

III-B Sampling Theory Context

The shift-invariant characterization of the ToF image–formation model allows us to re-interpret the sampled version of (6) as the filtering of a sparse signal with a low-pass kernel (cf. [28, 9]),

y[n] = (φ ∗ a)(t)|_{t = t_n}.    (8)

Note that the properties of the auto-correlation operation imply that the sparsity of a = h ∗ h̄ is K² − K + 1, unlike h, which is K-sparse; moreover, a is completely specified by its non-negative support due to symmetry. Based on the approximation (7) and the properties of convolution and complex exponentials, and defining z[m] = |ĥ(mω₀)|², we rewrite y[n] as,

y[n] ≈ ∑_{|m| ≤ M} |ĉ_m|² z[m] e^{j m ω₀ t_n}.    (9)

Finally, the properties of the Fourier transform imply that sampling the above signal at time instants t_n, n = 0, …, N − 1, results in discrete measurements of the form y = F D z, which corresponds to the available acquired data in our acquisition setting. Combining all the above definitions, it follows that:

  1. y ∈ ℂ^N is a vector of filtered measurements (cf. (8)).

  2. F is a DFT matrix with elements e^{j m ω₀ t_n}.

  3. D is a diagonal matrix with diagonal elements |ĉ_m|². These are the Fourier-series coefficients of φ.

  4. z is a phase-less vector containing the Fourier coefficients of a = h ∗ h̄, which is obtained by sampling the Fourier transform of a at instants m ω₀. The signal z directly depends on the quantities of interest to be retrieved and is expressed as

    z[m] = ∑_{k=1}^{K} Γ_k² + ∑_{k ≠ l} Γ_k Γ_l e^{−j m ω₀ t_{kl}},    (10)

    where ω₀ = 2π/T and t_{kl} = t_k − t_l.

It is instructive to note that the relevant unknowns {Γ_k} can be estimated from z in (10), which in turn depends only on {Γ_k} and the time differences t_{kl}, as opposed to the sampling rate and sampling instants. This aptly justifies our philosophy of sampling without time.
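The linear model y = F D z and its inversion can be sketched numerically; the pulse spectrum, the period, and the echo parameters below are all hypothetical:

```python
import numpy as np

rng = np.random.default_rng(1)
M = 10                                  # Fourier orders m = -M, ..., M
n_meas = 2 * M + 1
T = 1.0                                 # assumed period
omega0 = 2 * np.pi / T
t_n = np.arange(n_meas) * (T / n_meas)  # uniform sampling instants

m_idx = np.arange(-M, M + 1)
F = np.exp(1j * omega0 * np.outer(t_n, m_idx))  # DFT matrix, entries e^{j m w0 t_n}
D = np.diag(rng.uniform(0.5, 1.0, n_meas))      # |c_m|^2 of the IRF (assumed known)

# Phase-less coefficients z[m] = |h_hat(m w0)|^2 for a hypothetical 2-sparse SRF.
g1, g2, t1, t2 = 0.8, 0.4, 0.23, 0.61
z = np.abs(g1 * np.exp(-1j * m_idx * omega0 * t1)
           + g2 * np.exp(-1j * m_idx * omega0 * t2)) ** 2

y = F @ D @ z                       # filtered measurements, cf. (9)
z_hat = np.linalg.pinv(F @ D) @ y   # weighted deconvolution of Section IV
```

Since F D is square and well conditioned here, the pseudo-inverse recovers z exactly; in practice the same step acts as a least-squares deconvolution of noisy data.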

Fig. 3: Bandlimited approximation of the probing function (red curves) and its cyclic auto-correlation φ (blue curve). Shown are the time-domain signals (top) and the corresponding Fourier-domain magnitudes in log scale (bottom).

IV Reconstruction via Phase Retrieval

Given the data model (9), we aim to recover the image parameters {Γ_k} at each sensor pixel. For this purpose, we first estimate samples of z as ẑ = (FD)⁺ y, where (·)⁺ denotes the matrix pseudo-inverse. This is akin to performing a weighted deconvolution, knowing that F is a DFT matrix. Next, we solve the problem of estimating {Γ_k} in two steps: (1) first, we estimate the magnitude terms {Γ_k Γ_l} and ∑_k Γ_k², and then, (2) based on the estimated values, we resolve the ambiguities due to the loss of phase.

Parameter Identification via Spectral Estimation: Based on the coefficients ẑ, one can then retrieve the amplitude and frequency parameters that are associated with the oscillatory terms, as well as the constant value in (10). The oscillatory-term and constant-term parameters correspond to {Γ_k Γ_l, t_{kl}}_{k ≠ l} and ∑_k Γ_k², respectively. All parameter values are retrievable from ẑ through spectral estimation [29]; details are provided in [2] for the interested reader.

Note that, given the form of (10) and our acquisition model, the sparsity level of the sequence z (the total number of oscillatory and constant terms) is K² − K + 1. The magnitude values {Γ_k Γ_l} and ∑_k Γ_k² can thus be retrieved provided that on the order of K² autocorrelation measurements are performed [30, 2], which is the case in the experiments described in Section V.

Resolving Ambiguities: Based on the aforementioned retrieved parameters, one then wishes to deduce {Γ_k}. The estimated cross terms allow one to retrieve the values of Γ_k through simple pointwise operations. Here, we will focus on the case K = 2. Due to space limitations, we refer the interested reader to our companion paper [2] for details on the general case K > 2. The case K = 2 may be the result of two distinct echoes (Fig. 2) or of the approximation of higher-order echoes by two dominant echoes (due to the inverse-square law). In this case, we effectively estimate the two terms Γ₁Γ₂ and Γ₁² + Γ₂² in (10). Retrieving the individual magnitude values then amounts to solving [2]:

Γ₁, Γ₂ = ½ ( √(Γ₁² + Γ₂² + 2Γ₁Γ₂) ± √(Γ₁² + Γ₂² − 2Γ₁Γ₂) ).    (11)

Thanks to the isoperimetric property for rectangles, Γ₁² + Γ₂² ≥ 2Γ₁Γ₂, the second square-root argument above is always non-negative unless there is an estimation error, in which case an exchange of variables leads to the unique estimates.
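For K = 2, the inversion in (11) reduces to a few arithmetic operations; below is a minimal sketch, with a clip guarding against small negative values caused by estimation noise:

```python
import numpy as np

def demix_two_echoes(s, p):
    """Recover (G1, G2) with G1 >= G2 > 0 from s = G1^2 + G2^2 and
    p = G1 * G2, cf. (11). Since s - 2p >= 0 (isoperimetric property
    for rectangles), the clip only guards against estimation noise."""
    a = np.sqrt(s + 2.0 * p)
    b = np.sqrt(max(s - 2.0 * p, 0.0))
    return 0.5 * (a + b), 0.5 * (a - b)

# Hypothetical ground truth G1 = 0.8, G2 = 0.4.
g1, g2 = demix_two_echoes(0.8 ** 2 + 0.4 ** 2, 0.8 * 0.4)
# g1 and g2 recover 0.8 and 0.4, up to the inherent ordering ambiguity.
```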

V Experimental Validation

V-A Simulation

Noting that the measurements y and the coefficients z are linked by an invertible, linear system of equations, knowing z amounts to knowing y. We have presented a detailed comparison of simulation results in [2] for the case where one directly measures z. In this paper, we will focus on the practical setting where the starting point of our algorithm is (9).

Fig. 4: Experimental results on acquisition and reconstruction of echoes of light in a scene. (a) Single-pixel, time-stamped samples serve as our ground truth because the phase information is known. We also plot the estimated, 2-sparse SRF (5) in red ink. (b) Fourier-domain frequency samples of the SRF. We plot the measured data in green and the fitted data in black. The frequencies are estimated using Cadzow’s method. (c) Same pixel: time-stampless, low-pass filtered, auto-correlated samples (8) together with the estimated, auto-correlated SRF in red ink. (d) Fourier-domain samples (10) and their fitted version (black ink). (e) Ground-truth images for the experiment. (f) Estimated images using temporal phase retrieval.

V-B Practical Validation

Our experimental setup mimics the setting of Fig. 2. A placard that reads “Time–of–Flight” is covered by a semi-transparent sheet, hence K = 2. The samples are acquired using our custom-designed ToF imaging sensor [1]. Overall, the goal of this experiment is to recover the magnitudes Γ₁ and Γ₂ given the auto-correlated measurements y[n]. To be able to compare with a “ground truth”, we also acquire time-domain measurements before autocorrelation, whose Fourier-domain phases are intact. In Fig. 4(a), we plot these non-autocorrelated measurements, while the phase-less measurements are shown in Fig. 4(c), from which we note that the samples are symmetric in the time domain due to autocorrelation (cf. (6)). In the first case (cf. Fig. 4(a)), where the phase is preserved, we have y[n] = (φ ∗ h)(t)|_{t = t_n} [1].

Similar to (9), the measurements in this case read y = F D ĥ, and we can estimate the complex-valued vector ĥ. The phase information in ĥ allows for the precise computation of the pairs {Γ_k, t_k} [1, 9]. These “intermediate” measurements serve as our ground truth. The spikes corresponding to the SRF (5) are also marked in Fig. 4(a). The Fourier-domain measurements linked with ĥ are shown in Fig. 4(b). Accordingly, ĥ[m] = Γ₁ e^{−j m ω₀ t₁} + Γ₂ e^{−j m ω₀ t₂}, where ω₀ = 2π/T and |m| ≤ M. We estimate the frequency parameters {t_k} using Cadzow’s method [31]. The resulting fit is plotted in Fig. 4(b).
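The Cadzow step used for these fits can be sketched as follows. This is a generic, minimal variant (alternating rank-K truncation of a Hankel matrix with anti-diagonal averaging) and not necessarily the exact implementation used in our experiments:

```python
import numpy as np

def cadzow_denoise(x, K, iters=20):
    """Generic Cadzow iteration: alternately project the Hankel matrix of x
    onto rank K (SVD truncation) and back onto Hankel structure
    (anti-diagonal averaging), cf. [31]."""
    x = np.asarray(x, dtype=complex)
    N = len(x)
    L = N // 2 + 1
    for _ in range(iters):
        H = np.array([x[i:i + L] for i in range(N - L + 1)])  # Hankel matrix
        U, s, Vh = np.linalg.svd(H, full_matrices=False)
        H = (U[:, :K] * s[:K]) @ Vh[:K]                       # rank-K projection
        acc = np.zeros(N, dtype=complex)                      # anti-diagonal averaging
        cnt = np.zeros(N)
        for i in range(H.shape[0]):
            acc[i:i + L] += H[i]
            cnt[i:i + L] += 1
        x = acc / cnt
    return x

# Noisy samples of K = 2 complex exponentials (hypothetical frequencies).
n = np.arange(40)
clean = 0.8 * np.exp(1j * 0.9 * n) + 0.4 * np.exp(1j * 2.1 * n)
rng = np.random.default_rng(2)
noise = 0.05 * (rng.standard_normal(40) + 1j * rng.standard_normal(40))
denoised = cadzow_denoise(clean + noise, K=2)
```

After denoising, the frequencies (and hence the delays) can be extracted with any standard spectral-estimation routine [29].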

In parallel to Figs. 4(a),(b), the normalized, autocorrelated data are marked in Fig. 4(c). The signal is also shown for reference as a green dashed line. We also plot the estimated autocorrelated SRF a = h ∗ h̄. In Fig. 4(d), we plot the measured and deconvolved vector ẑ, from which we estimate the parameters {Γ₁Γ₂, Γ₁² + Γ₂², t₁₂} in (10). The result of fitting ẑ using Cadzow’s method is shown in Fig. 4(d).

The reconstructed images, Γ₁ and Γ₂, obtained from the amplitude–phase measurements (our ground truth, Fig. 4(a)) are shown in Fig. 4(e). Alternatively, the reconstructed images obtained with auto-correlated/intensity-only information are shown in Fig. 4(f). One can observe the great similarity between the images obtained in both cases; only a few outliers appear in the phase-less setting. The PSNR values between the maps reconstructed with and without the phase information confirm this similarity for both Γ₁ and Γ₂. These numerical results indicate that, overall, the phase loss that occurs in our autocorrelated measurements still allows for accurate reconstruction of the field of view of the scene.

Finally, in order to assess the consistency of our reconstruction approach, the PSNR between the original available measurements and their re-synthesized versions (obtained by reintroducing the reconstructed profiles into our forward model) is also provided. As shown in Figs. 4(a) and (c), the PSNR values in the oracle and phase-less settings are comparable, which confirms that our reconstruction approach accurately takes the parameters and structure of the acquisition model into account.

VI Conclusions

In this paper, we have introduced a method to recover the intensities of superimposed echoes of light, using a customized ToF imaging sensor for acquisition and temporal phase retrieval for reconstruction. To the best of our knowledge, this is the first method that performs time-stampless sampling of a sparse ToF signal, in the sense that we only measure amplitudes sampled at uniform instants, without requiring knowledge of the particular sampling times or rates. This innovation can potentially lead to alternative hardware designs as well as mathematical simplicity, since phases need not be estimated in hardware.

References

  • [1] A. Bhandari and R. Raskar, “Signal processing for time-of-flight imaging sensors,” IEEE Signal Processing Magazine, vol. 33, no. 4, pp. 45–58, Sep. 2016.
  • [2] A. Bhandari, A. Bourquard, S. Izadi, and R. Raskar, “Time-resolved image demixing,” in Proc. of IEEE Intl. Conf. on Acoustics, Speech and Sig. Proc. (ICASSP), Mar. 2016, pp. 4483–4487.
  • [3] N. Ohnishi, K. Kumaki, T. Yamamura, and T. Tanaka, “Separating real and virtual objects from their overlapping images,” in Lecture Notes in Computer Science.   Springer Science Business Media, 1996, pp. 636–646.
  • [4] A. Levin, A. Zomet, and Y. Weiss, “Separating reflections from a single image using local features,” in Proc. of IEEE Comp. Vis. and Patt. Recognition, vol. 1, 2004, pp. I–306.
  • [5] A. M. Bronstein, M. M. Bronstein, M. Zibulevsky, and Y. Y. Zeevi, “Sparse ICA for blind separation of transmitted and reflected images,” Int. J. Imaging Syst. Technol., vol. 15, no. 1, pp. 84–91, 2005.
  • [6] N. Kong, Y.-W. Tai, and S. Y. Shin, “High-quality reflection separation using polarized images,” vol. 20, no. 12, pp. 3393–3405, 2011.
  • [7] Y. Li and M. S. Brown, “Single image layer separation using relative smoothness,” in Proc. of IEEE Comp. Vis. and Patt. Recognition, 2014, pp. 2752–2759.
  • [8] P. Chandramouli, M. Noroozi, and P. Favaro, “ConvNet-based depth estimation, reflection separation and deblurring of plenoptic images,” University of Bern, submitted, 2016.
  • [9] A. Bhandari, A. M. Wallace, and R. Raskar, “Super-resolved time-of-flight sensing via FRI sampling theory,” in Proc. of IEEE Intl. Conf. on Acoustics, Speech and Sig. Proc. (ICASSP), Mar. 2016, pp. 4009–4013.
  • [10] Y. C. Eldar, N. Hammen, and D. G. Mixon, “Recent advances in phase retrieval [Lecture Notes],” vol. 33, no. 5, pp. 158–162, Sep. 2016.
  • [11] H. Qiao and P. Pal, “Sparse phase retrieval with near minimal measurements: A structured sampling based approach,” in Proc. of IEEE Intl. Conf. on Acoustics, Speech and Sig. Proc. (ICASSP), Mar. 2016, pp. 4722–4726.
  • [12] B. Rajaei, E. W. Tramel, S. Gigan, F. Krzakala, and L. Daudet, “Intensity-only optical compressive imaging using a multiply scattering material and a double phase retrieval approach,” in Proc. of IEEE Intl. Conf. on Acoustics, Speech and Sig. Proc. (ICASSP), Mar. 2016.
  • [13] K. Huang, Y. C. Eldar, and N. D. Sidiropoulos, “Phase retrieval from 1D Fourier measurements: Convexity, uniqueness, and algorithms,” vol. 64, no. 23, pp. 6105–6117, Dec. 2016.
  • [14] Y. M. Lu and M. Vetterli, “Sparse spectral factorization: Unicity and reconstruction algorithms,” in Proc. of IEEE Intl. Conf. on Acoustics, Speech and Sig. Proc. (ICASSP), May 2011.
  • [15] S. Levy and P. K. Fullagar, “Reconstruction of a sparse spike train from a portion of its spectrum and application to high-resolution deconvolution,” Geophysics, vol. 46, no. 9, pp. 1235–1243, Sep. 1981.
  • [16] J. J. Fuchs, “Multipath time-delay detection and estimation,” vol. 47, no. 1, pp. 237–243, Jan. 1999.
  • [17] Y. Barbotin, A. Hormati, S. Rangan, and M. Vetterli, “Estimation of sparse MIMO channels with common support,” vol. 60, no. 12, pp. 3705–3716, Dec. 2012.
  • [18] C. S. Seelamantula and S. Mulleti, “Super-resolution reconstruction in frequency-domain optical-coherence tomography using the finite-rate-of-innovation principle,” vol. 62, no. 19, pp. 5020–5029, Oct. 2014.

  • [19] P. Boufounos, “Compressive sensing for over-the-air ultrasound,” in Proc. of IEEE Intl. Conf. on Acoustics, Speech and Sig. Proc. (ICASSP), May 2011.
  • [20] A. Velten, R. Raskar, D. Wu, B. Masia, A. Jarabo, C. Barsi, C. Joshi, E. Lawson, M. Bawendi, and D. Gutierrez, “Imaging the propagation of light through scenes at picosecond resolution,” Commun. ACM, vol. 59, no. 9, pp. 79–86, Aug. 2016.
  • [21] D. Shin, F. Xu, F. N. C. Wong, J. H. Shapiro, and V. K. Goyal, “Computational multi-depth single-photon imaging,” Optics Express, vol. 24, no. 3, p. 1873, Jan. 2016.
  • [22] A. Bhandari, A. Kadambi, R. Whyte, C. Barsi, M. Feigin, A. Dorrington, and R. Raskar, “Resolving multipath interference in time-of-flight imaging via modulation frequency diversity and sparse regularization,” Optics Letters, vol. 39, no. 6, pp. 1705–1708, Mar. 2014.
  • [23] A. Kadambi, R. Whyte, A. Bhandari, L. Streeter, C. Barsi, A. Dorrington, and R. Raskar, “Coded time of flight cameras: sparse deconvolution to address multipath interference and recover time profiles,” ACM Trans. Graphics, vol. 32, no. 6, pp. 1–10, Nov. 2013.
  • [24] J. J. Fuchs and H. Chuberre, “A deconvolution approach to source localization,” vol. 42, no. 6, pp. 1462–1470, Jun. 1994.
  • [25] A. Bhandari, C. Barsi, and R. Raskar, “Blind and reference-free fluorescence lifetime estimation via consumer time-of-flight sensors,” Optica, vol. 2, no. 11, pp. 965–973, Nov. 2015.
  • [26] K. Hibino, B. F. Oreb, D. I. Farrant, and K. G. Larkin, “Phase-shifting algorithms for nonlinear and spatially nonuniform phase shifts,” Journal of the Optical Society of America A, vol. 14, no. 4, p. 918, Apr. 1997.
  • [27] P. Hariharan, B. F. Oreb, and T. Eiju, “Digital phase-shifting interferometry: a simple error-compensating phase calculation algorithm,” Appl. Opt., vol. 26, no. 13, p. 2504, Jul. 1987.
  • [28] H. Pan, T. Blu, and M. Vetterli, “Towards generalized FRI sampling with an application to source resolution in radioastronomy,” vol. 65, no. 4, pp. 821–835, Feb. 2017.
  • [29] P. Stoica and R. L. Moses, Introduction to spectral analysis.   Prentice Hall, Upper Saddle River, 1997, vol. 1.
  • [30] D. Potts and M. Tasche, “Parameter estimation for exponential sums by approximate prony method,” Signal Processing, vol. 90, no. 5, pp. 1631–1642, May 2010.
  • [31] J. A. Cadzow, “Signal enhancement–A composite property mapping algorithm,” vol. 36, no. 1, pp. 49–62, 1988.