Augmenting Light Field to model Wave Optics effects

07/09/2009
by   Se Baek Oh, et al.
MIT

The ray–based 4D light field representation cannot be directly used to analyze diffractive or phase–sensitive optical elements. In this paper, we exploit tools from wave optics and extend the light field representation via a novel "light field transform". We introduce a key modification to the ray–based model to support the transform: we insert a "virtual light source", with potentially negative valued radiance for certain emitted rays. We create a look-up table of light field transformers for canonical optical elements. The two key conclusions are that (i) in free space, the 4D light field completely represents wavefront propagation via rays with real (positive as well as negative) valued radiance, and (ii) at occluders, a light field transform composed of light field transformers plus the insertion of (ray–based) virtual light sources represents the resultant phase and amplitude of wavefronts. For free–space propagation, we analyze different wavefronts and coherence possibilities. For occluders, we show that the light field transform is simply based on a convolution followed by a multiplication operation. This formulation brings powerful concepts from wave optics to computer vision and graphics. We show applications in cubic phase plate imaging and holographic displays.


1 Introduction

The light field (LF) is a four–variable parameterization of the plenoptic function [29, 23], describing the radiance of a ray propagating along the directions (θ, φ) at the position (x, y). The ray–based 4D LF representation is based on simple 3D geometric principles and has led to a range of new algorithms and applications, including digital refocusing, depth estimation, synthetic aperture, and glare reduction. However, the LF representation is inadequate to describe interactions with diffractive or phase–sensitive optical elements (i.e., phase masks or holography). In such cases, Fourier optics principles are often used to represent wavefronts with additional phase information. It is known that the Wigner distribution function (WDF) is a counterpart of the LF in wave optics. The WDF describes the local spatial frequency spectrum of light, where the local spatial frequency u corresponds to the ray angle θ.

Figure 1: (Top) Wavefront, WDF, and LF representations of a point source. (Middle) The WDF representation and (bottom) the LF representation of the wavefront at the pinhole plane and after propagation in Young’s experiment. In the WDF representation, an additional oscillatory term appears at the midpoint of the two point sources, which produces interference after propagation.

Figure 1 illustrates an important observation. In some cases, e.g., for a point object, the WDF and the LF exhibit identical properties. However, in other cases, e.g., in Young’s experiment, the WDF and LF differ at occluders as well as after finite propagation.

In this paper, we exploit relevant tools in wave optics and present an augmented LF framework to handle diffraction and phase–sensitive imaging. Figure 2 summarizes the key idea. We develop a simple toolbox to explain LF propagation through such materials. We introduce a notion of virtual light sources, for which the radiance of a ray is real (positive as well as negative) valued. We show that the augmented LF representation is sufficient to describe general wavefront propagation.

Figure 2: The augmented LF transformer concept. (Top) In free space, we make the WDF and LF equivalent (via negative radiance values). (Bottom) At a thin optical element, an LF transformer performs multiplication in the spatial domain and convolution in the angular domain.

For simplicity, we explain the light propagation in flat land (i.e., in the plane of the paper). In the flat land the LF and WDF are 2D functions. The same analysis applies to the real 3D world, where the LF and WDF are 4D functions.

1.1 Contribution

For a more comprehensive LF analysis, we adapt useful tools from wave optics and present an augmented LF propagation framework. Specific technical contributions are as follows:

  • We derive new LF propagation (for free–space) & transformation (for occluders) equations that are as powerful as traditional wave–optics techniques.

  • We observe the following two facts: i) In free–space propagation, the WDF and LF are equivalent for any coherence state. ii) For interaction with thin elements in the optical path, transforms of incident LF plus virtual light sources produce an exact solution for the resultant LF.

  • We show applications of the formulation in devices for which LF analysis has not been used before, such as interferometers, phase plates, and holographic displays.

We hope to inspire researchers comfortable with ray–based analysis to start exploiting more complex optical elements. They can use well–understood LF concepts plus the new ray–based tools we have introduced, without worrying about complex Fourier–domain calculus common in wave–optics.

Figure 3: Comparison of the LF, ALF, and WDF. The LF formulation lacks phase properties and its radiance is always positive, whereas the ALF supports partially coherent light and diffraction by introducing LF transformers and virtual light sources.

1.2 Scope and limitation

Since we intend to model diffraction and phase–related optical phenomena, the coherence of light should be properly treated. Conceptually, coherence indicates the ability of light to interfere. Coherent light, e.g., a wavefront from a laser, has deterministic phase relations over the entire wavefront, whereas incoherent light has completely random phase relations. A partially coherent state is any coherence state between the completely coherent and incoherent ones. The spatial frequency spectrum of incoherent light extends, in principle, to the evanescent cut–off; the propagation direction is not well defined, or equivalently, incoherent light propagates along all directions. Hence, incoherent light may be regarded as a superposition of an infinite number of plane waves with random phase delays. Although the WDF can be defined for partially coherent light, our formulations and representations mostly deal with coherent light, because the coherent formulation is easier to understand and extends to partially coherent and incoherent light in a straightforward fashion. Hence, in this paper we only briefly mention the significance of the coherence state and explain its effect on the proposed formulations. A more rigorous description of coherence can be found in [21].

Throughout this paper, we consider linearly polarized monochromatic light in the paraxial regime for simplicity. Introducing an additional dimension for the wavelength extends our formulations to polychromatic light. The paraxial approximation, a common assumption in wave–optics theory as well as in most practical cases, simplifies the mathematical descriptions, especially the LF transformer in Sec. 2.3. For more rigorous polarization analysis, matrix or tensor methods can be employed as in [2]. We also consider free–space propagation in a homogeneous medium and neglect any non–linear optical effects.

As the virtual light sources and LF transformers are still based on ray representations, our model has the same limitations as other ray–based models; if there is a caustic or singularity in the system, our model would not provide accurate results.

1.3 Related Work

Light fields and shield fields: Light fields were proposed by Levoy and Hanrahan [29] and Gortler et al. [23] to characterize the propagation of rays. Several camera platforms have been developed for capturing light fields: Ives [26] and Lippmann [30] used an array of pinholes, Wilburn et al. [46] used camera arrays, and Georgiev et al. [20] put both prisms and lenses in front of a camera. Adelson and Wang [1] and Ng et al. [33] devised plenoptic cameras consisting of a single lens and a lenslet array. Instead of the lenslet array, a sinusoidal attenuating mask was used in the heterodyne camera [43]. Besides light field capture systems, light transport, cast shadows, and the light field in the frequency domain have been studied by Chai et al. [14], Isaksen et al. [25], and Durand et al. [18]. Shield fields were introduced to analyze attenuation of rays through occluders [28].

Wigner distribution function: The WDF was originally proposed by Wigner in quantum mechanics [45]. In optics, the WDF of light (the electric field) contains both the space and local spatial frequency information. The WDF has been exploited in the analysis and design of various optical systems: 3D displays [19], digital holography [31, 38], generalized sampling problems [38], and superresolution [49]. The WDF can also be defined for partially coherent light [6] and for thin optical elements such as a lens, phase mask, aperture, or grating [5, 8, 3]. The ambiguity function, the Fourier transform pair of the WDF, corresponds to the LF in the frequency domain and has been used in understanding wavefront coding systems [13, 17]. More details of the WDF can be found in Ref. [7].

In the optics community, many researchers have tried to connect radiometry and wave optics. One notable concept is the generalized radiance suggested by Walther [44]. Wolf investigated it extensively and summarized its physical meaning and limitations in [47]. In the computer vision and graphics communities, Zhang and Levoy recently reviewed this connection [53]. The WDF is only one kind of phase–space distribution function; many others have been proposed, and the angle–impact Wigner function [48] and the Choi–Williams distribution function [16] could potentially be employed in developing new systems and algorithms in computer vision and graphics.

Ray–based models for diffraction: Many different theories have been proposed to model diffraction in the context of ray optics. The geometrical theory of diffraction (GTD) is widely known [27]. To model diffraction at an edge, the GTD exploits various laws of diffraction and computes diffraction coefficients. Since the augmented LF utilizes the WDF, which is based on wave optics, diffraction is automatically taken into account. More importantly, the augmented LF is implemented within the LF framework. Hence, the augmented LF is much more convenient than the GTD and provides greater versatility to researchers in the computer graphics and vision communities.

2 Augmented light field propagation framework

The LF is the radiance of a ray parameterized with a position x and an angle θ. In wave optics, light is often described by the amplitude and phase of the electric field. The wavefront is defined as a surface of constant phase in the electric field. A wavefront and its WDF are shown in Fig. 4. Rays are always normal to the wavefront, and the phase of the wavefront is encoded in the local spatial frequency u. The propagation angle of a ray and the local spatial frequency are related by θ = λu [22], where λ denotes the wavelength. Hence, in free space without any light interference, the LF representation is complete and contains the phase information of the wavefront.

Figure 4: Visualization of a wavefront and its Wigner distribution function. Rays are normal to the wavefront and the phase of the wavefront is equivalent to the local spatial frequency in the Wigner representation.
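As a quick numerical illustration of the relation θ = λu above (a minimal sketch with assumed values, not code from the original work), the local spatial frequency of a tilted plane wave can be read off its phase gradient and converted to a ray angle:

```python
import numpy as np

# Minimal numerical check of the paraxial relation theta = lambda * u:
# a plane wave tilted by a small angle theta0 carries the local spatial
# frequency u0 = theta0 / lambda.  All values are illustrative assumptions.
wavelength = 0.5e-6                  # 500 nm
theta0 = 1e-3                        # 1 mrad tilt (paraxial)
u0 = theta0 / wavelength             # expected local spatial frequency [1/m]

x = np.linspace(-1e-3, 1e-3, 4096)           # 2 mm wide window (flatland)
field = np.exp(1j * 2 * np.pi * u0 * x)      # tilted plane wave

# local spatial frequency from the phase gradient: u(x) = (1/2pi) d(phase)/dx
phase = np.unwrap(np.angle(field))
u_est = np.gradient(phase, x) / (2 * np.pi)
theta_est = wavelength * u_est.mean()

print(f"expected ray angle {theta0:.3e} rad, recovered {theta_est:.3e} rad")
```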

The Wigner distribution function of an input g(x) is defined as

W(x, u) = \int g\!\left(x + \tfrac{x'}{2}\right) g^{*}\!\left(x - \tfrac{x'}{2}\right) e^{-i 2\pi u x'}\, dx', (1)

where g can be either the electric field or the transmittance of an optical element. Projecting the WDF along the u–axis yields the intensity, just as projecting the LF along the θ–axis does.
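The definition in eq. (1) and its projection property are easy to verify numerically. The sketch below is our own illustration; the function name wdf and all grid choices are assumptions. It evaluates a discrete WDF of a sampled 1D field and checks that integrating over u recovers the intensity:

```python
import numpy as np

def wdf(g, dx):
    """Discrete Wigner distribution of a sampled 1D complex field g (pitch dx).
    Returns (W, u): W[i, m] over position x_i and local spatial frequency u_m.
    A sketch: the correlation product is evaluated at shifts x' = 2*s*dx, so the
    factor 2*dx below plays the role of dx' in eq. (1)."""
    n = g.size
    W = np.zeros((n, n))
    for i in range(n):
        corr = np.zeros(n, dtype=complex)
        for k in range(n):
            s = k - n // 2
            if 0 <= i + s < n and 0 <= i - s < n:
                corr[k] = g[i + s] * np.conj(g[i - s])
        W[i] = (np.fft.fftshift(np.fft.fft(np.fft.ifftshift(corr))) * (2 * dx)).real
    u = np.fft.fftshift(np.fft.fftfreq(n, d=2 * dx))
    return W, u

# projection property: integrating W over u recovers the intensity |g(x)|^2
x = np.linspace(-1.0, 1.0, 256)
g = np.exp(-x**2 / 0.05) * np.exp(1j * 40 * x)    # Gaussian beam with a linear phase tilt
W, u = wdf(g, x[1] - x[0])
intensity = W.sum(axis=1) * (u[1] - u[0])
print("projection equals |g|^2:", np.allclose(intensity, np.abs(g)**2))
```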

Figure 5: Comparison of the WDF and LF for a few simple light sources. The LF and the WDF exhibit identical behavior.

Figure 5 shows the wavefront, the WDF, and the LF for a point source, a plane wave, a spherical wave, and incoherent light. The WDF and LF exhibit identical representations for these cases. Note that the top three cases shown in Fig. 5 are coherent, while the fourth represents incoherent light with a lateral radiance variation.

2.1 Limits of Light Field Analysis

The LF is limited for elements showing diffraction or phase–sensitive behavior (i.e., phase gratings or holography). As an example, Young’s experiment (two pinholes illuminated by a laser) is analyzed with both the WDF and the LF, as shown in Fig. 6.

Figure 6: The WDF and LF representations of Young’s experiment, where the third light in the WDF produces interference. If the light is incoherent, even in the WDF, the third light diminishes and both representations predict no interference. (Color code in the WDF; red: positive, blue: negative)

In the WDF representation, the two point sources (from the pinholes) produce three components: two at the pinholes’ locations and a third at the midpoint between the two pinholes. The last component is called an interference term and is obtained by mathematical manipulation of the WDF [11]. For infinitesimally small pinholes, the transmittance of the two pinholes is given by

t(x) = \delta(x - x_1) + \delta(x - x_2), (2)

where x_1 and x_2 denote the locations of the pinholes. Then, t(x + x'/2)\, t^{*}(x - x'/2) is expanded as

t\!\left(x + \tfrac{x'}{2}\right) t^{*}\!\left(x - \tfrac{x'}{2}\right) = \delta(x - x_1)\,\delta(x') + \delta(x - x_2)\,\delta(x') + \delta(x - \bar{x})\,\delta(x' - \Delta x) + \delta(x - \bar{x})\,\delta(x' + \Delta x), (3)

where two new variables are defined as \bar{x} = (x_1 + x_2)/2 and \Delta x = x_1 - x_2. Taking the Fourier transform of eq. (3) with respect to x' computes the WDF of the two pinholes as

W(x, u) = \delta(x - x_1) + \delta(x - x_2) + 2\,\delta(x - \bar{x}) \cos(2\pi u\, \Delta x). (4)

As shown in eq. (4), the last cosine term oscillates between positive and negative values along the u–axis, and thus it does not contribute to the intensity. In the LF description, only two light fields exist. As the light propagates, both the WDF and the LF are sheared along the x–direction. The intensity of the fringes is computed by projection along either u (WDF) or θ (LF); no interference is produced in the LF.

If the light is incoherent, then neither the WDF nor the LF representation has the interference term. As described earlier, incoherent light can be considered as an infinite number of plane waves propagating along all directions with random phase delays. If the two pinholes are probed by incoherent light, the three components of eq. (4) are replicated an infinite number of times with all possible offsets along the u–axis, and the interference terms average out. This becomes clear with incoherent light and the LF transformer of the two pinholes described in Sec. 2.3.
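The coherent case can be worked out numerically with the augmented LF of eq. (4): the two deltas and the oscillating midpoint term are sheared by free–space propagation and projected over angle, reproducing the familiar fringes of period λz/|x₁ − x₂|. The sketch below does that shear-and-project step analytically; the pinhole positions, propagation distance, and grid are illustrative assumptions:

```python
import numpy as np

# Young's two-pinhole experiment via the augmented LF of eq. (4).  The three
# components -- delta(x - x1), delta(x - x2), and the oscillating virtual source
# 2*cos(2*pi*u*(x1 - x2)) at the midpoint -- are sheared by free-space
# propagation (x -> x + lambda*z*u) and projected over u.  The closed forms
# below are simply that shear-and-project step done analytically.
wavelength = 0.5e-6
x1, x2 = -0.25e-3, 0.25e-3              # pinhole positions
d = x1 - x2                             # separation
xm = 0.5 * (x1 + x2)                    # midpoint (virtual source location)
z = 1.0                                 # distance to the screen

x = np.linspace(-5e-3, 5e-3, 2001)      # screen coordinate

# each sheared delta projects to a constant 1/(lambda*z); the sheared virtual
# source projects to the oscillating fringe term
baseline = 2.0 / (wavelength * z)
fringes = (2.0 / (wavelength * z)) * np.cos(2 * np.pi * d * (x - xm) / (wavelength * z))

I_lf_only = np.full_like(x, baseline)   # classic LF: two rays, no interference
I_augmented = baseline + fringes        # augmented LF: interference restored

print(f"expected fringe period lambda*z/|d| = {wavelength * z / abs(d) * 1e3:.3f} mm")
print(f"LF-only intensity range      : {I_lf_only.min():.3e} .. {I_lf_only.max():.3e}")
print(f"augmented LF intensity range : {I_augmented.min():.3e} .. {I_augmented.max():.3e}")
```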

2.2 Virtual light sources

To rigorously use the LF description for diffraction and interference, the interference term should be included. Here, we expand the LF framework by introducing the concept of virtual light sources, which may have negative radiance. For the two pinholes at x_1 and x_2 shown in Fig. 7, the virtual light source is located at the midpoint (x_1 + x_2)/2 and its radiance oscillates as 2cos(2πuΔx) along the angular axis. Since the intensity is obtained by integrating the augmented LF along the θ–axis, the virtual light source does not affect the intensity at the pinhole plane, which agrees with physical observation and intuition. Once the virtual sources are included in the LF, the LF propagation can still be used and interference is properly modeled by the augmented LF.

Figure 7: Concept of virtual light sources for coherent light. In the LF representation, no interference is predicted. By including the virtual light sources, the LF propagation still can be used.

Note that, as the definition in eq. (1) implies, computing the WDF of an optical element amounts to locating the virtual light sources for all possible pairs of points on the element.
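A small sketch makes this pairwise picture concrete (an illustration with an assumed aperture layout, not taken from the paper): for a mask made of a few point apertures, each point contributes a positive ridge at its own location, each pair contributes an oscillating virtual source at its midpoint, and projecting over u leaves intensity only at the real apertures.

```python
import numpy as np

# Pairwise construction of the WDF of a mask made of point apertures: each
# point contributes a positive ridge at its own location, and each pair of
# points contributes an oscillating virtual source 2*cos(2*pi*u*(xi - xj)) at
# the pair's midpoint (cf. eq. (4)).  The aperture layout and grids below are
# illustrative assumptions.
xs = np.array([-0.5, -0.1, 0.4])             # three point apertures (flatland)
x = np.linspace(-1.0, 1.0, 801)              # position axis
u = np.linspace(-20.0, 20.0, 401)            # local spatial frequency axis

def nearest(axis, value):
    return int(np.argmin(np.abs(axis - value)))

W = np.zeros((x.size, u.size))
for i, xi in enumerate(xs):
    for j, xj in enumerate(xs):
        if i == j:                            # real source
            W[nearest(x, xi), :] += 1.0
        elif i < j:                           # virtual source at the midpoint
            W[nearest(x, 0.5 * (xi + xj)), :] += 2.0 * np.cos(2 * np.pi * u * (xi - xj))

# projecting over u: the oscillating ridges integrate to (nearly) zero, so the
# intensity at the mask plane is concentrated at the real apertures only
du = u[1] - u[0]
intensity = W.sum(axis=1) * du
pairs = [(i, j) for i in range(len(xs)) for j in range(len(xs)) if i < j]
print("at apertures :", np.round(intensity[[nearest(x, v) for v in xs]], 3))
print("at midpoints :", np.round(intensity[[nearest(x, 0.5 * (xs[i] + xs[j])) for i, j in pairs]], 3))
```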

2.3 Light field transformer

Next we model the LF propagation through an optical element with a LF transformer.

Figure 8: Angle shift invariance in a thin transparency. In (a) and (b), the output rays rotate in the same fashion as the input ray rotates, which allows the LF transformer to be described with a single angular argument.

As shown in Fig. 8, in the LF transformer model, a thin optical element is probed by an input LF l_in(x, θ), and an output LF l_out(x', θ') is generated. In the most general situation, the relation between the input and output LFs is

l_{out}(x', \theta') = \iint T(x, \theta;\, x', \theta')\, l_{in}(x, \theta)\, dx\, d\theta, (5)

where T denotes the LF transformer of the optical element. Equation (5) indicates that the optical element introduces a 4D transform (8D in the real world) from (x, θ) to (x', θ') space.

For a thin optical element, x' = x. Then, eq. (5) becomes

l_{out}(x, \theta') = \int T(x;\, \theta, \theta')\, l_{in}(x, \theta)\, d\theta, (6)

where a 3D transform (6D in the real world) is involved. Most thin optical elements exhibit angle shift invariance in the paraxial region; e.g., consider an optical element that produces three rays from an incoming ray of incident angle θ at position x (Fig. 8(a)). As the input ray rotates, the three output rays also rotate in the same manner, as shown in Fig. 8(b). Hence, the LF transformer is sufficiently described by only one angular argument, and eq. (6) is further simplified as

l_{out}(x, \theta') = \int T(x, \theta' - \theta)\, l_{in}(x, \theta)\, d\theta. (7)

The LF transformer is then a 2D function in the flat land (4D in the real world), as shown in Fig. 8(c). Equation (7) is particularly interesting because it involves a multiplication along the x–axis but a convolution along the θ–axis. Note that the LF transformer can be computed via the WDF, and we present the LF transformers of canonical elements in Sec. 4.1 and Sec. 4.2.

When a ray passes through an optical element without bending, and only its radiance is attenuated, eq. (7) becomes even simpler:

l_{out}(x, \theta) = s(x)\, l_{in}(x, \theta), (8)

where s(x) is the shield field, describing attenuation by occluders [28].

Claim 1: At a thin interface, l_out is expressed by a special operation on l_in and a 4D LF transformer T, in which the operation is a convolution along the θ–axis and a multiplication along the x–axis.
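A discrete version of eq. (7) is straightforward to implement: the transformer acts as a per–position kernel that is convolved along the angle axis. The sketch below (assumed grids and an FFT–based circular convolution, not the authors' implementation) checks two special cases: a prism–like transformer δ(θ − θ₀) that deflects every ray by θ₀, and a shield–field transformer s(x)δ(θ) that only attenuates, as in eq. (8).

```python
import numpy as np

def apply_lf_transformer(l_in, T, dtheta):
    """Discrete eq. (7): multiplication along x, convolution along theta.
    l_in and T have shape (nx, ntheta); row i of T acts only on row i of l_in.
    A sketch using an FFT-based circular convolution (assumes the LF decays to
    zero well inside the angular window)."""
    L = np.fft.fft(l_in, axis=1)
    K = np.fft.fft(np.fft.ifftshift(T, axes=1), axis=1)   # kernel centered at theta = 0
    return np.real(np.fft.ifft(L * K, axis=1)) * dtheta

# two special cases: a "prism" transformer delta(theta - theta0) deflects every
# ray by theta0, and a shield field s(x)*delta(theta) only attenuates.
# Grids and parameter values are illustrative assumptions.
nx, nt = 64, 129
x = np.linspace(-1.0, 1.0, nx)
theta = np.linspace(-0.2, 0.2, nt)
dtheta = theta[1] - theta[0]

l_in = np.exp(-x[:, None]**2 / 0.1) * np.exp(-theta[None, :]**2 / 1e-3)

theta0 = 0.05
prism = np.zeros((nx, nt))
prism[:, np.argmin(np.abs(theta - theta0))] = 1.0 / dtheta
shield = np.zeros((nx, nt))
shield[:, np.argmin(np.abs(theta))] = (np.abs(x) < 0.5) / dtheta

l_prism = apply_lf_transformer(l_in, prism, dtheta)
l_shield = apply_lf_transformer(l_in, shield, dtheta)

print(f"input peak angle : {theta[l_in.sum(axis=0).argmax()]:+.4f} rad")
print(f"prism peak angle : {theta[l_prism.sum(axis=0).argmax()]:+.4f} rad   (deflected by ~theta0)")
print("shield blocks |x| >= 0.5 :", np.allclose(l_shield[np.abs(x) >= 0.5], 0.0))
```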

3 Propagation in Free Space

In wave optics, free–space propagation is described by Fresnel diffraction [22]. Applying the WDF to the Fresnel diffraction formula, we obtain the free–space propagation relation in the WDF framework, which is the x–shear transform [7], just as for the LF propagation.

Claim 2: For free–space propagation, the WDF and LF exhibit the identical x–shear transform.

In the case of far–zone diffraction (Fraunhofer diffraction), free–space propagation becomes a Fourier transform, under which the LF rotates by 90 degrees. The fractional Fourier transform [32, 34] describes any propagation between the near and far zones more rigorously, where the WDF and LF both rotate by an amount set by the propagation distance. For all coherence states, free–space propagation is identically described by the x–shear transform.

Figure 9: Two more representations of free–space light propagation
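Numerically, the x–shear is just a per–angle resampling of the LF or WDF. A minimal sketch (assumed grids; linear interpolation) shows a point source spreading into a cone of rays after propagation:

```python
import numpy as np

def shear_x(lf, x, theta, z):
    """Free-space propagation of a LF/WDF as the x-shear l_z(x, th) = l_0(x - z*th, th).
    Linear interpolation per angle column; grids are assumptions of this sketch."""
    out = np.empty_like(lf)
    for j, th in enumerate(theta):
        out[:, j] = np.interp(x - z * th, x, lf[:, j], left=0.0, right=0.0)
    return out

x = np.linspace(-2e-3, 2e-3, 401)        # position [m]
theta = np.linspace(-2e-3, 2e-3, 201)    # angle [rad]
dx = x[1] - x[0]

# point source at x = 0: all angles, one position (a narrow Gaussian stands in for a delta)
lf0 = np.exp(-x[:, None]**2 / (2 * (2 * dx)**2)) * np.ones((1, theta.size))
lf_z = shear_x(lf0, x, theta, z=0.5)     # propagate 0.5 m

def width_mm(I):                         # full width at half maximum, in mm
    idx = np.where(I > 0.5 * I.max())[0]
    return (x[idx[-1]] - x[idx[0]]) * 1e3

I0, Iz = lf0.sum(axis=1), lf_z.sum(axis=1)
print(f"projected width: {width_mm(I0):.3f} mm at z = 0, {width_mm(Iz):.3f} mm at z = 0.5 m")
```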

4 Propagation via masks

For a thin optical element with an amplitude mask a(x) and a phase mask φ(x), the LF transformer is computed by the WDF, i.e., by applying eq. (1) to the transmittance t(x) = a(x)e^{iφ(x)}. For convenience, we pre–compute the LF transformers of commonly used amplitude and phase masks in the following sections. As described in Sec. 2.3, an incident LF interacts with an LF transformer, presented in Tables 1 and 2, and we can predict the outgoing LF by eq. (7). The virtual light sources are automatically introduced in this transform. For example, for the two pinholes, the third term oscillating at the midpoint of the pinholes is the virtual light source.

4.1 Propagation via Amplitude Masks

Figure 10: Comparison of the WDF and LF of lights for various amplitude masks. The WDF columns represent both the LF transformers of the masks and the output LF with virtual light sources.

Figure 10 shows some commonly used amplitude masks, where the WDF column represents the WDF (or LF transformer) of the elements and the LF column represents the traditional LFs. Interestingly, the LF transformer of a thin optical element is identical to the output LF when the optical element is probed by a plane wave propagating normal to the x–axis, because the LF of such a plane wave is δ(θ); the convolution along the θ–axis then produces the LF transformer itself.

Table 1: LF transformers of amplitude masks: one pinhole, two pinholes, rectangular aperture (Λ denotes the triangle function defined in [22]: Λ(x) = 1 − |x| if |x| ≤ 1, and 0 otherwise), amplitude grating, and coded aperture.

4.2 Propagation via Phase Masks

We also compute the LF transformers of various phase masks and show them in Fig. 11.

Figure 11: Comparison of the WDF and LF for various phase masks. The light field transformer can be computed easily from the transparency function t(x).

For optical elements with slowly varying phase whose complex transmittance is defined as t(x) = e^{iφ(x)}, the LF transformer is given by [9]

T(x, \theta) \approx \delta\!\left(\theta - \frac{\lambda}{2\pi}\,\frac{d\phi(x)}{dx}\right). (9)

This explains why the LF transformer of a cubic phase mask is a quadratic curve in Sec. 4.2.

Table 2: LF transformers of phase masks: linear phase (prism), quadratic phase (lens), cubic phase, phase grating (J_n denotes the Bessel function of the first kind of order n), and phase plate (assuming a slowly varying phase).
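The quadratic shape of the cubic–mask transformer follows directly from eq. (9): for φ(x) = αx³ the ridge sits at θ(x) = (3αλ/2π)x². A short numerical check (α and the grid are illustrative assumptions, not values from the paper) confirms this:

```python
import numpy as np

# Check of eq. (9) for a cubic phase mask phi(x) = alpha * x**3: the transformer
# ridge theta(x) = (lambda / 2pi) * dphi/dx = (3 * alpha * lambda / 2pi) * x**2
# is a quadratic curve in the (x, theta) plane.
wavelength = 0.5e-6
alpha = 1e7                                  # cubic phase strength [rad / m^3]

x = np.linspace(-1e-3, 1e-3, 1001)
phi = alpha * x**3
dphi_dx = np.gradient(phi, x, edge_order=2)  # numerical derivative of the phase

deflection = wavelength / (2 * np.pi) * dphi_dx            # ridge from eq. (9)
expected = 3 * alpha * wavelength / (2 * np.pi) * x**2     # analytic quadratic

print("ridge is quadratic in x :", np.allclose(deflection, expected, rtol=1e-3))
print(f"deflection at the mask edge: {deflection.max() * 1e3:.4f} mrad")
```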

5 Results

We demonstrate how to use the LF transformer and the augmented LF propagation for three specific systems.

5.1 Airy pattern in a single lens imager

For the single lens imager shown in Fig. 12, the ideal point spread function is an infinitesimally small point if diffraction is ignored. However, it is well known that the point spread function is actually the Airy disk due to diffraction by the aperture. Here we explain how the augmented LF allows us to model this diffraction.

A lens with focal length f and aperture size a can be decomposed into a pure phase mask of quadratic phase variation (i.e., a quadratic change in optical path length as a function of x) and an amplitude mask of a rectangular aperture, as shown in Fig. 12(a). Figure 12(b) shows how the augmented LF changes throughout the system. The LF of a point source at the object plane is a vertical line in the (x, θ) plane and is sheared along the x–axis by the propagation to the lens. By the LF transformer shown in Fig. 12(a), the augmented LF transmitted through the lens is a tilted blob with some negative radiance values; the quadratic phase of the lens induces the tilt, and the finite aperture produces the radiance variations. Then, the augmented LF is sheared again along the x–axis by the second propagation to the image plane. Integrating the augmented LF along the θ–axis, we obtain the intensity of the point spread function, which is the Airy pattern in the flat land.

Figure 12: Point spread function (Airy pattern) of a single lens imager. (a) LF transformers of the phase and amplitude components of a lens with focal length f, (b) LF shape as it propagates through the system. Note that due to the finite size of the aperture, the PSF is the Airy pattern.
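The pipeline of Fig. 12(b) can be sketched numerically in flatland: take the WDF (augmented LF, with negative values allowed) of the field just behind the lens and aperture for an in–focus on–axis point, apply the x–shear for the propagation to the image plane, and project over angle. The result should approach the flatland Airy pattern sinc²(ax/λz₂), where z₂ denotes the lens–to–image distance; all numerical values below are illustrative assumptions.

```python
import numpy as np

# Flatland sketch of Fig. 12(b): augmented LF (the WDF with negative values
# allowed) of the field just behind the lens + aperture for an in-focus on-axis
# point source, x-shear to the image plane, projection over angle.
wavelength = 0.5e-6
a = 1e-3                       # aperture width [m]
z2 = 0.1                       # lens-to-image distance [m], lens law assumed satisfied

n = 1024
x = np.linspace(-1.5e-3, 1.5e-3, n)
dx = x[1] - x[0]

# field just behind lens + aperture: aperture-limited wave converging to the image point
g = (np.abs(x) <= a / 2) * np.exp(-1j * np.pi * x**2 / (wavelength * z2))

# discrete WDF (same construction as the sketch in Sec. 2, vectorized over shifts)
s = np.arange(-n // 2, n // 2)
W = np.zeros((n, n))
for i in range(n):
    corr = np.zeros(n, dtype=complex)
    valid = (i + s >= 0) & (i + s < n) & (i - s >= 0) & (i - s < n)
    corr[valid] = g[i + s[valid]] * np.conj(g[i - s[valid]])
    W[i] = (np.fft.fftshift(np.fft.fft(np.fft.ifftshift(corr))) * (2 * dx)).real
u = np.fft.fftshift(np.fft.fftfreq(n, d=2 * dx))

# free-space propagation to the image plane (x-shear by lambda*z2*u), then project
W_img = np.empty_like(W)
for j, uj in enumerate(u):
    W_img[:, j] = np.interp(x - wavelength * z2 * uj, x, W[:, j], left=0.0, right=0.0)
psf = W_img.sum(axis=1) * (u[1] - u[0])

analytic = np.sinc(a * x / (wavelength * z2))**2       # np.sinc(t) = sin(pi t)/(pi t)
print("max |PSF - sinc^2| after normalization:",
      np.abs(psf / psf.max() - analytic).max())        # discrepancy comes from discretization
```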

5.2 Wavefront coding system (Cubic phase mask)

We apply the same procedure to a wavefront coding system: specifically, a cubic phase mask imager for extending the depth of field [13, 17]. With the cubic phase mask, rays experience different focal lengths depending on their positions at the mask, and they bend differently compared to those in the single lens imager. The cubic phase mask reshapes the point spread function to be invariant to defocus; thus deconvolution in post–processing is much more robust and the depth of field in processed images is extended.

As shown in Fig. 13(a), the cubic phase mask imager has three LF transformers: a lens, rectangular aperture, and cubic phase mask. The LF transformer of the cubic phase mask is a quadratic curve. Figure 13(b) shows the system geometry and the behavior of the LF through the system.

Figure 13: Point spread function of the cubic phase mask imager. (a) LF transformers of a lens and a cubic phase mask, (b) LF shape as it propagates through the system. Note that the PSF is distorted but invariant to defocus.

Again we start with a point object, and the LF is sheared along the x–axis by the propagation to the lens. By the lens and the cubic phase mask, the transformed LF becomes a curved blob. After the second propagation, the augmented LF is transported to the image plane. Since the LF is not parallel to the θ–axis, the point spread function is distorted and has asymmetric tails.

If an object is defocused, then the LF of the object at the focus plane is not parallel to the θ–axis but is tilted. This sheared input LF also shears the output LF at the image plane along the x–direction; however, the intensity does not change significantly because the contributions from the upper and lower parts of the curved LF compensate each other. In the original derivations of the extended depth of field using the cubic phase mask [17, 13], the ambiguity function was exploited to represent the OTF for various amounts of defocus simultaneously. Finally, we note that various other phase masks have also been analyzed for extended depth of field [12, 35, 10, 36, 50, 52, 51, 4].
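A standard Fourier–optics cross–check of this defocus invariance (not the LF pipeline itself, and with illustrative parameter values of our own choosing) compares how much the MTF changes under defocus with and without the cubic term in the pupil:

```python
import numpy as np

# Pupil-based check of defocus invariance: compare how much the MTF changes under
# a defocus term psi*xi^2 with and without a cubic term alpha*xi^3 in the pupil
# phase.  alpha, psi and the grid are illustrative assumptions.
n = 4096
xi = np.linspace(-1.0, 1.0, n)        # normalized pupil coordinate (flatland)

def mtf(alpha, psi):
    """|OTF| (normalized) of a 1D pupil with cubic strength alpha and defocus psi [rad]."""
    pupil = np.exp(1j * (alpha * xi**3 + psi * xi**2))
    psf = np.abs(np.fft.fftshift(np.fft.fft(pupil, 4 * n)))**2   # zero-padded FFT -> PSF
    otf = np.abs(np.fft.fft(psf))
    return otf / otf[0]

def rms_change(alpha, psi=10.0):
    return np.sqrt(np.mean((mtf(alpha, psi) - mtf(alpha, 0.0))**2))

print(f"RMS MTF change under 10 rad defocus, plain aperture     : {rms_change(0.0):.4f}")
print(f"RMS MTF change under 10 rad defocus, cubic mask alpha=60 : {rms_change(60.0):.4f}")
```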

5.3 Hologram Transform of Light Field

In this section, given the LF transformer of a hologram, we show how to predict the output image. For simplicity, we choose a point source as the object. Thus, a spherical wave from the point source is the object wave, and a plane wave propagating parallel to the optical axis is the reference wave, as shown in Fig. 14(a).

Figure 14: Light field–based reconstruction from the hologram of a single Lambertian scene point. (Left in (a)) Recording geometry; (right in (a)) the LF transformer of the recorded hologram. (Top in (b)) Reconstructing the hologram with the reference wave: two diffracted waves are produced. (Bottom left and bottom right in (b)) The LFs of the two diffracted waves.

To obtain the LF transformer of the hologram, we first compute the transmittance of the hologram, which is proportional to the intensity of the interference between the object and reference waves. From the geometry, the electric fields of the reference and object waves in the paraxial region are given by

(10)
(11)

The intensity of the interference is proportional to

|E_r(x) + E_o(x)|^{2} = |E_r|^{2} + |E_o|^{2} + E_r E_o^{*} + E_r^{*} E_o. (12)

Here we ignored the DC terms, |E_r|^2 and |E_o|^2, because they are uniform and do not contain high–frequency signals. By substituting eqs. (10) and (11) into eq. (12) and evaluating at the hologram plane, the transmittance of the hologram is proportional to

(13)

Now we compute the WDF of eq. (13) as

(14)

where we have not shown the last cosine term in Fig. 14 because it is a higher order oscillation term spread over the entire space. Finally, the LF transformer of the hologram is

(15)

In the reconstruction, a replica of the reference wave probes the hologram. Since the LF of the reference wave is δ(θ), the LF of the reconstructed wave is shown in the bottom left of Fig. 14(b). Behind the hologram, two waves are produced: 1) a converging spherical wave, which is focused to a point behind the hologram, and 2) a diverging spherical wave, which appears to originate from a point at the location of the original object. If an observer looks at the hologram from the side opposite the original point object, at any distance, the diverging spherical wave produces a virtual image of the original point object. If a screen is placed at the focus of the converging wave, one also observes a real image. Note that the two components in the LF transformer of the hologram correspond to the virtual and real images, as shown in the right panel of Fig. 14(a).
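The recording/reconstruction geometry described above can be checked directly with a small Fresnel–propagation sketch in flatland: record the cosine transmittance produced by the interference of the plane reference and the paraxial spherical object wave from a point a distance z₀ in front of the plate, re–illuminate with the plane reference, and propagate behind the plate; the intensity peaks sharply at z = z₀, the real image. The distance z₀, the window, the bias term, and the sampling are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Flatland recording + reconstruction of the point-source hologram described above.
# Recording: keep the cosine (interference) term of the transmittance, plus a bias
# so the transmittance is non-negative.  Reconstruction: illuminate with the plane
# reference and Fresnel-propagate behind the plate; the intensity peaks sharply at
# z = z0 (the real image).
wavelength = 0.5e-6
z0 = 0.2                                       # point source to hologram distance [m]
n = 8192
x = np.linspace(-2e-3, 2e-3, n, endpoint=False)
dx = x[1] - x[0]
fx = np.fft.fftfreq(n, d=dx)

def fresnel(field, z):
    """Paraxial free-space propagation by the transfer-function method."""
    return np.fft.ifft(np.fft.fft(field) * np.exp(-1j * np.pi * wavelength * z * fx**2))

# recorded transmittance: bias + cosine of the path-length difference between the
# on-axis plane reference and the paraxial spherical object wave
t = 1.0 + np.cos(np.pi * x**2 / (wavelength * z0))

for z in (0.5 * z0, z0, 1.5 * z0):
    I = np.abs(fresnel(t, z))**2
    print(f"z = {z:.2f} m: peak intensity {I.max():8.1f}, on-axis {I[n // 2]:8.1f}")
```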

In this section we have simplified holography considerably. In practice, since coherent light is used in both recording and reconstruction, the high–order oscillation terms, often called cross–terms or interference terms in the optics community, should be considered as well. More rigorous analyses of holography with the WDF can be found in [37, 42].

6 Conclusion

Geometrical optics, commonly used in computer vision, and wave optics, used in diffraction and interference analysis, employ different formulations, yet they describe the same propagation of light in different contexts. There are many interesting complementary concepts in these two areas; e.g., the Shack–Hartmann sensor, used for sensing aberrations in a wavefront, is conceptually identical to, and uses the same optics as, plenoptic cameras that detect radiance along different angles. The similarity of the LF and the WDF could be one of the most fascinating connections between the computer vision/graphics and optics communities.

Whereas the LF describes many optical phenomena well, it is limited to incoherent light and is inadequate to describe diffraction and phase–sensitive optical elements. In this paper, we employed wave optics to broaden the scope of the LF, and the augmented LF can model diffraction and interference. For free–space propagation, the LF and the WDF exhibit the same x–shear transform. To account for diffraction due to either amplitude or phase variations, we introduced the concept of virtual light sources with varying, possibly negative, radiance along different angles. Once the virtual light sources are properly included in the augmented LF framework, one can continue to use LF propagation in any optical system. We also introduced the concept of the LF transformer, which describes the relation between the input and output LFs of an optical element. Assuming a thin transparency and angle shift–invariance, which holds for most conventional optical elements, the LF transformer is represented by a 2D lookup table in the flat land. We pre–computed the 2D lookup tables for canonical optical elements such as an aperture, a lens, gratings, and various phase masks. Since these results already include the virtual sources, we are able to extend the LF framework to diffraction and phase–sensitive elements. The benefit is that optical systems previously considered beyond the analysis abilities of the LF, such as phase masks and holograms, can now be used in computer vision and graphics applications with simple modifications.

Although our examples are described with coherent light, incoherent and partially coherent light are of much more interest, because the WDF is more powerful for incoherent and partially coherent light than for coherent light [15, 41]. Moreover, most applications in computer vision and graphics deal with partially coherent and incoherent light. As described earlier, the augmented LF can be extended to partially coherent as well as incoherent light.

There are several future directions of exploration. One obvious application is rendering. Since the proposed formulation can incorporate geometric characteristics together with the wave properties of light, it would provide more realistic rendering results for surfaces involving diffraction and interference [39, 24, 40]. Compared to wave optics, geometric optics assumes infinitely small wavelengths, and we can create a continuum of solutions for the LF when this assumption is not valid. One can derive the LF transformer equation from the WDF by assuming infinitely small wavelengths. For the rectangular aperture shown in Fig. 10, as the wavelength decreases, the WDF of the aperture is squeezed down along the θ–axis and eventually becomes the purely attenuating shield field of the aperture. We would also like to explore LF transformers beyond thin transparencies or occluders with angle shift invariance. Other extensions will include volumetric objects with 6D or 8D transforms that will result in 3D or 4D lookup tables for optical elements.

Finally, our goal is not only to introduce the WDF to the computer vision and graphics communities but also to support more rigorous analysis of the augmented light field, which could lead to a new class of applications. We hope this work will inspire researchers in optics as well as in computer vision/graphics to develop new tools and algorithms based on a joint exploration of geometric and wave optics concepts; e.g., utilizing useful concepts and techniques from wave optics has already brought more realistic results and different data representations to computer graphics applications [54, 55].

Acknowledgement

We would like to thank Markus Testorf for very useful comments. We encourage readers to look at two special collections edited by him, which inspired us: “Selected papers on phase–space optics” (SPIE’s Milestone Series in Optical Science & Engineering) and “Phase–space representations in optics” (special issue of Applied Optics in 2008).

References

  • [1] E. H. Adelson and J. Y. A. Wang. Single lens stereo with a plenoptic camera. IEEE Transactions on Pattern Analysis and Machine Intelligence, 14:99–106, 1992.
  • [2] M. A. Alonso. Wigner functions for nonparaxial, arbitrarily polarized electromagnetic wave fields in free space. J. Opt. Soc. Am. A, 21(11):2233–2243, 2004.
  • [3] V. Arrizón and J. Ojeda-castañeda. Irradiance at Fresnel Planes of a Phase Grating. Journal of the Optical Society of America A, 9(10):1801–1806, 1992.
  • [4] S. Bagheri, P. E. X. Silveira, R. Narayanswamy, and D. P. de Farias. Analytical optical solution of the extension of the depth of field using cubic–phase wavefront coding. Part II. J. Opt. Soc. Am. A, 25(5):104–1074, 2008.
  • [5] M. J. Bastiaans. Wigner Distribution Function and Its Application to 1st-order Optics. Journal of the Optical Society of America, 69(12):1710–1716, 1979.
  • [6] M. J. Bastiaans. Application of the Wigner Distribution Function to Partially Coherent-Light. Journal of the Optical Society of America A, 3(8):1227–1238, 1986.
  • [7] M. J. Bastiaans. Application of the Wigner distribution function in optics. In W. M. F. Hlawatsch, editor, The Wigner Distribution - Theory and Applications in Signal Processing, pages 375–426. Elsevier Science, Amsterdam, 1997.
  • [8] M. J. Bastiaans and P. G. J. van de Mortel. Wigner distribution function of a circular aperture. Journal of the Optical Society of America A, 13(8):1698–1703, 1996.
  • [9] K. H. Brenner and J. Ojeda-castañeda. Ambiguity Function and Wigner Distribution Function Applied to Partially Coherent Imagery. Optica Acta, 31(2):213–223, 1984.
  • [10] N. Caron and Y. Sheng. Polynomial phase masks for extending the depth of field of a microscope. Applied Optics, 47(22):E39–E43, 2008.
  • [11] R. Castañeda. Phase space representation of spatially partially coherent imaging. Applied Optics, 47(22):E53–E62, August 2008.
  • [12] A. Castro and J. Ojeda-Castañeda. Asymmetric phase masks for extended depth of field. Applied Optics, 43(17):3474–3479, 2004.
  • [13] W. T. Cathey and E. R. Dowski. New paradigm for imaging systems. Applied Optics, 41(29):6080–6092, 2002.
  • [14] J.-X. Chai, S.-C. Chan, H.-Y. Shum, and X. Tong. Plenoptic sampling. In SIGGRAPH, pages 307–318, 2000.
  • [15] S. Cho, J. C. Petruccelli, and M. A. Alonso. Diffraction effects in Wigner functions for paraxial and nonparaxial fields. In Frontiers in Optics (FiO), volume OSA Technical Digest (CD), page FMK8. Optical Society of America, 2008.
  • [16] H.-I. Choi and W. J. Williams. Improved time–frequency representation of multicomponent signals using exponential kernels. In IEEE Transactions on Acoustics, Speech, and Signal Processing, volume 37, pages 862–871, 1989.
  • [17] E. R. Dowski and W. T. Cathey. Extended Depth of Field through Wave-Front Coding. Applied Optics, 34(11):1859–1866, 1995.
  • [18] F. Durand, N. Holzschuch, C. Soler, E. Chan, and F. X. Sillion. A frequency analysis of light transport. ACM Trans. Graph., 24(3):1115–1126, 2005.
  • [19] W. D. Furlan, M. Martínez-Corral, B. Javidi, and G. Saavedra. Analysis of 3-D Integral Imaging Displays Using the Wigner Distribution. Journal of Display Technology, 2(2):180–185, 2006.
  • [20] T. Georgiev, C. Zheng, S. Nayar, B. Curless, D. Salasin, and C. Intwala. Spatio-angular resolution trade-offs in integral photography. In EGSR, pages 263–272, 2006.
  • [21] J. W. Goodman. Statistical Optics. John Wiley and Sons, Inc., 2000.
  • [22] J. W. Goodman. Introduction to Fourier optics. Roberts & Co., Englewood, Colo., 3rd edition, 2005.
  • [23] S. Gortler, R. Grzeszczuk, R. Szeliski, and M. Cohen. The lumigraph. In SIGGRAPH, pages 43–54, 1996.
  • [24] H. Hirayama, Y. Yamaji, K. Kaneda, H. Yamashita, and Y. Monden. Rendering iridescent colors appearing on natural objects. In Computer Graphics and Applications, 2000. Proceedings. The Eighth Pacific Conference on, pages 15–433, 2000.
  • [25] A. Isaksen, L. McMillan, and S. Gortler. Dynamically reparameterized light fields. In SIGGRAPH, pages 297–306, 2000.
  • [26] H. Ives. Camera for making parallax panoramagrams. J. Opt. Soc. Amer., 17:435–439, 1928.
  • [27] J. B. Keller. Geometrical theory of diffraction. J. Opt. Soc. Am., 52(2):116–130, 1962.
  • [28] D. Lanman, R. Raskar, A. Agrawal, and G. Taubin. Shield Fields: Modeling and Capturing 3D Occluders. In SIGGRAPH ASIA’08, (Singapore), 2008.
  • [29] M. Levoy and P. Hanrahan. Light field rendering. In SIGGRAPH 96, pages 31–42, 1996.
  • [30] G. Lippmann. Epreuves reversible donnant la sensation du relief. J. Phys, 7:821–825, 1908.
  • [31] J. Maycock, C. P. McElhinney, B. M. Hennelly, T. J. Naughton, J. B. McDonald, and B. Javidi. Reconstruction of partially occluded objects encoded in three-dimensional scenes by using digital holograms. Applied Optics, 45(13):2975–2985, 2006.
  • [32] D. Mendlovic and H. M. Ozaktas. Fractional Fourier-Transforms and Their Optical Implementation .1. Journal of the Optical Society of America A, 10(9):1875–1881, 1993.
  • [33] R. Ng, M. Levoy, M. Bredif, G. Duval, M. Horowitz, and P. Hanrahan. Light Field Photography with a Hand-held Plenoptic Camera. Technical report, Stanford University, 2005.
  • [34] H. M. Ozaktas and D. Mendlovic. Fractional Fourier-Transforms and Their Optical Implementation .2. Journal of the Optical Society of America A, 10(12):2522–2531, 1993.
  • [35] A. Sauceda and J. Ojeda-Castañeda. High focal depth with fractional–power wave fronts. Optics Letters, 29(6):560–562, 2004.
  • [36] S. S. Sherif, W. T. Cathey, and E. R. Dowski. Phase plate to extend the depth of field of incoherent hybrid imaging systems. Applied Optics, 43(13):2709–2721, 2004.
  • [37] G. Situ and J. T. Sheridan. Holography: an interpretation from the phase–space point of view. Optics Letters, 32(24):3492–3494, 2007.
  • [38] A. Stern and B. Javidi. Sampling in the light of Wigner distribution. Journal of the Optical Society of America A, 21(3):360–366, 2004.
  • [39] Y. Sun. Rendering biological iridescences with rgb–based renderers. ACM Trans. Graph., 25(1):100–129, 2006.
  • [40] Y. Sun, F. D. Fracchia, M. S. Drew, and T. W. Calvert. Rendering iridescent colors of optical disks. In 11th EUROGRAPHICS Workshop on Rendering (EGRW), pages 341–352. Eurographics/ACM, 2000.
  • [41] M. Testorf. Analyse des Ionenaustauschprozesses in Glas im Hinblick auf eine Synthese refraktiver optischer Mikroelemente. PhD thesis, University of Erlangen, Germany, 1994.
  • [42] M. Testorf and A. W. Lohmann. Holography in phase space. Applied Optics, 47(4):A70–A77, 2008.
  • [43] A. Veeraraghavan, R. Raskar, A. Agrawal, A. Mohan, and J. Tumblin. Dappled photography: Mask enhanced cameras for heterodyned light fields and coded aperture refocusing. ACM Trans. Graph., 26(3):69:1–69:12, July 2007.
  • [44] A. Walther. Radiometry and Coherence. Journal of the Optical Society of America, 63(12):1622–1623, 1973.
  • [45] E. Wigner. On the quantum correction for thermodynamic equilibrium. Physics Review, 40:749–759, 1932.
  • [46] B. Wilburn, N. Joshi, V. Vaish, E.-V. Talvala, E. Antunez, A. Barth, A. Adams, M. Horowitz, and M. Levoy. High performance imaging using large camera arrays. ACM Trans. Graph., 24(3):765–776, 2005.
  • [47] E. Wolf. Coherence and Radiometry. Journal of the Optical Society of America, 68(1):6–17, 1978.
  • [48] K. B. Wolf, M. A. Alonso, and G. W. Forbes. Wigner functions for helmholtz wave fields. J. Opt. Soc. Am. A, 16(10):2476–2487, 1999.
  • [49] K. B. Wolf, D. Mendlovic, and Z. Zalevsky. Generalized Wigner function for the analysis of superresolution systems. Applied Optics, 37(20):4374–4379, 1998.
  • [50] Q. Yang, L. Liu, and J. Sun. Optimized phase pupil masks for extended depth of field. Optics Communications, 272(1):56–66, April 2007.
  • [51] Q. G. Yang, L. R. Liu, J. F. Sun, Y. J. Zhu, and W. Lu. Analysis of optical systems with extended depth of field using the Wigner distribution function. Applied Optics, 45(34):8586–8595, 2006.
  • [52] D. Zalvidea and E. E. Sicre. Phase pupil functions for focal–depth enhancement derived from a Wigner distribution function. Applied Optics, 37(17):3623–3627, 1998.
  • [53] Z. Zhang and M. Levoy. Wigner distributions and how they relate to the light field. In IEEE International Conference on Computational Photography, 2009.
  • [54] R. Ziegler, S. Bucheli, L. Ahrenberg, M. Magnor, and M. Gross. A bidirectional light field – hologram transform. Computer Graphics Forum, 26(3):435–446, 2007.
  • [55] R. Ziegler, S. Croci, and M. Gross. Lighting and occlusion in a wave–based framework. Computer Graphics Forum, 27(2):211–228, 2008.