I Introduction
Light Fields are commonly represented as 4-dimensional functions with 2 spatial and 2 angular dimensions [1, 2]. They can be seen as 2D arrays of images (called subaperture images), each having an unlimited depth of field, and differing from their neighbour images only by a slight shift of the view angle. The sampling in the angular dimensions is key in Light Field imaging [3]. In particular, densely sampled Light Fields make it possible to directly render images with shallow depth of field while controlling the focus depth. Such rendering, often referred to as Light Field refocusing, does not require knowledge of the scene’s geometry. It is usually performed either by shifting and averaging the subaperture images [4] or by selecting a 2D slice in the 4D Fourier domain [5]. However, a dense angular sampling comes at the expense of very high requirements in terms of capture, storage, processing power and memory. An overly sparse angular sampling, on the other hand, does not allow for a smooth transition between viewpoints and causes angular aliasing in the refocused images, characterized by sharp structures in the out-of-focus regions. The importance of a dense angular sampling is clearly shown by the vast literature on viewpoint interpolation. Several approaches exist, including depth-image-based rendering techniques [6, 7, 8, 9, 10, 11], deep learning methods either exploiting a depth map estimation [12, 13] or not [14, 15], and approaches leveraging sparsity priors of the Light Field data in a transformed domain [16, 17, 18, 19]. However, although viewpoint interpolation greatly simplifies the capture of dense Light Fields, it also increases the amount of data to store and process for the final rendering application.

Alternatively, Light Fields can be represented as a focal stack, that is, a set of shallow depth of field images (e.g. photos taken with a wide aperture) with different focusing depths. This representation has the advantage of allowing an unlimited angular density with few images because the sampling is performed on the depth dimension instead of the 2 angular dimensions. However, for rendering tasks such as simulating a different camera aperture size or shape, or a change of viewpoint, the common approach is to first convert the focal stack into the 4D representation. For instance, Levin and Durand [20] retrieve subaperture images by the deconvolution of shifted and averaged focal stack images. A similar deconvolution technique is used in [21] to first synthesize the 4D Light Field from a focal stack in order to render images with arbitrary aperture shapes. More recent methods have also been proposed to reconstruct the 4D Light Field from a focal stack either in the spatial domain using depth from focus [22], or via optimization in the Fourier domain [23, 24].
In this paper, we propose the Fourier Disparity Layer (FDL) representation illustrated in Fig. 1. It can be easily constructed either from a 4D Light Field, a focal stack, or even a hybrid Light Field combining subaperture and wide aperture images with varying parameters (i.e. focusing depth, aperture, point of view). Given the 2D Fourier transform of each input image with their parameters (i.e. angular coordinates, aperture, focus), the FDL model is constructed using linear optimization. For each frequency component, a linear least squares problem is solved to determine the corresponding Fourier coefficients of the different layers, each layer being associated with a given disparity value. The layers can then be directly used for real-time rendering. For instance, subaperture images are obtained by shifting the layers proportionally to their associated disparity value and by averaging them. This is directly implemented in the Fourier domain as a simple linear combination with frequency-dependent coefficients. More general rendering with arbitrary point of view, aperture shape and size, and focusing depth is performed with the same computational complexity, without the need to first reconstruct the 4D Light Field.
In the case where the input is a set of subaperture images, we propose a gradient descent based calibration method to determine their angular coordinates as well as the optimal set of disparity values. The formulation of the optimization problems for the calibration and the layer construction are closely related. Nevertheless, we define two regularization schemes with different properties to better suit each situation.
Additionally, we demonstrate the effectiveness of our approach for several direct applications. First, when the input is a sparse set of subaperture images, view interpolation and extrapolation are obtained by constructing the FDL representation and by rendering views at intermediate angular coordinates with an infinitely small aperture. In a second application, the same viewpoints as the input are rendered to produce a denoised result. For this use case, we present a possible extension of the model where the shift applied to each layer is not constrained to be proportional to the angular coordinates of the view to reconstruct. The relaxed model allows a more accurate representation of occlusions and non-Lambertian effects in the scene.
Since the computational complexity is a key aspect motivating the need for a new Light Field representation, our implementation makes efficient use of the GPU at every step of the processing chain (i.e. calibration, layer construction, rendering). The proposed algorithms are built upon simple linear algebra operations performed independently at each frequency component, which makes our approach particularly suitable for GPU parallelization.
In summary, the contributions are:

- Definition of the Fourier Disparity Layer representation and its construction from other Light Field representations (e.g. subaperture images, focal stack, combination of focal stack images and subaperture images).

- Calibration method jointly determining the input view positions and the disparity values of the layers.

- Fast and advanced Light Field rendering from the FDL representation with simultaneous control over the viewpoint, aperture size, aperture shape and focusing depth.

- Analysis of other application scenarios: view interpolation and denoising.
II Related Work
Related Light Field representations have been used in the design of several Light Field displays [25, 26, 27]. These displays reproduce the Light Field using a stack of light-attenuating LCD layers placed in front of a backlight. Thanks to the distance separating the LCD panels in the display, the image perceived depends on the observer’s position and is proportional to the product of the layers. This layer representation has similarities with the one presented in this paper, and it can be constructed either from the Light Field views [25, 26] or from a focal stack [27]. The main difference, however, is that because of the physics of the light-attenuating LCD layers, the subaperture images of the Light Field are reconstructed as a product of the layers’ pixels instead of a sum. Hardware limitations also impose constraints on the layer representation. For instance, the number of layers is generally small (e.g. 3 to 5), which is often insufficient to accurately represent the whole Light Field. Furthermore, the layers must only have positive pixel values in order to be displayed on the LCD panels. This constraint is not required in our model, which allows us to efficiently construct the layers in the Fourier domain.
Similarly to the FDL method proposed in this paper, Alonso et al. [24] construct layers by an optimization in the Fourier domain. However, their method is limited to a focal stack input. In this configuration, the problem is well conditioned because the input images already contain dense angular information and each constructed layer is associated with the focusing depth of one of the focal stack images. Hence, no regularization scheme was considered for this application. The method we propose is more generic as it can also construct the layers from subaperture images. Therefore, specific regularization strategies are studied, which allows us to address a much larger range of applications including calibration, view interpolation, denoising, etc.
Finally, the proposed FDL representation directly relates to the dimensionality gap Light Field prior described by Levin and Durand [20]. It states that the support of the Light Field data in the 4D Fourier domain is a 3D manifold, which was later characterized as a hypercone in [28]. By additionally considering the limited depth range of a scene, Dansereau et al. [28] determined that the frequency-domain support of the Light Field forms a hyperfan. They define this shape as the intersection of the hypercone with a dual fan previously described in [29]. In this paper, we derive the FDL representation from the dimensionality gap prior assuming a discrete set of depths instead of a continuous range. For the discrete depth case, we show formally in Section IV-A that this prior is itself derived from the assumption of a non-occluded Lambertian scene. This is a limitation for any method directly enforcing the dimensionality gap prior. For instance, as observed in [20, 28], in the reconstructed subaperture images, occluding objects may appear transparent near the occlusion boundaries. However, semi-transparent objects and reflections on flat surfaces are accurately reproduced, which is particularly challenging for depth-image-based rendering methods. We also present in Section VIII-B a possible generalization of our layer model to allow a better representation of other non-Lambertian effects and occlusions.
III Light Field Notations
Let us first consider the 4D representation of Light Fields in [1] and Lumigraph in [2], parameterized with two parallel planes, as illustrated in Fig. 2. The 4D representation describes the radiance along rays by a function $L(x, y, u, v)$, where the pairs $(x, y)$ and $(u, v)$ respectively represent spatial and angular coordinates. For simplicity of notation, we consider a 2D Light Field $L(x, u)$ with one spatial dimension $x$ and one angular dimension $u$, but the generalization to a 4D Light Field is straightforward.
In this paper, we use the notion of disparity instead of depth. Given the depth $z_0$ of the focal plane in Fig. 2, a depth $z$ can be directly converted into a disparity $d$ (i.e. objects at depth $z_0$ from the camera plane have zero disparity). Refocusing the Light Field then consists in defining a new Light Field $L^s(x, u) = L(x - su, u)$. The refocus parameter $s$ is defined such that the regions of disparity $d = s$ in the Light Field $L$ have a disparity equal to zero in the Light Field $L^s$. A refocused image, noted $B^s$, is then formed by refocusing the Light Field with parameter $s$ and integrating the light rays over the angular dimension as described in [4]:

$$B^s(x) = \int_u A(u)\, L(x - su, u)\, du, \qquad (1)$$

where $A$ represents the aperture of the imaging system. In the case where $B^s$ is captured by a camera with an aperture area $\Omega$ (typically a disk), then $A$ is defined by:

$$A(u) = \begin{cases} 1/|\Omega| & \text{if } u \in \Omega,\\ 0 & \text{otherwise.} \end{cases} \qquad (2)$$

In this paper, we use a more general version of Eq. (1) where the image can be observed at any position $u_0$ on the camera plane with any given aperture $A$:

$$B^s_{u_0}(x) = \int_u A(u - u_0)\, L(x - s(u - u_0), u)\, du. \qquad (3)$$

Note that when the aperture area is infinitely small, the function $A$ is equal to the Dirac delta function $\delta$. In this case, the image formed by Eq. (3) is the subaperture image, noted $L_{u_0}$, and defined by $L_{u_0}(x) = L(x, u_0)$.
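To make the refocusing model concrete, here is a minimal numerical sketch of shift-and-sum refocusing for a discrete 2D Light Field (one spatial, one angular dimension). The integer-pixel shifts, the array layout, and the sign convention are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def shift_and_sum_refocus(L, s):
    """Refocus a discrete 2D light field L[u, x] with refocus parameter s.
    Each view u is shifted by s * u pixels (integer shifts for simplicity)
    and all views are averaged, i.e. a full uniform aperture."""
    n_u, _ = L.shape
    u_coords = np.arange(n_u) - n_u // 2      # center the angular axis
    out = np.zeros(L.shape[1])
    for u, row in zip(u_coords, L):
        out += np.roll(row, int(round(s * u)))
    return out / n_u

# Toy scene: a single point of disparity d = 2, seen by view u at pixel x0 - d*u.
n_u, n_x = 5, 64
x0 = 32
L = np.zeros((n_u, n_x))
for i, u in enumerate(np.arange(n_u) - n_u // 2):
    L[i, x0 - 2 * u] = 1.0

focused = shift_and_sum_refocus(L, s=2)    # refocusing at s = d aligns the point
defocused = shift_and_sum_refocus(L, s=0)  # point spread over the aperture
```

With `s` equal to the point's disparity, all views align at `x0`, while `defocused` spreads the same energy over 5 pixels, which is the angular aliasing discussed in the introduction.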
The different notations used in the article are summarized in Table I.

TABLE I
Symbols : Description
$\omega_x$, $\omega_u$ : Respectively spatial and angular frequencies.
$L_{u_0}$ : Subaperture image at position $u_0$.
$B^s_{u_0}$ : Image with refocus parameter $s$ and position $u_0$.
$A$ : Aperture function.
$u_0$ : Angular coordinate of the view to reconstruct.
$u_j$ : Angular coordinate of a known input view.
$d_k$ : Disparity value in the Light Field.
$\Omega_k$ : Region of disparity $d_k$ in an image of view position $u_0$.
$\delta$ : Dirac delta function.
$\hat{f}$ : Fourier transform of a function $f$.
$M^\top$ : Transpose of the matrix $M$.
$M^*$ : Complex conjugate of $M$ (without transpose).
$M^H$ : Conjugate transpose of $M$ ($M^H = (M^*)^\top$).
$M_{j,k}$ : Element of $M$ on the $j^{th}$ row and $k^{th}$ column.
$M_{:,k}$ : $k^{th}$ column of matrix $M$.
$b_j$ : $j^{th}$ element of vector $b$.
IV Fourier Disparity Layer Representation
IV-A Light Field Prior and FDL Representation
For the derivations, we assume that the scene is Lambertian, without occlusion, and can be divided into spatial regions $\Omega_k$ with constant disparity $d_k$. Formally, this can be written:

$$\forall (x, u), \quad L(x, u) = L(x + u\, d_k, 0) \quad \text{with } k \text{ such that } x + u\, d_k \in \Omega_k. \qquad (4)$$

Here, the spatial regions $\Omega_k$ are defined for the central view at $u = 0$. From this assumption, we prove in Appendix A that the Fourier transform $\hat{L}$ of the Light Field can be decomposed as follows:

$$\hat{L}(\omega_x, \omega_u) = \sum_k \delta(\omega_u - d_k\, \omega_x)\, \hat{L}_k(\omega_x), \qquad (5)$$

where $\hat{L}_k$ is defined by

$$\hat{L}_k(\omega_x) = \int_{\Omega_k} L(x, 0)\, e^{-2i\pi x \omega_x}\, dx. \qquad (6)$$
Each function $\hat{L}_k$ can be interpreted as the Fourier transform of the central view only considering the region $\Omega_k$ of disparity $d_k$. Hence, we call these functions Fourier Disparity Layers (FDL). One can note from Eq. (5) that the Light Field information is entirely contained in the Fourier Disparity Layers $\hat{L}_k$ and their associated disparity values $d_k$. We will show that the layers can be constructed either from subaperture or wide aperture images without knowing the regions $\Omega_k$. In other words, our method does not require a disparity map. However, the disparity values $d_k$ are necessary. Hence, a method for estimating these values is also presented in Section V-B.

Another interpretation of Eq. (5) is that the Light Field is sparse in the Fourier domain and the nonzero values are located at frequencies $(\omega_x, \omega_u)$ satisfying $\omega_u = d_k\, \omega_x$ for each disparity value $d_k$ of the Light Field. This is similar to the Light Field prior referred to as the dimensionality gap prior in [20], and resulting in the definition of the hyperfan filter in [28]. However, these previous works consider a continuous disparity range, which does not allow for a practical layer representation, since it would result in an infinite number of layers.
IV-B FDL Decomposition of Images
Subaperture images
Here, we derive the relationship between the FDL representation of the Light Field and the Fourier transform $\hat{L}_{u_0}$ of a subaperture image $L_{u_0}$. Note that in $\hat{L}_{u_0}$, the Fourier transform only applies to the spatial dimension. Hence, $\hat{L}_{u_0}$ is obtained from the Light Field’s spectrum $\hat{L}$ by applying the inverse Fourier transform in the angular dimension as follows:

$$\hat{L}_{u_0}(\omega_x) = \int_{\omega_u} \hat{L}(\omega_x, \omega_u)\, e^{2i\pi u_0 \omega_u}\, d\omega_u. \qquad (7)$$

Using the FDL decomposition of $\hat{L}$ from Eq. (5), the transformed subaperture image can be directly derived:

$$\hat{L}_{u_0}(\omega_x) = \int_{\omega_u} \sum_k \delta(\omega_u - d_k \omega_x)\, \hat{L}_k(\omega_x)\, e^{2i\pi u_0 \omega_u}\, d\omega_u \qquad (8)$$
$$= \sum_k \hat{L}_k(\omega_x) \int_{\omega_u} \delta(\omega_u - d_k \omega_x)\, e^{2i\pi u_0 \omega_u}\, d\omega_u \qquad (9)$$
$$= \sum_k e^{2i\pi u_0 d_k \omega_x}\, \hat{L}_k(\omega_x). \qquad (10)$$
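A small sketch of this decomposition in the discrete setting: a subaperture view is rendered by phase-shifting each layer spectrum in proportion to the product of the view coordinate and the layer disparity, then summing. The 1D discrete layout and the phase sign convention are illustrative assumptions.

```python
import numpy as np

def render_subaperture(layers_fft, disparities, u0):
    """Render the view at angular coordinate u0 from FDL layer spectra.
    layers_fft: (n_layers, n_x) complex array (1D spatial dimension).
    Each layer is multiplied by a disparity-dependent phase ramp and summed."""
    n_x = layers_fft.shape[1]
    freqs = np.fft.fftfreq(n_x)
    out = np.zeros(n_x, dtype=complex)
    for Lk, dk in zip(layers_fft, disparities):
        out += np.exp(2j * np.pi * u0 * dk * freqs) * Lk
    return np.fft.ifft(out).real

# Two impulse layers with disparities 0 and 3.
n_x = 64
layer0 = np.zeros(n_x); layer0[10] = 1.0   # background, d = 0: never moves
layer1 = np.zeros(n_x); layer1[30] = 1.0   # foreground, d = 3
layers_fft = np.stack([np.fft.fft(layer0), np.fft.fft(layer1)])

view = render_subaperture(layers_fft, [0.0, 3.0], u0=2)
# the d = 3 impulse is displaced by |u0 * d| = 6 pixels; the d = 0 one is not
```

The phase ramp implements the spatial-domain shift of each layer, so the whole view synthesis reduces to one linear combination per frequency.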
General image model

Now, considering the general model in Eq. (3), the Fourier transform of an image $B^s_{u_0}$ is:

$$\hat{B}^s_{u_0}(\omega_x) = \int_x \int_u A(u - u_0)\, L(x - s(u - u_0), u)\, e^{-2i\pi x \omega_x}\, du\, dx \qquad (11)$$
$$= \int_u A(u - u_0)\, \hat{L}_u(\omega_x)\, e^{-2i\pi s (u - u_0) \omega_x}\, du \qquad (12)$$
$$= \int_v A(v)\, \hat{L}_{u_0 + v}(\omega_x)\, e^{-2i\pi s v \omega_x}\, dv. \qquad (13)$$

Using the FDL decomposition of $\hat{L}_{u_0+v}$ from Eq. (10), we obtain:

$$\hat{B}^s_{u_0}(\omega_x) = \sum_k \hat{A}\big((s - d_k)\,\omega_x\big)\, e^{2i\pi u_0 d_k \omega_x}\, \hat{L}_k(\omega_x), \qquad (16)$$

where $\hat{A}(\xi) = \int_v A(v)\, e^{-2i\pi v \xi}\, dv$ is the Fourier transform of the aperture function.
V Construction of the Layers
The layer construction is performed from a set of $n$ images noted $B_j$ ($1 \le j \le n$) that follow the model of Eq. (3) (e.g. images captured by cameras on the same plane and oriented perpendicularly to this plane). Each image $B_j$ is associated with its angular coordinate $u_j$, aperture function $A_j$, and refocus parameter $s_j$. Thanks to the genericity of this model, different forms of input data may be used. For example, in the case of a set of subaperture images, the aperture function of each image is equal to the Dirac delta function $\delta$. Note that, in this case, the refocus parameter has no influence in the equations (e.g. see Eq. (10)) and is not required. Another notable example is that of a focal stack input (i.e. images with the same angular coordinates $u_j$ and aperture functions $A_j$, but with different refocus parameters $s_j$).

The layer construction problem is formulated in the general case in the next subsection, assuming all the image parameters are known. Then, in subsection V-B, we show how to determine the positions of the views and the layers’ disparity values when the input data is a set of subaperture images.
V-A Problem Formulation
The results of Eqs. (10) and (16) show that, for a fixed spatial frequency $\omega_x$ in the Fourier domain, the images can be simply decomposed as linear combinations of the Fourier Disparity Layers. Therefore, the FDL representation of the Light Field can be learned by linear regression for every coefficient in the discrete Fourier domain.

We first compute the discrete Fourier transform of each image and we construct, for each frequency component $\omega_x$, a vector $b$ such that $b_j = \hat{B}_j(\omega_x)$. Given the disparity value $d_k$ of each layer ($1 \le k \le m$), we also construct the matrix $A \in \mathbb{C}^{n \times m}$ with:

$$A_{j,k} = \hat{A}_j\big((s_j - d_k)\,\omega_x\big)\, e^{2i\pi u_j d_k \omega_x}. \qquad (17)$$

By defining the vector $x$ with $x_k = \hat{L}_k(\omega_x)$, the FDL decomposition in Eq. (16) is reformulated as $b = Ax$. A simple layer construction method then consists in solving an ordinary linear least squares problem independently for each frequency component $\omega_x$. In practice, however, this may be an ill-posed problem. It is typically the case when the number of available images is lower than the number of layers required to represent the scene (i.e. $n < m$). More generally, depending on the input configuration, and for some frequency components, the matrix $A$ may be ill-conditioned. This results in overfitting and extreme noise amplification when the layer model is used to render new images of the scene (e.g. view interpolation or extrapolation). In order to avoid this situation, we include a Tikhonov regularization term in the problem formulation:

$$\min_x \; \|Ax - b\|^2 + \lambda \|\Gamma x\|^2, \qquad (18)$$

where $\Gamma$ is the Tikhonov matrix, and $\lambda$ is a parameter controlling the amount of regularization. The problem (18) has a well-known closed-form solution:

$$x = (A^H A + \lambda\, \Gamma^H \Gamma)^{-1} A^H b, \qquad (19)$$

where $\cdot^H$ is the Hermitian transpose operator (i.e. complex conjugate of the transposed matrix).

In our implementation of the layer construction, the Tikhonov matrix $\Gamma$ is defined according to the second-order view regularization scheme presented in subsection V-C1 (see Eq. (29)). It encourages smooth variations between close viewpoints generated from the FDL model, which is intuitively a desirable property for a Light Field.
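The per-frequency solve of Eq. (19) can be sketched as follows for one frequency component. The identity matrix stands in as a generic Tikhonov matrix (the paper uses the second-order view regularization instead), and the toy dimensions and parameter values are illustrative.

```python
import numpy as np

def solve_fdl_frequency(A, b, Gamma, lam):
    """Closed-form Tikhonov solution x = (A^H A + lam Gamma^H Gamma)^{-1} A^H b
    for one frequency component."""
    AH = A.conj().T
    return np.linalg.solve(AH @ A + lam * (Gamma.conj().T @ Gamma), AH @ b)

# Toy setup: 3 subaperture views at positions u_j, 2 layers at disparities d_k.
# For a Dirac aperture, the model matrix reduces to A[j,k] = exp(2i*pi*u_j*d_k*w).
u = np.array([-1.0, 0.0, 1.0])
d = np.array([0.0, 2.0])
w = 0.1                                        # one spatial frequency
A = np.exp(2j * np.pi * np.outer(u, d) * w)

x_true = np.array([1.0 + 0.5j, -0.3 + 0.2j])   # ground-truth layer coefficients
b = A @ x_true                                  # simulated view coefficients
x_est = solve_fdl_frequency(A, b, Gamma=np.eye(2), lam=1e-9)
```

With noiseless data and near-zero regularization, the recovered coefficients match the ground truth; increasing `lam` trades accuracy for robustness to noise and ill-conditioning, which matters when there are fewer views than layers.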
V-B FDL Calibration
We additionally propose a calibration method that jointly estimates the layers’ disparity values $d_k$ and the angular coordinates $u_j$ of the input images. For simplicity, the calibration is restricted to the case of subaperture images with infinitely small aperture such that $A_j = \delta$. In this case, Eq. (17) has the simpler expression $A_{j,k} = e^{2i\pi u_j d_k \omega_x}$. Note that in the general configuration where the aperture is unknown, both the aperture functions $A_j$ and the refocus parameters $s_j$ would also need to be estimated. However, that generalization is out of the scope of this paper and is left for future work.
In what follows, we express $A$ as a matrix function such that, for a matrix $M$, $[A_{\omega}(M)]_{j,k} = e^{2i\pi \omega M_{j,k}}$; the matrix of Eq. (17) is then $A_{\omega_x}(u d^\top)$. The calibration problem can then be stated in a similar way as the layer construction problem (18) by treating the calibration parameters as unknowns in the minimization. However, these parameters do not depend on the frequency. Hence, the function to minimize is expressed as a sum over the $p$ frequency components $\omega^1, \dots, \omega^p$ ($p$ is equal to the number of pixels in each input image):

$$\min_{u,\, d} \; \sum_{i=1}^{p} \|A_{\omega^i}(u d^\top)\, x^i - b^i\|^2 + \lambda \|\Gamma x^i\|^2, \qquad (20)$$

where the input view positions and the disparity values are arranged in the column vectors $u \in \mathbb{R}^n$ and $d \in \mathbb{R}^m$ respectively (so that $[u d^\top]_{j,k} = u_j d_k$). The vectors $x^i$ and $b^i$ contain the Fourier coefficients of, respectively, the disparity layers and the input images at the frequency $\omega^i$ (i.e. $x^i_k = \hat{L}_k(\omega^i)$ and $b^i_j = \hat{B}_j(\omega^i)$). Unlike the layer construction problem, the matrix $\Gamma$ is defined according to the second-order layer regularization approach detailed in subsection V-C2 (see Eq. (30)).

In order to solve this problem, we perform a gradient descent along the vectors of parameters $u$ and $d$. At each iteration, the current estimate of $u$ and $d$ is first used to update the layer values in each vector $x^i$ using Eq. (19). The layer values are then used to compute the gradients $\nabla_u$ and $\nabla_d$ of the objective function in Eq. (20) along $u$ and $d$ respectively.
By differentiation with respect to each element of $u$ and $d$, one can show that the gradients are expressed as sums over the frequency components:

$$\nabla_u = \sum_{i=1}^{p} g_u^{i}, \qquad (21)$$
$$\nabla_d = \sum_{i=1}^{p} g_d^{i}, \qquad (22)$$

with $r^i = A_{\omega^i}(u d^\top)\, x^i - b^i$ and, at each frequency $\omega^i$ (writing $A = A_{\omega^i}(u d^\top)$):

$$g_u^{i} = -4\pi\omega^i\, \mathrm{Im}\big( \overline{r^i} \circ (A\, (d \circ x^i)) \big), \qquad g_d^{i} = -4\pi\omega^i\, \mathrm{Im}\big( x^i \circ (A^\top (u \circ \overline{r^i})) \big), \qquad (23)$$

where $\circ$ is the Hadamard product (i.e. element-wise multiplication), $\overline{\cdot}$ is the complex conjugate, and $\mathrm{Im}$ is the imaginary part. Note that the regularization term in Eq. (20) does not depend on $u$ and $d$, and that the matrix $A$ was previously constructed to determine $x^i$, so it can be reused in Eq. (23) to efficiently compute the gradients.
The computations are further accelerated by performing a stochastic gradient descent, where only a small subset of the frequency components is selected randomly at each iteration for the gradient computation. In our implementation, subsets of 4096 frequency components were selected. Therefore, the computational cost per iteration does not depend on the image resolution. Finally, given the gradients, the updated vectors $u$ and $d$ are computed as

$$u \leftarrow u - \alpha\, \frac{\nabla_u}{\|\nabla_u\| + \eta}, \qquad d \leftarrow d - \alpha\, \frac{\nabla_d}{\|\nabla_d\| + \eta}, \qquad (24)$$

where $\eta$ is a small value encouraging the stability of the algorithm when the gradients become small (i.e. when $u$ and $d$ are close to an optimum). In our experiments a fixed value of $\eta$ was used.
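The update of Eq. (24) can be sketched as a normalized gradient step. The toy quadratic objective below stands in for the true calibration cost, and the exact normalization is an assumption.

```python
import numpy as np

def normalized_step(theta, grad, step=0.05, eta=1e-8):
    """One normalized gradient update: theta <- theta - step * grad / (||grad|| + eta).
    eta stabilizes the update when the gradient norm approaches zero."""
    return theta - step * grad / (np.linalg.norm(grad) + eta)

# Minimize a toy quadratic over "disparity" parameters d.
d_star = np.array([0.0, 1.5, 3.0])
d = np.zeros(3)
for _ in range(200):
    grad = 2.0 * (d - d_star)        # gradient of ||d - d_star||^2
    d = normalized_step(d, grad)
```

Because the step length is bounded by `step`, the iterate approaches the optimum steadily and then oscillates within a ball of that radius, which is the stability behavior the small constant is meant to preserve.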
V-C Regularization Schemes
In order to keep the layer construction and calibration problems in Eqs. (18) and (20) easy to solve, we have used a Tikhonov regularization. The definition of the Tikhonov matrix $\Gamma$ depends on the intended objective. In the most basic form, classical regularization is obtained by simply taking $\Gamma$ equal to the identity matrix $I$. This prevents the values in $x$ (i.e. Fourier coefficients of the disparity layers) from taking too large values, which reduces the noise, but also results in a loss of details. Furthermore, for the calibration problem, using the regularization $\Gamma = I$ may not accurately estimate the disparity distribution of the Light Field, as illustrated by the calibration results of Fig. 3. The sorted disparity values shown in Fig. 3(b) can be interpreted as an estimation of the cumulative disparity distribution. In this section, we present two schemes, referred to as second-order view regularization and second-order layer regularization, which are better suited to the layer construction and the calibration respectively.
V-C1 Second-Order View Regularization
For the layer construction, we want to encourage smooth variations between close viewpoints generated from the layers. For that purpose, at each frequency, we penalize the second derivative of subaperture images generated from the model with respect to the angular coordinate. From the expression of subaperture images in Eq. (10), the second derivative at a coordinate $u$ is given by:

$$\frac{\partial^2 \hat{L}_u(\omega_x)}{\partial u^2} = -4\pi^2\omega_x^2 \sum_k d_k^2\, e^{2i\pi u d_k \omega_x}\, \hat{L}_k(\omega_x). \qquad (25)$$

Let us now consider a set $U$ of angular coordinates to regularize. From Eq. (25), the second-order view regularization then consists in constructing a matrix $\Gamma$ where each row is associated to an angular coordinate $u \in U$ and is defined by $\Gamma_{u,k} = d_k^2\, e^{2i\pi u d_k \omega_x}$. For simplicity, we ignore the constant factor $-4\pi^2\omega_x^2$ in the definition of $\Gamma$ since it can be accounted for in the regularization parameter $\lambda$. Intuitively, in order to apply the regularization at every coordinate $u$ in the camera plane, one would need to define a matrix $\Gamma$ with an infinite number of rows (i.e. infinite set $U$). Although this might seem impractical, we show in what follows that it can be done by observing that the solution to the regularized problem in Eq. (19) only requires the knowledge of $\Gamma^H\Gamma$, of finite size $m \times m$.
First, let us take a continuous interval $U = [-T/2, T/2]$ of size $T$, instead of a discrete set. Then, each column $\Gamma_{:,k}$ must be interpreted instead as a function $u \mapsto \Gamma_{u,k}$. Thus, each element of $\Gamma^H\Gamma$ is given by the following inner product:

$$[\Gamma^H\Gamma]_{k,l} = \int_{-T/2}^{T/2} \overline{\Gamma_{u,k}}\, \Gamma_{u,l}\, du \qquad (26)$$
$$= d_k^2\, d_l^2 \int_{-T/2}^{T/2} e^{2i\pi u (d_l - d_k) \omega_x}\, du \qquad (27)$$
$$= d_k^2\, d_l^2\, \frac{\sin\big(\pi T (d_l - d_k)\omega_x\big)}{\pi (d_l - d_k)\omega_x}. \qquad (28)$$

For convenience, we replace $\Gamma^H\Gamma$ in Eq. (19) by the scaled version $\frac{1}{T}\Gamma^H\Gamma$, so that the amount of regularization remains independent of the size $T$ of the integration domain in Eq. (26) (i.e. area of the portion of the camera plane covered by the regularization). In our implementation of the second-order view regularization, we consider the full camera plane (i.e. $U = \mathbb{R}$) by making $T$ tend towards infinity. In this case, $\frac{1}{T}\Gamma^H\Gamma$ simply becomes a diagonal matrix such that:

$$\Big[\frac{1}{T}\Gamma^H\Gamma\Big]_{k,l} = \begin{cases} d_k^4 & \text{if } k = l,\\ 0 & \text{otherwise.} \end{cases} \qquad (29)$$
V-C2 Second-Order Layer Regularization
For the calibration step, however, it is preferable to use a regularization that does not depend on the parameters $u$ and $d$. Otherwise, the expression of the gradients would need to take the regularization term into account, which can result in more complex gradient computations and a slower convergence of the algorithm. Instead, we encourage smooth variations between successive layers by defining $\Gamma$ as a discrete approximation of the second-order differential operator as follows:

$$\Gamma = \begin{pmatrix} 1 & -2 & 1 & & 0\\ & \ddots & \ddots & \ddots & \\ 0 & & 1 & -2 & 1 \end{pmatrix} \in \mathbb{R}^{(m-2) \times m}. \qquad (30)$$
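A minimal construction of this second-order difference matrix; restricting it to interior rows (no boundary conditions) is an assumption made here for illustration.

```python
import numpy as np

def second_order_diff(m):
    """(m-2) x m matrix applying the stencil [1, -2, 1] across neighboring
    layers, penalizing the discrete curvature of the layer coefficients."""
    G = np.zeros((m - 2, m))
    for i in range(m - 2):
        G[i, i:i + 3] = [1.0, -2.0, 1.0]
    return G

G = second_order_diff(5)
ramp = np.arange(5, dtype=float)   # layers varying linearly with k
# G @ ramp is exactly zero: linear variations across layers are not penalized,
# while curved variations (e.g. ramp squared) produce a nonzero penalty
```

This is why the scheme spreads the disparity values more uniformly: only deviations from a locally linear progression of the layer coefficients are penalized.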
The second-order layer regularization thus penalizes large differences between neighboring layers, which results in a more uniform distribution of the disparity values. As shown in Fig. 3(b), the calibration using the regularization $\Gamma = I$ tends to find too many disparity values close to the dominant disparity (between 0 and 0.5 in the figure) and may underestimate other parts of the Light Field (e.g. with disparities close to 1). The second-order layer regularization attenuates that effect by encouraging a more uniform distribution. Note, however, that for the layer construction, smoothness along the layers is not desirable and produces artifacts along the edges in the images rendered from the FDL model.

VI Light Field Rendering
VI-A Implementation
Knowing the layers and their disparities, any view with arbitrary aperture and focus can be rendered in the Fourier domain by applying the FDL decomposition equation (16) and by computing the inverse Fourier transform. The interpretation in the spatial domain is that each layer is shifted (i.e. multiplied by $e^{2i\pi u_0 d_k \omega_x}$ in the Fourier domain) and filtered (i.e. multiplied by $\hat{A}\big((s - d_k)\omega_x\big)$ in the Fourier domain).
However, except for specific aperture shapes (e.g. square or Dirac), the Fourier transform $\hat{A}$ of the aperture function has no analytical expression. Hence, in our implementation, an approximation is obtained by drawing the aperture shape as an image, and by taking its discrete Fourier transform. A linear interpolation is used to determine $\hat{A}$ from the discrete frequency samples. In order to increase the accuracy of this approximation, the aperture image is zero-padded before computing its Fourier transform, which increases the resolution in the spectral domain. The effect of the padding is illustrated in Fig. 4, showing that the resulting increased spectral-domain resolution allows a better preservation of the contrasts in the rendered images. Note also that the spatial resolution of the aperture image can be taken arbitrarily large without affecting the aperture size in the rendered image. In practice, we control the aperture size by replacing $\hat{A}(\xi)$ in Eq. (16) by $\hat{A}(r\xi)$ with a scaling factor $r$. For example, taking $r = 0$ simulates a subaperture image. On the other hand, taking a large factor $r$ produces a shallow depth of field effect without affecting the complexity of the method.
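The aperture rasterization described above can be sketched as follows: a disk is drawn, zero-padded, and transformed. Sampling the spectrum at arbitrary frequencies by linear interpolation is omitted, and the sizes and normalization are illustrative assumptions.

```python
import numpy as np

def aperture_spectrum(radius_px, size, pad_factor=4):
    """Rasterize a disk aperture of the given pixel radius on a size x size
    grid, zero-pad it by pad_factor, and return its centered 2D DFT.
    Padding increases the spectral resolution, so that the spectrum can later
    be sampled at arbitrary frequencies by linear interpolation."""
    n = size * pad_factor
    y, x = np.mgrid[:n, :n] - n // 2
    disk = ((x**2 + y**2) <= radius_px**2).astype(float)
    disk /= disk.sum()                        # normalize so the DC gain is 1
    return np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(disk)))

A_hat = aperture_spectrum(radius_px=8, size=32)
```

Because the aperture only enters the rendering as multiplicative frequency-domain weights, any shape drawn this way (hexagon, star, etc.) can be substituted at no extra rendering cost.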
VI-B Visual Results and Discussion
In Fig. 5, we show a comparison of our approach with the conventional Shift and Sum refocusing method [4]. Thanks to the regularization used in the layer construction (see Section V-C), the Light Field is extended angularly, which allows rendering images with reduced angular aliasing, as in Fig. 5(b). As shown in Fig. 5(c), the aperture can also be taken larger than the baseline of the input Light Field from which the FDL model is constructed. The possibility to change the shape of the aperture also gives better control over the bokeh (i.e. the quality of the out-of-focus regions).
Another important and often overlooked factor influencing the bokeh is the color space of the input image data. The refocusing Eqs. (1) and (3) assume that the Light Field rays are proportional to luminance data. However, images are traditionally represented in a nonlinear color space (e.g. gamma corrected) in order to account for the nonlinearity of the human visual system as well as typical displays. This nonlinearity affects the appearance of the refocused images. More realistic bokeh can be achieved by applying inverse gamma correction to the input images before the layer construction, and by applying gamma correction back after the rendering step, as shown in Fig. 6. One limitation of this approach, however, is that computing the layers in a linear color space also produces more visible artifacts. This is due to the fact that solving the layer construction problem (18) in a linear color space does not take the nonlinearity of human perception of luminance into account.
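A sketch of the gamma handling described above, assuming a simple power-law gamma of 2.2 (actual camera response curves differ): averaging in the linearized space weights bright rays more heavily, which changes the rendered bokeh.

```python
import numpy as np

GAMMA = 2.2

def to_linear(img):
    """Invert gamma correction before layer construction."""
    return np.clip(img, 0.0, 1.0) ** GAMMA

def to_display(img):
    """Re-apply gamma correction after rendering."""
    return np.clip(img, 0.0, 1.0) ** (1.0 / GAMMA)

# Averaging a dark and a bright ray, as refocusing does:
dark, bright = 0.2, 0.9
naive_avg = (dark + bright) / 2                              # gamma-space average
linear_avg = to_display((to_linear(dark) + to_linear(bright)) / 2)
# linear_avg exceeds naive_avg: the bright ray dominates the out-of-focus blur
```

This bright-ray dominance is what makes highlights bloom into well-defined bokeh disks instead of washing out, at the cost of the artifact visibility noted above.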
A demonstration of our rendering application is available in the supplementary materials. It shows that our approach can be used for controlling simultaneously the viewpoint, the aperture shape and size, and the focus in realtime.
VII Complexity
In this section, we analyse the complexity of the different processing steps proposed in our approach.
First, regarding the spatial resolution, our implementation takes advantage of the symmetry property of the Fourier transform for real signals. Given a real-valued function $f$, its Fourier transform satisfies $\hat{f}(-\omega) = \overline{\hat{f}(\omega)}$. Hence, for the layer construction and the rendering algorithms, only half of the discrete spectrum must be computed. The remaining frequencies are directly obtained by copying and taking the complex conjugate of the first half. This symmetry property is also used in the calibration step by selecting random frequencies only in one half of the spectrum for the stochastic gradient descent. Regarding memory requirements, using only half of the frequencies in the Fourier representation compensates for the fact that complex numbers require twice as much memory as real numbers.
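The half-spectrum trick maps directly onto real-FFT routines; a sketch with NumPy, where `rfft2` stores roughly half of the coefficients of the full `fft2` and the rest follow by conjugate symmetry.

```python
import numpy as np

img = np.random.default_rng(0).random((64, 48))   # a real-valued image

full = np.fft.fft2(img)          # full complex spectrum: 64 x 48 coefficients
half = np.fft.rfft2(img)         # only 64 x 25 coefficients are stored
back = np.fft.irfft2(half, s=img.shape)

# The stored half matches the corresponding columns of the full spectrum; the
# missing columns are complex conjugates of stored ones (Hermitian symmetry).
```

Processing only the stored half roughly halves both the compute and the memory of every per-frequency step, as the complexity discussion above relies on.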
In addition to the spatial resolution, another important factor to consider for the complexity is the number of layers used. In order to analyse the number of layers needed for our model, we have performed an experiment where Light Field subaperture images are used for FDL calibration and construction with varying numbers of layers. The input images are then reconstructed via FDL rendering. Several Light Fields from various datasets were tested, including natural Light Fields captured with a first generation Lytro camera (‘Totoro Waterfall’ [32]), a Lytro Illum camera (‘Fruits’ and ‘Figurines’ [33]), a Raytrix R8 camera (first frame of the Light Field ‘ChessPieces’ [34]), a traditional camera moving on a gantry (‘Lego Knights’ [31]), and synthetic Light Fields (‘papillon’ and ‘budha’ [30]). Fig. 7 presents the average Peak Signal to Noise Ratio (PSNR) of all the reconstructed subaperture images with respect to the number of layers. The saturation of the PSNR curves shows that 15 to 30 layers are generally sufficient, and using more layers does not significantly change the results. Light Fields with a large baseline, and thus large differences between extreme viewpoints (e.g. Lego 5x5, Lego 17x17), typically require more layers than smaller baseline Light Fields. However, a very large range of viewpoints also implies large occlusions, which results in a significant loss when reconstructing the subaperture images, as observed with Lego 5x5 and Lego 17x17. This makes the analysis difficult for very large baseline Light Fields, which may still benefit from more than 30 layers in the ideal non-occluded case. Note that for the small baseline Light Fields captured with Lytro cameras, the relatively low PSNR is due to inaccuracies in the input data (e.g. noise, color differences between views) which are reduced in our reconstruction. More details on the noise reduction as well as an extension for better occlusion handling are presented in Section VIII-B.
The computing times for the different processing steps are presented in Table II for all the Light Fields in Fig. 7. The table also provides the number of iterations needed for the calibration since it has a large impact on the computing time. For this experiment, 30 layers were used, although fewer layers would be sufficient for some of the Light Fields (e.g. see ‘ChessPieces’, ‘Figurines’, ‘Totoro Waterfall’ in Fig. 7). The processing times were measured using an Intel Core i7-7700 CPU and an Nvidia GeForce GTX 1080 GPU. All the proposed steps were implemented in Matlab using the parallel computing toolbox in order to process the different frequencies in parallel on the GPU.
Although the processing time for the calibration and layer construction steps is affected by the number of input views (i.e. angular resolution), the rendering time only depends on the spatial resolution and the number of layers. For example, the same rendering time is obtained for ‘Lego Knights’ using either 5x5 or 17x17 input views. This is particularly advantageous when rendering images with a large aperture size, which are traditionally obtained by averaging a large number of subaperture images with the Shift and Sum algorithm [4]. Note that, similarly to our method, the Fourier Slice refocusing algorithm [5] first transforms the Light Field into an intermediate representation in order to perform fast refocusing. In that method, the refocusing complexity does not depend on the number of input views. However, since the intermediate representation is the 4D Fourier transform of the Light Field, the memory requirement remains proportional to the number of views. Furthermore, it does not allow changing the aperture or the viewpoint.
Table II: Computing times of the processing steps (number of calibration iterations in parentheses).

Input Light Field (resolution)  Calibration  Construction  Render
Lego Knights (1024x1024x5x5)  9.2 s (184)  2.2 s  35 ms
Lego Knights (1024x1024x17x17)  56 s (403)  7.2 s  35 ms
Chess Pieces (1920x1080x5x5)  5.8 s (90)  4.3 s  53 ms
Fruits (625x434x9x9)  5.1 s (70)  0.9 s  12 ms
Figurines (625x434x9x9)  6.2 s (91)  0.9 s  11 ms
Totoro WaterFall (379x379x7x7)  3.8 s (72)  0.4 s  7 ms
papillon (768x768x9x9)  7.8 s (109)  1.9 s  20 ms
budha (768x768x9x9)  8.1 s (113)  1.9 s  19 ms
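For reference, the Shift and Sum refocusing baseline [4] mentioned above can be sketched as follows. This is an illustrative NumPy version (function name and sign conventions are ours), performing subpixel shifts via the Fourier shift theorem; its cost grows linearly with the number of views, unlike FDL rendering:

```python
import numpy as np

def shift_and_sum_refocus(views, u_coords, v_coords, d):
    """Refocus at disparity d: every subaperture image is shifted by
    (d*u, d*v) with subpixel accuracy (Fourier shift theorem), then all
    the shifted images are averaged."""
    h, w = views[0].shape
    fy = np.fft.fftfreq(h)[:, None]     # vertical frequencies
    fx = np.fft.fftfreq(w)[None, :]     # horizontal frequencies
    acc = np.zeros((h, w), dtype=np.complex128)
    for img, u, v in zip(views, u_coords, v_coords):
        phase = np.exp(-2j * np.pi * (d * u * fx + d * v * fy))  # shift theorem
        acc += np.fft.fft2(img) * phase
    return np.real(np.fft.ifft2(acc / len(views)))
```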
VIII Direct Applications
VIII-A View Interpolation and Extrapolation
As shown in Fig. 5, our approach reduces the angular aliasing and allows extending the aperture size for an input Light Field consisting of a sparse set of subaperture images. Equivalently, the layer construction can be seen as a view interpolation and extrapolation method since subaperture images with arbitrary angular coordinates can be rendered from the FDL model.
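In a simplified setting with one spatial dimension, rendering a view from the layers amounts to a per-layer phase shift followed by a sum in the Fourier domain. The sketch below assumes a model of the form ĥ_u(ω) = Σ_k ĥ_k(ω) e^(−2iπ u d_k ω); the function name and the sign convention are illustrative:

```python
import numpy as np

def fdl_render_view(layer_ffts, disparities, u):
    """Render the subaperture image at angular coordinate u from 1D layer
    spectra: each layer is shifted by u * d_k via a phase ramp (Fourier
    shift theorem), then all the layers are summed."""
    n = layer_ffts.shape[1]
    omega = np.fft.fftfreq(n)                  # spatial frequencies
    view_fft = np.zeros(n, dtype=np.complex128)
    for h_hat, d in zip(layer_ffts, disparities):
        view_fft += h_hat * np.exp(-2j * np.pi * u * d * omega)
    return np.real(np.fft.ifft(view_fft))
```

Since u is a free parameter, views with arbitrary angular coordinates can be rendered, which is precisely the view interpolation and extrapolation capability discussed here.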
For evaluating the view interpolation capability of the method, we have compared our results with the recent method of Vagharshakyan et al. [19], which enforces the sparsity of the epipolar images of the Light Field in the shearlet domain. For the comparison, the synthetic Light Field ‘papillon’ was used with several sampling configurations of the input views, as illustrated in Fig. 8. The average PSNR of the interpolated views and the computation times are shown in Table III. The table also includes the results for two schemes combining the FDL and the Shearlet approaches. In the ‘Shearlet(full)+FDL’ scheme, the full Light Field is reconstructed with [19]. Then, an FDL model is computed from the input views and the previously reconstructed intermediate views. The final reconstruction is obtained with FDL rendering. The ‘Shearlet(border)+FDL’ scheme is similar, but only the views at the periphery of the Light Field are interpolated using [19]. Our experiments with the Shearlet method were performed using the authors’ implementation of the epipolar image reconstruction with 100 iterations. The reconstruction of the full Light Field was performed by scanning the epipolar images in the ‘direct’ order as detailed in [19].
Table III: Average PSNR of the interpolated views and computation time of each method for the different input view configurations.

Method \ Input views  2x2  3x3  5x5  border
Shearlet [19]  34.8 dB / 8600 s  40.8 dB / 9400 s  41.8 dB / 11100 s  37.2 dB / 6300 s
FDL  36.8 dB / 5.4 s  40.9 dB / 7.8 s  42.9 dB / 8.9 s  43.1 dB / 8.8 s
Shearlet(full)+FDL  35.5 dB / 8600 s  41.4 dB / 9400 s  42.6 dB / 11100 s  40.1 dB / 6300 s
Shearlet(border)+FDL  38.4 dB / 3200 s  42.0 dB / 3200 s  42.7 dB / 3200 s  x
The results in Table III show that our FDL view interpolation performs particularly well in the ‘border’ configuration. The Shearlet method, on the other hand, does not fully take advantage of this configuration as it processes each epipolar image independently using only the input views located on the corresponding row or column in the Light Field. In the other configurations, a higher PSNR is also observed with the FDL method. However, when the input Light Field is too sparse (e.g. 2x2), it produces ringing artifacts as shown in Fig. 9. In this situation, the Shearlet method better preserves the edges than our FDL method, but it tends to blur the fine details and introduce color distortions, which explains the lower PSNR. The best strategy for such sparse inputs is then to combine the two methods with the ‘Shearlet(border)+FDL’ scheme which does not produce ringing artifacts. In comparison with the Shearlet method alone, this scheme better keeps the details and color consistency as shown in Fig. 9. It is also significantly faster since the reconstruction of the interior views is performed using our FDL approach which requires a negligible computing time compared to the Shearlet method.
VIII-B Denoising
Another direct application of our FDL construction algorithm is the Light Field denoising problem. Denoising is naturally obtained by constructing an FDL model from a Light Field, and by rendering images using the same angular coordinates as the input subaperture images. The noise is filtered in this approach thanks to the model definition that enforces the Light Field dimensionality gap prior [20] (see Section IV-A). Note that for the same reason, the method also ensures color and illumination consistency along the angular dimensions. Hence, it can serve simultaneously as a denoising and color correction method for Light Fields with variations of color and illumination between the subaperture images. In practice, this is particularly useful for Light Fields captured with plenoptic cameras (e.g. Lytro), as shown in Fig. 10. Additional results are also presented in the supplementary video.
However, for Light Fields with a wider baseline, our denoising may also introduce distortions due to the assumption of a non-occluded Lambertian Light Field in the problem definition. In order to better cope with this type of data, we propose a relaxation of the FDL model. In the original FDL calibration (see Section V-B), the shift applied to the layer k to approximate the input subaperture image j is given by the entry s_jk = u_j d_k of the matrix S = u d^T. Since u and d are column vectors, the rank of the shift parameter matrix S is equal to 1. Hence, the shift applied to each layer is proportional to the angular coordinate of the view to reconstruct. However, this property is not suitable for non-Lambertian surfaces and occlusions. Therefore, in the relaxed FDL model, we remove the rank-1 constraint by searching directly for the matrix S instead of u and d in the calibration algorithm. For that purpose, the gradient descent is applied using the gradients defined in Eq. (23). In order to obtain satisfying convergence, the matrix S is initialized with the solution of the original rank-1 constrained problem. Once the matrix S is known, the denoised result is obtained by constructing the layers and rendering the subaperture images using the shifts s_jk instead of u_j d_k. Note that, for the denoising application, we only need to reconstruct the input subaperture images for which the shift parameters were computed. Applying the relaxed model to the view interpolation application would also require interpolating the shift parameters for other view positions, which we do not consider in this paper.
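The difference between the original and relaxed parameterizations of the shift matrix can be illustrated numerically (u, d and S are placeholders for the calibration variables, and the gradient descent itself is omitted; a small perturbation stands in for its effect):

```python
import numpy as np

rng = np.random.default_rng(0)
n_views, n_layers = 9, 5

u = rng.standard_normal((n_views, 1))    # angular coordinates of the views
d = rng.standard_normal((n_layers, 1))   # disparity values of the layers

# original model: the shift of layer k for view j is s_jk = u_j * d_k,
# i.e. the shift matrix S = u d^T has rank 1
S = u @ d.T
assert np.linalg.matrix_rank(S) == 1

# relaxed model: every entry of S is optimized independently, starting
# from the rank-1 solution, so the rank-1 constraint disappears
S_relaxed = S + 0.01 * rng.standard_normal(S.shape)
assert np.linalg.matrix_rank(S_relaxed) > 1
```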
Denoising results are presented in Fig. 11 for the Light Field ‘Tarot’, which exhibits strong non-Lambertian effects. The reflections in the crystal ball are better preserved by the relaxed FDL model than by the original one. For the comparison, the results obtained with the LFBM5D [35] and the Hyperfan 4D filter (HF4D) [28] are also shown. The best results were obtained with the LFBM5D method, which completely removes the noise while preserving most of the details in the image. However, this method performs heavy processing and typically requires several hours for a Light Field.
The HF4D method is faster since it simply multiplies the 4D Fourier Transform of the Light Field by a 4D filter. This hyperfan filter is designed to attenuate the frequencies outside of the theoretical region of support of the Light Field in the 4D Fourier domain, under the assumption of a non-occluded Lambertian scene. With comparable processing times, our FDL approach removes more noise thanks to the linear optimization used instead of the direct filtering in the HF4D method. Furthermore, our model can be more easily generalized to the case of scenes with occlusions or non-Lambertian surfaces, as shown with the proposed relaxation.
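As an illustration of the frequency-domain filtering principle (not of the actual HF4D implementation, which operates in 4D with a smoothly attenuating filter), a 2D (x, u) analogue keeps only the frequencies close to the fan-shaped support ω_u = −d ω_x, d ∈ [d_min, d_max], of a non-occluded Lambertian scene; names and conventions are ours:

```python
import numpy as np

def fan_filter_2d(epipolar_image, d_min, d_max, soft=0.05):
    """2D sketch of fan filtering: keep only the frequencies near the fan
    omega_u = -d * omega_x for d in [d_min, d_max], the theoretical support
    of a non-occluded Lambertian light field; everything else is zeroed."""
    n_u, n_x = epipolar_image.shape
    wu = np.fft.fftfreq(n_u)[:, None]               # angular frequencies
    wx = np.fft.fftfreq(n_x)[None, :]               # spatial frequencies
    lo = np.minimum(-d_max * wx, -d_min * wx) - soft
    hi = np.maximum(-d_max * wx, -d_min * wx) + soft
    mask = (wu >= lo) & (wu <= hi)                  # binary fan mask
    return np.real(np.fft.ifft2(np.fft.fft2(epipolar_image) * mask))
```

An epipolar image of disparity inside [d_min, d_max] passes through unchanged, while content at disparities outside the fan is attenuated.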
IX Conclusion and Perspectives
We have presented a new representation for Light Fields called Fourier Disparity Layers. The light information in the scene is decomposed into several layers, each corresponding to a depth plane parameterized by a disparity value. The decomposition step is formulated in the Fourier domain as a simple linear least-squares problem per frequency, hence allowing fast processing with GPU parallelization. We have demonstrated the advantages of the FDL representation for several Light Field processing tasks. Those include real-time rendering, view interpolation, and denoising, as well as a calibration step that determines the angular coordinates of the input subaperture images along with an optimal set of disparity values for each layer.
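The per-frequency least-squares formulation can be sketched as follows; the regularization of the actual method is reduced here to a plain Tikhonov term, and all names and sign conventions are illustrative:

```python
import numpy as np

def construct_layers(view_ffts, u_coords, disparities, lam=1e-6):
    """Construct the FDL layer spectra by solving, independently for each
    spatial frequency w, a small regularized least-squares problem
        min_h ||A h - b||^2 + lam * ||h||^2,
    where A_jk = exp(-2i*pi * u_j * d_k * w) maps layers to views.
    The independence of the frequencies enables GPU parallelization."""
    n_views, n = view_ffts.shape
    n_layers = len(disparities)
    layers = np.zeros((n_layers, n), dtype=np.complex128)
    for i, w in enumerate(np.fft.fftfreq(n)):
        A = np.exp(-2j * np.pi * np.outer(u_coords, disparities) * w)
        b = view_ffts[:, i]
        # normal equations with a Tikhonov (ridge) term
        layers[:, i] = np.linalg.solve(A.conj().T @ A + lam * np.eye(n_layers),
                                       A.conj().T @ b)
    return layers
```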
The computational efficiency of our layer construction method, coupled with its flexibility regarding the type of input images, opens perspectives for an even larger range of applications. For example, our supplementary video presents results of FDL model construction from a focal stack, where the camera aperture and the focus parameter of each image are known. More generally, the method could even take advantage of a hybrid capture system taking images with different apertures, focusing depths, and varying positions on the camera plane. Furthermore, since our approach does not require a specific pattern of the input view positions, it can greatly simplify the capture of Light Fields by removing the need for the different viewpoints to be regularly sampled on a grid.
However, despite this flexibility, we have shown that the quality of the FDL model produced with our method depends on the input configuration. For example, while very accurate view interpolation is obtained from the subaperture images located on the borders of the Light Field, a too sparse viewpoint sampling may lead to ringing artifacts. Furthermore, for Light Fields with a large baseline, the FDL model is more likely to produce visible errors due to large occlusion areas or non-Lambertian effects. In the paper, we have partially addressed these limitations. In the case of a too sparse viewpoint sampling, a combination of our view interpolation with a state-of-the-art approach was shown to outperform either of the two methods taken individually. A better handling of occlusions and non-Lambertian effects was also obtained, for the denoising application, using a relaxed version of the model.
With the aim of further extending the applicability of the FDL approach, future work may focus on generalizing the relaxed model to view interpolation, or on including an additional prior directly in the layer construction (e.g. sparsity in the shearlet domain, similarly to [19]) to better cope with very sparse or very noisy Light Fields. A generalization of the calibration to wide aperture images would also be a valuable tool to facilitate the creation of Light Fields from less conventional input data such as focal stacks.
Appendix A Proof of sparsity prior
The spatial regions Ω_k are defined for the central view at u = 0. The corresponding regions can also be defined for any view u by Ω_k^u = {x + u d_k | x ∈ Ω_k}.
The expression in Eq. (4) is then formulated as:

L(x, u) = L(x − u d_k, 0),  ∀ x ∈ Ω_k^u.   (31)

In the assumption of a non-occluded Light Field, the regions are such that ∪_k Ω_k^u = ℝ for every u, and, for any fixed u, the sets Ω_k^u are pairwise disjoint. Hence, the Fourier Transform of the Light Field is given by:

L̂(ω_x, ω_u) = Σ_k ∫ ∫_{Ω_k^u} L(x, u) e^{−2iπ(x ω_x + u ω_u)} dx du.   (32)

Using Eq. (31), and by the change of variable x → x + u d_k, we obtain:

L̂(ω_x, ω_u) = Σ_k ∫ ∫_{Ω_k} L(x, 0) e^{−2iπ((x + u d_k) ω_x + u ω_u)} dx du.   (33)

By rearranging the terms, the result is:

L̂(ω_x, ω_u) = Σ_k ĥ_k(ω_x) δ(d_k ω_x + ω_u),   (34)

where we define ĥ_k(ω_x) = ∫_{Ω_k} L(x, 0) e^{−2iπ x ω_x} dx and δ denotes the Dirac delta.
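A result of the form of Eq. (34) can be checked numerically on a discrete example: a light field built by summing layers of constant integer disparity has its 2D Fourier transform supported on the lines ω_u = −d_k ω_x. The construction below (circular shifts standing in for the translation L(x, u) = L(x − u d_k, 0)) is ours:

```python
import numpy as np

# Two 1D layers with constant disparities d1 and d2
n = 15
rng = np.random.default_rng(1)
h1, h2 = rng.standard_normal(n), rng.standard_normal(n)
d1, d2 = 1, 2
L = np.array([np.roll(h1, u * d1) + np.roll(h2, u * d2) for u in range(n)])

F = np.fft.fft2(L)                     # axes: (omega_u, omega_x)
ku = np.arange(n)[:, None]             # discrete angular frequencies
kx = np.arange(n)[None, :]             # discrete spatial frequencies
on_lines = ((ku + d1 * kx) % n == 0) | ((ku + d2 * kx) % n == 0)

# all the spectral energy lies on the lines omega_u + d_k * omega_x = 0 (mod n)
off_energy = np.abs(F[~on_lines]).sum()
assert off_energy < 1e-8 * np.abs(F).sum()
```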
References
 [1] M. Levoy and P. Hanrahan, “Light field rendering,” in Proceedings of the 23rd Annual Conference on Computer Graphics and Interactive Techniques, ser. SIGGRAPH ’96. ACM, 1996, pp. 31–42.
 [2] S. J. Gortler, R. Grzeszczuk, R. Szeliski, and M. F. Cohen, “The lumigraph,” in Proceedings of the 23rd Annual Conference on Computer Graphics and Interactive Techniques, ser. SIGGRAPH ’96, 1996, pp. 43–54.
 [3] J.-X. Chai, X. Tong, S.-C. Chan, and H.-Y. Shum, “Plenoptic sampling,” in Proceedings of the 27th Annual Conference on Computer Graphics and Interactive Techniques, 2000, pp. 307–318.
 [4] R. Ng, M. Levoy, M. Brédif, G. Duval, M. Horowitz, and P. Hanrahan, “Light Field Photography with a Hand-Held Plenoptic Camera,” CSTR 2005-02, Tech. Rep., Apr. 2005.
 [5] R. Ng, “Fourier slice photography,” ACM Trans. Graph., vol. 24, no. 3, pp. 735–744, Jul. 2005.
 [6] T. Georgiev, K. C. Zheng, B. Curless, D. Salesin, S. Nayar, and C. Intwala, “Spatio-angular resolution tradeoffs in integral photography,” in Proc. Eurograph. Symp. Rendering, 2006, pp. 263–272.
 [7] S. Wanner and B. Goldluecke, “Variational light field analysis for disparity estimation and super-resolution,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 36, no. 3, 2013.
 [8] S. Pujades, F. Devernay, and B. Goldluecke, “Bayesian view synthesis and image-based rendering principles,” in Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), 2014, pp. 3906–3913.
 [9] G. Chaurasia, S. Duchêne, O. Sorkine-Hornung, and G. Drettakis, “Depth synthesis and local warps for plausible image-based navigation,” ACM Trans. Graph., vol. 32, 2013.
 [10] Z. Zhang, Y. Liu, and Q. Dai, “Light field from micro-baseline image pair,” in Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), 2015, pp. 3800–3809.
 [11] E. Penner and L. Zhang, “Soft 3D reconstruction for view synthesis,” ACM Transactions on Graphics (Proc. SIGGRAPH Asia), vol. 36, no. 6, 2017.
 [12] N. K. Kalantari, T.-C. Wang, and R. Ramamoorthi, “Learning-based view synthesis for light field cameras,” ACM Trans. on Graphics (Proceedings of SIGGRAPH Asia 2016), vol. 35, no. 6, 2016.
 [13] J. Flynn, I. Neulander, J. Philbin, and N. Snavely, “DeepStereo: Learning to predict new views from the world’s imagery,” in Proc. IEEE Conf. Comput. Vis. Pattern Recognit., 2015, pp. 5515–5524.
 [14] G. Wu, M. Zhao, L. Wang, Q. Dai, T. Chai, and Y. Liu, “Light field reconstruction using deep convolutional network on EPI,” in IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), July 2017.
 [15] Y. Yoon, H.-G. Jeon, D. Yoo, J.-Y. Lee, and I. S. Kweon, “Learning a deep convolutional network for light-field image super-resolution,” in IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), June 2015, pp. 57–65.
 [16] L. Shi, H. Hassanieh, A. Davis, D. Katabi, and F. Durand, “Light field reconstruction using sparsity in the continuous Fourier domain,” ACM Trans. on Graphics, vol. 34, no. 1, pp. 1–13, 2014.
 [17] K. Takahashi, S. Fujita, and T. Fujii, “Good group sparsity prior for light field interpolation,” in IEEE Conf. on Image Processing (ICIP), Sep. 2017, pp. 1447–1451.
 [18] S. Vagharshakyan, R. Bregovic, and A. Gotchev, “Accelerated shearlet-domain light field reconstruction,” IEEE J. Sel. Topics Signal Process., vol. 11, no. 7, pp. 1082–1091, Oct. 2017.
 [19] ——, “Light field reconstruction using shearlet transform,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 40, no. 1, pp. 133–147, Jan. 2018.
 [20] A. Levin and F. Durand, “Linear view synthesis using a dimensionality gap light field prior,” in Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), 2010, pp. 1831–1838.
 [21] K. Kodama and A. Kubota, “Efficient reconstruction of all-in-focus images through shifted pinholes from multi-focus images for dense light field synthesis and rendering,” IEEE Trans. on Image Processing, vol. 22, no. 11, pp. 4407–4421, 2014.
 [22] A. Mousnier, E. Vural, and C. Guillemot, “Partial light field tomographic reconstruction from a fixed-camera focal stack,” CoRR, vol. abs/1503.01903, 2015.
 [23] F. Pérez, A. Pérez, M. Rodriguez, and E. Magdaleno, “Lightfield recovery from its focal stack,” J Math Imaging Vis, vol. 56, pp. 573–590, 2016.
 [24] J. R. Alonso, A. Fernández, and J. A. Ferrari, “Reconstruction of perspective shifts and refocusing of a three-dimensional scene from a multi-focus image stack,” Appl. Opt., vol. 55, no. 9, pp. 2380–2386, Mar. 2016.
 [25] G. Wetzstein, D. Lanman, W. Heidrich, and R. Raskar, “Layered 3D: Tomographic image synthesis for attenuation-based light field and high dynamic range displays,” ACM Trans. Graph., vol. 30, no. 4, 2011.

 [26] G. Wetzstein, D. Lanman, M. Hirsch, and R. Raskar, “Tensor Displays: Compressive light field synthesis using multilayer displays with directional backlighting,” ACM Trans. Graph. (Proc. SIGGRAPH), vol. 31, no. 4, pp. 1–11, 2012.
 [27] K. Takahashi, Y. Kobayashi, and T. Fujii, “From focal stack to tensor light-field display,” IEEE Trans. Image Process., vol. 27, no. 9, pp. 4571–4584, Sep. 2018.
 [28] D. G. Dansereau, D. L. Bongiorno, O. Pizarro, and S. B. Williams, “Light field image denoising using a linear 4D frequency-hyperfan all-in-focus filter,” in Proc. SPIE, vol. 8657, Feb. 2013.
 [29] D. Dansereau and L. T. Bruton, “A 4D dual-fan filter bank for depth filtering in light fields,” IEEE Trans. Signal Process., vol. 55, no. 2, pp. 542–549, Feb. 2007.
 [30] S. Wanner, S. Meister, and B. Goldlücke, “Datasets and benchmarks for densely sampled 4D light fields,” in Vision, Modeling & Visualization, 2013, pp. 225–226, dataset available at http://lightfieldgroup.iwr.uni-heidelberg.de/?page_id=713, accessed 01-10-2018.
 [31] “The Stanford light field archive,” http://lightfield.stanford.edu/lfs.html, accessed: 01-10-2018.
 [32] X. Jiang, M. Le Pendu, R. A. Farrugia, and C. Guillemot, “Light field compression with homography-based low-rank approximation,” IEEE Journal of Selected Topics in Signal Processing, vol. 11, no. 7, pp. 1132–1145, Oct. 2017, dataset available at https://www.irisa.fr/temics/demos/lightField/CLIM/DataSoftware.html, accessed 01-10-2018.
 [33] M. Le Pendu, X. Jiang, and C. Guillemot, “Light field inpainting propagation via low rank matrix completion,” IEEE Transactions on Image Processing, vol. 27, no. 4, pp. 1981–1993, Apr. 2018, dataset available at https://www.irisa.fr/temics/demos/lightField/CLIM/DataSoftware.html, accessed 01-10-2018.
 [34] “Light field video dataset captured by a R8 Raytrix camera,” https://www.irisa.fr/temics/demos/lightField/CLIM/Datasets/RaytrixR8Dataset5x5/index.html, accessed: 01-10-2018.
 [35] M. Alain and A. Smolic, “Light field denoising by sparse 5d transform domain collaborative filtering,” in IEEE International Workshop on Multimedia Signal Processing (MMSP 2017), Oct. 2017.