Harnessing Multi-View Perspective of Light Fields for Low-Light Imaging

Light Field (LF) offers unique advantages such as post-capture refocusing and depth estimation, but low-light conditions limit these capabilities. To restore low-light LFs we should harness the geometric cues present in different LF views, which is not possible using single-frame low-light enhancement techniques. We, therefore, propose a deep neural network for Low-Light Light Field (L3F) restoration, which we refer to as L3Fnet. The proposed L3Fnet not only performs the necessary visual enhancement of each LF view but also preserves the epipolar geometry across views. We achieve this by adopting a two-stage architecture for L3Fnet. Stage-I looks at all the LF views to encode the LF geometry. This encoded information is then used in Stage-II to reconstruct each LF view. To facilitate learning-based techniques for low-light LF imaging, we collected a comprehensive LF dataset of various scenes. For each scene, we captured four LFs, one with near-optimal exposure and ISO settings and the others at different levels of low-light conditions varying from low to extreme low-light settings. The effectiveness of the proposed L3Fnet is supported by both visual and numerical comparisons on this dataset. To further analyze the performance of low-light reconstruction methods, we also propose an L3F-wild dataset that contains LF captured late at night with almost zero lux values. No ground truth is available in this dataset. To perform well on the L3F-wild dataset, any method must adapt to the light level of the captured scene. To do this we propose a novel pre-processing block that makes L3Fnet robust to various degrees of low-light conditions. Lastly, we show that L3Fnet can also be used for low-light enhancement of single-frame images, despite it being engineered for LF data. We do so by converting the single-frame DSLR image into a form suitable to L3Fnet, which we call as pseudo-LF.






The project is the official implementation of our IEEE TIP Journal, "Harnessing Multi-View Perspective of Light Fields for Low-Light Imaging"

I Introduction

Fig. 7: Low-light is a severe bottleneck to Light Field (LF) applications. For example, the depth estimate of an LF captured in low light is very poor. Our proposed method not only visually restores each of the LF views but also preserves the LF geometry for faithful depth estimation. Panels: (a) captured LF, (b) depth from captured LF, (c) histogram-equalized LF, (d) depth from histogram-equalized LF, (e) our restored LF, (f) depth from our restored LF.

Images captured in low light, such as in the dark or at night-time, not only lack pleasing visual aesthetics but are also corrupted by a high amount of noise and color distortion. This directly hinders the performance of many computer vision algorithms [1]. This plight of low-light conditions is not unique to conventional 2D photographs but is also shared by Light Fields (LFs) [2, 3, 4, 5]. In contrast with a conventional camera, which captures only 2D spatial information, a light field camera captures both 2D spatial and 2D angular information about the scene. Capturing the full light field allows us to perform post-capture controls such as digital refocusing and aperture control [3, 6]. It also enables easy scene depth estimation [7, 8, 9]. This has resulted in many LF applications such as view synthesis [4, 5, 10], structure from motion [11], pedestrian identification [12], reflection removal [13], and various other real-world applications like autonomous driving and plant monitoring [14]. But, as stated previously, these LF applications are not immune to the challenges of low-light imaging. This is depicted in Fig. 7, where the depth estimation capability of LF is foiled by low-light conditions. Commonly used remedies such as histogram equalization or a higher ISO do not help much in this regard, as they boost the noise levels and introduce unwanted artifacts. Our goal, therefore, is to design a low-light LF restoration technique that mitigates these problems.
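The claim about histogram equalization can be illustrated with a small NumPy experiment: globally equalizing a synthetic dark, noisy frame stretches the noise right along with the signal. The image model below is purely illustrative.

```python
import numpy as np

def hist_equalize(img_u8):
    """Global histogram equalization via the empirical CDF (8-bit, single channel)."""
    hist = np.bincount(img_u8.ravel(), minlength=256)
    cdf = np.cumsum(hist).astype(np.float64)
    cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min())   # normalize to [0, 1]
    lut = np.round(255 * cdf).astype(np.uint8)          # intensity look-up table
    return lut[img_u8]

rng = np.random.default_rng(0)

# Synthetic low-light frame: a dim ramp signal plus sensor noise,
# occupying only the lowest part of the 8-bit range.
clean = np.tile(np.linspace(5, 25, 128), (128, 1))
noisy = np.clip(clean + rng.normal(0, 2.0, clean.shape), 0, 255).astype(np.uint8)

equalized = hist_equalize(noisy)

# Noise level, estimated as the deviation from each column's mean (the ramp
# is constant along rows, so this residual is essentially pure noise).
noise_before = (noisy - noisy.mean(axis=0)).std()
noise_after = (equalized - equalized.mean(axis=0)).std()
print(noise_before, noise_after)  # equalization brightens but also amplifies noise
```

The monotone look-up table that brightens the dark intensity range necessarily magnifies the small intensity differences caused by noise, which is why equalization alone is a poor low-light restorer.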

Low-light restoration is an ill-posed problem because of the large amount of noise present in the low-light signal; adequate color information is also missing. An LF, however, by capturing multiple views of the low-light scene, contains rich geometric cues about the scene. Harnessing the complementary information spread across the LF views should reduce the ill-posedness of the problem and result in better visual reconstruction. In addition, the restoration process should preserve the LF epipolar constraints to allow for subsequent tasks like depth estimation. The existing works on low-light enhancement [15, 16, 17, 18, 19, 20] are, however, not designed with these points in mind and so are not suitable candidates for enhancing low-light LFs.

We propose a two-stage deep neural network for Low-Light Light Field (L3Fnet) reconstruction. Stage-I of L3Fnet operates on the full LF to first encode the LF geometry. This encoded information is then used as auxiliary information in Stage-II for the actual view restoration. Adopting such a two-stage architecture helps us restore low-light LFs both aesthetically and with geometric correctness.

Training a learning-based model such as L3Fnet requires a low-light LF dataset but, unfortunately, no such dataset exists. An easy way out would have been to create a synthetic dataset using gamma correction, noise addition, or even retouching the images in software such as Adobe Photoshop and GIMP, as was done for the single-frame case [17, 21, 16, 20, 22]. But these are only proxy solutions, and as pointed out by Plötz and Roth [23], benchmarking algorithms on synthetic datasets may not correlate with their performance on real-world data. We, therefore, collected our own Low-Light Light Field (L3F) dataset using the commercially available Lytro Illum LF camera. The L3F dataset was captured in the evenings when the visibility was below normal levels. For each scene, four shots were taken. For the first shot, optimal camera exposure and ISO settings were used to get the ground-truth LF. The Lytro Illum's exposure was then reduced in definite proportions to capture three more low-light LFs. While taking these shots, much care was taken to avoid moving objects, such as passing vehicles and wavering leaves, to prevent any registration issues. Camera shake is another cause of registration problems and is much more pronounced for the Lytro Illum. Unlike modern single-frame DSLR cameras, the Lytro Illum does not allow remote connectivity for capturing data, so the only way to capture an LF was by physically pressing the shutter button, causing unintentional camera shake. To limit this, we captured multiple sets of four LFs for each scene and employed additional measures to keep the camera rigidly fixed. We then did a manual check to identify the set exhibiting the least amount of camera shake. Using these techniques, the maximum pixel shift in the captured LFs is kept small.

While capturing the L3F dataset we were constrained to capture the ground truth for each scene, which was essential to train our network. Because of this constraint, we were unable to capture extremely low-light LFs. We, therefore, decided to forsake this restriction and collected a separate low-light LF dataset captured late at night with near-zero lux conditions at the camera lens. We call this dataset L3F-wild, and we use it only for evaluation. L3F-wild LFs were captured with typical camera exposures on the order of a second and nominal ISO levels. Note that we do not follow the standard practice of capturing images with a long exposure of several seconds in low-light conditions. This is to avoid possible motion blur and is a step towards fast low-light imaging. The L3F-wild dataset has a good amount of variation in scene brightness level, portraying a real-world scenario. We, consequently, propose a novel pre-processing block which, when appended to L3Fnet, makes it robust to such variations in light levels by automatically estimating an appropriate amplification factor.

We have already discussed why single-frame techniques are not suitable for enhancing low-light LFs. We now address the reverse question: can LF methods be used for single-frame DSLR images? The proposed L3Fnet has been specifically engineered for LFs and so cannot directly operate on single-frame images. However, we propose a novel pixel-shuffling mechanism that can convert any DSLR image into a pseudo-LF. A pseudo-LF has a form suitable for L3Fnet and so can be enhanced using it. Afterwards, the enhanced pseudo-LF is transformed back into a single-frame DSLR image in a lossless fashion. This gives the L3Fnet architecture a universal appeal for both LF and single-frame low-light enhancement, with restoration being more optimized for LFs while maintaining decent recovery for single-frame images.

To summarize, we make the following contributions:

  • We propose a two-stage deep neural network, L3Fnet, for restoring extremely low-light LFs.

  • We collected the L3F dataset consisting of real LFs with varying levels of low light, which can be used for training and evaluating data-driven methods.

  • We propose a novel pre-processing block that automatically adapts L3Fnet to changing light levels.

  • Our proposed Pseudo-L3Fnet framework enables L3Fnet to process even single-frame DSLR images for better enhancement in several cases.

II Related Work

LF processing algorithms: Many techniques and models have been proposed for LF-related tasks such as spatial super-resolution [24, 25, 26, 27], deblurring [28, 29, 30], denoising [31, 32, 33, 34, 35], and depth estimation [7, 8, 9]. But, to the best of our knowledge, no prior work has considered solving the challenges involved in low-light LF imaging. We, therefore, outline some of the important works on LF denoising, as denoising is a crucial part of low-light restoration. The easiest approach to LF denoising is to individually denoise each LF view using standard techniques like BM3D [36]. This, however, does not capture the 4D structure of the LF and was addressed by Mitra and Veeraraghavan [31]. By operating on LF patches, they modeled each 4D patch using Gaussian Mixture Models (GMM) subject to the patch disparity information. This way they provided a common framework for several LF tasks such as super-resolution and denoising. Dansereau et al. [33] observed that the LF of a Lambertian surface has a hyperfan-like shape in the frequency domain, and removed noise by appropriately choosing the passband. Sepas-Moghaddam et al. [34], treating LF views akin to video frames, first converted the LF into an epipolar sequence and then applied video-based techniques to denoise the LF. This 3D stacking of epipolar images, however, has limitations in capturing the 4D nature of the LF and was consequently addressed by LFBM5D [32], a popular LF denoising technique. LFBM5D performs realistic 4D LF modeling and is a natural extension of the then state-of-the-art denoising method BM3D. However, LFBM5D does not address the problem of low-light LFs because of its inability to enhance color.

Single-frame low-light enhancement: A significant amount of literature exists on low-light single-frame image enhancement; we briefly review some of it. LIME [15] used a non-deep-learning optimization framework for low-light enhancement. It utilized retinex theory [37] to decouple the captured image into reflectance and illumination components for subsequent enhancement. Park et al. [16] used a similar idea and proposed another variational optimization-based retinex model. Ying et al. [38] also used the retinex model but combined it with a camera response model to preserve the naturalness of the enhanced image. Deep learning based methods employing encoder-decoder architectures have also recently been used for low-light enhancement [20, 17, 18, 19]. Amongst these recent works, the work by Chen et al. [19] is a landmark paper on low-light reconstruction and is closest to our work on low-light restoration. Other recent works [17, 18, 20] also target low light, but their main objective is the enhancement of dim images. Those images already had a good representation of the target scene and lacked only aesthetics and contrast, retaining a decent amount of scene visibility. This is an interesting problem to solve, but in our work we target data with much lower visibility; see Fig. 25. Moreover, they worked on synthetic data where the images were darkened with a global gamma operation or retouched in photo-editing software to obtain well-lit ground truth.

Fig. 8: Sample center-view images from the Low-Light Light Field (L3F) dataset. For each scene, we capture one LF at a near-optimal exposure setting and three more at reduced exposure settings corresponding to the L3F-20, L3F-50 and L3F-100 subsets. Refer to the supplementary material to view the full dataset.

III L3F Dataset

Creating a low-light LF enhancement dataset is a major challenge in designing learning-based solutions. Synthetic low-light LF data can be created using gamma correction and noise addition [21, 16, 20, 22]; however, such techniques act globally and so do not mimic a real low-light situation, which severely affects some regions more than others. Also, in synthetic data, noise is generally modeled with a Gaussian or Poisson distribution, which may not hold true for real low-light data. The other popular technique is to have trained photographers retouch the low-light images in photo-editing software to obtain the ground truth [17, 18]. Such proxy solutions are not suitable for evaluating low-light LFs because any local edit in an LF view needs to be propagated to all the other SAIs, which is difficult to enforce manually. Thus, different from recent methods that adopt such techniques, we introduce a Low-Light Light Field (L3F) dataset containing both low-light LFs and the corresponding ground-truth LFs. To the best of our knowledge, this is the first dataset available for training and benchmarking low-light LF enhancement techniques.

Fig. 9: L3Fnet architecture: The proposed architecture consists of a Global Representation Block (Stage-I) and a View Reconstruction block (Stage-II). Stage-I operates on the full low-light LF to obtain a latent representation that encodes the LF geometry. This latent representation is then used by Stage-II to restore each LF view.

All the LFs in the proposed L3F dataset were captured outdoors. The scene content predominantly includes an urban campus environment with Lambertian and non-Lambertian surfaces at varying depth and occlusion levels. The captured LFs do not include any dynamic objects such as moving people and vehicles. All the LFs were captured in the evening with limited natural light, where the typical illuminance was between 0.1 lux and 20 lux.

We use the Lytro Illum camera to capture all the LFs. For each scene we captured four LFs. One was captured using the lowest ISO and an appropriate exposure setting to make it look like a well-lit image; this we use as the ground-truth reference LF for that scene. The other three LFs were captured by reducing the exposure time in fixed fractions of the reference LF exposure; for brevity, we refer to them as the L3F-20, L3F-50 and L3F-100 datasets. Chen et al. [19] limited their work to a comparable low-light setting, but we go even further into low light with the L3F-100 dataset to better understand and contrast the effect of low light on LF reconstruction.

Motion blur caused by camera shake is a common artifact when capturing long-exposure sequences. To limit this we mounted the Lytro camera on a tripod and captured the LFs using timer mode. Unlike DSLR cameras, the Lytro Illum does not allow remote capture, which makes the data capture process harder and more laborious. Consequently, the four LFs captured for the same scene exhibit small spatial misalignments. To contain this we captured multiple sets of these four LFs for a target scene and chose the set exhibiting the least misalignment. No subsequent alignment operation was performed on the LFs; the small misalignment in the L3F dataset is handled by our proposed method.

The images were captured in the Light Field Raw (LFR) format of the Lytro Illum camera. The raw images are captured in a Bayer sensor pattern where the pixels are hexagonally packed. We use the Light Field Matlab Toolbox [35] to demosaic, decode, devignettize and color-correct the LFR files. The decoded 4D LF data is a 5D array whose first two dimensions give the angular resolution, the next two give the spatial resolution of each view, and the last corresponds to the RGB channels.

The proposed dataset is organized into sets, each corresponding to a unique scene with LFs captured at the various exposure settings. Fig. 8 shows a few sample LF center-views from the proposed dataset. A subset of these sets is held out as the test set.

While capturing the L3F-20, 50, and 100 datasets, we were constrained to also capture the well-lit LF, which is required for training L3Fnet. Forsaking this restriction, we captured even darker low-light LFs late at night. While capturing this dataset, the lux measure at the Lytro Illum's lens was almost nil, so no ground truth was possible for these LFs. We call this dataset L3F-wild. Although L3F-wild cannot be used for training, the performance of methods trained on the L3F-20, 50, and 100 datasets can be checked by evaluating on it. The L3F-wild dataset is a step towards fast low-light imaging because it does not adopt standard low-light practices such as prolonged exposure or high ISO: long exposures of several seconds cause a lot of motion blur, and high ISO boosts the noise. L3F-wild was instead captured with a typical exposure time on the order of a second and low ISO values. In the experimental section and the supplementary material, we demonstrate the effectiveness of L3Fnet on both the L3F-100 and L3F-wild datasets.

IV Low-Light Light Field Network (L3Fnet)

Our proposed method is designed to enhance LFs captured in low light. But before describing the details of the proposed solution, we highlight some of the desired characteristics it should have. These characteristics are not specific to any particular task but are features sought after in any LF architecture; they were cataloged after studying different architectures for various LF-related tasks, and our algorithm was designed based on them. Firstly, L3Fnet should not make strong assumptions about the scene, so that any given low-light LF can be restored. This is contrary to works like [24, 39], which required explicit geometric information about the scene, such as a depth map, to perform the LF-related task. Secondly, L3Fnet should not reconstruct the LF views independently of each other. That may lead to visually pleasing reconstructions, but the LF epipolar geometry would not be preserved. This was noticed in works like [27], where only pairs of LF views were fed to the CNN model, failing to capture the high view coherence of the LF [25]. The third desired characteristic is view parallelization, which essentially means that we should be able to reconstruct each view in parallel. This is contrary to the approach taken by LFNet [25], which used bidirectional recurrent units to model the inter-view dependencies. But as pointed out in [26], the dependencies were not modeled well, and the algorithm was slow because of sequential processing over time. So, if possible, recurrent units should not be used in L3Fnet, to allow view parallelization. View parallelization also has the advantage that we can selectively reconstruct only some of the desired LF views, without reconstructing the full light field, saving time and memory. Lastly, we do not want multiple architectures with multiple sets of weights for view reconstruction. This approach was adopted by [26], which retrains the network for each Sub-Aperture Image (SAI) and consequently has a separate set of weights for each LF view. We, on the other hand, want a single architecture with shared weights for all LF views. This discussion is summarized in Table I.

TABLE I: Comparison of architectural features (no strong priors, view coherence, view parallelization, shared weights & architecture) across Wanner et al. [24], Yoon et al. [27], LFNet [25], Zhang et al. [26], and our L3Fnet.

We have incorporated these features into the design of L3Fnet to the extent possible without harming its primary purpose of enhancing low-light LFs. L3Fnet makes no strong assumptions about the target scene, as it requires no information other than the low-light LF itself. To achieve view coherence, we have a Global Representation Block (GRB), which operates on the entire LF to obtain a latent representation that encodes the LF epipolar geometry; to demonstrate this property we show epipolar and depth-estimation results in the experimental section. While the GRB is good for incorporating view coherence, it conflicts with the view-parallelization property. To compensate, L3Fnet adopts a two-stage architecture: the second stage operates on each LF view individually and, using the encoded information from Stage-I (GRB), enhances each SAI independently. We elaborate on this in more detail after fully describing the L3Fnet architecture. Finally, L3Fnet shares its architecture and weights amongst all LF views.

IV-A Network Architecture

Stage-I Global Representation Block (GRB): Given an input low-light LF L, we first stack all its views across the channel dimension to obtain L_s. Such a representation can be processed by a 2D Convolutional Neural Network (CNN) to obtain a low-dimensional LF representation useful for any downstream task.
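The stacking step can be sketched in NumPy as follows; the 8×8 angular and 64×64 spatial sizes here are illustrative only, not the paper's exact settings.

```python
import numpy as np

# Toy light field: U x V angular views, each an X x Y RGB image.
U, V, X, Y, C = 8, 8, 64, 64, 3
lf = np.random.rand(U, V, X, Y, C).astype(np.float32)

# Stage-I input: stack every view across the channel dimension, giving a
# single (U*V*C)-channel image that a 2D CNN can consume directly.
lf_stacked = lf.transpose(0, 1, 4, 2, 3).reshape(U * V * C, X, Y)
print(lf_stacked.shape)  # (192, 64, 64)
```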

In the Global Representation Block (GRB), a convolutional layer is first used to reduce the channel dimension of the stacked input L_s (see Table II):

F_0 = Conv_in(L_s).

To extract useful information, we further process this representation using M (fixed at 4) residual blocks [40] to obtain the global representation

F_m = R_m(F_{m-1}),  m = 1, ..., M,

where R_m denotes the m-th residual block in the GRB. This feature map is then fed to a final convolutional layer as a post-processing step:

G = Conv_out(F_M).
By processing all the views together using a CNN, the network captures the implicit LF structure, such as disparity and sub-pixel information, relevant to the final task. This global representation is then used by the View Reconstruction Block as augmented information to reconstruct any of the input views.

Stage-II View Reconstruction Block (VRB): While the GRB representation is coarse and common to all SAIs, for better reconstruction of each SAI we explicitly use its immediate neighbours. Formally, to reconstruct a particular view X_v, its neighbouring views are stacked across channels to obtain S_v. The view restoration process begins by processing S_v with a convolutional layer to obtain the feature map H_0:

H_0 = Conv_v(S_v).

The global feature map G obtained in Stage-I is concatenated with H_0 before being fed to N (fixed at 6) residual blocks:

H_1 = R'_1([H_0, G]),  H_n = R'_n(H_{n-1}),  n = 2, ..., N,

where R'_n denotes the n-th resblock in the View Reconstruction Block and [., .] denotes the concatenation operation. The resblocks in the GRB and VRB branches share the same structure. Finally, the output feature map H_N is passed through a transposed-convolution block, followed by a long skip connection from the input SAI, to restore the view:

Y_v = TConv(H_N) + X_v.
Many works [41, 42] have reported difficulty in training very deep networks built by simply stacking residual blocks. Therefore, to stabilize the training and optimization of deep networks, adding a long/global skip connection has become standard practice [41, 43]. Further, unlike short skip connections, which help propagate finer details, the long skip connection helps transmit coarse-level details that are crucial for image restoration tasks.
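A minimal PyTorch sketch of this two-stage design is given below. The residual-block counts (4 in Stage-I, 6 in Stage-II) follow Table II, but the strided down-sampling/up-sampling and exact channel widths are simplified, and all shapes and names are illustrative assumptions rather than the paper's implementation.

```python
import torch
import torch.nn as nn

class ResBlock(nn.Module):
    def __init__(self, ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1))
    def forward(self, x):
        return x + self.body(x)

class GRB(nn.Module):
    """Stage-I: encodes the whole channel-stacked LF into one global feature map."""
    def __init__(self, in_ch, feat=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, feat, 7, padding=3),
            *[ResBlock(feat) for _ in range(4)],
            nn.Conv2d(feat, feat, 1))
    def forward(self, lf_stacked):
        return self.net(lf_stacked)

class VRB(nn.Module):
    """Stage-II: restores one view from its stacked neighbours + global code."""
    def __init__(self, nb_ch, feat=64):
        super().__init__()
        self.head = nn.Conv2d(nb_ch, feat, 7, padding=3)
        self.body = nn.Sequential(*[ResBlock(2 * feat) for _ in range(6)])
        self.tail = nn.Conv2d(2 * feat, 3, 3, padding=1)
    def forward(self, neighbours, global_code, center_view):
        h = self.head(neighbours)
        h = self.body(torch.cat([h, global_code], dim=1))
        return self.tail(h) + center_view   # long skip from the input SAI

# Toy shapes: an 8x8 LF of 64x64 RGB views, 3x3 neighbourhood per view.
lf_stacked = torch.rand(1, 8 * 8 * 3, 64, 64)
neighbours = torch.rand(1, 9 * 3, 64, 64)
center = torch.rand(1, 3, 64, 64)

grb, vrb = GRB(in_ch=8 * 8 * 3), VRB(nb_ch=9 * 3)
code = grb(lf_stacked)               # computed once, shared across all views
restored = vrb(neighbours, code, center)
print(restored.shape)  # torch.Size([1, 3, 64, 64])
```

Because the global code is computed once and Stage-II has no recurrence, any subset of views can be restored in parallel with the same shared weights.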

Fig. 10: The Histogram Module computes the RGB histogram of a low-light LF and outputs an amplification factor γ. Normalizing the dark LF with this factor allows L3Fnet to process images of varying low-light levels, mitigating the over/under-saturation problem.

One of our design objectives was to decouple the reconstruction of an LF view from that of the others. This gives the flexibility to choose precisely which LF views need to be reconstructed, saving considerable time and memory. For example, to speed up training and simultaneously reduce the model size on the GPU, instead of enhancing all LF SAIs, we randomly chose k (fixed at 12) SAIs in each training iteration and reconstructed them in parallel in Stage-II. All these SAIs used the same global encoding produced at the end of Stage-I. This global encoding is also obtained in a single feed-forward pass of Stage-I, avoiding any kind of recurrence, which would consume more time. At test time, k was set to the full angular resolution of the LF.
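The random-SAI sampling strategy can be checked numerically: sampling 12 of the views per iteration still covers every view quickly over training. The 64-view (8×8) angular resolution assumed below is illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n_views, k = 64, 12          # 8x8 assumed angular resolution, 12 SAIs per iteration

covered = set()
iters_until_full = None
for it in range(1, 201):
    # Each training iteration reconstructs k randomly chosen SAIs in parallel.
    batch = rng.choice(n_views, size=k, replace=False)
    covered.update(batch.tolist())
    if iters_until_full is None and len(covered) == n_views:
        iters_until_full = it

print(iters_until_full)  # every view has been sampled well before 200 iterations
```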

Branch                    Type          K  S  Out
Stage-I                   Conv-2D       7  1   64
Global Representation     Conv-2D       3  2  128
Block                     Res-Block     3  1  128
                          Res-Block     3  1  128
                          Res-Block     3  1  128
                          Res-Block     3  1  128
                          Conv-2D       1  1   64
Stage-II                  Conv-2D       7  1   15
View Reconstruction       Conv-2D       3  2   64
Block                     Res-Block     3  1  128
                          Res-Block     3  1  128
                          Res-Block     3  1  128
                          Res-Block     3  1  128
                          Res-Block     3  1  128
                          Res-Block     3  1  128
                          Transpose-2D  2  2  128
                          Conv-2D       3  1    3
Histogram Module          FC            -  -  200
                          FC            -  -  100
                          FC            -  -   50
                          FC            -  -    1
TABLE II: L3Fnet architecture summary. K stands for kernel size, S for stride, and Out for the number of output channels of convolutional layers.

Histogram Module: The typical illuminance of the scenes in our dataset varies from 0.1 lux to 20 lux, which causes a lot of variation in the input data distribution. A simple workaround is to scale the input image with an appropriate amplification factor γ that controls the brightness of the image. Chen et al. [19] require a manual input for γ. In contrast, we automate the process by pre-processing the low-light LF with the proposed Histogram Module shown in Fig. 10.

To estimate the appropriate amplification factor γ, we make use of the RGB histogram of the input low-light LF. Formally, let h_c be the normalized B-bin histogram of the input image corresponding to color channel c ∈ {R, G, B}. The concatenated histograms are fed to fully connected layers to compute the amplification factor:

γ = FC([h_R, h_G, h_B]),

where [., .] denotes concatenation. γ is then used to scale the input low-light LF before it enters L3Fnet. Scaling the input image with the amplification factor acts as an important pre-processing step; without it, the restored LF would sometimes tend to be over- or under-saturated for different gradations of light level.

Loss Function: For the proposed solution we minimize the following loss function:

L(W) = || Y_hat - Y ||_1 + lambda * CX(Y_hat, Y),

where CX is the contextual loss [44], W denotes the weights of all the trainable parameters over which the loss is minimized, Y is the ground-truth view of the LF, and Y_hat is the corresponding restored view. The first component of our loss performs a dense (i.e., pixel-to-pixel) matching between the restored and the ground-truth LF to recover structure and color information. Since recovering precise color information from a low-light signal is hard, a small shift from the actual hue is expected no matter how long the network is trained. So, to prevent our loss function from incurring large errors due to this mismatch, we chose the sum of absolute deviations (L1 loss) over alternatives such as the L2 loss for the dense pixel matching. But as our dataset has small misalignments, using the L1 loss alone would lead to blurred outputs. To prevent this we additionally use a contextual loss term with a small weight. Our ablation studies demonstrate the importance of both components of the loss.
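One way to read the choice of L1 over L2: the squared penalty of L2 is dominated by the largest mismatches, while the L1 penalty grows only linearly. The toy comparison below (all numbers illustrative) contrasts a benign global hue shift with a handful of grossly wrong pixels.

```python
import numpy as np

rng = np.random.default_rng(0)
gt = rng.uniform(0.0, 1.0, size=(64, 64, 3))

# Restoration A: globally correct but with a small constant hue shift,
# the kind of residual error expected when recovering color from low light.
hue_shift = np.array([0.05, -0.03, 0.02])
pred_shift = np.clip(gt + hue_shift, 0.0, 1.0)

# Restoration B: pixel-perfect except for a small patch of badly wrong pixels.
pred_outlier = gt.copy()
pred_outlier[:4, :4] = 1.0 - pred_outlier[:4, :4]

def l1(a, b): return np.abs(a - b).mean()
def l2(a, b): return ((a - b) ** 2).mean()

# Relative to the benign hue shift, L2 weights the few gross errors far more
# heavily than L1 does, i.e. L2 is dominated by large mismatches.
print(l1(gt, pred_outlier) / l1(gt, pred_shift))
print(l2(gt, pred_outlier) / l2(gt, pred_shift))
```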

V Pseudo-LF: Extending L3Fnet to single-frame DSLR images

Fig. 11: Pseudo-LF: Extending the proposed L3Fnet to single-frame DSLR images. The figure shows the subsampling procedure to convert a DSLR image into a pseudo-LF. Pixels of the same color belong to the same pseudo-SAI.

The proposed L3Fnet has been engineered for light fields and, consequently, has so far been described in the context of LF reconstruction. On the contrary, we now describe the use of L3Fnet for single-frame DSLR image reconstruction by introducing a new pipeline, which we call the pseudo-L3Fnet pipeline. This is illustrated in Fig. 11.

The first step in the pseudo-L3Fnet pipeline is to convert the DSLR image into a pseudo-LF. To do this, the high-resolution DSLR image is divided into blocks of s × s pixels. The (i, j)-th pseudo-SAI, where 1 ≤ i, j ≤ s, is then obtained by collecting the (i, j)-th pixel from each s × s block. We use the term pseudo because the resulting pseudo-LF SAIs have no real disparity; they are merely shifted, subsampled versions of the input high-resolution DSLR image, with a maximum shift of (s − 1) pixels. The converted pseudo-LF, and not the original DSLR image, is then processed by L3Fnet to obtain a well-lit pseudo-LF, which is finally converted back to a well-lit DSLR image by reversing the sampling process just described. Recently, Gu et al. [45] and Shi et al. [46] have also used a similar pixel-shuffling technique for better performance.
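The pseudo-LF conversion and its lossless inverse can be sketched with NumPy reshapes; the block size s = 4 and the toy image size below are illustrative choices.

```python
import numpy as np

def to_pseudo_lf(img, s):
    """Split an H x W x 3 image into an s x s grid of shifted subsampled pseudo-SAIs.

    pseudo_lf[i, j] collects the (i, j)-th pixel of every s x s block,
    so each SAI is an (H//s) x (W//s) x 3 image. The mapping is lossless.
    """
    h, w, c = img.shape
    assert h % s == 0 and w % s == 0
    return img.reshape(h // s, s, w // s, s, c).transpose(1, 3, 0, 2, 4)

def from_pseudo_lf(plf):
    """Exact inverse: interleave the pseudo-SAIs back into one image."""
    si, sj, hh, ww, c = plf.shape
    return plf.transpose(2, 0, 3, 1, 4).reshape(hh * si, ww * sj, c)

rng = np.random.default_rng(0)
dslr = rng.random((12, 16, 3))          # toy "DSLR" image; s must divide H and W
plf = to_pseudo_lf(dslr, s=4)           # 4 x 4 grid of 3 x 4 pseudo-SAIs
print(plf.shape)                        # (4, 4, 3, 4, 3)
print(np.array_equal(from_pseudo_lf(plf), dslr))  # True: round trip is lossless
```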

LFBM5D [32] | Chen et al. [19] | Proposed L3Fnet | GT
Fig. 12: Visual and epipolar comparisons at the lowest exposure setting (L3F-100 dataset). We observe that the method by Chen et al. finds it hard to preserve the epipolar geometry of the LF, giving blurry results. LFBM5D produces sharper results, but the intensity/color is not recovered well.

The proposed pseudo-L3Fnet pipeline may appear to be a workaround to fit L3Fnet into the DSLR image framework, but it has some inherent advantages. L3Fnet stacks all s² pseudo-SAIs channel-wise. With this arrangement, even a small convolution kernel of size k × k can simultaneously look at pixels spread over a (ks) × (ks) region of the single-frame DSLR space, which greatly increases the receptive field of pseudo-L3Fnet. This helps mitigate artificial shocks and staircasing effects [47, 48] in the restored images. Concretely, a k × k kernel applied to the s² pseudo-SAIs stacked as channels has access to k²s² pixels of the DSLR image, which is the reason behind the large receptive field of our network. To do a similar operation directly in the DSLR space, we would require a kernel of size (ks) × (ks) applied with a stride of s, since shifting the kernel by one pixel in a pseudo-SAI equals shifting by s pixels on the DSLR image. With such a large stride, each convolution operation would down-sample the feature map by a factor of s, which would be difficult to up-sample at the decoder end. In any case, our main objective here is to highlight that L3Fnet can be used for both LF and single-frame images.
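The receptive-field bookkeeping above can be made concrete with small numbers; the values of s and k below are illustrative, not the paper's settings.

```python
# Receptive-field bookkeeping for the pseudo-LF trick. The block size s and
# kernel size k are illustrative choices.
s, k = 4, 3

# In pseudo-LF space: a k x k kernel over the s*s stacked pseudo-SAIs sees
# k*k pixels from each SAI, i.e. k*k*s*s distinct DSLR pixels at once.
pixels_seen = k * k * s * s

# To see the same pixels with one convolution directly in DSLR space we would
# need a (k*s) x (k*s) kernel applied with stride s, since moving one pixel in
# a pseudo-SAI corresponds to moving s pixels in the DSLR image.
equiv_kernel, equiv_stride = k * s, s

print(pixels_seen, equiv_kernel, equiv_stride)  # 144 12 4
```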

LFBM5D [32] Chen et al.[19] Proposed L3Fnet GT
Fig. 13: Visual and epipolar comparisons at 1/20th of the optimal exposure time (L3F-20 dataset). Since the L3F-20 dataset is not as dark as L3F-100, the differences in enhancement between the methods are not as pronounced as for L3F-100. However, a slight amount of blurriness can be seen in the results by Chen et al. and LFBM5D.
Camera Exposure | LFBM5D [32] | Chen et al. [19] | Our L3Fnet | LFBM5D [32] | Chen et al. [19] | Our L3Fnet
TABLE III: Quantitative comparison of the proposed L3Fnet averaged over the L3F-20, L3F-50 and L3F-100 test datasets. The performance gain of L3Fnet over the other methods is largest for the extreme low-light exposure setting.

VI Experimental Results

VI-A Implementation Details

Data Pre-processing: Recent works on single-frame low-light image restoration [49, 19] chose to work with the raw format. This is because raw data directly stores the signal measured by the sensor and so is immune to any loss of information in the pre/post-processing steps of the camera pipeline. However, there are a few difficulties in directly working with the raw LF format produced by the Lytro Illum. First, raw LF images necessarily require a significant amount of pre-processing [35], involving tasks such as alignment rectification, hexagonal-to-orthogonal lattice conversion, de-vignetting, and demosaicing. We wanted L3Fnet to focus on low-light restoration rather than learning these difficult geometric transformations, for which efficient techniques already exist. So we chose to work with decoded JPEG images at the highest quality factor of 100. This is also beneficial from the training-time and resource-utilization perspective, because JPEG compression substantially reduces the decoded image size, which is nevertheless still much larger than raw single-frame DSLR data.

Lytro Illum captures 225 views of a scene arranged in a 15×15 grid. As the peripheral views are affected by ghosting and vignetting effects [26, 25], we choose to work with the central views. L3Fnet is trained for a large number of iterations. The number of SAIs sampled during training for Stage-II (the View Reconstruction Block) of the model is fixed to a small fraction of the SAIs to be processed. This may give the impression that the model will not learn all SAIs. The model, nevertheless, runs for many iterations with randomly chosen views in each iteration; hence, all views are likely to be attended to equally often over the total samplings the network performs during training. At test time, all SAIs are chosen to restore the full angular resolution.
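The claim that random per-iteration sampling covers all views equally can be checked with a quick simulation; the number of central SAIs (64) and views sampled per iteration (4) below are hypothetical placeholders for the paper's elided values:

```python
import numpy as np

rng = np.random.default_rng(0)
num_views, views_per_iter = 64, 4  # hypothetical values
counts = np.zeros(num_views, dtype=int)
for _ in range(10_000):  # many training iterations
    # Each iteration reconstructs a small random subset of the SAIs.
    for v in rng.choice(num_views, size=views_per_iter, replace=False):
        counts[v] += 1

# Over many iterations every view is sampled roughly equally often.
assert counts.min() > 0
assert counts.max() / counts.min() < 1.5
```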

Scene I Chen et al.[19] Proposed



Scene II Chen et al.[19] Proposed



Fig. 14: Depth estimation from reconstructed low-light LFs for the low-light (L3F-20) and extreme low-light (L3F-100) cases: Depth maps produced from LFs restored by Chen et al. are good for L3F-20 but very poor for the L3F-100 dataset, because their method is hardly able to preserve the epipolar geometry. Our method, on the other hand, preserves the epipolar geometry even in the extreme low-light case.

Loss Function: As the L1 loss is more crucial than the contextual loss [44], which is confirmed in our ablation studies, the L1 weight is kept high for the initial iterations and then reduced. For the contextual loss, we used feature maps from intermediate layers of VGG19 [50]. We tried including the initial layers as well, but they hardly added any performance gain. On closer inspection, the loss computed on the initial VGG layers was similar to the direct L1 loss, which is expected: due to the small receptive field and shallow channel depth of the initial layers, they fail to bring about novel interactions among the input data and so yield no more information than a direct L1 loss between the ground-truth and reconstructed views. The contextual-loss weight is kept fixed for all iterations.
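A minimal sketch of such a two-phase weighting schedule; the weights and switch-over iteration below are placeholders, since the paper's actual values are not recoverable from this text:

```python
def loss_weights(iteration, switch_at=100_000,
                 w_l1_initial=1.0, w_l1_later=0.1, w_cx=1.0):
    """Illustrative schedule: the L1 weight starts high to lock in color and
    basic geometry, then is reduced so the contextual (CX) loss can
    fine-tune the result. The CX weight is fixed for all iterations."""
    w_l1 = w_l1_initial if iteration < switch_at else w_l1_later
    return w_l1, w_cx

assert loss_weights(0) == (1.0, 1.0)        # early: L1 dominates
assert loss_weights(200_000) == (0.1, 1.0)  # late: CX fine-tunes
```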

Data Augmentation: To augment the data during training, we use horizontal flipping, vertical flipping, and color augmentation. Color augmentation is achieved by swapping the color channels in a random order. Training is done on patches, and the full spatial resolution is used at test time. In each iteration, the patch location is chosen randomly within each sub-aperture view.
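The three augmentations can be written compactly; the 50% flip probabilities below are an assumption, not a value stated in the text:

```python
import numpy as np

def augment(patch, rng):
    """Horizontal/vertical flips plus color augmentation by shuffling the
    RGB channels in a random order."""
    if rng.random() < 0.5:
        patch = patch[:, ::-1]                    # horizontal flip
    if rng.random() < 0.5:
        patch = patch[::-1, :]                    # vertical flip
    patch = patch[:, :, rng.permutation(3)]       # swap color channels
    return patch

rng = np.random.default_rng(1)
p = rng.random((32, 32, 3))
out = augment(p, rng)
assert out.shape == p.shape
assert np.isclose(out.sum(), p.sum())  # augmentation only permutes pixels
```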

Baseline: We compare our L3Fnet with the LF denoising technique LFBM5D [32] and the single-frame low-light enhancement work by Chen et al. [19]. As LFBM5D is only a denoising technique, we pre-process its input by suitably scaling it for better color restoration, followed by denoising. LFBM5D additionally requires an estimate of the noise variance. For this, we took small texture-less patches from the low-light LF central SAI and estimated the noise variance from them.
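A minimal sketch of this noise estimate, assuming the selected patches are flat enough that their variation is dominated by noise:

```python
import numpy as np

def estimate_noise_std(patches):
    """Estimate sensor noise from texture-less (flat) patches: with no real
    texture, each patch's standard deviation is attributed to noise."""
    return float(np.mean([p.std() for p in patches]))

# Synthetic check: flat patches corrupted by Gaussian noise of known sigma.
rng = np.random.default_rng(2)
sigma = 0.05
flat = [0.3 + sigma * rng.standard_normal((16, 16)) for _ in range(50)]
est = estimate_noise_std(flat)
assert abs(est - sigma) < 0.01  # recovers the true noise level closely
```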

Chen et al., for single-frame images, chose to work with the raw format and performed end-to-end training. Their pipeline is, however, not suited to raw Lytro LF images. For reconstruction with Chen et al.'s method, it has to be operated independently over each SAI, which is akin to a single-frame image. The SAIs are, however, not readily available in the raw LFR format and need to be decoded using the Light Field Matlab Toolbox [35]. Also, for a fair comparison with the work of Chen et al., we do not use the proposed Histogram Module, as their method lacks an equivalent. We instead train L3Fnet and Chen et al.'s method independently on the L3F-20, L3F-50, and L3F-100 datasets so that the over/under-saturation problem does not occur. In a separate experiment, we demonstrate the automatic adjusting capability of L3Fnet by training and testing on all three sets merged together. Since each view has a much higher resolution than the randomly selected training patches, we had sufficient training data together with the data augmentation techniques. Further, to emphasize that the performance gain of L3Fnet is due to its unique architecture engineered for LFs, Chen et al.'s method was trained both on their original loss function, consisting of just the L1 loss, and on our loss function, consisting of both L1 and contextual losses. Chen et al.'s method gave better results with our loss function and data augmentation techniques, and we use this version in our comparisons.

VI-B Visual and Geometric Reconstruction Comparisons

(a) Exposure Time:1/15 s, ISO:80
(b) L3Fnet-20 restoration
(c) L3Fnet-50 restoration
(d) L3Fnet-100 restoration
(e) L3Fnet- restoration
(f) Exposure Time:1/5 s, ISO:100
(g) L3Fnet-20 restoration
(h) L3Fnet-50 restoration
(i) L3Fnet-100 restoration
(j) L3Fnet- restoration
Fig. 25: Results on the L3F-wild dataset. The dark LF images have been pre-processed with the proposed Histogram Module, allowing L3Fnet to adapt to different levels of low light. For example, in the top row, L3Fnet trained on the L3F-20 dataset (denoted L3Fnet-20) fails to achieve ambient brightness. Likewise, when the light level is increased for the test image in the second row, L3Fnet-100 oversaturates. L3Fnet-, however, adjusts to both lighting conditions using the Histogram Module. See the supplementary material for more visual results.

We first test the proposed L3Fnet for visual reconstruction of LF images on the L3F-20, L3F-50, and L3F-100 datasets. The corresponding results are shown in Fig. 12 and Fig. 13, and the average PSNR/SSIM values are given in Table III. Regardless of the method, the first observation is that reconstruction becomes increasingly difficult as the light level decreases. Secondly, the proposed L3Fnet achieves better reconstruction on all three datasets, and the gain is most pronounced for the extreme low-light case of the L3F-100 dataset, where both LFBM5D [32] and Chen et al. [19] struggle to regenerate the finer details.

The reconstructed image may look aesthetically pleasing while the LF geometry is nevertheless destroyed. To safeguard against this, Fig. 12 and Fig. 13 also show epipolar comparisons. We additionally show depth estimates for the L3F-20 and L3F-100 datasets in Fig. 14, using the method proposed by Jeon et al. [7] for depth estimation. The same trends hold: depth reconstruction becomes more difficult as the light level decreases, and the depth estimates from L3Fnet-reconstructed images are closer to those from the ground truth.

For the L3F-20 dataset, the depth estimates from Chen et al.'s method are good, but it misses the finer edges by coagulating them together, giving a blurry finish. For example, the spokes of the bike's tyre in Scene I are clearly demarcated in L3Fnet's depth estimates but not in Chen et al.'s. Likewise, for Scene II, the sharpness of the board corners is much better preserved in our results. The same observations are even more pronounced in the L3F-100 results.

Exposure fraction w.r.t. ground-truth exposure: 1/20 (L3F-20 dataset), 1/50 (L3F-50 dataset), 1/100 (L3F-100 dataset)
TABLE IV: Variation in the values predicted by the Histogram block on the test dataset. The predicted amplification factor increases monotonically with decreasing light level.

VI-C L3Fnet for LF Captured in the Wild

Stage-I Stage-II L1 loss CX loss PSNR/SSIM
TABLE V: Ablation study to examine the contribution of Stage-I in L3Fnet model and the two loss functions: L1 and CX.

This section substantiates the importance of the Histogram Module by showing results on the L3F-wild dataset. These images were captured late at night, and so no ground truth exists for them. There was no way to obtain well-lit LF images for such scenes, either by changing the ISO or the exposure settings: playing with these settings would brighten the image but introduce many unnatural artifacts. These images were captured with typical exposure times and normal ISO values.

Since ground truth is not available for these images, they cannot be used for training L3Fnet. We instead re-train L3Fnet on the L3F-20, L3F-50, and L3F-100 datasets merged together, but pre-processed with the Histogram Module to estimate the desired amplification factor. The weights of the Histogram Module and L3Fnet were learnt together in an end-to-end fashion. We call this trained network L3Fnet-. In Table IV we report the range of amplification values predicted by L3Fnet- on the test dataset for various exposure settings. For ease of notation and brevity, we likewise refer to L3Fnet trained on the L3F-20, L3F-50, and L3F-100 datasets as L3Fnet-20, L3Fnet-50, and L3Fnet-100, respectively. We then reconstruct the L3F-wild LF images. In Fig. 25, we show a night-time scene captured with different ISO and exposure settings. Over/under-saturation artifacts are easily noticeable in the LFs restored by L3Fnet-20, L3Fnet-50, and L3Fnet-100. These failures can be attributed to the fact that these networks are agnostic to image statistics and hence cannot adapt to variations in the illumination condition. L3Fnet-, however, avoids this problem to a large extent with the help of the Histogram Module. More results are shown in the supplementary material.
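Conceptually, the Histogram Module's input summarizes the scene's light level; the sketch below only illustrates such a histogram statistic (the actual module is a learned network whose details are not reproduced here, and the bin count is an assumption):

```python
import numpy as np

def histogram_features(img, bins=16):
    """Normalized intensity histogram of a (0..1-valued) image: a compact
    summary of the light level, from which a small network could regress
    the amplification factor."""
    hist, _ = np.histogram(img, bins=bins, range=(0.0, 1.0))
    return hist / hist.sum()

# A dark image concentrates nearly all histogram mass in the lowest bins,
# signaling that a large amplification factor is needed.
dark = np.random.default_rng(3).random((64, 64)) * 0.1
feat = histogram_features(dark)
assert feat.shape == (16,)
assert np.isclose(feat.sum(), 1.0)
assert feat[:2].sum() > 0.9
```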

VI-D Ablation Study

Fig. 26: Visual reconstruction results for the ablation study on L3Fnet. Net-I produces blurry results since it uses only the L1 loss. Net-II restores the sharpness by instead using only the contextual loss, but introduces a large amount of color artifacts. Net-III solves both problems by incorporating both loss functions, but because of the absence of Stage-I it cannot capture the LF geometry, leading to results inferior to those obtained by the proposed L3Fnet. The difference is better corroborated by comparing the depth estimation results in Fig. 27. Refer to Table V for PSNR/SSIM values.
Only Stage-II | Stage-I + Stage-II | GT
(Net-III) | (L3Fnet)
Fig. 27: Depth estimates from reconstructed LF with and without Stage-I of the L3Fnet model. Stage-I captures the LF geometry and hence it helps in producing better depth maps.

In this section, we perform an ablation study to further examine the contributions of each module of L3Fnet. To do this, we trained three variants of the proposed L3Fnet on the L3F-100 dataset, called Net-I, Net-II, and Net-III; their details can be found in Table V. Net-I, which does not include the contextual loss, was unable to handle the small misalignments present in the dataset and produced slightly blurry results. This is expected, because the L1 loss is known to produce blurry results for non-aligned data. Net-II, in contrast, does not include the L1 loss but only the contextual loss. While Net-I showed a small dip in performance due to blurriness, Net-II performed very poorly: without the L1 loss, the network could not learn the correct colors, and the results showed an enormous amount of color distortion. We therefore first restore the color and basic geometry by giving much higher importance to the L1 loss in the initial iterations, and subsequently reduce its weight to let the contextual loss fine-tune the result. With this understanding, it is evident that both components of the loss function are required.

While Net-I and Net-II compared the importance of the different components of our loss function, Net-III highlights the importance of Stage-I, i.e., the Global Representation Block (GRB), in the L3Fnet model. Without Stage-I, L3Fnet was not able to preserve the epipolar constraints of the LF. Fig. 27 shows that the depth map obtained from the LF restored by Net-III is inferior to that obtained from the LF restored by the full L3Fnet, which contains both stages.

VI-E Pseudo-LF for Single-Frame Image Reconstruction

In this section, we evaluate the proposed pseudo-L3Fnet pipeline for processing single-frame low-light DSLR images. The purpose here is not to surpass low-light DSLR reconstruction methods but to highlight the universality of L3Fnet in enhancing both LF and single-frame images. For the pseudo-LF, we decompose the DSLR images into shifted, subsampled pseudo-SAIs as described earlier.

Similar to the L3F-100 dataset, we capture extreme low-light DSLR images, a subset of which were reserved for testing. We process JPEG images only, as the raw format did not give much improvement but had much slower convergence because of the additional learning of the Bayer demosaicing pattern. We used all the data augmentation techniques mentioned for L3Fnet, with patch-wise training.

We compare the single-frame reconstruction results of our pseudo-L3Fnet pipeline with those of Chen et al. We also create a new baseline with the same number of residual blocks as pseudo-L3Fnet. Pseudo-L3Fnet has 10 residual blocks (4 in Stage-I and 6 in Stage-II), and so the new baseline has 10 residual blocks stacked end-to-end with a long skip connection from input to output. The new baseline thus has approximately the same capacity and number of layers as pseudo-L3Fnet. The difference is that the input to the new baseline is the actual DSLR image, while the input to pseudo-L3Fnet is the DSLR image converted into a pseudo-LF.

Ground Truth Baseline Chen et al.[19] Pseudo-L3Fnet
Fig. 28: Single-frame low-light DSLR restoration using the proposed pseudo-L3Fnet pipeline. BRISQUE and NIQE are recent perceptual metrics for single-frame DSLR images; the lower these metrics, the better the perceptual quality. See the supplementary material for more results.

Some of the reconstruction results are shown in Fig. 28. The baseline and the method by Chen et al. do well at denoising but poorly at preserving spatial smoothness, which is visible in the color blobs spread all over the image. On the contrary, pseudo-L3Fnet obtains much more gradual transitions, mitigating the color-blob problem. We report the average PSNR(dB)/SSIM values of Chen et al., the baseline, and the proposed pseudo-LF pipeline on the test images. We also quantified the quality of the restored test images using the recent no-reference perceptual metrics NIQE [51] and BRISQUE [52], which correlate better with human perception than PSNR and SSIM; a lower value for these metrics is better. For an exhaustive comparison, please refer to the supplementary material.

VII Conclusion and Discussion

The primary objective of this work was to enhance LFs captured in low-light conditions. We showed that existing single-frame low-light enhancement methods find it hard to preserve the LF geometry because they reconstruct each LF view independently. To this end, we proposed the low-light L3F dataset and the two-stage L3Fnet architecture. The effectiveness of L3Fnet was shown on LFs with varying levels of low light by conducting experiments on the L3F-20, L3F-50, and L3F-100 datasets. Additionally, we proposed the Histogram Module to automatically tune the amplification factor. With this pre-processing module, L3Fnet can automatically adapt to different light levels, which was substantiated by the results on the L3F-wild dataset.

We additionally showed that, while single-frame methods are not conceptually suited to LF-related tasks, our L3Fnet can also be used for decent enhancement of single-frame low-light images. This was achieved by converting the DSLR images into pseudo-LFs and vice versa. Of course, L3Fnet is better optimized for LFs and may be modified in the future to suit both LF and DSLR images equally well.

As future work, another interesting direction would be to explore which camera is better suited for low-light reconstruction: a single-frame DSLR camera or a light field camera. While DSLR cameras have high resolution, LF cameras may be helpful because of the complementary information present in the various LF views. Besides, depth estimation is a clear advantage of LF cameras.


  • [1] Y. P. Loh and C. S. Chan, “Getting to Know Low-Light Images with the Exclusively Dark Dataset,” Computer Vision and Image Understanding, vol. 178, pp. 30–42, 2019.
  • [2] E. H. Adelson and J. Y. A. Wang, “Single Lens Stereo with a Plenoptic Camera,” IEEE Transactions on Pattern Analysis & Machine Intelligence, vol. 14, no. 2, pp. 99–106, 1992.
  • [3] R. Ng, M. Levoy, M. Brédif, G. Duval, M. Horowitz, P. Hanrahan et al., “Light Field Photography with a Hand-Held Plenoptic Camera,” Computer Science Technical Report CSTR, vol. 2, no. 11, pp. 1–11, 2005.
  • [4] M. Levoy and P. Hanrahan, “Light Field Rendering,” in Proceedings of the 23rd annual conference on Computer graphics and interactive techniques.    ACM, 1996, pp. 31–42.
  • [5] S. J. Gortler, R. Grzeszczuk, R. Szeliski, and M. F. Cohen, “The Lumigraph,” in Siggraph, vol. 96, no. 30, 1996, pp. 43–54.
  • [6] R. Ng, “Fourier slice photography,” in ACM transactions on graphics (TOG), vol. 24, no. 3.    ACM, 2005, pp. 735–744.
  • [7] H.-G. Jeon, J. Park, G. Choe, J. Park, Y. Bok, Y.-W. Tai, and I. So Kweon, “Accurate Depth Map Estimation from a Lenslet Light Field Camera,” in Proceedings of the IEEE conference on computer vision and pattern recognition, 2015, pp. 1547–1555.
  • [8] T.-C. Wang, A. A. Efros, and R. Ramamoorthi, “Occlusion-Aware Depth Estimation using Light-Field Cameras,” in Proceedings of the IEEE International Conference on Computer Vision, 2015, pp. 3487–3495.
  • [9] M. W. Tao, S. Hadap, J. Malik, and R. Ramamoorthi, “Depth from combining defocus and correspondence using light-field cameras,” in Proceedings of the IEEE International Conference on Computer Vision, 2013, pp. 673–680.
  • [10] N. K. Kalantari, T.-C. Wang, and R. Ramamoorthi, “Learning-Based View Synthesis for Light Field Cameras,” ACM Transactions on Graphics (TOG), vol. 35, no. 6, p. 193, 2016.
  • [11] S. Nousias, M. Lourakis, and C. Bergeles, “Large-Scale, Metric Structure from Motion for Unordered Light Fields,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2019, pp. 3292–3301.
  • [12] C. Jia, F. Shi, Y. Zhao, M. Zhao, Z. Wang, and S. Chen, “Identification of Pedestrians from Confused Planar Objects using Light Field Imaging,” IEEE Access, vol. 6, pp. 39 375–39 384, 2018.
  • [13] T. Li, D. P. Lun, Y.-H. Chan et al., “Robust Reflection Removal based on Light Field Imaging,” IEEE Transactions on Image Processing, vol. 28, no. 4, pp. 1798–1812, 2018.
  • [14] P. Christian and W. Lennart, “Raytrix: Light Field technology,” https://raytrix.de, [Accessed 19-July-2019].
  • [15] X. Guo, Y. Li, and H. Ling, “LIME: Low-Light Image Enhancement via Illumination Map Estimation.” IEEE Trans. Image Processing, vol. 26, no. 2, pp. 982–993, 2017.
  • [16] S. Park, S. Yu, B. Moon, S. Ko, and J. Paik, “Low-light image enhancement using variational optimization-based retinex model,” IEEE Transactions on Consumer Electronics, vol. 63, no. 2, pp. 178–184, 2017.
  • [17] R. Wang, Q. Zhang, C.-W. Fu, X. Shen, W.-S. Zheng, and J. Jia, “Underexposed Photo Enhancement Using Deep Illumination Estimation,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2019, pp. 6849–6857.
  • [18] W. Ren, S. Liu, L. Ma, Q. Xu, X. Xu, X. Cao, J. Du, and M.-H. Yang, “Low-light Image Enhancement via a Deep Hybrid Network,” IEEE Transactions on Image Processing, vol. 28, no. 9, pp. 4364–4375, 2019.
  • [19] C. Chen, Q. Chen, J. Xu, and V. Koltun, “Learning to See in the Dark,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018, pp. 3291–3300.
  • [20] K. G. Lore, A. Akintayo, and S. Sarkar, “LLNet: A Deep Autoencoder Approach to Natural Low-Light Image Enhancement,” Pattern Recognition, vol. 61, pp. 650–662, 2017.
  • [21] T. Remez, O. Litany, R. Giryes, and A. M. Bronstein, “Deep Convolutional Denoising of Low-Light Images,” arXiv preprint arXiv:1701.01687, 2017.
  • [22] Y. Guo, X. Ke, J. Ma, and J. Zhang, “A Pipeline Neural Network for Low-Light Image Enhancement,” IEEE Access, vol. 7, pp. 13 737–13 744, 2019.
  • [23] T. Plötz and S. Roth, “Benchmarking Denoising Algorithms With Real Photographs,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017, pp. 1586–1595.
  • [24] S. Wanner and B. Goldluecke, “Spatial and Angular Variational Super-Resolution of 4D Light Fields,” in European Conference on Computer Vision.    Springer, 2012, pp. 608–621.
  • [25] Y. Wang, F. Liu, K. Zhang, G. Hou, Z. Sun, and T. Tan, “LFNet: A Novel Bidirectional Recurrent Convolutional Neural Network for Light-Field Image Super-Resolution,” IEEE Transactions on Image Processing, vol. 27, no. 9, pp. 4274–4286, 2018.
  • [26] S. Zhang, Y. Lin, and H. Sheng, “Residual Networks for Light Field Image Super-Resolution,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2019, pp. 11 046–11 055.
  • [27] Y. Yoon, H.-G. Jeon, D. Yoo, J.-Y. Lee, and I. So Kweon, “Learning a Deep Convolutional Network for Light-Field Image Super-Resolution,” in Proceedings of the IEEE international conference on computer vision workshops, 2015, pp. 24–32.
  • [28] J. S. Lumentut, T. H. Kim, R. Ramamoorthi, and I. K. Park, “Fast and Full-Resolution Light Field Deblurring using a Deep Neural Network,” arXiv preprint arXiv:1904.00352, 2019.
  • [29] P. P. Srinivasan, R. Ng, and R. Ramamoorthi, “Light Field Blind Motion Deblurring,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017, pp. 3958–3966.
  • [30] D. Lee, H. Park, I. Kyu Park, and K. Mu Lee, “Joint Blind Motion Deblurring and Depth Estimation of Light Field,” in Proceedings of the European Conference on Computer Vision (ECCV), 2018, pp. 288–303.
  • [31] K. Mitra and A. Veeraraghavan, “Light Field Denoising, Light Field Superresolution and Stereo Camera Based Refocussing Using a GMM Light Field Patch Prior,” in 2012 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops.    IEEE, 2012, pp. 22–28.
  • [32] M. Alain and A. Smolic, “Light Field Denoising by Sparse 5D Transform Domain Collaborative Filtering,” in 2017 IEEE 19th International Workshop on Multimedia Signal Processing.    IEEE, 2017, pp. 1–6.
  • [33] D. G. Dansereau, D. L. Bongiorno, O. Pizarro, and S. B. Williams, “Light Field Image Denoising using a Linear 4D Frequency-Hyperfan all-in-focus Filter,” in Computational Imaging XI, vol. 8657.    International Society for Optics and Photonics, 2013, p. 86570P.
  • [34] A. Sepas-Moghaddam, P. L. Correia, and F. Pereira, “Light Field Denoising: Exploiting the Redundancy of an Epipolar Sequence Representation,” in 2016 3DTV-Conference: The True Vision - Capture, Transmission and Display of 3D Video (3DTV-CON), July 2016, pp. 1–4.
  • [35] D. G. Dansereau, O. Pizarro, and S. B. Williams, “Decoding, Calibration and Rectification for Lenselet-Based Plenoptic Cameras,” in Proceedings of the IEEE conference on computer vision and pattern recognition, 2013, pp. 1027–1034.
  • [36] K. Dabov, A. Foi, V. Katkovnik, and K. Egiazarian, “Image Denoising by Sparse 3-D Transform-Domain Collaborative Filtering,” IEEE Transactions on image processing, vol. 16, no. 8, pp. 2080–2095, 2007.
  • [37] E. H. Land, “The Retinex Theory of Color Vision,” Scientific American, vol. 237, no. 6, pp. 108–129, 1977.
  • [38] Z. Ying, G. Li, Y. Ren, R. Wang, and W. Wang, “A New Low-Light Image Enhancement Algorithm using Camera Response Model,” in Proceedings of the IEEE International Conference on Computer Vision, 2017, pp. 3015–3022.
  • [39] C.-K. Liang and R. Ramamoorthi, “A Light Transport Framework for Lenslet Light Field Cameras,” ACM Transactions on Graphics (TOG), vol. 34, no. 2, p. 16, 2015.
  • [40] K. He, X. Zhang, S. Ren, and J. Sun, “Deep Residual Learning for Image Recognition,” in The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2016.
  • [41] Y. Zhang, K. Li, K. Li, L. Wang, B. Zhong, and Y. Fu, “Image Super-Resolution Using Very Deep Residual Channel Attention Networks,” in Proceedings of the European Conference on Computer Vision (ECCV), 2018, pp. 286–301.
  • [42] K. Zhang, M. Sun, T. X. Han, X. Yuan, L. Guo, and T. Liu, “Residual Networks of Residual Networks: Multilevel Residual Networks,” IEEE Transactions on Circuits and Systems for Video Technology, vol. 28, no. 6, pp. 1303–1314, 2017.
  • [43] Q. Yan, D. Gong, Q. Shi, A. v. d. Hengel, C. Shen, I. Reid, and Y. Zhang, “Attention-Guided Network for Ghost-Free High Dynamic Range Imaging,” in The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2019.
  • [44] R. Mechrez, I. Talmi, and L. Zelnik-Manor, “The Contextual Loss for Image Transformation with Non-Aligned Data,” in Proceedings of the European Conference on Computer Vision (ECCV), 2018, pp. 768–783.
  • [45] S. Gu, Y. Li, L. V. Gool, and R. Timofte, “Self-guided network for fast image denoising,” in Proceedings of the IEEE International Conference on Computer Vision, 2019, pp. 2511–2520.
  • [46] W. Shi, J. Caballero, F. Huszár, J. Totz, A. P. Aitken, R. Bishop, D. Rueckert, and Z. Wang, “Real-time single image and video super-resolution using an efficient sub-pixel convolutional neural network,” in Proceedings of the IEEE conference on computer vision and pattern recognition, 2016, pp. 1874–1883.
  • [47] A. Buades, B. Coll, and J.-M. Morel, “Image Denoising by Non-Local Averaging,” in International Conference on Acoustics, Speech, and Signal Processing (ICASSP), vol. 2.    IEEE, 2005, pp. ii–25.
  • [48] A. Buades, B. Coll, and J. . Morel, “The Staircasing Effect in Neighborhood Filters and its Solution,” IEEE Transactions on Image Processing, vol. 15, no. 6, pp. 1499–1505, June 2006.
  • [49] S. W. Hasinoff, D. Sharlet, R. Geiss, A. Adams, J. T. Barron, F. Kainz, J. Chen, and M. Levoy, “Burst photography for high dynamic range and low-light imaging on mobile cameras,” ACM Transactions on Graphics (TOG), vol. 35, no. 6, p. 192, 2016.
  • [50] K. Simonyan and A. Zisserman, “Very deep convolutional networks for large-scale image recognition,” in International Conference on Learning Representations, 2015.
  • [51] A. Mittal, R. Soundararajan, and A. C. Bovik, “Making a “Completely Blind” Image Quality Analyzer,” IEEE Signal Processing Letters, vol. 20, no. 3, pp. 209–212, 2012.
  • [52] A. Mittal, A. K. Moorthy, and A. C. Bovik, “No-Reference Image Quality Assessment in the Spatial Domain,” IEEE Transactions on image processing, vol. 21, no. 12, pp. 4695–4708, 2012.