Single Image Reflection Removal Exploiting Misaligned Training Data and Network Enhancements

04/01/2019 · Kaixuan Wei et al. · Microsoft · Beijing Institute of Technology

Removing undesirable reflections from a single image captured through a glass window is of practical importance to visual computing systems. Although state-of-the-art methods can obtain decent results in certain situations, performance declines significantly when tackling more general real-world cases. These failures stem from the intrinsic difficulty of single image reflection removal -- the fundamental ill-posedness of the problem, and the insufficiency of densely-labeled training data needed for resolving this ambiguity within learning-based neural network pipelines. In this paper, we address these issues by exploiting targeted network enhancements and the novel use of misaligned data. For the former, we augment a baseline network architecture by embedding context encoding modules that are capable of leveraging high-level contextual clues to reduce indeterminacy within areas containing strong reflections. For the latter, we introduce an alignment-invariant loss function that facilitates exploiting misaligned real-world training data that is much easier to collect. Experimental results collectively show that our method outperforms the state-of-the-art with aligned data, and that significant improvements are possible when using additional misaligned data.



Code repository: ERRNet (CVPR 2019).

1 Introduction

Reflection is a frequently-encountered source of image corruption that can arise when shooting through a glass surface. Such corruptions can be addressed via the process of single image reflection removal (SIRR), a challenging problem that has attracted considerable attention from the computer vision community 

[22, 25, 39, 2, 5, 48, 45, 38]. Traditional optimization-based methods often leverage manual intervention or strong prior assumptions to render the problem more tractable [22, 25]. Recently, alternative learning-based approaches rely on deep Convolutional Neural Networks (CNNs) in lieu of costly optimization and handcrafted priors [5, 48, 45, 38]. Promising results notwithstanding, SIRR remains a largely unsolved problem across disparate imaging conditions and varying scene content.

For CNN-based reflection removal, our focus herein, the challenge originates from at least two sources: (i) the extraction of a background image layer devoid of reflection artifacts is fundamentally ill-posed, and (ii) training data from real-world scenes are exceedingly scarce because of the difficulty of obtaining ground-truth labels.

Mathematically speaking, it is typically assumed that a captured image I is formed as a linear combination of a background (transmitted) layer T and a reflection layer R, i.e., I = T + R. Obviously, when given access only to I, there exists an infinite number of feasible decompositions. Further compounding the problem is the fact that both T and R involve content from real scenes that may have overlapping appearance distributions. This can make them difficult to distinguish even for human observers in some cases, and simple priors that might mitigate this ambiguity are not available except under specialized conditions.
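The ill-posedness of the linear formation model (observed image as the sum of a transmitted layer and a reflection layer) can be made concrete with a toy numpy sketch: any amount of intensity can be shifted between the two layers without changing the observation, so the mixture alone cannot identify the true decomposition.

```python
import numpy as np

# Toy illustration of the ill-posedness of I = T + R: many (T, R) pairs
# reproduce the same observed mixture exactly.
rng = np.random.default_rng(0)
T_true = rng.uniform(0.0, 0.7, size=(4, 4))   # "true" transmitted layer
R_true = rng.uniform(0.0, 0.3, size=(4, 4))   # "true" reflection layer
I = T_true + R_true                           # observed image

# Shift intensity from T to R: the mixture is unchanged.
delta = 0.1 * np.ones_like(T_true)
T_alt, R_alt = T_true - delta, R_true + delta
assert np.allclose(T_alt + R_alt, I)          # same observation, different layers
```

This is why priors (or, in this paper, learned contextual clues) are needed to pick out the intended decomposition.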

On the other hand, although CNNs can perform a wide variety of visual tasks, at times exceeding human capabilities, they generally require a large volume of labeled training data. Unfortunately, real reflection images accompanied by densely-labeled, ground-truth transmitted layer intensities are scarce. Consequently, previous learning-based approaches have resorted to training with synthesized images [5, 38, 48] and/or small real-world datasets captured with specialized devices [48]. However, existing image synthesis procedures are heuristic, and the domain gap may jeopardize accuracy on real images. Meanwhile, collecting sufficient additional real data with precise ground-truth labels is tremendously labor-intensive.

This paper is devoted to addressing both of the aforementioned challenges. First, to better tackle the intrinsic ill-posedness and diminish ambiguity, we propose to leverage a network architecture that is sensitive to contextual information, which has proven useful for other vision tasks such as semantic segmentation [11, 49, 47, 13]. Note that at a high level, our objective is to efficiently convert prior information mined from labeled training data into network structures capable of resolving this ambiguity. Within a traditional CNN model, especially in the early layers where the effective receptive field is small, the extracted features across all channels are inherently local. However, broader non-local context is necessary to differentiate the features that are descriptive of the desired transmitted image from those that can be discarded as reflection-based. For example, in image neighborhoods containing a particularly strong reflection component, accurate separation by any possible method (even one trained with arbitrarily rich training data) will likely require contextual information from regions without reflection. To address this issue, we utilize two complementary forms of context, namely, channel-wise context and multi-scale spatial context. Regarding the former, we apply a channel attention mechanism to the feature maps from convolutional layers such that different features are weighted differently according to global statistics of the activations. For the latter, we aggregate information across a pyramid of feature map scales within each channel to reach a global contextual consistency in the spatial domain. Our experiments demonstrate that significant improvement can be obtained by these enhancements, leading to state-of-the-art performance on two real-image datasets.

Secondly, orthogonal to architectural considerations, we seek to expand the sources of viable training data by facilitating the use of misaligned training pairs, which are considerably easier to collect. Misalignment between an input image and a ground-truth reflection-free version can be caused by camera and/or object movements during the acquisition process. In previous work [37, 47], data pairs were obtained by taking an initial photo through a glass plane, followed by capturing a second one after the glass has been removed. This process requires that the camera, scene, and even lighting conditions remain static. Adhering to these requirements across a broad acquisition campaign can significantly reduce both the quantity and diversity of the collected data. Additionally, post-processing may be necessary to accurately align the image pairs and to compensate for spatial shifts caused by refraction [37]. In contrast, capturing unaligned data is considerably less burdensome, as shown in Fig. 1. For example, there is no need for a tripod, table, or other special hardware; the camera can be hand-held and the pose can be freely adjusted; dynamic scenes in the presence of vehicles, humans, etc. can be incorporated; and finally, no post-processing of any type is needed.

To handle such misaligned training data, we require a loss function that is, to the extent possible, invariant to the alignment, i.e., the measured image content discrepancy between the network prediction and its unaligned reference should be similar to what would have been observed if the reference were actually aligned. In the context of image style transfer [17] and other tasks, certain perceptual loss functions have been shown to be relatively invariant to various transformations. Our study shows that using only the highest-level feature from a deep network (VGG-19 in our case) leads to satisfactory results for our reflection removal task. In both simulation tests and experiments using a newly collected dataset, we demonstrate for the first time that training/fine-tuning a CNN with unaligned data improves reflection removal results by a large margin.

Figure 1: Comparison of the reflection image data collection methods in [47] (left) and this paper (right).

2 Related Work

This paper is concerned with reflection removal from a single image. Previous methods utilizing multiple input images, e.g., flash/non-flash pairs [1], different polarizations [20], or multi-view and video sequences [6, 35, 30, 7, 24, 34, 9, 43, 46], will not be considered here.

Traditional methods. Reflection removal from a single image is a massively ill-posed problem. Additional priors are needed to solve the otherwise prohibitively-difficult problem in traditional optimization-based methods [22, 25, 39, 2, 36]. In [22], user annotations are used to guide layer separation jointly with a gradient sparsity prior [23]. [25] introduces a relative smoothness prior in which reflections are assumed to be blurry, so their large gradients are penalized. [39] explores a variant of the smoothness prior in which a multi-scale Depth-of-Field (DoF) confidence map is utilized to perform edge classification. [31] exploits ghost cues for layer separation. [2] proposes a simple optimization formulation with a gradient penalty on the transmitted layer, inspired by image smoothing algorithms [42]. Although decent results can be obtained by these methods when their assumptions hold, the vastly-different imaging conditions and complex scene content of the real world render their generalization problematic.

Deep learning based methods. 

Recently, there has been emerging interest in applying deep convolutional neural networks to single image reflection removal so that handcrafted priors can be replaced by data-driven learning [5, 38, 48, 45]. The first CNN-based method is due to [5], where a network structure is proposed that first predicts the background layer in the edge domain and then reconstructs it in the color domain. Later, [38] proposes to predict the edge and image intensity concurrently with two cooperative sub-networks. The recent work of [45] presents a cascade network structure which predicts the background layer and reflection layer in an interleaved fashion. The earlier CNN-based methods typically use a raw image intensity discrepancy such as mean squared error (MSE) to train the networks. Several recent works [48, 16, 3] adopt the perceptual loss [17], which uses multi-stage features of a deep network pre-trained on ImageNet [29]. An adversarial loss is investigated in [48, 21] to improve the realism of the predicted background layers.

Figure 2: Overview of our approach for single image reflection removal.

3 Approach

Given an input image I contaminated with reflections, our goal is to estimate a reflection-free transmitted image T. To achieve this, we train a feed-forward CNN f_θ, parameterized by θ, to minimize a reflection removal loss function L. Given N training image pairs (I_i, T_i), i = 1, …, N, this involves solving:

min_θ Σ_{i=1}^{N} L(f_θ(I_i), T_i).   (1)

We first introduce the details of the network architecture, followed by the loss functions applied to aligned data (the common case) and the newly proposed extension to unaligned data. The overall system is illustrated in Fig. 2.

3.1 Basic Image Reconstruction Network

Our starting point can be viewed as the basic image reconstruction component from [5], modified in three aspects: (1) we simplify the basic residual block [12] by removing the batch normalization (BN) layer [14]; (2) we increase capacity by widening the network from 64 to 256 feature maps; and (3) for each input image I, we extract hypercolumn features [10] from a pretrained VGG-19 network [32] and concatenate these features with I as an augmented network input. As explained in [48], such an augmentation strategy can help the network learn semantic clues from the input image.
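The hypercolumn augmentation in step (3) can be sketched with stand-in feature arrays (this is an illustration, not the paper's implementation): features from several layers are upsampled to the input resolution and concatenated with the image along the channel axis. Nearest-neighbour upsampling and the array shapes here are simplifications.

```python
import numpy as np

def hypercolumn_augment(image, feature_maps):
    """Upsample per-layer feature maps to the input resolution (nearest
    neighbour for simplicity) and concatenate them with the image along
    the channel axis. `feature_maps` stands in for VGG-19 activations."""
    h, w, _ = image.shape
    columns = [image]
    for f in feature_maps:                 # f: (h_f, w_f, c_f), h_f divides h
        ry, rx = h // f.shape[0], w // f.shape[1]
        columns.append(np.repeat(np.repeat(f, ry, axis=0), rx, axis=1))
    return np.concatenate(columns, axis=2)

img = np.zeros((8, 8, 3))
feats = [np.ones((4, 4, 2)), np.ones((2, 2, 5))]   # stand-in "VGG" features
aug = hypercolumn_augment(img, feats)
assert aug.shape == (8, 8, 3 + 2 + 5)      # channels are stacked
```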

Note that removing the BN layer from our network turns out to be critical for optimizing performance in the present context. As shown in [41], if batch sizes become too small, prediction errors can increase precipitously and stability issues can arise. Moreover, for a dense prediction task such as SIRR, large batch sizes can become prohibitively expensive in terms of memory. In our case, we found that within the tenable batch sizes available for reflection removal, BN led to considerably worse performance, including the color attenuation/shifting issues sometimes observed in image-to-image translation tasks [5, 15, 50]. BN layers have similarly been removed in other dense prediction tasks such as image super-resolution [26] and deblurring [28].

At this point, we have constructed a useful base architecture upon which other more targeted alterations will be applied shortly. This baseline, which we will henceforth refer to as BaseNet, performs quite well when trained and tested on synthetic data. However, when deployed on real-world reflection images we found that its performance degraded by an appreciable amount, especially on the 20 real images from [48]. Therefore, to better mitigate the transition from the make-believe world of synthetic images to real-life photographs, we describe two modifications for introducing broader contextual information into otherwise local convolutional filters.

3.2 Context Encoding Modules

As mentioned previously, we consider both context between channels and multi-scale context within channels.

Channel-wise context. The underlying design principle here is to introduce global contextual information across channels, and a richer overall structure within residual blocks, without dramatically increasing the parameter count. One way to accomplish this is by incorporating a channel attention module originally developed in [13] to recalibrate feature maps using global summary statistics.

Let X = [x_1, …, x_C] denote the original, uncalibrated activations produced by a network block, with C feature maps of size H × W. These activations generally reflect only local information residing within the corresponding receptive fields of each filter. We then form scalar, channel-specific descriptors p_c by applying a global average pooling operator to each feature map x_c. The vector p = [p_1, …, p_C] represents a simple statistical summary of global, per-channel activations and, when passed through a small network structure, can be used to adaptively predict the relative importance of each channel [13].

More specifically, the channel attention module first computes z = σ(W_U δ(W_D p)), where W_D is a trainable weight matrix that downsamples p to a reduced dimension, δ is a ReLU non-linearity, W_U is a trainable upsampling weight matrix that restores the original dimension C, and σ is a sigmoidal activation. Elements of the resulting output vector z serve as channel-specific gates for calibrating the feature maps via x̃_c = z_c · x_c.

Consequently, although each individual convolutional filter has a local receptive field, the determination of which channels are actually important for predicting the transmission layer and suppressing reflections is based on the processing of a global statistic (meaning the channel descriptors computed as activations pass through the network during inference). Additionally, the parameter overhead introduced by this process is exceedingly modest, given that W_D and W_U are just small additional weight matrices associated with each block.
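The channel attention computation above can be sketched in a few lines of numpy (a simplified stand-in for a squeeze-and-excitation-style module; the weight shapes and reduction ratio r are illustrative, not the paper's values):

```python
import numpy as np

def channel_attention(x, w_down, w_up):
    """Recalibrate feature maps x of shape (C, H, W) with channel-wise
    gates computed from a global statistic, squeeze-and-excitation style."""
    p = x.mean(axis=(1, 2))                     # global average pool -> (C,)
    hidden = np.maximum(w_down @ p, 0.0)        # downsample to C/r, then ReLU
    z = 1.0 / (1.0 + np.exp(-(w_up @ hidden)))  # upsample to C, then sigmoid
    return x * z[:, None, None]                 # per-channel gating

C, r = 8, 4
rng = np.random.default_rng(1)
x = rng.standard_normal((C, 5, 5))
w_down = rng.standard_normal((C // r, C))       # trainable in a real network
w_up = rng.standard_normal((C, C // r))
y = channel_attention(x, w_down, w_up)
assert y.shape == x.shape
# Each map is only rescaled by a gate in (0, 1), never amplified.
assert np.all(np.abs(y) <= np.abs(x))
```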

Multi-scale spatial context. Although we have found that encoding the contextual information across channels already leads to significant empirical gains on real-world images, utilizing complementary multi-scale spatial information within each channel provides further benefit. To accomplish this, we apply a pyramid pooling module [11], which has proven to be an effective global-scene-level representation in semantic segmentation [49]. As shown in Fig. 2, we construct such a module using pooling operations at sizes 4, 8, 16, and 32, situated in the tail of our network before the final construction of the output. Pooling in this way fuses features under four different pyramid scales. After harvesting the resulting sub-region representations, we perform a non-linear transformation (i.e., a Conv-ReLU pair) to reduce the channel dimension. The refined features are then upsampled via bilinear interpolation. Finally, the different levels of features are concatenated together as a final representation reflecting multi-scale spatial context within each channel; the increased parameter overhead is negligible.

3.3 Training Loss for Aligned Data

In this section, we present our loss function for aligned training pairs (I, T), which consists of three terms, similar to previous methods [48, 45].

Pixel loss.  Following [5], we penalize the pixel-wise difference of f_θ(I) and T in both the intensity and gradient domains, combining a squared intensity error term with terms penalizing ∇_x f_θ(I) − ∇_x T and ∇_y f_θ(I) − ∇_y T, where ∇_x and ∇_y are the gradient operators along the x- and y-directions, respectively. The two balancing weights are fixed in all our experiments.
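A simplified numpy sketch of such a combined intensity-plus-gradient penalty (the weights alpha and beta are placeholders, not the paper's values, and finite differences stand in for the gradient operators):

```python
import numpy as np

def pixel_loss(pred, target, alpha=1.0, beta=1.0):
    """Intensity + gradient penalty for an aligned pair; alpha and beta
    are illustrative balancing weights."""
    intensity = np.mean((pred - target) ** 2)
    # finite differences approximate the x/y gradient operators
    gx = np.mean(np.abs(np.diff(pred, axis=1) - np.diff(target, axis=1)))
    gy = np.mean(np.abs(np.diff(pred, axis=0) - np.diff(target, axis=0)))
    return alpha * intensity + beta * (gx + gy)

t = np.linspace(0, 1, 64).reshape(8, 8)
assert pixel_loss(t, t) == 0.0          # perfect prediction costs nothing
assert pixel_loss(t + 0.1, t) > 0.0     # a constant offset hits the intensity term
```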

Feature loss.  We define the feature loss based on the activations of the 19-layer VGG network [33] pretrained on ImageNet [29]. Let φ_l denote the feature from the l-th layer of VGG-19; the feature loss is then a weighted sum over layers of the differences between φ_l(f_θ(I)) and φ_l(T), with balancing weights λ_l. Similar to [48], we use the layers ‘conv2_2’, ‘conv3_2’, ‘conv4_2’, and ‘conv5_2’ of the VGG-19 net.

Adversarial loss.  We further add an adversarial loss to improve the realism of the produced background images. We define an opponent discriminator network D and minimize the relativistic adversarial loss [18], which is built from terms of the form σ(C(x_r) − C(x_f)) comparing real and generated images, where σ is the sigmoid function and C(·) is the non-transformed discriminator output (refer to [18] for details).

To summarize, our loss for aligned data is defined as:

L_aligned = λ_pixel L_pixel + λ_feat L_feat + λ_adv L_adv,   (2)

where the three weights λ_pixel, λ_feat, and λ_adv are set empirically and held fixed throughout our experiments.

3.4 Training Loss for Unaligned Data

To use misaligned data pairs for training, we need a loss function that is invariant to the alignment, such that the true similarity between T and the prediction f_θ(I) can be reasonably measured. In this regard, we note that human observers can easily assess the similarity of two images even when they are not aligned. Consequently, a loss measuring image similarity at the perceptual level may serve our goal. This motivates us to directly use a deep feature loss for unaligned data.

Figure 3: The effect of using different losses to handle misaligned real data. (a) and (b) are the unaligned image pair (I, T). (c) shows the reflection removal result of our network trained on synthetic data and a small number of aligned real images (see Section 4 for details); reflections can still be observed in the predicted background image. (d) is the result fine-tuned with the pixel-wise intensity loss. (e)-(h) are the results fine-tuned with features at different layers of VGG-19 (‘conv2_2’, ‘conv3_2’, ‘conv4_2’, ‘conv5_2’); only the highest-level feature, from ‘conv5_2’, yields a satisfactory result. (i) shows the result fine-tuned with the loss of [27]. (Best viewed on screen with zoom)

Intuitively, the deeper the feature, the more likely it is to be insensitive to misalignment. To verify this experimentally and find a suitable feature layer for our purposes, we conducted tests using a pre-trained VGG-19 network as follows. Given an unaligned image pair (I, T), we use gradient descent to fine-tune the weights of our network to minimize the feature difference between f_θ(I) and T, with features extracted at different layers of VGG-19. Figure 3 shows that using low-level or middle-level features from ‘conv2_2’ to ‘conv4_2’ leads to blurry results (similar to directly using a pixel-wise loss), although the reflection is more thoroughly removed. In contrast, using the highest-level feature from ‘conv5_2’ gives rise to a striking result: the predicted background image is sharp and almost reflection-free.

Recently, [27] introduced a “contextual loss” that is also designed for training deep networks with unaligned data on image-to-image translation tasks such as image style transfer. In Fig. 3, we also present the fine-tuned result using this loss for our reflection removal task. Upon visual inspection, the results are similar to those of our highest-level VGG feature loss (a quantitative comparison can be found in the experiment section). However, our adopted loss (formally defined below) is much simpler and more computationally efficient than the loss from [27].

Alignment-invariant loss.  Based on the above study, we now formally define our invariant loss component for unaligned data as L_inv = ‖φ_h(f_θ(I)) − φ_h(T)‖, where φ_h denotes the ‘conv5_2’ feature of the pretrained VGG-19 network. For unaligned data, we also apply an adversarial loss, which is not affected by misalignment. Therefore, our overall loss for unaligned data can be written as

L_unaligned = λ_inv L_inv + λ_adv L_adv,   (3)

where the weights λ_inv and λ_adv are set empirically.
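The intuition that high-level, heavily pooled features are far less sensitive to misalignment than raw pixels can be illustrated with a toy numpy experiment; here block averaging stands in for a deep ‘conv5_2’-like feature extractor (a crude proxy, not VGG itself):

```python
import numpy as np

def pixel_mse(a, b):
    return np.mean((a - b) ** 2)

def deep_feature_proxy(img, pool=8):
    """Stand-in for a high-level feature map: heavy average pooling
    discards precise spatial phase, so small shifts barely change it."""
    h, w = img.shape
    return img.reshape(h // pool, pool, w // pool, pool).mean(axis=(1, 3))

rng = np.random.default_rng(2)
base = rng.uniform(size=(64, 64))
shifted = np.roll(base, 3, axis=1)       # simulate a 3-pixel misalignment

pix = pixel_mse(base, shifted)
feat = pixel_mse(deep_feature_proxy(base), deep_feature_proxy(shifted))
# The pooled ("deep") comparison is far less sensitive to the shift.
assert feat < pix
```

This mirrors the paper's observation: a loss computed on sufficiently abstract features penalizes content differences rather than small spatial offsets.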

4 Experiments

4.1 Implementation Details

Training data. We adopt a fusion of synthetic and real data as our training dataset. The images from [5] are used as synthetic data, i.e., 7,643 cropped images from the PASCAL VOC dataset [4]. The 90 real-world training images from [48] are adopted as real data. For image synthesis, we use the same data generation model as [5]. In the following, we always use this same dataset for training unless stated otherwise.

Training details. Our implementation is based on PyTorch; code is released at https://github.com/Vandermode/ERRNet. We train the model for 60 epochs using the Adam optimizer [19]. The base learning rate is halved at epoch 30 and reduced further at epoch 50. The weights are initialized as in [26].
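The stated schedule can be sketched as a simple piecewise function (the base learning rate and the final reduction factor are illustrative placeholders, since the exact values are not given here):

```python
def learning_rate(epoch, base_lr, late_factor=0.1):
    """Piecewise schedule sketched from the text: the base rate is halved
    at epoch 30 and reduced again at epoch 50. `base_lr` and `late_factor`
    are hypothetical values, not taken from the paper."""
    if epoch < 30:
        return base_lr
    if epoch < 50:
        return base_lr / 2.0
    return base_lr * late_factor

# With a unit base rate, the three phases are 1.0, 0.5, and 0.1.
assert learning_rate(0, 1.0) == 1.0
assert learning_rate(30, 1.0) == 0.5
assert learning_rate(50, 1.0) == 0.1
```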

4.2 Ablation Study

In this section, we conduct an ablation study for our method on 100 synthetic testing images from [5] and 20 real testing images from [48] (denoted by ‘Real20’).

Component analysis.  To verify the importance of our network design, we compare four model architectures as described in Section 3: (1) our basic image reconstruction network, BaseNet; (2) BaseNet with the channel-wise context module (BaseNet + CWC); (3) BaseNet with the multi-scale spatial context module (BaseNet + MSC); and (4) our enhanced reflection removal network, denoted ERRNet, i.e., BaseNet + CWC + MSC. The result from CEILNet [5] fine-tuned on our training data (denoted CEILNet-F) is also provided as an additional reference.

As shown in Table 1, our BaseNet already achieves a much better result than CEILNet-F. The performance of BaseNet is clearly boosted by adding the channel-wise context and multi-scale spatial context modules, and especially by using them together, i.e., as ERRNet. Figure 4 visually compares the results of BaseNet and ERRNet. It can be observed that BaseNet struggles to discriminate the reflection regions and leaves some obvious residuals, while ERRNet removes the reflection and produces much cleaner transmitted images. These results demonstrate the effectiveness of our network design, especially the components tailored to encode contextual clues.

Figure 4: Comparison of the results with (ERRNet) and without (BaseNet) the context encoding modules. Columns: input, BaseNet, ERRNet.
Synthetic Real20
Model PSNR SSIM PSNR SSIM
CEILNet-F [5] 24.70 0.884 20.32 0.739
BaseNet only 25.71 0.926 21.51 0.780
BaseNet + CWC 27.64 0.940 22.61 0.796
BaseNet + MSC 26.03 0.928 21.75 0.783
ERRNet 27.88 0.941 22.89 0.803
Table 1: Comparison of different settings. Our full model (i.e., ERRNet) leads to the best performance among all comparisons.

Efficacy of the training loss for unaligned data.  In this experiment, we first train our ERRNet with ‘synthetic data’ only, ‘synthetic + 50 aligned real data’, and ‘synthetic + 90 aligned real data’. The loss function in Eq. (2) is used for the aligned data. As shown in Table 2, the testing results improve as more real data is added.

Then, we synthesize misalignment by performing random translations on the real data (our alignment-invariant loss can handle shifts of up to 20 pixels; see suppl. material for more details), and train ERRNet with ‘synthetic + 50 aligned real data + 40 unaligned data’. The pixel-wise loss and the alignment-invariant loss are each used for the 40 unaligned images. Table 2 shows that employing the 40 unaligned data with the pixel-wise loss degrades performance, to even worse than training on the 50 aligned images without any additional unaligned data.

In addition, we also investigate the contextual loss of [27]. Results from both the contextual loss and our alignment-invariant loss (or a combination of the two) surpass the analogous results obtained with only aligned images by appreciable margins, indicating that these losses provide useful supervision to networks given unaligned data. Note that although the two losses perform equally well, ours is much simpler and more computationally efficient, suggesting it is a lightweight alternative to the contextual loss for our reflection removal task.

Training Scheme PSNR SSIM
Synthetic only 19.79 0.741
+ 50 aligned 22.00 0.785
+ 90 aligned 22.89 0.803
+ 50 aligned, + 40 unaligned trained with:
pixel-wise loss 21.85 0.766
contextual loss [27] 22.38 0.797
alignment-invariant loss 22.47 0.796
contextual + alignment-invariant 22.43 0.796
Table 2: Simulation experiment verifying the efficacy of our alignment-invariant loss.
Figure 5: Visual comparison on real-world images (columns: input, LB14 [25], CEILNet-F [5], Zhang et al. [48], BDN-F [45], ERRNet, reference). The images are obtained from ‘Real20’ (rows 1-3) and our collected unaligned dataset (rows 4-6). More results can be found in the suppl. material.
Table 3 reports PSNR, SSIM, NCC, and LMSE scores of Input, LB14 [25], CEILNet [5], CEILNet-F, Zhang et al. [48], BDN [45], BDN-F, and our ERRNet on the Real20, Objects, Postcard, and Wild datasets, together with the average over all images.
Table 3: Quantitative results of different methods on four real-world benchmark datasets. The best results are indicated in red and the second-best in blue. The ‘Average’ results are obtained by averaging the metric scores over all images from these four real-world datasets.
Figure 6: Image samples from our unaligned image dataset. Our dataset covers a large variety of indoor and outdoor environments, including dynamic scenes with vehicles, humans, etc.

4.3 Method Comparison on Benchmarks

In this section, we compare our ERRNet against state-of-the-art methods, including the optimization-based method of [25] (LB14) and the learning-based approaches CEILNet [5], Zhang et al. [48], and BDN [45]. For a fair comparison, we fine-tune these models on our training dataset and report results of both the original pretrained model and the fine-tuned version (denoted with the suffix ‘-F’).

The comparison is conducted on four real-world datasets: the 20 testing images of [48] and three sub-datasets from SIR [37]. The three sub-datasets are captured under different conditions: (1) 20 controlled indoor scenes composed of solid objects; (2) 20 different controlled scenes on postcards; and (3) 55 wild scenes (images indexed 1, 2, and 74 are removed due to misalignment) with ground truth provided. In the following, we denote these datasets by ‘Real20’, ‘Objects’, ‘Postcard’, and ‘Wild’, respectively.

Table 3 summarizes the results of all competing methods on the four real-world datasets. The quality metrics include PSNR, SSIM [40], NCC [44, 37], and LMSE [8]. Larger values of PSNR, SSIM, and NCC indicate better performance, while a smaller LMSE value implies a better result. Our ERRNet achieves state-of-the-art performance on the ‘Real20’ and ‘Objects’ datasets. Meanwhile, our result is comparable to the best-performing BDN-F on the ‘Postcard’ data. The quantitative results on the ‘Wild’ dataset reveal a frustrating fact, namely, that no method outperforms the naive baseline ‘Input’, suggesting that there is still large room for improvement.

Figure 5 displays visual results on real-world images. It can be seen that all compared methods fail to handle some strong reflections, but our network more accurately removes many undesirable artifacts, e.g., the tree branches reflected on the building window in the fourth photo of Fig. 5.

Score Range Ratio BDN-F ERRNet
78% 54%
18% 36%
4% 10%
Average Score 0.62 0.51
Table 4: Human preference scores of the self-comparison experiments over the 50 testing images. Left column: results of BDN-F; right column: results of ERRNet.
Figure 7: Results of training with and without unaligned data, for BDN-F and ERRNet (columns: input, reference, then w/o and w/ unaligned data for each method). See suppl. material for more examples. (Best viewed on screen with zoom)

4.4 Training with Unaligned Data

To test our alignment-invariant loss on real-world unaligned data, we first collected a dataset of unaligned image pairs using cameras and a portable glass pane, as shown in Fig. 1. Both a DSLR camera and a smartphone were used to capture the images. We collected 450 image pairs in total; some samples are shown in Fig. 6. These image pairs are randomly split into a training set of 400 samples and a testing set of 50 samples.

We conduct experiments on the BDN-F and ERRNet models, each of which is first trained on the aligned dataset (w/o unaligned) as in Section 4.3, and then fine-tuned with our alignment-invariant loss on the unaligned training data. The resulting pairs before and after fine-tuning are assembled for human assessment, as no existing numerical metric is available for evaluating unaligned data.

We asked 30 human observers to provide a preference score from {-2, -1, 0, 1, 2}, with 2 indicating the fine-tuned result is significantly better and -2 the opposite. To avoid bias, we randomly switch the image positions within each pair. In total, 3,000 human judgments are collected (2 methods × 30 users × 50 image pairs). More details regarding this evaluation process can be found in the suppl. material.

Table 4 shows the distribution and average of human preference scores for the result pairs of each method. As can be seen, human observers clearly tend to prefer the results produced by the fine-tuned models over the original ones, which demonstrates the benefit of leveraging unaligned data for training, independent of the network architecture. Figure 7 shows some typical results of the two methods; the results are significantly improved by training on unaligned data.

5 Conclusion

We have proposed an enhanced reflection removal network together with an alignment-invariant loss function to help resolve the difficulty of single image reflection removal. We investigated the possibility of directly utilizing misaligned training data, which can significantly alleviate the burden of capturing real-world training data. To efficiently extract the underlying knowledge from real training data, we introduced context encoding modules that can be seamlessly embedded into our network to help discriminate and suppress the reflection component. Extensive experiments demonstrate that our approach sets a new state-of-the-art on real-world benchmarks for single image reflection removal, both quantitatively and visually.

References

  • [1] A. Agrawal, R. Raskar, S. K. Nayar, and Y. Li. Removing photography artifacts using gradient projection and flash-exposure sampling. ACM Transactions on Graphics (TOG), 24(3):828–835, 2005.
  • [2] N. Arvanitopoulos, R. Achanta, and S. Susstrunk. Single image reflection suppression. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), July 2017.
  • [3] Z. Chi, X. Wu, X. Shu, and J. Gu. Single image reflection removal using deep encoder-decoder network. arXiv preprint arXiv:1802.00094, 2018.
  • [4] M. Everingham, L. Van Gool, C. K. Williams, J. Winn, and A. Zisserman. The pascal visual object classes (voc) challenge. International journal of computer vision, 88(2):303–338, 2010.
  • [5] Q. Fan, J. Yang, G. Hua, B. Chen, and D. Wipf. A generic deep architecture for single image reflection removal and image smoothing. In The IEEE International Conference on Computer Vision (ICCV), 2017.
  • [6] H. Farid and E. H. Adelson. Separating reflections and lighting using independent components analysis. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), volume 1, pages 262–267, 1999.
  • [7] K. Gai, Z. Shi, and C. Zhang. Blind separation of superimposed moving images using image statistics. IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 34(1):19–32, 2012.
  • [8] R. Grosse, M. K. Johnson, E. H. Adelson, and W. T. Freeman. Ground truth dataset and baseline evaluations for intrinsic image algorithms. In Computer Vision, 2009 IEEE 12th International Conference on, pages 2335–2342. IEEE, 2009.
  • [9] X. Guo, X. Cao, and Y. Ma. Robust separation of reflection from multiple images. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 2187–2194, 2014.
  • [10] B. Hariharan, P. Arbelaez, R. Girshick, and J. Malik. Hypercolumns for object segmentation and fine-grained localization. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2015.
  • [11] K. He, X. Zhang, S. Ren, and J. Sun. Spatial pyramid pooling in deep convolutional networks for visual recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 37(9):1904–1916, 2015.
  • [12] K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2016.
  • [13] J. Hu, L. Shen, and G. Sun. Squeeze-and-excitation networks. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2018.
  • [14] S. Ioffe and C. Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. arXiv preprint arXiv:1502.03167, 2015.
  • [15] P. Isola, J.-Y. Zhu, T. Zhou, and A. A. Efros. Image-to-image translation with conditional adversarial networks. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), July 2017.
  • [16] M. Jin, S. Susstrunk, and P. Favaro. Learning to see through reflections. In IEEE International Conference on Computational Photography, 2018.
  • [17] J. Johnson, A. Alahi, and L. Fei-Fei. Perceptual losses for real-time style transfer and super-resolution. In European Conference on Computer Vision (ECCV), pages 694–711, 2016.
  • [18] A. Jolicoeur-Martineau. The relativistic discriminator: a key element missing from standard GAN. arXiv preprint arXiv:1807.00734, 2018.
  • [19] D. P. Kingma and J. Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
  • [20] N. Kong, Y.-W. Tai, and J. S. Shin. A physically-based approach to reflection separation: from physical modeling to constrained optimization. IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 36(2):209–221, 2014.
  • [21] D. Lee, M.-H. Yang, and S. Oh. Generative single image reflection separation. arXiv preprint arXiv:1801.04102, 2018.
  • [22] A. Levin and Y. Weiss. User assisted separation of reflections from a single image using a sparsity prior. IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 29(9):1647–1654, 2007.
  • [23] A. Levin, A. Zomet, and Y. Weiss. Learning to perceive transparency from the statistics of natural scenes. In Advances in Neural Information Processing Systems (NIPS), pages 1271–1278, 2003.
  • [24] Y. Li and M. S. Brown. Exploiting reflection change for automatic reflection removal. In IEEE International Conference on Computer Vision (ICCV), pages 2432–2439, 2013.
  • [25] Y. Li and M. S. Brown. Single image layer separation using relative smoothness. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 2752–2759, 2014.
  • [26] B. Lim, S. Son, H. Kim, S. Nah, and K. M. Lee. Enhanced deep residual networks for single image super-resolution. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, July 2017.
  • [27] R. Mechrez, I. Talmi, and L. Zelnik-Manor. The contextual loss for image transformation with non-aligned data. In The European Conference on Computer Vision (ECCV), September 2018.
  • [28] S. Nah, T. Hyun Kim, and K. Mu Lee. Deep multi-scale convolutional neural network for dynamic scene deblurring. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), July 2017.
  • [29] O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein, et al. Imagenet large scale visual recognition challenge. International Journal of Computer Vision (IJCV), 115(3):211–252, 2015.
  • [30] B. Sarel and M. Irani. Separating transparent layers through layer information exchange. In European Conference on Computer Vision (ECCV), pages 328–341, 2004.
  • [31] Y. Shih, D. Krishnan, F. Durand, and W. T. Freeman. Reflection removal using ghosting cues. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2015.
  • [32] K. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image recognition. CoRR, abs/1409.1556, 2014.
  • [33] K. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image recognition. In International Conference on Learning Representations (ICLR), 2015.
  • [34] S. N. Sinha, J. Kopf, M. Goesele, D. Scharstein, and R. Szeliski. Image-based rendering for scenes with reflections. ACM Transactions on Graphics (TOG), 31(4):100–1, 2012.
  • [35] R. Szeliski, S. Avidan, and P. Anandan. Layer extraction from multiple images containing reflections and transparency. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), volume 1, pages 246–253, 2000.
  • [36] R. Wan, B. Shi, L. Y. Duan, A. H. Tan, W. Gao, and A. C. Kot. Region-aware reflection removal with unified content and gradient priors. IEEE Transactions on Image Processing, PP(99):1–1, 2018.
  • [37] R. Wan, B. Shi, L.-Y. Duan, A.-H. Tan, and A. C. Kot. Benchmarking single-image reflection removal algorithms. In The IEEE International Conference on Computer Vision (ICCV), Oct 2017.
  • [38] R. Wan, B. Shi, L.-Y. Duan, A.-H. Tan, and A. C. Kot. Crrn: Multi-scale guided concurrent reflection removal network. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2018.
  • [39] R. Wan, B. Shi, T. A. Hwee, and A. C. Kot. Depth of field guided reflection removal. In Image Processing (ICIP), 2016 IEEE International Conference on, pages 21–25. IEEE, 2016.
  • [40] Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli. Image quality assessment: from error visibility to structural similarity. IEEE Transactions on Image Processing, 13(4):600–612, 2004.
  • [41] Y. Wu and K. He. Group normalization. In European Conference on Computer Vision (ECCV), 2018.
  • [42] L. Xu, C. Lu, Y. Xu, and J. Jia. Image smoothing via gradient minimization. In ACM Transactions on Graphics (TOG), volume 30, page 174, 2011.
  • [43] T. Xue, M. Rubinstein, C. Liu, and W. T. Freeman. A computational approach for obstruction-free photography. ACM Transactions on Graphics (TOG), 34(4):79, 2015.
  • [44] T. Xue, M. Rubinstein, C. Liu, and W. T. Freeman. A computational approach for obstruction-free photography. ACM Transactions on Graphics (Proc. SIGGRAPH), 34(4), 2015.
  • [45] J. Yang, D. Gong, L. Liu, and Q. Shi. Seeing deeply and bidirectionally: A deep learning approach for single image reflection removal. In European Conference on Computer Vision (ECCV), 2018.
  • [46] J. Yang, H. Li, Y. Dai, and R. T. Tan. Robust optical flow estimation of double-layer images under transparency or reflection. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 1410–1419, 2016.
  • [47] H. Zhang, K. Dana, J. Shi, Z. Zhang, X. Wang, A. Tyagi, and A. Agrawal. Context encoding for semantic segmentation. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2018.
  • [48] X. Zhang, R. Ng, and Q. Chen. Single image reflection separation with perceptual losses. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2018.
  • [49] H. Zhao, J. Shi, X. Qi, X. Wang, and J. Jia. Pyramid scene parsing network. The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 6230–6239, 2017.
  • [50] J.-Y. Zhu, T. Park, P. Isola, and A. A. Efros. Unpaired image-to-image translation using cycle-consistent adversarial networks. In The IEEE International Conference on Computer Vision (ICCV), Oct 2017.