Generative Single Image Reflection Separation

01/12/2018 · Donghoon Lee et al. · Seoul National University and University of California, Merced

Single image reflection separation is an ill-posed problem since two scenes, a transmitted scene and a reflected scene, need to be inferred from a single observation. To make the problem tractable, in this work we assume that the categories of the two scenes are known. This allows us to address the problem by generating both scenes so that they belong to the given categories while their contents are constrained to match the observed image. A novel network architecture is proposed to render realistic images of both scenes based on adversarial learning. The network can be trained in a weakly supervised manner, i.e., it learns to separate an observed image without the corresponding ground truth images of the transmitted and reflected scenes, which are difficult to collect in practice. Experimental results on real and synthetic datasets demonstrate that the proposed algorithm performs favorably against existing methods.


1 Introduction

Single image reflection separation aims to separate an observed image into a transmitted scene and a reflected scene. When the two scenes are separated adequately, existing computer vision algorithms can better understand each scene, since interference from the other is reduced. As many objects may reflect their surroundings, such as windows, glass, or standing water, this is an important problem for the computer vision community.

Figure 1:

The proposed algorithm is able to separate an observed image into a transmitted scene and a reflected scene. Convolutional neural networks are trained to generate both scenes conditioned on the category of two scenes based on adversarial learning. Unlike previous approaches, a reflected scene can also be predicted reasonably. The ground truth of each scene is shown in red boxes.

Although this problem has been studied for decades [3], it remains challenging for several reasons. First, it is an ill-posed problem as we need to infer two scenes from a single observed image. Numerous methods address this by making certain assumptions to render the problem tractable. However, these assumptions are limited to specific cases and are not applicable to real-world images in general [27]. For example, one of the mainstream approaches assumes that the edge distributions of the transmitted and reflected scenes differ, i.e., the former tends to have sharper edges while the latter is relatively blurred [2, 16, 4]. This type of blur occurs when the reflected scene is outside the depth of field of the camera. However, recent cameras, such as those on smartphones, have small apertures and a deep depth of field. Consequently, an observed image contains sharp edges from both scenes even when the reflected scene is not at the same depth as the transmitted scene. In other words, the blur assumption does not hold in practice.

Second, it is difficult to obtain ground-truth tuples of an observed scene I, a transmitted scene T, and a reflected scene R, as we cannot simply remove the obstructing glass and photograph the two scenes in general circumstances. (Note that a transmitted scene and a reflected scene denote the scenes before transmission and reflection, respectively.) Therefore, it has been assumed that an observed image can be synthesized by combining two known images based on the physical model of transmission and reflection. However, the exact physical model is unknown, as it is complex and depends on various factors, such as the thickness of the glass, surface conditions, and the angle of incidence, which are typically not at our disposal [25]. As a consequence, simple approximations are used to model the reflection, which limits the separation performance. For example, one of the most common approximate models is I = αT + (1 − α)R, where α is a scalar weight. However, it does not account for changes in T and R due to the glass.

Third, conventional approaches mainly focus on reflection removal instead of reflection separation [9, 2, 28, 29]. Although these methods aim to suppress reflection artifacts to restore the transmitted scene, the contents of the reflection are usually not considered. While it may be difficult to reconstruct both scenes, it is important to infer both scenes jointly as they are entangled in a single observation. Furthermore, the recovered reflected scene itself may be useful for various applications such as surveillance and image understanding.

To address the above issues, we first introduce a new assumption on the observed scene: the categories of the transmitted and reflected scenes are known. This is a valid assumption in practice, since we typically photograph an object or scene of interest while knowing what is being reflected. Likewise, other ill-posed problems such as deblurring and super-resolution often make similar assumptions, e.g., in the cases of faces, text, and night scenes [18, 19, 10, 31]. As such, we demonstrate that it is not necessary to make other assumptions, e.g., a blurry reflected scene. In addition, our algorithm does not rely on a particular approximation model of the reflection, as such a model may restrict the algorithm to a specific case. Instead, we exploit the fact that an observed image contains the contents of both the transmitted and reflected scenes, which leads us to model the observed image in a feature space rather than as a pixel-level combination.

Based on the above assumption, we pose the reflection separation problem as a conditional image generation task (see Figure 1). Although image generation is a difficult problem, notable success has been achieved in transforming an input image into other domains or styles [11, 7, 33]. In this work, we use generative adversarial networks (GAN) [8] to infer the two scenes jointly with a novel network architecture. By generating images, the proposed algorithm differs fundamentally from previous reflection separation methods in that plausible reflected scenes can be obtained. Furthermore, the network can be trained in a weakly supervised manner, i.e., only labels of the transmitted and reflected scenes are needed. This enables us to train the network on real data without the tedious effort of gathering corresponding ground truths for T and R.

We carry out experiments on real data collected from the internet and synthetic data based on the Places dataset [32] which consists of 8 million images of 365 scenes. For both datasets, we evaluate the proposed algorithm against the state-of-the-art methods for single image reflection separation. Quantitative and qualitative results show that the proposed algorithm suppresses reflection artifacts on the transmitted scene and infers the reflected scene properly.

2 Related Work

As reflection separation is an ill-posed problem, additional information or assumptions are needed to make the problem tractable. In earlier methods, multiple images of a target scene taken under different conditions are used. For example, focus/defocus pairs [22], flash/non-flash pairs [1], or different polarization angle pairs [23, 12] are utilized. For videos, it is possible to decorrelate the motion between the transmitted scene and reflected scene [21, 26, 6]. However, it may be difficult to apply these methods in practice since multiple images captured from controlled experimental setups are not always available.

Reflection separation using a single image has recently attracted increasing attention due to its practical importance, although the problem is more difficult than multiple-image cases. User annotations can guide the separation by formulating it as a constrained optimization problem which relies on a sparse gradient prior of natural images [13].

For automatic single image reflection separation, existing methods focus on one of the following three conditions. First, the depth of field (DoF) of the lens is shallow and the image is focused on the transmitted scene. This causes out-of-focus blur in the reflected scene when its distance from the window differs from that of the transmitted scene, so blurry edges become useful cues for reflection separation. Wan et al. [28] propose a pixel-wise DoF confidence map obtained by a multi-scale search. In [30], a method based on a Markov random field and expectation maximization is proposed to filter out weak edges; the energy function is composed of gradient profile sharpness and spatial smoothness of edges. A recent optimization-based approach [2] uses a Laplacian data fidelity term and an ℓ0 gradient sparsity prior to suppress reflections. Fan et al. [4] propose a two-step deep architecture based on convolutional neural networks: given an input image and an edge map, it first predicts the edges of a target scene and then reconstructs the scene from the input image and the predicted edges.

(a) Baseline 1
(b) Baseline 2
(c) Baseline 3
(d) Proposed network
Figure 2: Proposed network architecture and baseline models for reflection separation. We use color coding to indicate weight sharing. Note that U-net architecture and discriminators are omitted for better visualization. (a) Two separated generators for each of transmitted and reflected scenes. (b) Sharing weights through a reconstruction branch. (c) Additionally comparing contents of images using feature maps of a shared encoder. (d) Predicting a mask map for a transmitted scene increases the flexibility of a network for handling real images.

Second, the DoF is deep and covers both the transmitted and reflected scenes, so the observed image contains sharp edges from both. While this case occurs frequently, it is the most challenging one due to the lack of visual cues for separation. Levin et al. [14, 15] rely on a prior that gradients and local features of natural images are statistically sparse [5]. However, it is difficult to apply these methods when the textures of an image are complex.

Third, the DoF is deep and the glass is relatively thick. In this case, a ghosting effect of the reflected scene is caused by light rays that penetrate the outer surface of the glass but reflect off the inner surface. Shih et al. [24] estimate a ghost kernel and use a Gaussian mixture model to separate the scenes. However, this approach is not applicable to other types of reflections without notable ghosting effects.

3 Proposed Algorithm

Layer # Filter Filter Size Stride Pad BN
Conv. 1 32 2 2
Conv. 2 64 2 2 {, }
Conv. 3 128 2 2 {, }
Conv. 4 256 2 2 {, }
Conv. 5 256 2 2 {, }
Conv. 6 {128,1} 1 0
(a) Details of the {encoding, discriminator} network
Layer # Filter Filter Size Stride Pad BN
Conv. 1 1 0
F-Conv. 2 256 1/2 -
F-Conv. 3 256 1/2 -
F-Conv. 4 128 1/2 -
F-Conv. 5 64 1/2 -
F-Conv. 6 32 1/2 -
(b) Details of the {mask prediction, generation} network
Table 1:

Details of each network. "# Filter" is the number of filters, BN denotes batch normalization, Conv denotes a convolutional layer, and F-Conv denotes a transposed convolutional layer with fractional stride.

Figure 2 shows design options of network architectures for single image reflection separation. From the baseline models to the proposed network, we discuss the limitations and remedies of each model. All networks have two generation branches: one for the transmitted scene and the other for the reflected scene. For realistic generation, we apply adversarial losses [8] using discriminators. Let (G_T, D_T) and (G_R, D_R) denote the generator-discriminator pairs for the transmission and reflection branches, respectively. Then, the adversarial loss for the transmission branch is defined as follows:

L_adv^T = E[log D_T(I, T)] + E[log(1 − D_T(I, G_T(I)))]   (1)

An adversarial loss for the reflection branch is defined in a similar way. The network architecture is described in Table 1.
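To make the objective concrete, the following is a minimal numpy sketch of the standard GAN adversarial term for one branch, as in (1). The function name and the toy discriminator scores are illustrative assumptions; in practice the scores come from the discriminator network D_T.

```python
import numpy as np

def adversarial_loss_T(d_real, d_fake):
    """Standard GAN objective for the transmission branch: D_T scores real
    transmitted scenes (d_real) and generated ones (d_fake); both inputs
    are probabilities in (0, 1). The discriminator maximizes this value,
    while the generator minimizes the second term."""
    eps = 1e-8  # numerical guard for log(0)
    return np.mean(np.log(d_real + eps)) + np.mean(np.log(1.0 - d_fake + eps))

# Toy check: a confident discriminator (real near 1, fake near 0)
# yields a value close to 0 (log-probabilities are negative).
d_real = np.array([0.9, 0.95])
d_fake = np.array([0.1, 0.05])
loss = adversarial_loss_T(d_real, d_fake)
assert loss < 0.0
```

The same form, with its own discriminator, gives the reflection-branch loss.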

3.1 Reflection Separation using Synthetic Images

In this approach, a network is trained to separate synthesized images and then evaluated on real images. We first describe our synthesis schemes and then discuss network designs.

Preparing training images. For training, I is synthesized from two known images T and R using a synthesis model. We consider three state-of-the-art synthesis models [2, 24, 4]:

  • a blur model, I = αT + (1 − α)(K ⊗ R) [2],

  • a ghost model, I = αT + (1 − α)(G ⊗ R) [24],

  • a clipping model, I = clip(T + K ⊗ R) [4],

where α is a scalar weight, K is a blurring kernel, G is a ghost kernel, and ⊗ denotes convolution. These models assume blurry edges or ghosting effects in the reflected scene to make the problem tractable. We aim to drop such assumptions, and thus add two more synthesis models: a linear model, I = αT + (1 − α)R, and a clipping model without the blurring step. In addition, we modify the ghost model for training, since the model in [24] does not define T and R as the original scenes before transmission and reflection. (For the original ghost model, the synthesized observation falls outside the valid intensity range for certain T and R.)

Each model has parameters, e.g., the kernel width in the blurring model. For flexibility and generalization, we synthesize images using random parameters for each mini-batch. For example, we pick α uniformly at random, while other methods use one or two fixed values [2, 16]. In addition, we perform random left-right flipping and cropping for data augmentation. To the best of our knowledge, this is the first work that handles such a diverse set of reflection models.
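The randomized synthesis above can be sketched in a few lines of numpy. The sampling intervals for α and the kernel parameters are illustrative assumptions, not the paper's exact values; the blur is a separable Gaussian applied to R before the linear combination.

```python
import numpy as np

def gaussian_kernel(size=5, sigma=1.5):
    # 1-D Gaussian; the blur is applied separably along each axis.
    x = np.arange(size) - size // 2
    k = np.exp(-x**2 / (2 * sigma**2))
    return k / k.sum()

def synthesize(T, R, rng, model="linear"):
    """Render an observation I from known T and R (values in [0, 1]).
    alpha and the blur sigma are drawn per call, mirroring the
    per-mini-batch randomization; the intervals are assumptions."""
    alpha = rng.uniform(0.5, 1.0)
    if model == "blur":  # blur the reflected scene first
        k = gaussian_kernel(5, rng.uniform(1.0, 3.0))
        R = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 0, R)
        R = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, R)
    if model == "clip":  # additive combination clipped to the valid range
        return np.clip(T + R, 0.0, 1.0)
    return alpha * T + (1 - alpha) * R  # linear combination

rng = np.random.default_rng(0)
T, R = rng.random((32, 32)), rng.random((32, 32))
I = synthesize(T, R, rng, model="blur")
assert I.shape == T.shape
```

Because the weights are convex and the kernel is normalized, the synthesized observation stays in the valid intensity range.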

Figure 3: Examples of reflection separation results using baseline networks in Figure 2. Best viewed in color with a digital zoom.

Network design for synthetic image reflection separation. The first baseline model simply maps an input image into two domains using an encoder and two domain-specific decoders based on the U-net architecture [20], as shown in Figure 2(a). The loss function for this network is an extension of [11] as follows:

L = L_adv^T + L_adv^R + λ (‖G_T(I) − T‖_1 + ‖G_R(I) − R‖_1)   (2)

where λ controls the relative importance of the objectives. It asks the generators not only to fool the discriminators but also to produce outputs similar to the ground truth images. However, this simple extension produces numerous artifacts and fails to separate the two scenes adequately, as shown in Figure 3. We attribute this to a lack of communication between the two generation branches: since the two images must be generated jointly in our problem, the branches need extra channels to communicate with each other.
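The generator objective of (2) combines the two adversarial terms with per-branch L1 fidelity. The following is a minimal numpy sketch; the weight value lam and the function names are illustrative assumptions.

```python
import numpy as np

def l1(a, b):
    # mean absolute error between a generated scene and its ground truth
    return np.abs(a - b).mean()

def total_loss(adv_T, adv_R, G_T_out, T, G_R_out, R, lam=10.0):
    """Objective in the spirit of Eq. (2): fool both discriminators while
    staying close to the ground truth scenes. adv_* are scalar adversarial
    terms computed elsewhere; lam is an illustrative weight."""
    return adv_T + adv_R + lam * (l1(G_T_out, T) + l1(G_R_out, R))

rng = np.random.default_rng(4)
T, R = rng.random((8, 8)), rng.random((8, 8))
# With perfect generators, only the adversarial terms remain.
assert total_loss(0.5, 0.5, T, T, R, R) == 1.0
```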

To address this issue, we add a reconstruction branch G_I and share the weights of its first few layers with the generation branches, as shown in Figure 2(b). In this case, the loss function is:

L = L_adv^T + L_adv^R + λ (‖G_T(I) − T‖_1 + ‖G_R(I) − R‖_1 + ‖G_I(I) − I‖_1)   (3)

Note that this is different from the weight-sharing scheme of [17], where the early weights of the two generation branches are shared (i.e., semantics are shared between the two domains). In our problem, the semantics are often not shared between the transmitted and reflected scenes; instead, both scenes share semantics with the observed image. Thus, by putting a reconstruction branch in the middle, all branches can communicate with each other during generation. This approach reduces artifacts and separates the scenes better, as shown in Figure 3.

The third model is shown in Figure 2(c). It makes use of high-level information, in addition to pixel-level appearance, to assess the generated images. In this paper, we minimize the difference in content between the generated images and the input image. A straightforward solution is to constrain the generated images to reconstruct the input image as faithfully as possible through the synthesis model. However, this approach is limited since the real synthesis model may be unknown or non-differentiable, e.g., [4]. To address this issue, we compare feature maps of the input and generated images. In Figure 2(c), we introduce a new variable β that estimates the ratio of contents between the transmitted and reflected scenes in I. Using this parameter, a content loss between the observed scene and the generated scenes is defined as follows:

L_content = Σ_l (1 / V_l) ‖F_l(I) − (β F_l(G_T(I)) + (1 − β) F_l(G_R(I)))‖_1   (4)

where F_l and V_l denote the feature map and the volume (number of elements) of the l-th layer of the encoder, respectively. This allows us to train the network for arbitrary synthesis models. The loss for the network is defined as follows:

L = L_adv^T + L_adv^R + λ_1 (‖G_T(I) − T‖_1 + ‖G_R(I) − R‖_1 + ‖G_I(I) − I‖_1) + λ_2 L_content   (5)

where the same weights λ_1 and λ_2 are used for all experiments.
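The content comparison of (4) can be sketched as follows, with the encoder abstracted away: each layer's feature maps are compared in L1 against a β-weighted mix of the two generated scenes' features, normalized by the layer volume. The list-of-arrays interface is an illustrative assumption.

```python
import numpy as np

def content_loss(feats_I, feats_T, feats_R, beta):
    """Content loss in the spirit of Eq. (4): compare encoder features of
    the observation against a beta-weighted mix of the generated scenes'
    features, normalizing each layer's term by its volume V_l.
    feats_* are lists of per-layer feature arrays."""
    loss = 0.0
    for f_i, f_t, f_r in zip(feats_I, feats_T, feats_R):
        mix = beta * f_t + (1.0 - beta) * f_r
        loss += np.abs(f_i - mix).sum() / f_i.size  # 1/V_l normalization
    return loss

# When the observation's features are exactly the beta-mix, the loss is zero.
rng = np.random.default_rng(1)
f_t = [rng.random((8, 8, 4))]
f_r = [rng.random((8, 8, 4))]
beta = 0.7
f_i = [beta * f_t[0] + (1 - beta) * f_r[0]]
assert content_loss(f_i, f_t, f_r, beta) == 0.0
```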

3.2 Reflection Separation using Real Images

In this section, we describe how to train a network on real observations without ground truth images of the transmitted and reflected scenes. Note that the networks discussed so far rely on synthetic images, since it is difficult to obtain ground truths for the corresponding T and R. Even in a recent data collection attempt [27], the original scene before reflection could not be obtained. This limits the performance of reflection separation due to the gap between real observations and synthesized images.

Figure 4: Examples of cafe images taken from the outside. Part of the cafe interior is not visible due to the reflection.

One of the largest gaps between real and synthesized images is due to the combination weight in the synthesis model. In real images, reflected scenes often dominate certain regions of an observed image, where the transmitted signal is unrecognizable and can only be guessed. For example, as shown in Figure 4, the cafe interiors are not visible due to reflected buildings. Nevertheless, state-of-the-art synthesis methods use a scalar to combine the two scenes. To alleviate this issue, we predict a mask map M using a new branch instead of a scalar, as shown in Figure 2(d). The value at each location lies in [0, 1] and represents a confidence score that the pixel belongs to the transmitted scene. As such, we have I_T = M ⊙ I and I_R = (1 − M) ⊙ I, where ⊙ denotes pixel-wise multiplication and I_T and I_R are defined for notational simplicity. The confidence map prediction branch and the other generation branches are trained iteratively.
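The mask-based decomposition above is a pixel-wise convex split of the observation. A minimal numpy sketch (the symbols I, M and the broadcasting over color channels are illustrative):

```python
import numpy as np

def decompose(I, M):
    """Split an observation using a per-pixel confidence map M in [0, 1]:
    M near 1 marks pixels dominated by the transmitted scene.
    Returns the masked transmitted and reflected components."""
    I_T = M * I          # pixel-wise (Hadamard) product
    I_R = (1.0 - M) * I
    return I_T, I_R

rng = np.random.default_rng(2)
I = rng.random((16, 16, 3))
M = rng.random((16, 16, 1))  # broadcast over color channels
I_T, I_R = decompose(I, M)
# The two components always sum back to the observation.
assert np.allclose(I_T + I_R, I)
```

This is what makes spatially non-uniform reflections representable, unlike a single scalar weight.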

The network can be trained in a weakly supervised manner, i.e., tuples (I, T', R') are given, where T' and R' are images belonging to the same categories as the transmitted and reflected scenes in I, respectively, but do not correspond to I pixel-wise. In this case, D_T and D_R are not conditioned on I. Thus, (1) is changed to:

L_adv^T = E[log D_T(T')] + E[log(1 − D_T(G_T(I)))]   (6)

In addition, the loss function in (5) should be modified for weakly supervised learning. Combining all the losses, the overall loss function becomes

L = L_adv^T + L_adv^R + λ_2 L_content   (7)

It only uses T' and R' to compute the adversarial losses, which does not require the exact ground truth of T and R. Note that this allows us to learn the reflection separation problem without any approximation model for synthesizing images.

4 Experimental Results

Figure 5: Examples of the Places dataset. It has a large number of categories and images in the same category are fairly diverse.

We first describe the experimental settings. We use the Places dataset, which contains 8 million images of 365 scene categories, to synthesize images. As shown in Figure 5, it contains diverse images covering large variations in lighting conditions, viewpoints, distances to the scene, and the number of objects in the scene. For each experiment, two scene categories are selected to synthesize training images. For real images, we collect 178 photos of cafes, taken from the outside, from the Internet. As shown in Figure 4, these images contain challenging reflections.

(a) Input
(b)
(c)
(d) [2]
(e) [4]
(f) [4]
(g) Real
(h) Real
Figure 6: Single image reflection separation results on synthetic images of the Places dataset. In this experiment, images are synthesized based on the clipping model [4]. For every two rows, we show results with and without the blurry assumption for the same input pair.
Transmission Reflection
Dataset Physical model [16] [24] [2] [4] Ours [16] [24] [2] [4] Ours
: Airfield, : Hotel room [24] PSNR 14.4 13.1 17.3 17.4 18.2 10.1 10.2 - 7.28 18.9
SSIM 0.34 0.26 0.42 0.44 0.45 0.10 0.14 - 0.06 0.48
[2] PSNR 15.8 - 19.9 20.1 26.3 11.8 - - 8.0 18.0
SSIM 0.47 - 0.58 0.61 0.80 0.17 - - 0.11 0.37
[2] without blur assumption PSNR 13.8 - 17.4 17.4 21.5 10.0 - - 7.2 17.5
SSIM 0.31 - 0.41 0.42 0.63 0.10 - - 0.05 0.45
[4] PSNR 15.1 - 15.9 18.1 26.1 9.95 - - 8.1 17.5
SSIM 0.49 - 0.53 0.59 0.81 0.10 - - 0.12 0.35
[4] without blur assumption PSNR 13.6 - 15.1 15.6 22.8 9.67 - - 7.3 17.5
SSIM 0.40 - 0.49 0.50 0.68 0.09 - - 0.11 0.41
: Skyscraper, : Conference room [24] PSNR 14.5 14.5 17.6 17.4 21.1 9.80 10.8 - 7.30 16.5
SSIM 0.39 0.34 0.45 0.46 0.62 0.10 0.11 - 0.02 0.37
[2] PSNR 15.3 - 19.2 19.9 24.5 10.9 - - 8.4 16.4
SSIM 0.48 - 0.60 0.64 0.76 0.11 - - 0.10 0.31
[2] without blur assumption PSNR 14.3 - 17.0 16.7 20.4 9.70 - - 7.1 16.8
SSIM 0.35 - 0.42 0.41 0.57 0.08 - - 0.01 0.40
[4] PSNR 14.6 - 15.3 17.2 23.1 10.0 - - 8.1 15.7
SSIM 0.46 - 0.51 0.58 0.73 0.07 - - 0.09 0.28
[4] without blur assumption PSNR 12.9 - 14.5 15.6 20.7 9.24 - - 7.3 16.3
SSIM 0.37 - 0.45 0.50 0.62 0.07 - - 0.11 0.37
Table 2: Quantitative results of single image reflection separation on the Places dataset. For comparison, we run experiments using each author’s publicly available implementation. Note that [24] does not work on other physical models and [2] does not provide a reflection scene as an output.
(a) Airfield and beach
(b) Beach and street
(c) Beach and hotel room
(d) Sky and hotel room
Figure 7: The proposed network is trained on airfield and hotel room and then tested on other categories that are never seen by the network during training.

For training, the networks shown in Figure 2(b), Figure 2(c), and Figure 2(d) share the weights of the first three layers of the generator. For each mini-batch, we first resize the images, crop a rectangular patch with a height and width sampled uniformly at random, and then resize the patch to the training resolution. For the blurring model, we sample the size and standard deviation of the Gaussian kernel at random. More quantitative and qualitative results are presented in the supplementary material. The source code will be made available to the public.
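The mini-batch augmentation described above can be sketched as follows. The output size, the crop range, and the nearest-neighbor resize are illustrative assumptions chosen to keep the example dependency-free; the paper's exact resolutions are not reproduced here.

```python
import numpy as np

def augment(img, rng, out_size=128):
    """Random rectangular crop, resize to a fixed training resolution,
    and random left-right flip (per-mini-batch augmentation sketch)."""
    h, w = img.shape[:2]
    ch = int(rng.integers(out_size // 2, h + 1))   # random crop height
    cw = int(rng.integers(out_size // 2, w + 1))   # random crop width
    y = int(rng.integers(0, h - ch + 1))
    x = int(rng.integers(0, w - cw + 1))
    patch = img[y:y + ch, x:x + cw]
    # nearest-neighbor resize to (out_size, out_size)
    yy = np.arange(out_size) * ch // out_size
    xx = np.arange(out_size) * cw // out_size
    patch = patch[yy][:, xx]
    if rng.random() < 0.5:                         # random left-right flip
        patch = patch[:, ::-1]
    return patch

rng = np.random.default_rng(3)
img = rng.random((200, 300, 3))
out = augment(img, rng)
assert out.shape == (128, 128, 3)
```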

4.1 Synthetic Images

In this section, we show reflection separation results using a network trained on synthetic data. We evaluate the proposed algorithm against the state-of-the-art methods [4, 2, 24]. As they use different approximation methods to model the reflection, we provide results for all cases.
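The comparisons below report PSNR (and SSIM) between each restored scene and its ground truth. For reference, PSNR can be computed as in this short sketch, assuming images normalized to [0, 1]:

```python
import numpy as np

def psnr(x, y, peak=1.0):
    """Peak signal-to-noise ratio between a restored scene and its
    ground truth, in dB, for images with the given peak intensity."""
    mse = np.mean((x - y) ** 2)
    if mse == 0:
        return np.inf  # identical images
    return 10.0 * np.log10(peak**2 / mse)

gt = np.zeros((8, 8))
noisy = gt + 0.1  # uniform error of 0.1 -> MSE = 0.01 -> 20 dB
assert abs(psnr(noisy, gt) - 20.0) < 1e-6
```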

Figure 6 shows reflection separation results when the clipping model [4] is used to synthesize images; rows alternate between inputs synthesized with and without the blur assumption. In most cases, the proposed network in Figure 2(c) successfully removes reflection artifacts from the transmitted scene and recovers the reflected scene reasonably well. In contrast, other methods generate unclear transmitted scenes and barely recognizable reflected scenes. When the blur assumption does not hold, all other methods fail to suppress reflections; they are also sensitive to the synthesis model. These results show that other methods are designed specifically around their assumptions, whereas the proposed algorithm performs favorably in all cases. Table 2 confirms this quantitatively.

A network trained on two categories is also evaluated on different types of scenes. Figure 7 shows the results when the proposed network is trained on the airfield and hotel room categories and then tested on various scenes using the blurring model [2]. When the category of the reflected scene is changed, as shown in Figure 7(a), or both are changed, as shown in Figure 7(b), the transmitted scene is well restored while the reflected scene is not realistic. On the other hand, when the transmitted scene is changed, as shown in Figure 7(c) and Figure 7(d), both scenes are recovered properly. These results indicate that the transmission branch learns to remove blurry regions and generate the image based on the context of sharp regions, while the reflection branch focuses on recovering the original scene from a blurry observation. However, as presented in the supplementary material, this network is not able to separate reflections in real images; the state-of-the-art methods based on the same synthesis models also fail. These findings are consistent with the observation in [27] that existing methods do not perform well on real images. Although we inject randomness when synthesizing training images to increase the generalization ability of the network, we attribute this failure to unrealistic synthesis models. The separation performance can be improved once a realistic synthesis model is available, since the proposed network can be trained with any synthesis method.

Figure 8: Sample results from unsupervised reflection separation for synthetic images.
(a) Input
(b) [2]
(c) [4]
(d) [4]
(e)
(f)
(g)
(h)
(i)
Figure 9: Single image reflection separation results on real images.
(a) Input
(b)
(c)
(d)
(e)
(f)
Figure 10: Different separation results for the same input image.

For weakly supervised reflection separation, the training data is split into two halves: the first half is used to synthesize observed images, and the second half is fed to the discriminators as real images during training. Figure 8 shows that the proposed algorithm separates a synthesized scene well without any ground truth images of the transmission and reflection.

4.2 Real Images

Figure 9 shows results of the network in Figure 2(d) trained on real data in a weakly supervised manner. The input images contain not only sharp or blurry reflections but also spatially non-uniform reflection regions. For example, in the second row of Figure 9(a), most of the cafe interior is not visible. If reflections were simply suppressed, the remaining transmitted scene would be dark and uninformative. Moreover, returning a dark transmitted scene would be incorrect in this case, since we can observe that the lights of the cafe are turned on. Separating real input images is therefore challenging, as we must infer invisible regions at the same time.

As this issue is not handled by state-of-the-art methods, they rarely suppress or separate the reflections, as shown in Figure 9(b), Figure 9(c), and Figure 9(d). In contrast, the proposed network decomposes an input image into two reasonable scenes: Figure 9(e) and Figure 9(f) show the restored transmitted and reflected scenes. The confidence map of the transmitted scene is shown in Figure 9(g), and Figure 9(h) and Figure 9(i) show masked scenes for better visualization of the confidence map.

Single image reflection separation is an underdetermined problem with many candidate solutions. In Figure 10, we show separation results for the same input image using two different random seeds. For each pair, while the networks capture similar parts for realistic separations, the generated images differ in the structure and appearance of the scene.

5 Conclusions

We propose an algorithm for the single image reflection separation problem. As it is an ill-posed problem, we assume that the categories of the transmitted and reflected scenes are known. This allows us to drop conventional assumptions, such as a blurry reflected scene, that are unrealistic in many cases. We design convolutional neural networks based on adversarial losses that separate an observed image into the transmitted and reflected scenes by generating them. Experimental results show that the proposed algorithm performs favorably against the state-of-the-art methods, particularly for recovering reflected scenes. For synthetic images, the transmitted scene is reliably restored regardless of the synthesis model. In addition, we demonstrate that the network can be trained in a weakly supervised manner, i.e., on real images only.

References

  • [1] A. Agrawal, R. Raskar, S. K. Nayar, and Y. Li. Removing photography artifacts using gradient projection and flash-exposure sampling. ACM Transactions on Graphics, 24(3):828–835, 2005.
  • [2] N. Arvanitopoulos Darginis, R. Achanta, and S. Süsstrunk. Single image reflection suppression. In Proc. of the IEEE Conference on Computer Vision and Pattern Recognition, 2017.
  • [3] H. Barrow and J. Tenenbaum. Computer vision systems. Computer vision systems, 2, 1978.
  • [4] Q. Fan, J. Yang, G. Hua, B. Chen, and D. Wipf. A generic deep architecture for single image reflection removal and image smoothing. Proc. of the IEEE International Conference on Computer Vision, 2017.
  • [5] R. Fergus, B. Singh, A. Hertzmann, S. T. Roweis, and W. T. Freeman. Removing camera shake from a single photograph. In ACM transactions on graphics, volume 25, pages 787–794. ACM, 2006.
  • [6] K. Gai, Z. Shi, and C. Zhang. Blind separation of superimposed moving images using image statistics. IEEE transactions on pattern analysis and machine intelligence, 34(1):19–32, 2012.
  • [7] L. A. Gatys, A. S. Ecker, and M. Bethge. Image style transfer using convolutional neural networks. In Proc. of the IEEE Conference on Computer Vision and Pattern Recognition, 2016.
  • [8] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio. Generative adversarial nets. In Advances in neural information processing systems, 2014.
  • [9] B.-J. Han and J.-Y. Sim. Reflection removal using low-rank matrix completion. In Proc. of the IEEE Conference on Computer Vision and Pattern Recognition, 2017.
  • [10] Z. Hu, S. Cho, J. Wang, and M.-H. Yang. Deblurring low-light images with light streaks. In Proc. of the IEEE Conference on Computer Vision and Pattern Recognition, 2014.
  • [11] P. Isola, J.-Y. Zhu, T. Zhou, and A. A. Efros. Image-to-image translation with conditional adversarial networks. In Proc. of the IEEE Conference on Computer Vision and Pattern Recognition, 2017.
  • [12] N. Kong, Y.-W. Tai, and J. S. Shin. A physically-based approach to reflection separation: from physical modeling to constrained optimization. IEEE transactions on pattern analysis and machine intelligence, 36(2):209–221, 2014.
  • [13] A. Levin and Y. Weiss. User assisted separation of reflections from a single image using a sparsity prior. IEEE Transactions on Pattern Analysis and Machine Intelligence, 29(9), 2007.
  • [14] A. Levin, A. Zomet, and Y. Weiss. Learning to perceive transparency from the statistics of natural scenes. In Advances in Neural Information Processing Systems, 2003.
  • [15] A. Levin, A. Zomet, and Y. Weiss. Separating reflections from a single image using local features. In Proc. of the IEEE Conference on Computer Vision and Pattern Recognition, 2004.
  • [16] Y. Li and M. S. Brown. Single image layer separation using relative smoothness. In Proc. of the IEEE Conference on Computer Vision and Pattern Recognition, 2014.
  • [17] M.-Y. Liu and O. Tuzel. Coupled generative adversarial networks. In Advances in neural information processing systems, 2016.
  • [18] J. Pan, Z. Hu, Z. Su, and M.-H. Yang. Deblurring face images with exemplars. In Proc. of the European Conference on Computer Vision, 2014.
  • [19] J. Pan, Z. Hu, Z. Su, and M.-H. Yang. Deblurring text images via L0-regularized intensity and gradient prior. In Proc. of the IEEE Conference on Computer Vision and Pattern Recognition, 2014.
  • [20] O. Ronneberger, P. Fischer, and T. Brox. U-net: Convolutional networks for biomedical image segmentation. In Proc. of the International Conference on Medical Image Computing and Computer-Assisted Intervention, 2015.
  • [21] B. Sarel and M. Irani. Separating transparent layers through layer information exchange. Proc. of the European Conference on Computer Vision, 2004.
  • [22] Y. Y. Schechner, N. Kiryati, and R. Basri. Separation of transparent layers using focus. International Journal of Computer Vision, 39(1):25–39, 2000.
  • [23] Y. Y. Schechner, J. Shamir, and N. Kiryati. Polarization and statistical analysis of scenes containing a semireflector. The Journal of the Optical Society of America A, 17(2):276–284, 2000.
  • [24] Y. Shih, D. Krishnan, F. Durand, and W. T. Freeman. Reflection removal using ghosting cues. In Proc. of the IEEE Conference on Computer Vision and Pattern Recognition, 2015.
  • [25] J. A. Soares. Introduction to optical characterization of materials. In Practical Materials Characterization, pages 43–92. 2014.
  • [26] R. Szeliski, S. Avidan, and P. Anandan. Layer extraction from multiple images containing reflections and transparency. In Proc. of the IEEE Conference on Computer Vision and Pattern Recognition, 2000.
  • [27] R. Wan, B. Shi, L.-Y. Duan, A.-H. Tan, and A. C. Kot. Benchmarking single-image reflection removal algorithms. In Proc. of the IEEE International Conference on Computer Vision, 2017.
  • [28] R. Wan, B. Shi, T. A. Hwee, and A. C. Kot. Depth of field guided reflection removal. In IEEE International Conference on Image Processing, 2016.
  • [29] T. Xue, M. Rubinstein, C. Liu, and W. T. Freeman. A computational approach for obstruction-free photography. ACM Transactions on Graphics, 34(4):79, 2015.
  • [30] Q. Yan, Y. Xu, X. Yang, and T. Nguyen. Separation of weak reflection from a single superimposed image. IEEE Signal Processing Letters, 21(10):1173–1176, 2014.
  • [31] C.-Y. Yang, S. Liu, and M.-H. Yang. Structured face hallucination. In Proc. of the IEEE Conference on Computer Vision and Pattern Recognition, 2013.
  • [32] B. Zhou, A. Lapedriza, A. Khosla, A. Oliva, and A. Torralba. Places: A 10 million image database for scene recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2017.
  • [33] J.-Y. Zhu, T. Park, P. Isola, and A. A. Efros. Unpaired image-to-image translation using cycle-consistent adversarial networks. arXiv preprint arXiv:1703.10593, 2017.