Single image reflection separation aims to separate an observed image into a transmitted scene and a reflected scene. When the two scenes are separated adequately, existing computer vision algorithms can understand each scene better since interference from the other scene is reduced. As various objects may reflect their surroundings, such as windows, glass, or standing water, this is an important problem for the computer vision community.
Although this problem has been studied for decades, it remains challenging for several reasons. First, it is an ill-posed problem, as we need to infer two scenes from a single observed image. Numerous methods address this issue by making certain assumptions to render the problem tractable. However, these assumptions are limited to specific cases and are not applicable to real-world images in general. For example, one of the mainstream approaches assumes that the edge distributions of the transmitted and reflected scenes differ, i.e., the former tends to have sharper edges while the latter is relatively blurred [2, 16, 4]. This type of blur occurs when the reflected scene is outside the depth of field of the camera. However, recent cameras, such as those on smartphones, have small apertures and a deep depth of field. Consequently, an observed image contains sharp edges of both scenes even when the reflected scene is not at the same depth as the transmitted scene. In other words, the blur assumption does not hold in practice.
Second, it is difficult to obtain ground-truth tuples of an observed scene $I$, a transmitted scene $T$, and a reflected scene $R$, as we cannot simply remove the obstructing glass and take pictures of the two scenes in general circumstances. (Note that a transmitted scene and a reflected scene represent the scenes before the transmission and reflection, respectively.) Therefore, it has been assumed that an observed image can be synthesized by combining two known images based on the physical model of the transmission and reflection. However, the exact physical model is unknown as it is complex and depends on various factors, such as the thickness of the glass, surface conditions, and the angle of incidence, which are typically not at our disposal. As a consequence, simple approximations are used to model the reflection, which limits the separation performance. For example, one of the most common approximated models is $I = \alpha T + (1 - \alpha) R$, where $\alpha$ is a scalar weight. However, it does not consider changes in $T$ and $R$ due to the glass.
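As a concrete illustration, the linear approximation above can be written in a few lines (a NumPy sketch; the weight value below is purely illustrative):

```python
import numpy as np

def synthesize_observation(T, R, alpha=0.75):
    """Combine a transmitted scene T and a reflected scene R into an
    observed image I using the common linear approximation
    I = alpha * T + (1 - alpha) * R, where alpha is a scalar weight."""
    T = np.asarray(T, dtype=np.float64)
    R = np.asarray(R, dtype=np.float64)
    return alpha * T + (1.0 - alpha) * R

# Example: a bright transmitted pixel mixed with a darker reflected pixel.
I = synthesize_observation(np.array([0.8]), np.array([0.2]), alpha=0.75)
```

Note that this model ignores any attenuation or distortion of $T$ and $R$ caused by the glass itself, which is exactly the limitation discussed above.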
Third, conventional approaches mainly focus on reflection removal instead of reflection separation [9, 2, 28, 29]. Although these methods aim to suppress reflection artifacts to restore the transmitted scene, the contents of the reflection are usually not considered. While it may be difficult to reconstruct both scenes, it is important to infer both scenes jointly as they are entangled in a single observation. Furthermore, the recovered reflected scene itself may be useful for various applications such as surveillance and image understanding.
To address the above issues, we first propose a new assumption on the observed scene: the categories of the transmitted and reflected scenes are known. It is a valid assumption in practice since we take a picture of an object or scene of interest while knowing what is being reflected. Likewise, other ill-posed problems such as deblurring and super-resolution often make similar assumptions, e.g., in the cases of faces, text, and night scenes [18, 19, 10, 31]. As such, we demonstrate that it is not necessary to make other assumptions, e.g., a blurry reflected scene. In addition, our algorithm does not rely on a particular approximation model of the reflection, as that may restrict the algorithm to a specific case. Instead, we leverage the fact that an observed image contains the contents of both the transmitted and reflected scenes. This leads us to model an observed image in a feature space instead of as a pixel-level combination.
Based on the above assumption, we pose the reflection separation problem as a conditional image generation task (see Figure 1). Although image generation is a difficult problem, notable success has been achieved in transforming an input image into other domains or styles [11, 7, 33]. In this work, we use generative adversarial networks (GANs) to infer the two scenes jointly based on a novel network architecture. By generating images, the proposed algorithm differs fundamentally from previous reflection separation methods in that plausible reflected scenes can be obtained. Furthermore, the network can be trained in a weakly supervised manner, i.e., only labels of transmitted and reflected scenes are needed. This enables us to train the network on real data without the tedious effort of gathering the corresponding ground truths for $T$ and $R$.
We carry out experiments on real data collected from the Internet and synthetic data based on the Places dataset, which consists of 8 million images of 365 scene categories. For both datasets, we evaluate the proposed algorithm against state-of-the-art methods for single image reflection separation. Quantitative and qualitative results show that the proposed algorithm suppresses reflection artifacts in the transmitted scene and infers the reflected scene properly.
2 Related Work
As reflection separation is an ill-posed problem, additional information or assumptions are needed to make the problem tractable. In earlier methods, multiple images of a target scene taken under different conditions are used, for example, focus/defocus pairs, flash/non-flash pairs, or pairs with different polarization angles [23, 12]. For videos, it is possible to decorrelate the motion between the transmitted scene and the reflected scene [21, 26, 6]. However, it may be difficult to apply these methods in practice since multiple images captured from controlled experimental setups are not always available.
Reflection separation using a single image has recently attracted increasing attention due to its practical importance, although the problem is more difficult than in multiple-image cases. User annotations can guide the separation by formulating it as a constrained optimization problem that relies on a sparse gradient prior of natural images.
For automatic single image reflection separation, existing methods focus on each of the following three different conditions. First, the depth of field (DoF) of the lens is shallow and the image is focused on the transmitted scene. This causes out-of-focus blur in the reflected scene when its distance from the window differs from that of the transmitted scene. Thus, blurry edges become useful cues for reflection separation. Wan et al. propose a pixel-wise DoF confidence map obtained by a multi-scale search. In another approach, a Markov random field with expectation maximization is used to filter out weak edges; the energy function is composed of the gradient profile sharpness and the spatial smoothness of edges. A recent optimization-based approach uses a Laplacian data fidelity term and an $\ell_0$ gradient prior term to suppress reflections. Fan et al. propose a two-step deep architecture based on convolutional neural networks. Given an input image and an edge map, it first predicts the edges of a target scene and then reconstructs the target scene based on the input image and predicted edges.
Second, the DoF is deep and covers both the transmitted and reflected scenes. Thus, the observed image contains sharp edges from both scenes. While this happens frequently, it is the most challenging case due to the lack of visual cues for separation. Levin et al. [14, 15] rely on a prior that the gradients and local features of natural images are statistically sparse. However, it is difficult to apply these methods when the textures of an image become complex.
Third, the DoF is deep and the glass is relatively thick. In this case, a ghost effect of the reflected scene is caused by light rays that penetrate the outer surface of the glass but reflect on the inner surface. Shih et al. estimate a ghost kernel and use a Gaussian mixture model to separate the scenes. However, this is not applicable to other types of reflections that do not exhibit notable ghost effects.
3 Proposed Algorithm
Details of each network. # Filters denotes the number of filters. BN denotes batch normalization. Conv denotes a convolutional layer. F-Conv denotes a fractionally-strided (transposed) convolutional layer.
Figure 2 shows design options of network architectures for single image reflection separation. From baseline models to the proposed network, we discuss the limitations and remedies of each model. All networks have two generation branches: one for a transmitted scene and the other for a reflected scene. For realistic generations, we apply adversarial losses using discriminators. Let $(G_T, D_T)$ and $(G_R, D_R)$ denote the pairs of a generator and a discriminator for the transmission and reflection branches, respectively. Then, an adversarial loss for the transmission branch is defined as follows:

$$\mathcal{L}_{adv}^{T} = \mathbb{E}\left[\log D_T(T \mid I)\right] + \mathbb{E}\left[\log\left(1 - D_T(G_T(I) \mid I)\right)\right] \quad (1)$$
An adversarial loss for the reflection branch is defined in a similar way. The network architecture is described in Table 1.
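The structure of the adversarial objective for one branch can be sketched as follows (a NumPy sketch with discriminator outputs treated as probabilities; the clipping constant is an implementation detail, not a value from the paper):

```python
import numpy as np

def adversarial_loss(d_real, d_fake, eps=1e-8):
    """Standard GAN objective for one branch: the value of
    E[log D(x)] + E[log(1 - D(G(z)))], estimated over real scores
    d_real and fake scores d_fake (both probabilities in (0, 1)).
    The discriminator maximizes this; the generator minimizes the
    second term."""
    d_real = np.clip(np.asarray(d_real, dtype=np.float64), eps, 1.0 - eps)
    d_fake = np.clip(np.asarray(d_fake, dtype=np.float64), eps, 1.0 - eps)
    return np.mean(np.log(d_real)) + np.mean(np.log(1.0 - d_fake))
```

A perfect discriminator (real scores near 1, fake scores near 0) drives this value toward its maximum of 0; the reflection branch uses the same form with $(G_R, D_R)$.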
3.1 Reflection Separation using Synthetic Images
This approach trains a network to separate synthesized images and evaluates it on real images. We first describe our synthesis schemes and then discuss network designs.
Existing works synthesize an observed image using a blurring model, $I = \alpha T + (1 - \alpha)(K \otimes R)$, or a ghost model based on a ghost kernel $G$, where $\alpha$ is a scalar weight, $K$ is a blurring kernel, and $\otimes$ denotes a convolution operation. These models assume blurry edges or ghost effects of a reflected scene to make the problem tractable. We aim to drop such assumptions, and thus two more synthesis models are added, i.e., a linear model and a clipping model without the blurring step. In addition, we change the ghost model to $I = \alpha T + (1 - \alpha)(G \otimes R)$ for training, since the original ghost model does not define $T$ and $R$ as the original scenes before the transmission and reflection.
There are parameters in each model, e.g., the combination weight $\alpha$ and the blur kernel in the blurring model. For flexibility and generalization ability, we synthesize images using random parameters for each mini-batch. For example, we pick $\alpha$ uniformly at random, while other methods use one or two fixed values [2, 16]. In addition, we perform random left-right flipping and cropping for data augmentation. To the best of our knowledge, this is the first work that deals with these diverse reflection models.
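A minimal sketch of the blurring synthesis model with randomized parameters might look as follows (the sampling ranges and kernel size are illustrative assumptions, not values from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

def gaussian_kernel(size, sigma):
    # 1D normalized Gaussian kernel centered on the window.
    x = np.arange(size) - (size - 1) / 2.0
    k = np.exp(-x**2 / (2.0 * sigma**2))
    return k / k.sum()

def blur(img, size, sigma):
    # Separable Gaussian blur applied along both image axes.
    k = gaussian_kernel(size, sigma)
    img = np.apply_along_axis(np.convolve, 0, img, k, mode="same")
    return np.apply_along_axis(np.convolve, 1, img, k, mode="same")

def synthesize_blur_model(T, R):
    """Blurring model with per-mini-batch random parameters: the
    reflected scene is low-pass filtered before the linear combination
    I = alpha * T + (1 - alpha) * blur(R)."""
    alpha = rng.uniform(0.6, 0.9)   # random combination weight (illustrative range)
    sigma = rng.uniform(1.0, 3.0)   # random blur strength (illustrative range)
    return alpha * T + (1.0 - alpha) * blur(R, size=7, sigma=sigma)

T, R = np.ones((8, 8)), np.zeros((8, 8))
I = synthesize_blur_model(T, R)  # equals alpha * T here, since blur(0) = 0
```

Redrawing $\alpha$ and $\sigma$ for every mini-batch, rather than fixing them, is what gives the network exposure to the diverse reflection models described above.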
Network design for synthetic image reflection separation. The first baseline model simply maps an input image into two domains using an encoder and two domain-specific decoders based on the U-net architecture, as shown in Figure 2(a). The loss function for this network is an extension of the adversarial losses as follows:

$$\mathcal{L} = \mathcal{L}_{adv}^{T} + \mathcal{L}_{adv}^{R} + \lambda \left( \left\| G_T(I) - T \right\|_1 + \left\| G_R(I) - R \right\|_1 \right) \quad (2)$$
where $\lambda$ controls the relative importance of the objectives. This asks the generators not only to fool the discriminators but also to produce outputs similar to the ground-truth images. However, this simple extension produces numerous artifacts and fails to separate the two scenes suitably, as shown in Figure 3. We attribute this to a lack of communication between the two generation branches. As the two images should be generated jointly in our problem, we need extra channels for them to communicate with each other.
To address this issue, we add a reconstruction branch and share the weights of its first few layers with the generation branches, as shown in Figure 2(b). In this case, the loss function additionally includes a reconstruction loss between the observed image and the output of the reconstruction branch.
Note that this differs from conventional weight sharing schemes in which the early weights of the two generation branches are shared (i.e., the semantics of the two domains are shared). In our problem, in contrast, the semantics are often not shared between a transmitted scene and a reflected scene; instead, both scenes share semantics with the observed image. Thus, by putting a reconstruction branch in the middle, all networks can communicate with each other during generation. This approach helps decrease artifacts and separates the scenes better, as shown in Figure 3.
The third model is shown in Figure 2(c). It aims to make use of high-level information in addition to pixel-level appearance to assess generated images. In this paper, we minimize the difference in contents between the generated images and the input image. A straightforward solution is to put a constraint that the generated images should reconstruct the input image as faithfully as possible using the synthesis model. However, this is a limited approach since the real synthesis models are unknown or non-differentiable. To address this issue, we compare feature maps of the input and generated images. In Figure 2(c), we introduce a new variable $\alpha$ which estimates the ratio of contents between the transmitted scene and the reflected scene in $I$. Using this parameter, a content loss between the observed scene and the generated scenes is defined as follows:

$$\mathcal{L}_{content} = \sum_{l} \frac{1}{V_l} \left\| \phi_l(I) - \left( \alpha\, \phi_l(G_T(I)) + (1 - \alpha)\, \phi_l(G_R(I)) \right) \right\|_1 \quad (4)$$
where $\phi_l$ and $V_l$ denote the feature map and the volume of the $l$-th layer of the encoder, respectively. It allows us to train the network for arbitrary synthesis models. The loss for the network is defined as follows:

$$\mathcal{L} = \mathcal{L}_{adv}^{T} + \mathcal{L}_{adv}^{R} + \lambda_1 \left( \left\| G_T(I) - T \right\|_1 + \left\| G_R(I) - R \right\|_1 \right) + \lambda_2\, \mathcal{L}_{content} \quad (5)$$
where the weights $\lambda_1$ and $\lambda_2$ are fixed for all experiments.
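A feature-space content loss of the form described above can be sketched as follows (a NumPy sketch; the arrays stand in for encoder activations, and the exact normalization in the paper may differ):

```python
import numpy as np

def content_loss(feat_I, feat_T, feat_R, alpha):
    """Feature-space content loss: for each encoder layer, the input
    features should match the alpha-weighted combination of the features
    of the two generated scenes, normalized by the layer volume.
    feat_I, feat_T, feat_R are lists of per-layer feature arrays."""
    loss = 0.0
    for f_i, f_t, f_r in zip(feat_I, feat_T, feat_R):
        mix = alpha * f_t + (1.0 - alpha) * f_r
        loss += np.abs(f_i - mix).sum() / f_i.size  # L1, volume-normalized
    return loss
```

Because the comparison happens in feature space rather than through an explicit synthesis model, this term stays well-defined even when the true combination of $T$ and $R$ is unknown or non-differentiable.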
3.2 Reflection Separation using Real Images
In this section, we describe how to train a network using real observations without ground-truth images of the transmitted and reflected scenes. Note that the networks discussed so far rely on synthetic images, since it is difficult to obtain the ground-truth $T$ and $R$ corresponding to a real observation. Even in a recent attempt at data collection, the original scene before reflection cannot be obtained. This limits the performance of reflection separation due to the difference between real observations and synthesized images.
One of the largest gaps between real and synthesized images is due to the combination weight in the synthesis model. For real images, reflected scenes are often dominant in certain regions of an observed image, where transmitted signals are unrecognizable and can only be guessed. For example, as shown in Figure 4, cafe interiors are not visible due to reflected buildings. However, state-of-the-art synthesis methods use a scalar weight to combine two scenes. To alleviate this issue, we predict a mask map $M$ using a new branch instead of a scalar, as shown in Figure 2(d). For each location, its value lies between 0 and 1, which represents a confidence score that the pixel belongs to the transmitted scene. As such, the observed image is decomposed through pixel-wise multiplication with $M$ and its complement $1 - M$. The confidence map prediction branch and the other generation branches are trained iteratively.
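The mask-based combination can be illustrated as follows (a sketch; `masked_decomposition` is a hypothetical helper, and in the paper $M$ is predicted by a network branch rather than given as input):

```python
import numpy as np

def masked_decomposition(I, M):
    """Pixel-wise confidence map M in [0, 1]: M * I keeps the content
    attributed to the transmitted scene, (1 - M) * I the content
    attributed to the reflected scene. Unlike a scalar weight, M lets
    the mixing ratio vary per pixel."""
    M = np.clip(M, 0.0, 1.0)
    T_part = M * I
    R_part = (1.0 - M) * I
    return T_part, R_part

I = np.array([[0.2, 0.8]])
M = np.array([[1.0, 0.25]])   # left pixel fully transmitted, right mostly reflected
T_part, R_part = masked_decomposition(I, M)
```

By construction the two parts sum back to the observation, which is the spatially varying analogue of the scalar-weight models discussed earlier.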
The network can be trained in a weakly supervised manner, i.e., tuples of $(I, T', R')$ are given, where $T'$ and $R'$ are images belonging to the same categories as the transmitted and reflected scenes included in $I$, respectively. In this case, the discriminators are not conditioned on $I$. Thus, (1) is changed to:

$$\mathcal{L}_{adv}^{T} = \mathbb{E}\left[\log D_T(T')\right] + \mathbb{E}\left[\log\left(1 - D_T(G_T(I))\right)\right]$$
In addition, the loss function in (5) should be changed for weakly supervised learning. By combining all the losses, the overall loss function becomes the sum of the two adversarial losses and the content loss.
It only uses $T'$ and $R'$ to compute the adversarial loss, which does not require the exact ground-truth $T$ and $R$ of $I$. Note that this allows us to learn the reflection separation problem without any approximate synthesis models.
4 Experimental Results
We first describe the experimental settings. We use the Places dataset, which contains 8 million images of 365 scene categories, to synthesize images. As shown in Figure 5, it has diverse images that cover large variations in lighting conditions, viewpoints, distances to the scene, and the number of objects in the scene. For each experiment, two scene categories are selected to synthesize training images. For real images, we collect 178 photos of cafes taken from the outside from the Internet. As shown in Figure 4, the images contain challenging reflections.
Table 2. PSNR (dB) comparison with state-of-the-art methods on synthetic images, with and without the blur assumption.

| T: Airfield, R: Hotel room | PSNR | 14.4 | 13.1 | 17.3 | 17.4 | 18.2 | 10.1 | 10.2 | - | 7.28 | 18.9 |
| without blur assumption | PSNR | 13.8 | - | 17.4 | 17.4 | 21.5 | 10.0 | - | - | 7.2 | 17.5 |
| without blur assumption | PSNR | 13.6 | - | 15.1 | 15.6 | 22.8 | 9.67 | - | - | 7.3 | 17.5 |
| T: Skyscraper, R: Conference room | PSNR | 14.5 | 14.5 | 17.6 | 17.4 | 21.1 | 9.80 | 10.8 | - | 7.30 | 16.5 |
| without blur assumption | PSNR | 14.3 | - | 17.0 | 16.7 | 20.4 | 9.70 | - | - | 7.1 | 16.8 |
| without blur assumption | PSNR | 12.9 | - | 14.5 | 15.6 | 20.7 | 9.24 | - | - | 7.3 | 16.3 |
For training, the networks shown in Figure 2(b), Figure 2(c), and Figure 2(d) share the weights of the first three layers in the generator. For each mini-batch, we first resize the images. A rectangular patch is then cropped with its height and width sampled uniformly at random and resized to a fixed resolution for training. For the blurring model, we pick the size and standard deviation of the Gaussian kernel uniformly at random. More quantitative and qualitative results are presented in the supplementary material. The source code will be made available to the public.
4.1 Synthetic Images
In this section, we show reflection separation results using a network trained on synthetic data. We evaluate the proposed algorithm against the state-of-the-art methods [4, 2, 24]. As they use different approximation methods to model the reflection, we provide results for all cases.
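Table 2 reports results in PSNR; for reference, the metric can be computed as follows for images scaled to [0, 1]:

```python
import numpy as np

def psnr(x, y, peak=1.0):
    """Peak signal-to-noise ratio in dB between a reference image x and
    a reconstruction y, for intensities in [0, peak]."""
    mse = np.mean((np.asarray(x, dtype=np.float64) -
                   np.asarray(y, dtype=np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak**2 / mse)
```

Higher is better; a constant offset of 0.5 on a unit-range image, for instance, corresponds to a PSNR of about 6 dB, so the differences of several dB in Table 2 are substantial.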
Figure 6 shows reflection separation results when the clipping model is used to synthesize images. We alternate the use of the blur assumption across rows. In most cases, the proposed network in Figure 2(c) successfully removes reflection artifacts from the transmitted scene and recovers the reflected scene reasonably well. In contrast, other methods generate unclear transmitted scenes and barely recognizable reflected scenes. When the blur assumption is not used, all other methods fail to suppress reflections. In addition, they are sensitive to the synthesis model, which shows that they are designed specifically around their assumptions. On the other hand, the proposed algorithm performs favorably in all cases, and Table 2 confirms this quantitatively.
A network trained with two categories is evaluated on different types of scenes. Figure 7 shows the results when the proposed network is trained on the airfield and hotel room categories and then tested on various scenes using the blurring model. When the category of the reflected scene is changed, as shown in Figure 7(a), or both are changed, as shown in Figure 7(b), the transmitted scene is well restored while the reflected scene is not realistic. On the other hand, when the transmitted scene is changed, as shown in Figure 7(c) and Figure 7(d), both scenes are recovered properly. These results indicate that the transmission branch learns to remove blurry regions and generate the image based on the context of sharp regions, while the reflection branch focuses on recovering the original scene from a blurry observation. However, as presented in the supplementary material, the network is not able to separate reflections in real images. The state-of-the-art methods based on the same synthesis models also fail to separate reflections. These findings are consistent with the observation in prior work that existing methods do not perform well on real images. Although we add randomness while synthesizing training images to increase the generalization ability of the network as mentioned before, we attribute this failure to non-realistic synthesis models. The separation performance can be improved when a realistic synthesis model is given, since the proposed network can be trained with any synthesis method.
For weakly supervised reflection separation, the training data is split into two halves. The first half is used to synthesize observed images, and the second half is fed to the discriminator as real images for training. Figure 8 shows that the proposed algorithm separates a synthesized scene well without any ground-truth images of the transmission and reflection.
4.2 Real Images
Figure 9 shows results of the network in Figure 2(d) trained with real data in a weakly supervised manner. Input images contain not only sharp or blurry reflections but also spatially non-uniform reflection regions. For example, in the second row of Figure 9(a), most of the cafe interior is not visible. If reflections were simply suppressed, the remaining transmitted scene would be dark and less informative. Moreover, it is incorrect to return a dark transmitted scene in this case, since we can observe that the lights of the cafe are turned on. Therefore, separating real input images is challenging, as we need to infer invisible regions at the same time.
As this issue is not handled by state-of-the-art methods, they barely suppress or separate reflections, as shown in Figure 9(b), Figure 9(c), and Figure 9(d). On the other hand, the proposed network can decompose an input image into two reasonable scenes. Figure 9(e) and Figure 9(f) show the restored transmitted and reflected scenes. The confidence map of the transmitted scene is shown in Figure 9(g). Figure 9(h) and Figure 9(i) are masked scenes for better visualization of the confidence map.
Single image reflection separation is an underdetermined problem with many candidate solutions. In Figure 10, we show separation results for the same input image using two different random seeds. For each pair, while the networks capture similar parts for realistic separations, the generated images differ in the structure and appearance of the scene.
5 Conclusions
We propose an algorithm for the single image reflection separation problem. As it is an ill-posed problem, we assume that the categories of the transmitted and reflected scenes are known. This allows us to remove conventional assumptions, such as a blurry reflected scene, that are not realistic in many cases. We design convolutional neural networks based on adversarial losses that separate an observed image into the transmitted scene and reflected scene by generating them. Experimental results show that the proposed algorithm performs favorably against state-of-the-art methods, particularly for recovering reflected scenes. For synthetic images, the transmitted scene is reliably restored without knowing the types of the two scenes. In addition, we demonstrate that the network can be trained in a weakly supervised manner, i.e., the network is trained on real images only.
-  A. Agrawal, R. Raskar, S. K. Nayar, and Y. Li. Removing photography artifacts using gradient projection and flash-exposure sampling. ACM Transactions on Graphics, 24(3):828–835, 2005.
-  N. Arvanitopoulos Darginis, R. Achanta, and S. Süsstrunk. Single image reflection suppression. In Proc. of the IEEE Conference on Computer Vision and Pattern Recognition, 2017.
-  H. Barrow and J. Tenenbaum. Computer vision systems. Computer vision systems, 2, 1978.
-  Q. Fan, J. Yang, G. Hua, B. Chen, and D. Wipf. A generic deep architecture for single image reflection removal and image smoothing. In Proc. of the IEEE International Conference on Computer Vision, 2017.
-  R. Fergus, B. Singh, A. Hertzmann, S. T. Roweis, and W. T. Freeman. Removing camera shake from a single photograph. ACM Transactions on Graphics, 25(3):787–794, 2006.
-  K. Gai, Z. Shi, and C. Zhang. Blind separation of superimposed moving images using image statistics. IEEE Transactions on Pattern Analysis and Machine Intelligence, 34(1):19–32, 2012.
-  L. A. Gatys, A. S. Ecker, and M. Bethge. Image style transfer using convolutional neural networks. In Proc. of the IEEE Conference on Computer Vision and Pattern Recognition, 2016.
-  I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio. Generative adversarial nets. In Advances in Neural Information Processing Systems, 2014.
-  B.-J. Han and J.-Y. Sim. Reflection removal using low-rank matrix completion. In Proc. of the IEEE Conference on Computer Vision and Pattern Recognition, 2017.
-  Z. Hu, S. Cho, J. Wang, and M.-H. Yang. Deblurring low-light images with light streaks. In Proc. of the IEEE Conference on Computer Vision and Pattern Recognition, 2014.
-  P. Isola, J.-Y. Zhu, T. Zhou, and A. A. Efros. Image-to-image translation with conditional adversarial networks. In Proc. of the IEEE Conference on Computer Vision and Pattern Recognition, 2017.
-  N. Kong, Y.-W. Tai, and J. S. Shin. A physically-based approach to reflection separation: from physical modeling to constrained optimization. IEEE Transactions on Pattern Analysis and Machine Intelligence, 36(2):209–221, 2014.
-  A. Levin and Y. Weiss. User assisted separation of reflections from a single image using a sparsity prior. IEEE Transactions on Pattern Analysis and Machine Intelligence, 29(9), 2007.
-  A. Levin, A. Zomet, and Y. Weiss. Learning to perceive transparency from the statistics of natural scenes. In Advances in Neural Information Processing Systems, 2003.
-  A. Levin, A. Zomet, and Y. Weiss. Separating reflections from a single image using local features. In Proc. of the IEEE Conference on Computer Vision and Pattern Recognition, 2004.
-  Y. Li and M. S. Brown. Single image layer separation using relative smoothness. In Proc. of the IEEE Conference on Computer Vision and Pattern Recognition, 2014.
-  M.-Y. Liu and O. Tuzel. Coupled generative adversarial networks. In Advances in Neural Information Processing Systems, 2016.
-  J. Pan, Z. Hu, Z. Su, and M.-H. Yang. Deblurring face images with exemplars. In Proc. of the European Conference on Computer Vision, 2014.
-  J. Pan, Z. Hu, Z. Su, and M.-H. Yang. Deblurring text images via L0-regularized intensity and gradient prior. In Proc. of the IEEE Conference on Computer Vision and Pattern Recognition, 2014.
-  O. Ronneberger, P. Fischer, and T. Brox. U-net: Convolutional networks for biomedical image segmentation. In Proc. of the International Conference on Medical Image Computing and Computer-Assisted Intervention, 2015.
-  B. Sarel and M. Irani. Separating transparent layers through layer information exchange. In Proc. of the European Conference on Computer Vision, 2004.
-  Y. Y. Schechner, N. Kiryati, and R. Basri. Separation of transparent layers using focus. International Journal of Computer Vision, 39(1):25–39, 2000.
-  Y. Y. Schechner, J. Shamir, and N. Kiryati. Polarization and statistical analysis of scenes containing a semireflector. Journal of the Optical Society of America A, 17(2):276–284, 2000.
-  Y. Shih, D. Krishnan, F. Durand, and W. T. Freeman. Reflection removal using ghosting cues. In Proc. of the IEEE Conference on Computer Vision and Pattern Recognition, 2015.
-  J. A. Soares. Introduction to optical characterization of materials. In Practical Materials Characterization, pages 43–92. 2014.
-  R. Szeliski, S. Avidan, and P. Anandan. Layer extraction from multiple images containing reflections and transparency. In Proc. of the IEEE Conference on Computer Vision and Pattern Recognition, 2000.
-  R. Wan, B. Shi, L.-Y. Duan, A.-H. Tan, and A. C. Kot. Benchmarking single-image reflection removal algorithms. In Proc. of the IEEE International Conference on Computer Vision, 2017.
-  R. Wan, B. Shi, T. A. Hwee, and A. C. Kot. Depth of field guided reflection removal. In IEEE International Conference on Image Processing, 2016.
-  T. Xue, M. Rubinstein, C. Liu, and W. T. Freeman. A computational approach for obstruction-free photography. ACM Transactions on Graphics, 34(4):79, 2015.
-  Q. Yan, Y. Xu, X. Yang, and T. Nguyen. Separation of weak reflection from a single superimposed image. IEEE Signal Processing Letters, 21(10):1173–1176, 2014.
-  C.-Y. Yang, S. Liu, and M.-H. Yang. Structured face hallucination. In Proc. of the IEEE Conference on Computer Vision and Pattern Recognition, 2013.
-  B. Zhou, A. Lapedriza, A. Khosla, A. Oliva, and A. Torralba. Places: A 10 million image database for scene recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2017.
-  J.-Y. Zhu, T. Park, P. Isola, and A. A. Efros. Unpaired image-to-image translation using cycle-consistent adversarial networks. arXiv preprint arXiv:1703.10593, 2017.