Shadows are present in most natural images. Shadow effects make objects harder to detect or segment, and scenes with shadows are harder to process and analyze. Realistic shadow removal is an integral part of image editing and can greatly improve performance on various computer vision tasks [32, 41, 56, 24, 21]; it has received increasing attention in recent years [37, 13, 11]. Data-driven approaches using deep-learning models have achieved remarkable performance on shadow removal [5, 22, 17, 15, 47, 55] thanks to recent large-scale datasets [45, 47].
Most current deep-learning shadow removal approaches are end-to-end mapping functions trained in a fully supervised manner. Such systems require pairs of shadow images and their shadow-free counterparts as training signals. However, this type of data is cumbersome to obtain, lacks diversity, and is error-prone: all current shadow removal datasets exhibit color mismatches between the shadow images and their shadow-free ground truth (see Fig. 1, left panel). Moreover, there are no images with self-cast shadows because the occluders are never visible in the image in the current data acquisition setups [47, 37, 15]. This dependency on paired data significantly hinders building large-scale, robust shadow-removal systems. A recent method trying to overcome this issue is MaskShadow-GAN, which learns shadow removal from unpaired shadow and shadow-free images. However, such CycleGAN-based systems usually require sufficient statistical similarity between the two sets of images [25, 2]. This requirement can be hard to satisfy when capturing shadow-free images is tricky, such as shadow-free images of urban areas or moving objects [18, 36].
In this paper, we propose an alternative solution to the data dependency issue. We first observe that image patches alongside the shadow boundary contain critical information for shadow removal, including non-shadow, umbra, and penumbra areas. They sufficiently reflect the characteristics of the shadowing effects, including the color differences between shadow and non-shadow areas as well as the gradual changes of the shadow effects across the shadow boundary [34, 33, 14]. If we further assume that the shadow effects are fairly consistent in the umbra areas, a patch-based shadow removal can be used to remove shadows in the whole image. Based on this observation, we propose training a patch-based shadow removal system for which we use unpaired shadow and non-shadow patches directly cropped from the shadow images themselves as training data. This approach eliminates the dependency on paired training data and opens up the possibility of handling different types of shadows, since it can be trained with any kind of shadow image. Compared to MaskShadow-GAN, shadow and non-shadow patches cropped from the same image naturally ensure significant statistical similarity. The only supervision required in this data processing scheme is the shadow masks, which are relatively easy to obtain, either manually, semi-interactively [45, 11], or automatically using shadow detection methods [5, 59, 57, 23]. Automatic shadow detection is improving, with the main challenge being generalization across datasets. At some point, one can expect to get very good shadow masks automatically, which would allow training our shadow removal method with very little annotation cost.
In particular, to obtain shadow and shadow-free patches, we crop the shadow images into small overlapping patches with a fixed step size. Based on the shadow masks, we group these patches into three sets: a non-shadow set containing patches having no shadow pixels, a shadow-boundary set containing patches lying on the shadow boundaries, and a full-shadow set containing patches where all pixels are in shadow. With a small enough patch size and step size, we can obtain enough training patches in each set. With this training set, we train a shadow removal system to learn a mapping from patches in the shadow-boundary set to patches in the non-shadow set. Essentially, this mapping needs to infer the color difference alongside the shadow edges, including the chromatic attributes of the light source and the smooth change of the shadow effects across the shadow boundary, in order to transform a shadow patch to a non-shadow patch. This is, in spirit, similar to early shadow removal approaches that focus on shadow edges to remove shadows [38, 9, 8, 44, 46].
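The patch-grouping step above can be sketched as follows. This is a minimal NumPy illustration, not the paper's implementation; the `patch_size` and `step` values are placeholders, not the paper's settings:

```python
import numpy as np

def group_patches(image, mask, patch_size=16, step=8):
    """Crop an image into overlapping patches and group them by shadow content.

    `mask` is a binary shadow mask (1 = shadow) aligned with `image`.
    Returns (non_shadow, boundary, full_shadow) lists of patches.
    """
    non_shadow, boundary, full_shadow = [], [], []
    h, w = mask.shape
    for y in range(0, h - patch_size + 1, step):
        for x in range(0, w - patch_size + 1, step):
            patch = image[y:y + patch_size, x:x + patch_size]
            m = mask[y:y + patch_size, x:x + patch_size]
            if m.sum() == 0:            # no shadow pixels -> non-shadow set
                non_shadow.append(patch)
            elif m.sum() == m.size:     # fully in shadow -> full-shadow set
                full_shadow.append(patch)
            else:                       # straddles the boundary -> boundary set
                boundary.append(patch)
    return non_shadow, boundary, full_shadow
```

With a small step size relative to the patch size, neighboring patches overlap heavily, which is what yields enough training samples in each of the three sets.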
This mapping can be estimated via an adversarial framework. In particular, we seek a mapping function that takes as input a shadow-boundary patch and outputs an image patch, such that a critic function cannot distinguish whether that patch was drawn from the non-shadow set or generated by the mapping. Note that one potential solution here is to use CycleGAN or MaskShadow-GAN to estimate this transformation. However, the mapping functions learned by these methods are not able to remove shadows from patches in the full-shadow set.
Training such an unpaired image-to-image mapping for shadow removal is challenging. The mapping is under-constrained and training can collapse easily [12, 28, 27, 30, 42, 31]. Here, we propose to systematically constrain the shadow removal process by a physical model of shadow formation and incorporate a number of physical properties of shadows into the framework. We show that these physics-based priors define a transformation closely modelling shadow removal. Driven by an adversarial signal, our framework effectively learns physically-plausible shadow removal without any direct supervision from paired data. Specifically, we constrain the shadow removal process to a shadow image decomposition model that extracts a set of parameters and a matting layer from the shadow image. This set of shadow parameters is responsible for removing shadows in the umbra areas of the shadows via a linear function. Thus, once we estimate these shadow parameters from shadow-boundary patches, we can use them to remove shadows from patches fully covered by the same shadow, under the assumption that they share the same set of shadow parameters. Based on the physical properties of shadows, we apply the following constraints to the model:
We limit the search space of the shadow parameters and shadow matte to the appropriate value ranges that correspond to shadow removal.
Our matting and smoothness losses ensure that shadow removal only happens in the shadow areas and transitions smoothly across shadow boundaries.
Our boundary loss on the generated shadow-free image enforces color similarity between the inner and outer areas alongside shadow boundaries.
With these constraints and the adversarial signal, our method achieves shadow removal results that are competitive with state-of-the-art methods that were trained in a fully supervised manner with paired shadow and non-shadow images [22, 47, 37]. We further compare our method to state-of-the-art methods on a novel and challenging video shadow removal dataset including static videos with various scenes and shadow conditions. This test exposes the weaknesses of data-driven methods trained on datasets lacking diversity. Our patch-based method seems to generalize better than other methods when evaluated on this video shadow removal test. Most importantly, we can easily fine-tune our pre-trained model on a single testing video to further improve shadow removal results, showcasing this advantage of our training scheme.
In short, our contributions are:
We propose the use of an adversarial critic to train a shadow remover from unpaired shadow and non-shadow patches, providing an alternative solution to the paired data dependency issue.
We propose a set of physics-based constraints that define a transformation closely modelling shadow removal, which enables shadow remover training with only an adversarial training signal.
Our system trained without any shadow-free images has competitive results compared to fully-supervised state-of-the-art methods on the ISTD dataset.
We collect a novel video shadow removal dataset. Our shadow removal system can be fine-tuned for free to better remove shadows on testing videos.
2 Related Work
Shadows are physical phenomena. Early shadow removal works, without much training data, usually focused on studying different physical shadow properties [8, 7, 9, 6, 1, 10, 26, 53]. Many works look for cues to remove shadows starting from shadow edges. Finlayson et al. used shadow edges to estimate a scaling factor that differentiates shadow areas from their non-shadow counterparts. Wu & Tang imposed a smoothness constraint alongside the shadow boundaries to handle penumbra areas. Wu et al. detected strong shadow edges to remove shadows on the whole image. Shor & Lischinski defined an affine relationship between shadow and non-shadow pixels, where they used the areas surrounding the shadow edges to estimate the parameters of such affine transforms.
Other methods estimated a matte layer representing the pixel-wise shadow probability in order to estimate a color transfer function that removes shadows. Chuang et al. computed a shadow matte from video for shadow editing. They computed the lit and shadow images by finding min-max values at each pixel location throughout all frames of a video captured by a static camera. We use this technique to create a video dataset for testing shadow removal methods in Sec. 4.4.
Current shadow removal methods [22, 17, 55, 5, 47] use deep-learning models trained with full supervision on large-scale datasets [47, 37] of paired shadow and shadow-free images. Pairs are obtained by taking a photo with shadows, then removing the occluders from the scene to take the photo without shadows. Deshadow-Net extracted multi-context features to predict a matte layer that removes shadows. Some works use adversarial frameworks to train their shadow removal. One unified adversarial framework predicted shadow masks and removed shadows. Similarly, Ding et al. used an adversarial signal to improve shadow removal in an iterative manner. Note that these methods use the shadow-free image as the main training signal, while our method is trained only through an adversarial loss. In prior work, we constrained shadow removal by a physical model of shadow formation. We trained networks to extract shadow parameters and a matte layer to remove shadows. We adapt this model to patch-based shadow removal. Note that in that work, all shadow parameters and matting layers were pre-computed using paired training images and the network was trained to simply regress those values, whereas our model automatically estimates them through adversarial training. MaskShadow-GAN is the only deep-learning method that learns shadow removal from just unpaired training data.
We describe our patch-based shadow removal in Sec. 3.1. Our whole-image pipeline for shadow removal is described in Sec. 3.2. For both image-level and patch-level shadow removal, we use shadow matting [3, 35, 40, 49] to express a shadow-free image by:

I_i^shadow-free = α_i · I_i^shadow + (1 − α_i) · I_i^relit,     (1)

with I^shadow the shadow image, α the matting layer, and I^relit the relit image. The relit image contains shadow pixels relit to their non-shadow values, computed via a linear function following a physical shadow formation model [22, 39]:

I_i^relit = w · I_i^shadow + b,     (2)

where the scaling factor w and additive constant b are defined per color channel.
The unknown factors in this shadow matting formula are the set of shadow parameters (w, b), which define the linear function that removes the shadow effects in the umbra areas of the shadow, and the matte layer α, which models the shadow effects on the shadow boundaries. We train a system of three networks to estimate these unknown factors via adversarial training. We use the annotated shadow segmentation masks for training. For testing, we obtain a segmentation mask for the image using the shadow detector proposed by Zhu et al.
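The decomposition in Eqs. (1)–(2) can be sketched with NumPy; the parameter values used in the example are arbitrary illustrations, not trained values:

```python
import numpy as np

def relit(shadow_img, w, b):
    """Eq. (2): per-channel linear relighting of shadow pixels.
    w and b are length-3 arrays (one scaling factor / offset per RGB channel)."""
    return shadow_img * w + b

def compose_shadow_free(shadow_img, alpha, w, b):
    """Eq. (1): blend the shadow image and the relit image with matte alpha.
    alpha is 1 in non-shadow areas (keep the input value) and 0 in the umbra
    (use the relit value); intermediate values model the penumbra."""
    relit_img = relit(shadow_img, w, b)
    return alpha[..., None] * shadow_img + (1.0 - alpha[..., None]) * relit_img
```

Setting alpha to all ones returns the input unchanged, while alpha of all zeros applies the relighting everywhere, which is exactly the behavior the matte constraints in Sec. 3.1 enforce.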
3.1 Patch-based Shadow Removal
Fig. 2 summarizes our framework to remove shadows from a single image patch, which consists of three networks: Param-Net, Matte-Net, and D-Net. Param-Net and Matte-Net predict the shadow parameters and the matte layer respectively to jointly remove shadows. D-Net is the critic distinguishing between the generated image patches and the real shadow-free patches. With Param-Net and Matte-Net being the generators and D-Net being the discriminator, the three networks form an adversarial training framework where the main source of training signal is the set of shadow-free patches.
In theory, as D-Net is trained to distinguish patches containing shadow boundaries from patches without any shadows, a natural solution to fool D-Net is to remove the shadows in the input shadow patches to make them indistinguishable from shadow-free patches. However, such an adversarial signal from D-Net alone often cannot guide the generators (Param-Net and Matte-Net) to actually remove shadows. The parameter search space is very large and the mapping is extremely under-constrained. In practice, we observe that without any constraints, Param-Net tends to output consistently high parameter values, as they directly increase the overall brightness of the image patches, and Matte-Net tends to introduce artifacts similar to visual patterns frequently appearing in the non-shadow areas. Thus, our main idea is to constrain this framework with physical shadow properties. Constraining the output shadow parameters, shadow mattes, and combined shadow-free images forces the networks to only transform the input images in a manner consistent with shadow removal.
First, Param-Net estimates a scaling factor w and an additive constant b, for each R, G, B color channel, to remove the shadow effects on the shadowed pixels in the umbra areas of the shadows via Eq. (2). Here we hypothesize that the main component that explains the shadow effects is the scaling factor w. Accordingly, we bound its search space from below at 1, which ensures that the transformation always scales up the values of the shadowed pixels. We set the search space for b to a small symmetric range around zero (the pixel intensity varies in the range [0, 255]). Our intuition is to force the network to define the mapping mainly via the scaling factor w. We also bound w from above; this upper bound prevents the network from collapsing as w increases. As we show in the ablation study, the network fails to learn shadow removal without proper search space limitation.
Matte-Net estimates a blending layer α that combines the shadow image patch and the relit image patch into a shadow-free image patch via Eq. (1). The value of a pixel i in the output image patch, Î_i, is computed as:

Î_i = α_i · I_i^shadow + (1 − α_i) · I_i^relit.

We map the output of Matte-Net to [0, 1], as α is used as a matting layer, and constrain the value of α_i as follows:
If i indicates a non-shadow pixel, we enforce α_i = 1 so that the value of the output pixel Î_i equals its value in the input image I_i^shadow.
If i indicates a pixel in the umbra areas of the shadows, we enforce α_i = 0 so that the value of the output pixel Î_i equals its relit value I_i^relit.
We do not control the value of α_i in the penumbra areas of the shadows and rely on the training of the network to estimate these values.
Here, the umbra, non-shadow, and penumbra areas can be roughly specified using the shadow mask. We define two areas alongside the shadow boundary, denoted B_out and B_in (see Fig. 3). B_out is the area right outside the boundary, computed by subtracting the shadow mask M from its dilated version. The inside area B_in is computed similarly by subtracting an eroded shadow mask from M. These two areas roughly define a small band surrounding the shadow boundary, which can be considered the penumbra area of the shadow. The above constraints are then implemented as the matting loss, computed for every pixel i by:

L_matte(i) = |α_i − 1| if i lies outside the dilated mask (non-shadow), |α_i| if i lies inside the eroded mask (umbra), and 0 otherwise (penumbra).
Moreover, since the shadow effects are assumed to vary smoothly across the shadow boundaries, we enforce an L1 smoothness loss on the spatial gradients of the matte layer α. This smoothness loss also prevents Matte-Net from producing undesired artifacts, since it enforces local uniformity. This loss is:

L_smooth = ||∇α||_1.
Then, given a set of estimated parameters (w, b) and a matte layer α, we obtain an output image Î via the image decomposition formula (1). We penalize the difference between the average intensity of pixels lying right outside and right inside the shadow boundary, i.e., in the two boundary areas B_out and B_in. This shadow boundary loss is computed by:

L_bd = | mean_{i ∈ B_out} Î_i − mean_{i ∈ B_in} Î_i |.
Last, we compute the adversarial loss L_adv from the feedback of D-Net: the generators are trained to maximize the classification score D(Î) that D-Net assigns to the generated patch, where D(·) denotes the output of D-Net.
The final objective function for training Param-Net and Matte-Net is to minimize a weighted sum of the above losses:

L = L_adv + λ_matte · L_matte + λ_smooth · L_smooth + λ_bd · L_bd.
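The three physics-based losses can be sketched as follows. This is a NumPy illustration under stated assumptions: 3×3 binary morphology defines the boundary bands, the loss weights are omitted, and the band widths are not the paper's settings:

```python
import numpy as np

def dilate(mask):
    """Binary 3x3 dilation via shifted maxima (image borders padded with 0)."""
    p = np.pad(mask, 1)
    h, w = mask.shape
    return np.max([p[i:i + h, j:j + w] for i in range(3) for j in range(3)], axis=0)

def erode(mask):
    """Binary 3x3 erosion as the complement of dilating the complement."""
    return 1 - dilate(1 - mask)

def shadow_losses(alpha, output, mask):
    """Matting, smoothness, and boundary losses for one patch (sketch).
    mask: binary shadow mask; alpha: matte in [0,1]; output: shadow-free image."""
    b_out = dilate(mask) - mask      # band right outside the shadow boundary
    b_in = mask - erode(mask)        # band right inside the shadow boundary
    non_shadow = 1 - dilate(mask)    # pixels safely outside the shadow
    umbra = erode(mask)              # pixels safely inside the shadow
    # Matting loss: push alpha -> 1 outside the shadow, alpha -> 0 in the umbra,
    # leaving the penumbra band unconstrained.
    l_matte = np.abs(alpha - 1)[non_shadow == 1].sum() + np.abs(alpha)[umbra == 1].sum()
    # Smoothness loss: L1 norm of the spatial gradients of the matte.
    l_smooth = np.abs(np.diff(alpha, axis=0)).sum() + np.abs(np.diff(alpha, axis=1)).sum()
    # Boundary loss: mean intensity right outside vs. right inside the boundary.
    l_bd = abs(output[b_out == 1].mean() - output[b_in == 1].mean())
    return l_matte, l_smooth, l_bd
```

A matte that is exactly 1 outside the shadow and 0 inside it incurs zero matting loss, while any residual intensity gap across the boundary of the composed output shows up in the boundary term.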
All these losses are essential for training our networks, as shown in our ablation study in Sec. 4.3. By using all the proposed losses together, our method is able to automatically extract a set of shadow parameters and a matte layer α from an input image patch. Fig. 4 visualizes the components extracted by our framework for two challenging input patches. In the first row, a dark shadow area is lit correctly to its non-shadow value. In the second row, the matte layer is not affected by the dark material of the surface.
3.2 Image Shadow Removal Using a Patch-based Model
We estimate a set of shadow parameters and a matte layer for the input image to remove shadows via Eq. (1). First, we obtain a shadow mask using the shadow detector of Zhu et al. We crop the input shadow image into overlapping patches. All patches containing shadow boundaries are then input into the three networks. We approximate the whole-image shadow parameters from the patch shadow parameters, under the assumption that they share the same or very similar parameters. We simply compute the image shadow parameters as a linear combination of the patch shadow parameters. Similarly, we compute the values of each pixel in the matte layer by combining the overlapping matte patches. We set the matte layer pixels in the non-shadow area to 1 and those in the umbra area to 0. We observe that the classification scores obtained from the critic function D-Net correlate with the quality of the generated image patches. Thus, we normalize these scores to sum to 1 and use them as coefficients for the linear combinations that form the image shadow parameters and matte layer.
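The score-weighted aggregation of patch parameters can be sketched as follows; the input shapes and parameter layout are hypothetical illustrations of the scheme, not the paper's data format:

```python
import numpy as np

def aggregate_params(patch_params, critic_scores):
    """Combine per-patch shadow parameters into image-level parameters,
    weighting each boundary patch by its normalized D-Net score.

    patch_params: (N, 6) array of per-patch [w_r, w_g, w_b, b_r, b_g, b_b]
    critic_scores: (N,) array of D-Net classification scores."""
    weights = np.asarray(critic_scores, dtype=float)
    weights = weights / weights.sum()    # normalize scores to sum to 1
    return (weights[:, None] * np.asarray(patch_params)).sum(axis=0)
```

Patches the critic rates as more convincingly shadow-free thus contribute more to the whole-image estimate, which matches the observation that critic scores correlate with patch quality.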
4.1 Network Architectures and Implementation Details
We use a VGG-19 architecture for Param-Net and a U-Net architecture for Matte-Net. D-Net is a simple 5-layer convolutional network. To map the outputs of the networks to a certain range, we use Tanh functions together with scaling and additive constants. We use stochastic gradient descent with the Adam solver to train our model. The initial learning rate is 0.0002 for Matte-Net and D-Net, and 0.00002 for Param-Net. All networks are trained from scratch, and we set the weights of the training losses experimentally. We train our network with batch size 96 for 150 epochs. All code, trained models, and data are available at: https://www3.cs.stonybrook.edu/~cvl/projects/FSS2SR/index.html
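The Tanh-based range mapping mentioned above can be sketched as a small helper; the bounds in the example are placeholders, not the trained search-space limits:

```python
import numpy as np

def bounded_output(x, low, high):
    """Map an unconstrained network output x to the range [low, high]
    using a Tanh followed by scaling and an additive constant."""
    return low + (high - low) * (np.tanh(x) + 1.0) / 2.0
```

Because Tanh saturates at ±1, the mapped value can never leave [low, high], which is how the search spaces of the shadow parameters and the matte are kept in their valid ranges.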
We use the ISTD dataset for training. Each original training image is cropped into overlapping patches with a step size of 32. This creates 311,220 image patches from 1,330 training shadow images. This training set includes 151,327 non-shadow patches, 147,312 shadow-boundary patches, and 12,581 full-shadow patches.
4.2 Shadow Removal Evaluation
We first evaluate our method on the adjusted testing set of the ISTD dataset [47, 22]. Following previous work [47, 14, 37, 22], we compute the root-mean-square error (RMSE) in the LAB color space on the shadow area, the non-shadow area, and the whole image, where all shadow removal results are resized to a common resolution. Note that our method can take an image of any size as input. We used the Zhu et al. shadow detector, pre-trained on the SBU dataset and fine-tuned on the ISTD dataset, to obtain the shadow masks for our testing, as in prior work.
Method | Training Data | Shadow | Non-Shadow | All
Yang et al. | - | 24.7 | 14.4 | 16.0
Guo et al. | Shd. Free + Shd. Mask | 22.0 | 3.1 | 6.1
Gong et al. | - | 13.3 | - | -
ST-CGAN | Shd. Free + Shd. Mask | 13.4 | 7.7 | 8.7
DeshadowNet | Shd. Free | 15.9 | 6.0 | 7.6
MaskShadow-GAN | Shd. Free (Unpaired) | 12.4 | 4.0 | 5.3
SP+M-Net | Shd. Free + Shd. Mask | 7.9 | 3.1 | 3.9
In Table 1, we compare our weakly-supervised method with the recent state-of-the-art methods of Guo et al., Gong et al., Yang et al., ST-CGAN, DeshadowNet, MaskShadow-GAN, and SP+M-Net. The second column shows the training data of each method. All other deep-learning methods require paired shadow-free images as a training signal, except MaskShadow-GAN, which is trained on unpaired shadow and shadow-free images from the ISTD dataset. ST-CGAN and SP+M-Net also require the training shadow masks. Our method, trained without any shadow-free image, achieves 9.7 RMSE on the shadow areas, which is competitive with SP+M-Net. However, SP+M-Net requires full supervision.
Our method outperforms MaskShadow-GAN by 22%, reducing the RMSE in the shadow area from 12.4 to 9.7 while also achieving lower RMSE on the non-shadow area. We outperform DeshadowNet and ST-CGAN, two methods that were trained with paired shadow and shadow-free images, reducing the RMSE by 38% and 26% respectively.
Fig. 5 compares qualitative shadow removal results from our method with other state-of-the-art methods on the ISTD dataset. Our method, trained with just an adversarial signal, produces clean shadow-free images with very few artifacts. On the other hand, ST-CGAN and MaskShadow-GAN tend to produce blurry images, introduce artifacts, and often relight the wrong image parts. Our method generates images that are visually similar to those of SP+M-Net. While SP+M-Net is less affected by errors in the shadow masks (shown in the 2nd row), our method generates images with more consistent colors between the areas inside and outside the shadow boundaries (3rd and 4th rows). In all cases, our method preserves almost perfectly the textures beneath the shadows (last row).
4.3 Ablation Studies
We conduct ablation studies to better understand the effects of each proposed component in our framework. Starting from the original model with all the proposed features and losses, we train new models removing the proposed components one at a time. Table 2 summarizes these experiments. The first row shows the results of our model when we do not limit the search spaces of the scaling factor w and the additive constant b. In this case, the model collapses and consistently outputs uniformly dark images. Similarly, the model collapses when we omit the boundary loss L_bd. We observe that this loss is essential for stabilizing the training, as it prevents Param-Net from outputting consistently high values.
The matting loss is critical for learning proper shadow removal. We observe that without the matting loss, the model behaves similarly to an image inpainting model: it tends to modify all parts of the images to fool the discriminator. Last, dropping the smoothness loss only results in a slight drop in shadow removal performance, from 9.7 to 10.2 RMSE on the shadow areas. However, we observe more visible boundary artifacts in the output images without this loss.
Method | Shadow | Non-Shadow | All
Ours w/o limiting search space | 47.5 | 2.9 | 9.9
4.4 Video Shadow Removal
Video Shadow Removal is challenging for shadow removal methods. A video sequence has hundreds of frames with changing shadows. It is even harder for videos with a moving camera, moving objects, and illumination changes.
To better evaluate the performance of shadow removal methods on videos, we collected a set of 8 videos, each containing a static scene without visible moving objects. We cropped those videos to obtain clips whose only dominant motions are caused by the shadows (either by direct light motion or by motion of the unseen occluders). As can be seen from the top row of Fig. 6, the dataset includes videos containing shadows cast by close-up occluders and far-distance occluders, videos with simple-to-complex shadows, and shadows on various types of backgrounds and materials. Inspired by the video shadow matting technique discussed in Sec. 2, we propose a "max-min" technique to obtain a single pseudo shadow-free frame for each video: since the camera is static and there is no visible moving object in the frames, the changes in the video are caused by the moving shadows. We first obtain two images, I_max and I_min, by taking the maximum and minimum intensity values at each pixel location across the whole video. I_max is then the image that contains the shadow-free values of pixels if they ever go out of the shadows. Similarly, their shadowed values, if they ever go into the shadows, are captured in I_min. Fig. 6 shows these two images for a video named "plant". From these two images, we can trivially obtain a mask, namely the moving-shadow mask M_ms, marking the pixels appearing in both the shadow and non-shadow areas in the video:

M_ms(i) = 1 if I_max(i) − I_min(i) > τ, and 0 otherwise,

where τ is a small threshold. This method allows us to obtain pairs of shadow and non-shadow pixel values in the moving-shadow mask M_ms for free.
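The "max-min" construction can be sketched as follows; the threshold value in the example is an illustrative assumption, not the paper's setting:

```python
import numpy as np

def pseudo_shadow_free(frames, tau=0.1):
    """'Max-min' technique for a static-camera video with moving shadows.

    frames: (T, H, W) array of grayscale frames in [0, 1].
    Returns (i_max, i_min, moving_shadow): the per-pixel max over time
    (shadow-free value if the pixel is ever lit), the per-pixel min
    (shadowed value if the pixel is ever in shadow), and the mask of
    pixels observed both lit and shadowed."""
    i_max = frames.max(axis=0)
    i_min = frames.min(axis=0)
    moving_shadow = (i_max - i_min) > tau
    return i_max, i_min, moving_shadow
```

Only pixels inside the moving-shadow mask carry a reliable shadow / non-shadow pair, so the evaluation in Sec. 4.4 restricts the RMSE computation to that mask.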
To measure shadow removal performance, we input the frames of these videos into the shadow removal algorithm and measure the RMSE in the LAB color space between the output frame and the pseudo shadow-free image on the moving-shadow area. We compute the RMSE on each video and take the average to measure shadow removal performance on the whole dataset. Table 3 summarizes the performance of our method compared to MaskShadow-GAN and SP+M-Net on these videos. Our method outperforms SP+M-Net and MaskShadow-GAN, reducing the RMSE by 5% and 11% respectively. As our method only needs shadow segmentation masks for training, we use a pre-trained shadow detection model to obtain a set of shadow masks for each video. While these shadow mask sets are imperfect, fine-tuning our model using this free supervision results in a 10% error reduction, showing the advantage of our training scheme. Fig. 7 visualizes two example shadow removal results for different methods. We show a single input frame of each video. From left to right are the input frame, the shadow removal results of MaskShadow-GAN, the results of SP+M-Net, the results of our model trained on the ISTD dataset, and the results of our model fine-tuned on each testing video for 1 epoch. The top row shows an example where all methods perform relatively well. Our method seems to have better color balance between the relit pixels and the non-shadow pixels, although there is a visible boundary artifact due to imperfect shadow masks. After 1 epoch of fine-tuning, these artifacts are greatly suppressed. The bottom row shows a challenging case where all methods fail to remove shadows properly.
We presented a novel patch-based deep-learning model to remove shadows from images. This method can be trained on patches cropped directly from the shadow images, using the shadow segmentation mask as the only supervision signal. This obviates the dependency on paired training data and allows us to train this system on any kind of shadow image. The main contribution of this paper is a set of physics-based constraints that enable the training of this mapping. We have illustrated the effectiveness of our method on the standard ISTD dataset and on our novel Video Shadow Removal dataset. As shadow detection methods mature with the aid of recently proposed shadow detection datasets [48, 16], our method can be trained to remove shadows for a very low annotation cost.
Acknowledgements. This work was partially supported by the Partner University Fund, the SUNY2020 ITSC, and a gift from Adobe. Computational support provided by IACS and a GPU donation from NVIDIA. We thank Kumara Kahatapitiya and Cristina Mata for assistance with the manuscript.
-  (2011) Shadow removal using intensity surfaces and texture anchor points. IEEE Transactions on Pattern Analysis and Machine Intelligence 33, pp. 1202–1216. Cited by: §2.
2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 8789–8797. Cited by: §1, §1.
-  (2003-07) Shadow matting and compositing. ACM Transactions on Graphics 22 (3), pp. 494–500. Note: Special Issue of the SIGGRAPH 2003 Proceedings Cited by: §1, §2, §3, §4.4.
-  (2005-02) Shadow analysis in high-resolution satellite imagery of urban areas. Photogrammetric Engineering and Remote Sensing 71, pp. 169–177. External Links: Cited by: §1.
-  (2019) ARGAN: attentive recurrent generative adversarial network for shadow detection and removal. 2019 IEEE/CVF International Conference on Computer Vision (ICCV), pp. 10212–10221. Cited by: §1, §1, §2.
-  (2003) Recovery of chromaticity image free from shadows via illumination invariance. In In IEEE Workshop on Color and Photometric Methods in Computer Vision, ICCV’03, pp. 32–39. Cited by: §2.
-  (2001-07) 4-sensor camera calibration for image representation invariant to shading, shadows, lighting, and specularities. In iccv, Vol. 2, pp. 473–480 vol.2. External Links: Cited by: §2.
-  (2006) On the removal of shadows from images. pami. Cited by: §1, §2.
-  (2002) Removing shadows from images. In eccv, ECCV ’02, London, UK, UK, pp. 823–836. External Links: Cited by: §1, §2.
-  (2005) Hamiltonian path based shadow removal. In BMVC, Cited by: §2.
-  (2016) Interactive removal and ground truth for difficult shadow scenes. J. Opt. Soc. Am. A 33 (9), pp. 1798–1811. External Links: Cited by: §1, §1, §4.2, Table 1.
-  (2017) Improved training of wasserstein gans. In Advances in Neural Information Processing Systems, pp. 5767–5777. Cited by: §1.
-  (2011) Single-image shadow detection and removal using paired regions. In cvpr, Cited by: §1.
-  (2012) Paired regions for shadow detection and removal. pami. Cited by: §1, §2, §4.2, §4.2, Table 1.
-  (2019) Mask-ShadowGAN: learning to remove shadows from unpaired data. In ICCV, Note: to appear Cited by: §1, §1, Figure 5, §4.2, §4.4, Table 1, Table 3.
-  (2019) Revisiting shadow detection: a new benchmark dataset for complex world. ArXiv abs/1911.06998. Cited by: §5.
-  (2018) Direction-aware spatial context features for shadow detection. In cvpr, Cited by: §1, §2.
-  (2002) An improved adaptive background mixture model for real- time tracking with shadow detection. Cited by: §1.
-  (2015) Adam: A method for stochastic optimization. In iclr, Cited by: §4.1.
-  (2019-06) Weakly labeling the antarctic: the penguin colony case. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, Cited by: §1.
-  (2016) Geodesic distance histogram feature for video segmentation. In ACCV, Cited by: §1.
-  (2019) Shadow removal via shadow image decomposition. In iccv, Cited by: §1, §1, §1, §2, §3, Figure 5, §4.2, §4.2, §4.4, Table 1, Table 3.
-  (2018) A+D Net: training a shadow detector with adversarial shadow attenuation. In eccv, Cited by: §1, §1.
-  (2017) Co-localization with category-consistent features and geodesic distance propagation. In ICCV 2017 Workshop on CEFRL: Compact and Efficient Feature Representation and Learning in Computer Vision, Cited by: §1.
-  (2019) Asymmetric gan for unpaired image-to-image translation. IEEE Transactions on Image Processing 28, pp. 5881–5896. Cited by: §1.
-  (2008) Texture-consistent shadow removal. In ECCV, Cited by: §2.
-  (2019-10) Wasserstein gan with quadratic transport cost. In The IEEE International Conference on Computer Vision (ICCV), Cited by: §1.
-  A two-step computation of the exact GAN Wasserstein distance. In International Conference on Machine Learning, pp. 3165–3174. Cited by: §1.
-  (2017) Unsupervised image-to-image translation networks. ArXiv abs/1703.00848. Cited by: §1.
-  (2018) Which training methods for gans do actually converge?. In International Conference on Machine Learning, Cited by: §1.
-  (2018) Spectral normalization for generative adversarial networks. In International Conference on Machine Learning, Cited by: §1.
-  (2019) Brightness correction and shadow removal for video change detection with uavs. In Defense + Commercial Sensing, Cited by: §1.
-  (2010) Estimating shadows with the bright channel cue. CRICV. Cited by: §1.
-  (2013) Simultaneous cast shadows, illumination and geometry inference using hypergraphs. Pattern Analysis and Machine Intelligence, IEEE Transactions on 35 (2), pp. 437–449. External Links: Cited by: §1.
-  (1984-01) Compositing digital images. siggraph 18 (3). Cited by: §3.
-  (2003) Detecting moving shadows: algorithms and evaluation. IEEE Trans. Pattern Anal. Mach. Intell. 25, pp. 918–923. Cited by: §1.
-  (2017) DeshadowNet: a multi-context embedding deep network for shadow removal. In cvpr, Cited by: §1, §1, §1, §2, §4.2, §4.2, Table 1.
-  (2013-12) Clustering-based shadow edge detection in a single color image. In International Conference on Mechatronic Sciences, Electric Engineering and Computer, pp. 1038–1041. External Links: Cited by: §1.
-  (2008-04) The shadow meets the mask: pyramid-based shadow removal. Computer Graphics Forum 27 (2), pp. 577–586. Cited by: §1, §2, §3.
-  (1996) Blue screen matting. In siggraph, Cited by: §3.
-  (2016) Shadow detection and removal for occluded object information recovery in urban high-resolution panchromatic satellite images. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing 9, pp. 2568–2582. Cited by: §1.
-  (2019) Improving generalization and stability of generative adversarial networks. In International Conference on Learning Representations, Cited by: §1.
-  (2016) Noisy label recovery for shadow detection in unfamiliar domains. In cvpr, Cited by: Table 3.
-  (2018) Leave-one-out kernel optimization for shadow detection and removal. IEEE Transactions on Pattern Analysis and Machine Intelligence 40 (3), pp. 682–695. Cited by: §1.
-  (2016) Large-scale training of shadow detectors with noisily-annotated shadow examples. In eccv, Cited by: §1, §1.
-  (2014) Single image shadow removal via neighbor-based region relighting. In Proceedings of the European Conference on Computer Vision Workshops, Cited by: §1.
-  (2018) Stacked conditional generative adversarial networks for jointly learning shadow detection and shadow removal. In cvpr, Cited by: §1, §1, §1, §2, Figure 5, §4.1, §4.2, §4.2, Table 1, Table 2, §5.
-  (2020) Instance shadow detection. CVPR. Cited by: §5.
-  (2001) Digital compositing for film and video. In Focal Press, Cited by: §3.
-  (2012) Strong shadow removal via patch-based shadow edge detection. 2012 IEEE International Conference on Robotics and Automation, pp. 2177–2182. Cited by: §2.
-  (2007-06) Natural shadow matting. ACM Trans. Graph. 26 (2). External Links: Cited by: §2.
-  (2005) A bayesian approach for shadow extraction from a single image. Tenth IEEE International Conference on Computer Vision (ICCV’05) Volume 1 1, pp. 480–487 Vol. 1. Cited by: §2.
-  (2012) Shadow removal using bilateral filtering. IEEE Transactions on Image Processing 21, pp. 4361–4368. Cited by: §2, §4.2, Table 1.
-  (2017) DualGAN: unsupervised dual learning for image-to-image translation. 2017 IEEE International Conference on Computer Vision (ICCV), pp. 2868–2876. Cited by: §1.
-  RIS-GAN: explore residual and illumination with generative adversarial networks for shadow removal. In AAAI Conference on Artificial Intelligence (AAAI), Cited by: §1, §2.
-  Improving shadow suppression for illumination robust face recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence 41, pp. 611–624. Cited by: §1.
-  (2019) Distraction-aware shadow detection. 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 5162–5171. Cited by: §1.
-  (2017) Unpaired image-to-image translation using cycle-consistent adversarial networks. In Computer Vision (ICCV), 2017 IEEE International Conference on, Cited by: §1.
-  (2018) Bidirectional feature pyramid network with recurrent attention residual modules for shadow detection. In eccv, Cited by: §1, §3.2, §3, §4.2, §4.4, Table 3.