From Shadow Segmentation to Shadow Removal

08/01/2020 ∙ by Hieu Le, et al.

The requirement for paired shadow and shadow-free images limits the size and diversity of shadow removal datasets and hinders the possibility of training large-scale, robust shadow removal algorithms. We propose a shadow removal method that can be trained using only shadow and non-shadow patches cropped from the shadow images themselves. Our method is trained via an adversarial framework, following a physical model of shadow formation. Our central contribution is a set of physics-based constraints that enables this adversarial training. Our method achieves competitive shadow removal results compared to state-of-the-art methods that are trained with fully paired shadow and shadow-free images. The advantages of our training regime are even more pronounced in shadow removal for videos. Our method can be fine-tuned on a testing video with only the shadow masks generated by a pre-trained shadow detector and outperforms state-of-the-art methods on this challenging test. We illustrate the advantages of our method on our proposed video shadow removal dataset.


1 Introduction

Shadows are present in most natural images. Shadow effects make objects harder to detect or segment [23], and scenes with shadows are harder to process and analyze [20]. Realistic shadow removal is an integral part of image editing [3], can greatly improve performance on various computer vision tasks [32, 41, 56, 24, 21], and has received increased attention in recent years [37, 13, 11]. Data-driven approaches using deep learning models have achieved remarkable performance on shadow removal [5, 22, 17, 15, 47, 55] thanks to recent large-scale datasets [45, 47].

Most current deep-learning shadow removal approaches are end-to-end mapping functions trained in a fully supervised manner. Such systems require pairs of shadow images and their shadow-free counterparts as training signals. However, this type of data is cumbersome to obtain, lacks diversity, and is error-prone: all current shadow removal datasets exhibit color mismatches between the shadow images and their shadow-free ground truth (see Fig. 1 - left panel). Moreover, there are no images with self-cast shadows because the occluders are never visible in the image in the current data acquisition setups [47, 37, 15]. This dependency on paired data significantly hinders building large-scale, robust shadow-removal systems. A recent method that tries to overcome this issue is MaskShadow-GAN [15], which learns shadow removal from unpaired shadow and shadow-free images. However, such Cycle-GAN [58] based systems usually require sufficient statistical similarity between the two sets of images [25, 2]. This requirement can be hard to satisfy when capturing shadow-free images is difficult, e.g., shadow-free images of urban areas [4] or of moving objects [18, 36].

Figure 1: Paired training data (left) consists of {shadow, shadow-free} image pairs, which are expensive to collect, lack diversity, and are sensitive to errors due to possible color mismatches between the two images. Note the slightly different color tone between the two images. In this paper, we propose to learn shadow removal from unpaired shadow and non-shadow patches cropped from the same shadow image (right). This eliminates the need for shadow-free images.

In this paper, we propose an alternative solution to the data dependency issue. We first observe that image patches alongside the shadow boundary contain critical information for shadow removal, covering non-shadow, umbra, and penumbra areas. They sufficiently reflect the characteristics of the shadowing effects, including the color differences between shadow and non-shadow areas as well as the gradual changes of the shadow effects across the shadow boundary [34, 33, 14]. If we further assume that the shadow effects are fairly consistent in the umbra areas, a patch-based shadow removal can be used to remove shadows in the whole image. Based on this observation, we propose training a patch-based shadow removal system for which we use unpaired shadow and non-shadow patches directly cropped from the shadow images themselves as training data. This approach eliminates the dependency on paired training data and opens up the possibility of handling different types of shadows, since it can be trained with any kind of shadow image. Compared to MaskShadow-GAN, shadow and non-shadow patches cropped from the same image naturally ensure significant statistical similarity. The only supervision required in this data processing scheme is the set of shadow masks, which are relatively easy to obtain, either manually, semi-interactively [45, 11], or automatically using shadow detection methods [5, 59, 57, 23]. Automatic shadow detection is improving, with the main challenge being generalization across datasets. At some point, one can expect to get very good shadow masks automatically, which would allow training our shadow removal method at very little annotation cost.

In particular, to obtain shadow and shadow-free patches, we crop the shadow images into small overlapping patches of size $k \times k$ with a step size of $s$. Based on the shadow masks, we group these patches into three sets: a non-shadow set containing patches with no shadow pixels, a shadow-boundary set containing patches lying on the shadow boundaries, and a full-shadow set containing patches whose pixels are all in shadow. With a small enough patch size $k$ and step size $s$, we can obtain enough training patches in each set. With this training set, we train a shadow removal system to learn a mapping from patches in the shadow-boundary set to patches in the non-shadow set. Essentially, this mapping needs to infer the color difference alongside the shadow edges, including the chromatic attributes of the light source and the smooth change of the shadow effects across the shadow boundary, in order to transform a shadow patch into a non-shadow patch. This is, in spirit, similar to early shadow removal approaches that focus on shadow edges to remove shadows [38, 9, 8, 44, 46].
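For concreteness, the grouping step can be sketched as follows (an illustrative Python/NumPy snippet; the function name, the exact patch and step sizes, and the use of a binary HxW shadow mask are our assumptions rather than prescribed specifics):

```python
import numpy as np

def group_patches(image, mask, patch=96, step=32):
    """Crop an image into overlapping patches and group them by the shadow mask.

    image: HxWx3 shadow image, mask: HxW binary shadow mask (1 = shadow).
    Returns three lists: non-shadow, shadow-boundary, and full-shadow patches.
    """
    non_shadow, boundary, full_shadow = [], [], []
    H, W = mask.shape
    for y in range(0, H - patch + 1, step):
        for x in range(0, W - patch + 1, step):
            m = mask[y:y + patch, x:x + patch]
            p = image[y:y + patch, x:x + patch]
            frac = m.mean()              # fraction of shadow pixels in the patch
            if frac == 0:                # no shadow pixel at all
                non_shadow.append(p)
            elif frac == 1:              # every pixel is in shadow
                full_shadow.append(p)
            else:                        # the patch straddles a shadow boundary
                boundary.append(p)
    return non_shadow, boundary, full_shadow
```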

By simply cropping shadow images into patches, we pose shadow removal as an unpaired image-to-image cross-domain mapping [54, 2, 29] that can be estimated via an adversarial framework. In particular, we seek a mapping function $G$ that takes as input a shadow-boundary patch $x$ and outputs an image patch $G(x)$, such that a critic function $D$ cannot distinguish whether a given patch was drawn from the non-shadow set or generated by $G$. Note that one potential solution here is to use Cycle-GAN or MaskShadow-GAN to estimate this transformation. However, the mapping functions learned by these methods are not able to remove shadows from patches in the full-shadow set.

Training such an unpaired image-to-image mapping for shadow removal is challenging: the mapping is under-constrained and training can collapse easily [12, 28, 27, 30, 42, 31]. Here, we propose to systematically constrain the shadow removal process by a physical model of shadow formation [39] and incorporate a number of physical properties of shadows into the framework. We show that these physics-based priors define a transformation closely modelling shadow removal. Driven by an adversarial signal, our framework effectively learns physically-plausible shadow removal without any direct supervision from paired data. Specifically, we constrain the shadow removal process to a shadow image decomposition model [22] that extracts a set of shadow parameters and a matting layer from the shadow image. The shadow parameters are responsible for removing shadows in the umbra areas via a linear function. Thus, once we estimate these shadow parameters from shadow-boundary patches, we can use them to remove shadows from patches fully covered by the same shadow, under the assumption that they share the same set of shadow parameters. Based on the physical properties of shadows, we apply the following constraints to the model:

  • We limit the search space of the shadow parameters and shadow matte to the appropriate value ranges that correspond to shadow removal.

  • Our matting and smoothness losses ensure that shadow removal only happens in the shadow areas and transitions smoothly across shadow boundaries.

  • Our boundary loss on the generated shadow-free image enforces color similarity between the inner and outer areas alongside shadow boundaries.

With these constraints and the adversarial signal, our method achieves shadow removal results that are competitive with state-of-the-art methods that were trained in a fully supervised manner with paired shadow and non-shadow images [22, 47, 37]. We further compare our method to state-of-the-art methods on a novel and challenging video shadow removal dataset including static videos with various scenes and shadow conditions. This test exposes the weaknesses of data-driven methods trained on datasets lacking diversity. Our patch-based method seems to generalize better than other methods when evaluated on this video shadow removal test. Most importantly, we can easily fine-tune our pre-trained model on a single testing video to further improve shadow removal results, showcasing this advantage of our training scheme.

In short, our contributions are:

  • We propose the use of an adversarial critic to train a shadow remover from unpaired shadow and non-shadow patches, providing an alternative solution to the paired data dependency issue.

  • We propose a set of physics-based constraints that define a transformation closely modelling shadow removal, which enables shadow remover training with only an adversarial training signal.

  • Our system trained without any shadow-free images has competitive results compared to fully-supervised state-of-the-art methods on the ISTD dataset.

  • We collect a novel video shadow removal dataset. Our shadow removal system can be fine-tuned for free to better remove shadows on testing videos.

2 Related Work

Shadows are physical phenomena. Early shadow removal works, without much training data, usually focused on studying different physical shadow properties [8, 7, 9, 6, 1, 10, 26, 53]. Many works look for cues to remove shadows starting from shadow edges. Finlayson et al. [9] used shadow edges to estimate a scaling factor that differentiates shadow areas from their non-shadow counterparts. Wu & Tang [52] imposed a smoothness constraint alongside the shadow boundaries to handle penumbra areas. Wu et al. [50] detected strong shadow edges to remove shadows on the whole image. Shor & Lischinski [39] defined an affine relationship between shadow and non-shadow pixels and used the areas surrounding the shadow edges to estimate the parameters of such affine transforms.

Shadow boundary effects can also be modeled via image matting [14]. Wu et al. [51] estimated a matte layer representing the pixel-wise shadow probability in order to estimate a color transfer function that removes shadows. Chuang et al. [3] computed a shadow matte from video for shadow editing. They computed the lit and shadow images by finding min-max values at each pixel location throughout all frames of a video captured by a static camera. We use this technique to create a video dataset for testing shadow removal methods in Sec. 4.4.

Current shadow removal methods [22, 17, 55, 5, 47] use deep-learning models trained with full supervision on large-scale datasets [47, 37] of paired shadow and shadow-free images. Pairs are obtained by taking a photo with shadows, then removing the occluders from the scene to take the photo without shadows. Deshadow-Net [37] extracted multi-context features to predict a matte layer that removes shadows. Some works use adversarial frameworks to train their shadow removal. In [47], a unified adversarial framework predicted shadow masks and removed shadows. Similarly, Ding et al. [5] used an adversarial signal to improve shadow removal in an iterative manner. Note that these methods use the shadow-free image as the main training signal, while our method is trained only through an adversarial loss. In prior work [22], we constrained shadow removal by a physical model of shadow formation and trained networks to extract shadow parameters and a matte layer to remove shadows. We adapt this model to patch-based shadow removal. Note that in [22], all shadow parameters and matting layers were pre-computed using paired training images and the network was trained to simply regress those values, whereas our model automatically estimates them through adversarial training. MaskShadow-GAN [15] is the only deep-learning method that learns shadow removal from just unpaired training data.

3 Method

We describe our patch-based shadow removal in Sec. 3.1. Our whole-image shadow removal pipeline is described in Sec. 3.2. For both image-level and patch-level shadow removal, we use shadow matting [3, 35, 40, 49] to express the shadow-free image $I^{sf}$ as:

$I^{sf} = \alpha \odot I^{s} + (1-\alpha) \odot I^{relit}$ (1)

with $I^{s}$ the shadow image, $\alpha$ the matting layer, and $I^{relit}$ the relit image. The relit image contains the shadow pixels relit to their non-shadow values, computed per pixel and per color channel via a linear function following a physical shadow formation model [22, 39]:

$I^{relit}_i = w \cdot I^{s}_i + b$ (2)

The unknown factors in this shadow matting formula are the set of shadow parameters $(w, b)$, one pair per color channel, which define the linear function that removes the shadow effects in the umbra areas of the shadow, and the matte layer $\alpha$, which models the shadow effects on the shadow boundaries. We train a system of three networks to estimate these unknown factors via adversarial training. We use the annotated shadow segmentation masks for training. For testing, we obtain a segmentation mask for the image using the shadow detector proposed by Zhu et al. [59].
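For illustration, Eqs. (1)-(2) amount to the following two operations (a minimal PyTorch sketch; the tensor shapes and function names are our own choices, not part of the method):

```python
import torch

def relit(shadow_img, w, b):
    """Eq. (2): per-channel linear relighting.
    shadow_img: (B, 3, H, W); w, b: (B, 3, 1, 1) scaling factors / additive constants."""
    return w * shadow_img + b

def compose(shadow_img, relit_img, alpha):
    """Eq. (1): alpha = 1 keeps the input pixel (non-shadow),
    alpha = 0 takes the relit pixel (umbra)."""
    return alpha * shadow_img + (1.0 - alpha) * relit_img
```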

3.1 Patch-based Shadow Removal

Figure 2: Weakly-supervised shadow decomposition. Our framework consists of three networks: Param-Net, Matte-Net, and D-Net. Param-Net and Matte-Net predict the shadow parameters and the matte layer respectively to jointly remove the shadow. Param-Net takes as input the image patch and its shadow mask and predicts three pairs of shadow parameters, one for each color channel, which are used to obtain a relit image. The input image patch, shadow mask, and relit image are input into Matte-Net to predict a matte layer. D-Net is the critic function distinguishing between the generated image patches and the real shadow-free patches. The only supervision signal is the set of shadow-free patches. The four losses guiding this training are the matting loss, smoothness loss, boundary loss, and adversarial loss.

Fig. 2 summarizes our framework for removing shadows from a single image patch. It consists of three networks: Param-Net, Matte-Net, and D-Net. Param-Net and Matte-Net predict the shadow parameters and the matte layer respectively to jointly remove shadows. D-Net is the critic distinguishing between the generated image patches and the real shadow-free patches. With Param-Net and Matte-Net as the generators and D-Net as the discriminator, the three networks form an adversarial training framework in which the main source of training signal is the set of shadow-free patches.

In theory, as D-Net is trained to distinguish patches containing shadow boundaries from patches without any shadows, a natural way to fool D-Net is to remove the shadows in the input shadow patches so that they become indistinguishable from shadow-free patches. However, such an adversarial signal from D-Net alone often cannot guide the generators (Param-Net and Matte-Net) to actually remove shadows: the parameter search space is very large and the mapping is extremely under-constrained. In practice, we observe that without any constraints, Param-Net tends to output consistently high parameter values, as these directly increase the overall brightness of the image patches, and Matte-Net tends to introduce artifacts similar to visual patterns frequently appearing in the non-shadow areas. Thus, our main idea is to constrain this framework with physical shadow properties. Constraining the output shadow parameters, shadow mattes, and combined shadow-free images forces the networks to only transform the input images in a manner consistent with shadow removal.

First, Param-Net estimates a scaling factor $w$ and an additive constant $b$ for each of the R, G, B color channels, to remove the shadow effects on the shadowed pixels in the umbra areas of the shadows via Eq. (2). We hypothesize that the main component explaining the shadow effects is the scaling factor $w$, and accordingly bound its search space: we constrain $w$ to be at least 1, which ensures that the transformation always scales up the values of the shadowed pixels. We set the search space for $b$ to a small range around zero (pixel intensities vary in the range [0,255]); our intuition is to force the network to define the mapping mainly via the scaling factor $w$. Bounding the parameters from above prevents the network from collapsing as the parameter values increase. As we show in the ablation study, the network fails to learn shadow removal without proper search space limitation.
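One way to realize these range constraints, consistent with the Tanh-based output mapping mentioned in Sec. 4.1, is sketched below (the particular bounds w_max and b_max are illustrative placeholders, not the values used in our experiments):

```python
import torch

def bound_params(raw_w, raw_b, w_max=4.0, b_max=0.1):
    """Map raw Param-Net outputs to constrained ranges via scaled/shifted Tanh.

    w is mapped into (1, w_max), so relighting always scales shadow pixels up;
    b is mapped into (-b_max, b_max), keeping the additive term small.
    w_max and b_max are illustrative, not the paper's values.
    """
    w = 1.0 + 0.5 * (torch.tanh(raw_w) + 1.0) * (w_max - 1.0)
    b = torch.tanh(raw_b) * b_max
    return w, b
```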

Matte-Net estimates a blending layer $\alpha$ that combines the shadow image patch $I^{s}$ and the relit image patch $I^{relit}$ into a shadow-free image patch via Eq. (1). The value of pixel $i$ in the output image patch, $I^{sf}_i$, is computed as:

$I^{sf}_i = \alpha_i \, I^{s}_i + (1 - \alpha_i) \, I^{relit}_i$ (3)

We map the output of Matte-Net to [0,1], since $\alpha$ is used as a matting layer, and constrain the values $\alpha_i$ as follows:

  • If $i$ indicates a non-shadow pixel, we enforce $\alpha_i = 1$ so that the value of the output pixel equals its value in the input image $I^{s}$.

  • If $i$ indicates a pixel in the umbra areas of the shadows, we enforce $\alpha_i = 0$ so that the value of the output pixel equals its relit value $I^{relit}_i$.

  • We do not control the values of $\alpha_i$ in the penumbra areas of the shadows and rely on the training of the network to estimate these values.

where the umbra, non-shadow, and penumbra areas can be roughly specified using the shadow mask. We define two areas alongside the shadow boundary, denoted $B_{out}$ and $B_{in}$ (see Fig. 3). $B_{out}$ is the area right outside the boundary, computed by subtracting the shadow mask $M$ from its dilated version. The inside area $B_{in}$ is computed similarly by subtracting an eroded shadow mask from $M$. Together, $B_{out}$ and $B_{in}$ roughly define a small area surrounding the shadow boundary, which can be considered as the penumbra area of the shadow. The above constraints are then implemented as the matting loss, computed for every pixel $i$ as:

$\mathcal{L}_{matte}(i) = \begin{cases} |\alpha_i - 1| & \text{if } i \text{ is a non-shadow pixel} \\ |\alpha_i| & \text{if } i \text{ is an umbra pixel} \\ 0 & \text{otherwise} \end{cases}$ (4)

Figure 3: The penumbra area of the shadow. From left to right: the input image, the shadow mask, and the two areas $B_{in}$ and $B_{out}$ defined alongside the shadow boundary (shown in green and red). These two areas roughly define a small region surrounding the shadow boundary, which can be considered as the penumbra area of the shadow.
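A sketch of the matting loss in Eq. (4) is given below (PyTorch; we approximate dilation and erosion with max-pooling, and the kernel size k is an illustrative choice):

```python
import torch
import torch.nn.functional as F

def dilate(mask, k=5):
    """Binary dilation via max-pooling; mask is (B, 1, H, W) with values in {0, 1}."""
    return F.max_pool2d(mask, kernel_size=k, stride=1, padding=k // 2)

def erode(mask, k=5):
    return 1.0 - dilate(1.0 - mask, k)

def matting_loss(alpha, mask, k=5):
    """Eq. (4): alpha should be 1 on non-shadow pixels (outside the dilated mask)
    and 0 in the umbra (inside the eroded mask); the penumbra band is left free."""
    non_shadow = 1.0 - dilate(mask, k)   # pixels with no shadow nearby
    umbra = erode(mask, k)               # pixels well inside the shadow
    return (non_shadow * (alpha - 1.0).abs() + umbra * alpha.abs()).mean()
```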

Moreover, since the shadow effects are assumed to vary smoothly across the shadow boundaries, we enforce a smoothness loss on the spatial gradients of the matte layer, $\nabla\alpha$. This smoothness loss also prevents Matte-Net from producing undesired artifacts, since it enforces local uniformity. This loss is:

$\mathcal{L}_{sm} = \|\nabla_x \alpha\|_1 + \|\nabla_y \alpha\|_1$ (5)
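In code, this can be as simple as the following sketch (finite-difference gradients with an L1 penalty; the exact norm is our assumption):

```python
def smoothness_loss(alpha):
    """Eq. (5): penalize spatial gradients of the matte layer alpha, shape (B, 1, H, W)."""
    dx = (alpha[:, :, :, 1:] - alpha[:, :, :, :-1]).abs().mean()
    dy = (alpha[:, :, 1:, :] - alpha[:, :, :-1, :]).abs().mean()
    return dx + dy
```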

Then, given a set of estimated parameters $(w, b)$ and a matte layer $\alpha$, we obtain an output image $I^{sf}$ via the image decomposition formula (1). We penalize the difference between the average intensities of the pixels lying right outside and right inside the shadow boundary, i.e., in the two areas $B_{out}$ and $B_{in}$. This shadow boundary loss is computed as:

$\mathcal{L}_{bd} = \Big| \tfrac{1}{|B_{out}|} \sum_{i \in B_{out}} I^{sf}_i - \tfrac{1}{|B_{in}|} \sum_{i \in B_{in}} I^{sf}_i \Big|$ (6)
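A self-contained sketch of this loss (again using max-pooling as a stand-in for dilation/erosion; the band width implied by k is illustrative):

```python
import torch
import torch.nn.functional as F

def boundary_loss(output, mask, k=5, eps=1e-6):
    """Eq. (6): match the mean intensities of the thin bands just outside (B_out)
    and just inside (B_in) the shadow boundary of the composed patch (B, 3, H, W)."""
    dilated = F.max_pool2d(mask, k, stride=1, padding=k // 2)
    eroded = 1.0 - F.max_pool2d(1.0 - mask, k, stride=1, padding=k // 2)
    b_out, b_in = dilated - mask, mask - eroded
    mean_out = (output * b_out).sum() / (3.0 * b_out.sum() + eps)
    mean_in = (output * b_in).sum() / (3.0 * b_in.sum() + eps)
    return (mean_out - mean_in).abs()
```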

Last, we compute the adversarial loss with the feedback from D-Net:

$\mathcal{L}_{adv} = -\log D(I^{sf})$ (7)

where $D(\cdot)$ denotes the output of D-Net.

The final objective function to train Param-Net and Matte-Net is to minimize a weighted sum of the above losses:

$\mathcal{L} = \lambda_{matte}\,\mathcal{L}_{matte} + \lambda_{sm}\,\mathcal{L}_{sm} + \lambda_{bd}\,\mathcal{L}_{bd} + \lambda_{adv}\,\mathcal{L}_{adv}$ (8)
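Putting the pieces together, the generator objective can be sketched as follows (a non-saturating GAN loss is assumed for Eq. (7), and the lambda weights are illustrative placeholders rather than the values we use):

```python
import torch
import torch.nn.functional as F

def generator_objective(d_score, l_matte, l_sm, l_bd,
                        lam_matte=1.0, lam_sm=1.0, lam_bd=1.0, lam_adv=1.0):
    """Eqs. (7)-(8): adversarial term plus the weighted physics-based losses.
    d_score: raw D-Net logits on the generated (shadow-removed) patches."""
    l_adv = F.binary_cross_entropy_with_logits(d_score, torch.ones_like(d_score))
    return lam_matte * l_matte + lam_sm * l_sm + lam_bd * l_bd + lam_adv * l_adv
```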

All these losses are essential for training our networks, as shown in our ablation study in Sec. 4.3. By using all the proposed losses together, our method is able to automatically extract a set of shadow parameters $(w, b)$ and a matte layer $\alpha$ from an input image patch. Fig. 4 visualizes the components extracted by our framework for two challenging input patches. In the first row, a dark shadow area is relit correctly to its non-shadow value. In the second row, the matte layer is not affected by the dark material of the surface.

Figure 4: Weakly-supervised shadow image decomposition. With only shadow mask supervision, our method automatically learns to decompose the shadow effect in the input image patch into a matte layer $\alpha$ and a relit image $I^{relit}$. The matte layer combines the input patch and the relit image to obtain a shadow-free image patch via Eq. (1).

3.2 Image Shadow Removal using a patch-based model.

We estimate a set of shadow parameters $(w, b)$ and a matte layer $\alpha$ for the whole input image to remove shadows via Eq. (1). First, we obtain a shadow mask using the shadow detector of Zhu et al. [59]. We then crop the input shadow image into overlapping patches, and all patches containing shadow boundaries are input into the three networks. We approximate the whole-image shadow parameters from the patch shadow parameters, under the assumption that they are the same or very similar, by computing the image shadow parameters as a linear combination of the patch shadow parameters. Similarly, we compute the value of each pixel in the matte layer by combining the overlapping matte patches. We set the matte layer pixels in the non-shadow area to 1 and those in the umbra area to 0. We observe that the classification scores obtained from the critic D-Net correlate with the quality of the generated image patches. Thus, we normalize these scores to sum to 1 and use them as the coefficients of the linear combinations that form the image shadow parameters and matte layer.
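The parameter aggregation step can be sketched as follows (the matte layer is aggregated analogously over overlapping patches; shapes and names are illustrative, and we assume non-negative critic scores):

```python
import torch

def aggregate_params(patch_params, d_scores):
    """Combine per-patch shadow parameters into image-level parameters,
    weighting each boundary patch by its normalized D-Net score.

    patch_params: (N, 6) per-patch [w_R, w_G, w_B, b_R, b_G, b_B]
    d_scores:     (N,)   non-negative critic scores for the same patches
    """
    weights = d_scores / d_scores.sum()   # normalize scores to sum to 1
    return (weights[:, None] * patch_params).sum(dim=0)
```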

4 Experiments

4.1 Network Architectures and Implementation Details.

We use a VGG-19 architecture for Param-Net and a U-Net architecture for Matte-Net. D-Net is a simple 5-layer convolutional network. To map the outputs of the networks to the desired ranges, we use Tanh functions together with scaling and additive constants. We train our model with the Adam solver [19]. The initial learning rate is 0.0002 for Matte-Net and D-Net and 0.00002 for Param-Net. All networks are trained from scratch. The loss weights $(\lambda_{matte}, \lambda_{sm}, \lambda_{bd}, \lambda_{adv})$ are set experimentally. We train our networks with batch size 96 for 150 epochs.
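The optimizer setup can be sketched as below (whether the two generators share one optimizer, and any detail beyond the stated learning rates, are our assumptions):

```python
import torch

def make_optimizers(param_net, matte_net, d_net):
    """Adam optimizers with the learning rates reported above:
    0.0002 for Matte-Net and D-Net, 0.00002 for Param-Net."""
    opt_gen = torch.optim.Adam([
        {"params": matte_net.parameters(), "lr": 2e-4},
        {"params": param_net.parameters(), "lr": 2e-5},
    ])
    opt_disc = torch.optim.Adam(d_net.parameters(), lr=2e-4)
    return opt_gen, opt_disc
```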

All code, trained models, and data are available at: https://www3.cs.stonybrook.edu/~cvl/projects/FSS2SR/index.html

We use the ISTD dataset [47] for training. Each original 640×480 training image is cropped into patches of size 96×96 with a step size of 32. This creates 311,220 image patches from 1,330 training shadow images: 151,327 non-shadow patches, 147,312 shadow-boundary patches, and 12,581 full-shadow patches.

4.2 Shadow Removal Evaluation

We first evaluate our method on the adjusted testing set of the ISTD dataset [47, 22]. Following previous work [47, 14, 37, 22], we compute the root-mean-square error (RMSE) in the LAB color space on the shadow area, the non-shadow area, and the whole image, with all shadow removal results resized to a common resolution. Note that our method can take images of any size as input. As in [22], we used the Zhu et al. [59] shadow detector, pre-trained on the SBU dataset and fine-tuned on the ISTD dataset, to obtain the shadow masks for testing.
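For reference, the evaluation metric can be computed as in the sketch below (Python with scikit-image; the masking convention and function name are ours):

```python
import numpy as np
from skimage import color

def lab_rmse(result, gt, region_mask):
    """RMSE in LAB space over the pixels selected by region_mask (H, W, bool).
    result and gt are HxWx3 RGB images with values in [0, 1]."""
    diff = color.rgb2lab(result) - color.rgb2lab(gt)
    return np.sqrt(np.mean(diff[region_mask] ** 2))
```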

Methods Training Data Shadow Non-Shadow All
Input Image - 40.2 2.6 8.5
Yang et al. [53] - 24.7 14.4 16.0
Guo et al. [14] Shd. Free + Shd. Mask 22.0 3.1 6.1
Gong et al. [11] - 13.3 - -
ST-CGAN [47] Shd. Free + Shd. Mask 13.4 7.7 8.7
DeshadowNet [37] Shd. Free 15.9 6.0 7.6
MaskShadow-GAN [15] Shd. Free (Unpaired) 12.4 4.0 5.3
SP+M-Net [22] Shd. Free + Shd. Mask 7.9 3.1 3.9
Ours Shd. Mask 9.7 3.0 4.0
Table 1: Shadow removal results of our networks compared to state-of-the-art shadow removal methods on the adjusted ISTD testing set [22, 47]. The metric is RMSE (the lower, the better). Best results are in bold.

In Table 1, we compare our weakly-supervised method with the recent state-of-the-art methods of Guo et al. [14], Gong et al. [11], Yang et al. [53], ST-CGAN [47], DeshadowNet [37], MaskShadow-GAN [15], and SP+M-Net [22]. The second column shows the training data of each method. All other deep-learning methods require paired shadow-free images as a training signal except MaskShadow-GAN, which is trained on unpaired shadow and shadow-free images from the ISTD dataset. ST-CGAN and SP+M-Net also require the training shadow masks. Our method, trained without any shadow-free images, achieves an RMSE of 9.7 on the shadow areas, which is competitive with SP+M-Net. However, SP+M-Net requires full supervision.

Our method outperforms MaskShadow-GAN by 22%, reducing the RMSE in the shadow area from 12.4 to 9.7 while also achieving lower RMSE on the non-shadow area. We outperform DeshadowNet and ST-CGAN, two methods that were trained with paired shadow and shadow-free images, reducing the RMSE by 38% and 26% respectively.

Figure 5: Comparison of shadow removal on ISTD dataset. Qualitative comparison between our method and the state-of-the-art methods: ST-CGAN [47], MaskShadow-GAN [15], SP+M-Net[22]. Our method, trained without any shadow-free images, produces clean shadow-free images with very few artifacts.

Fig. 5 compares qualitative shadow removal results of our method with other state-of-the-art methods on the ISTD dataset. Our method, trained with just an adversarial signal, produces clean shadow-free images with very few artifacts. On the other hand, ST-CGAN and MaskShadow-GAN tend to produce blurry images, introduce artifacts, and often relight the wrong image parts. Our method generates images that are visually similar to those of SP+M-Net. While SP+M-Net is less affected by errors in the shadow masks (shown in the 2nd row), our method generates images with more consistent colors between the areas inside and outside the shadow boundaries (3rd and 4th rows). In all cases, our method preserves almost perfectly the textures beneath the shadows (last row).

4.3 Ablation Studies

We conduct ablation studies to better understand the effect of each proposed component in our framework. Starting from the original model with all the proposed features and losses, we train new models removing the proposed components one at a time. Table 2 summarizes these experiments. The first row shows the results of our model when we do not limit the search spaces of the scaling factor $w$ and the additive constant $b$. In this case, the model collapses and consistently outputs uniformly dark images. Similarly, the model collapses when we omit the boundary loss $\mathcal{L}_{bd}$. We observe that this loss is essential in stabilizing the training, as it prevents Param-Net from outputting consistently high parameter values.

The matting loss $\mathcal{L}_{matte}$ and the adversarial loss $\mathcal{L}_{adv}$ are critical for learning proper shadow removal. We observe that without the matting loss, the model behaves similarly to an image inpainting model: it tends to modify all parts of the image to fool the discriminator. Last, dropping the smoothness loss $\mathcal{L}_{sm}$ only results in a slight drop in shadow removal performance, from 9.7 to 10.2 RMSE on the shadow areas. However, we observe more visible boundary artifacts on the output images without this loss.

Methods Shadow Non-Shadow All
Input Image 40.2 2.6 8.5
Ours w/o limiting search space 47.5 2.9 9.9
Ours w/o $\mathcal{L}_{matte}$ 41.7 3.9 9.8
Ours w/o $\mathcal{L}_{bd}$ 38.7 3.1 9.0
Ours w/o $\mathcal{L}_{sm}$ 10.2 2.8 4.0
Ours w/o $\mathcal{L}_{adv}$ 26.9 2.9 6.8
Ours 9.7 3.0 4.0
Table 2: Ablation studies. We train our network without a certain loss or feature and report the shadow removal performance on the ISTD dataset [47]. The metric is RMSE (the lower, the better). The table shows that all the proposed features of our model are essential for learning shadow removal.

4.4 Video Shadow Removal

Shadow removal in videos is challenging: a video sequence has hundreds of frames with changing shadows, and the problem is even harder for videos with a moving camera, moving objects, and illumination changes.

To better evaluate the performance of shadow removal methods on videos, we collected a set of 8 videos, each containing a static scene without visible moving objects. We cropped these videos to obtain clips in which the only dominant motions are caused by the shadows (either by direct light motion or by motion of the unseen occluders). As can be seen from the top row of Fig. 6, the dataset includes videos containing shadows cast by close-up occluders and far-away occluders, videos with simple to complex shadows, and shadows on various types of backgrounds and materials. Inspired by [3], we propose a “max-min” technique to obtain a single pseudo shadow-free frame for each video: since the camera is static and there is no visible moving object in the frames, the changes in the video are caused by the moving shadows. We first obtain two images $I^{max}$ and $I^{min}$ by taking the maximum and minimum intensity values at each pixel location across the whole video. $I^{max}$ is then the image that contains the shadow-free values of pixels if they ever go out of the shadows. Similarly, their shadowed values, if they ever go into the shadows, are captured in $I^{min}$. Fig. 6 shows these two images for a video named “plant”. From these two images, we can trivially obtain a mask, namely the moving-shadow mask $M_{mov}$, marking the pixels that appear in both the shadow and non-shadow areas of the video:

$M_{mov} = \{\, i : |I^{max}_i - I^{min}_i| > \tau \,\}$ (9)

where $\tau$ is a small threshold. This method allows us to obtain pairs of shadow and non-shadow pixel values inside the moving-shadow mask $M_{mov}$ for free.
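A sketch of this construction (NumPy; the threshold value and the per-pixel averaging over color channels are our assumptions):

```python
import numpy as np

def max_min_decomposition(frames, tau=0.1):
    """Per-pixel max/min over a static-camera video (Sec. 4.4).

    frames: (T, H, W, 3) array with values in [0, 1].
    Returns I_max (pseudo shadow-free values), I_min (shadowed values), and the
    moving-shadow mask M_mov marking pixels that were both lit and shadowed.
    """
    i_max = frames.max(axis=0)
    i_min = frames.min(axis=0)
    m_mov = np.abs(i_max - i_min).mean(axis=-1) > tau   # Eq. (9), per pixel
    return i_max, i_min, m_mov
```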

Figure 6: Examples from our Video Shadow Removal dataset. The dataset consists of videos in which both the scene and the visible objects remain static. The top row shows frames from different videos in our dataset. The second row visualizes our method for obtaining the pseudo shadow-free frames used to evaluate shadow removal.

To measure shadow removal performance, we input the frames of these videos into the shadow removal algorithm and measure the RMSE in the LAB color space between the output frame and $I^{max}$ on the moving-shadow area $M_{mov}$. We compute the RMSE on each video and take the average over videos to measure shadow removal performance on the whole dataset. Table 3 summarizes the performance of our method compared to MaskShadow-GAN [15] and SP+M-Net [22] on these videos. Our method outperforms SP+M-Net and MaskShadow-GAN, reducing the RMSE by 5% and 11% respectively. As our method only needs shadow segmentation masks for training, we use a pre-trained shadow detection model [59] to obtain a set of shadow masks for each video. While these shadow masks are imperfect, fine-tuning our model using this free supervision results in a 10% error reduction, showing the advantage of our training scheme. Fig. 7 visualizes two example shadow removal results for different methods, showing a single input frame of each video. From left to right are the input frame, the shadow removal results of MaskShadow-GAN [15], the results of SP+M-Net [22], the results of our model trained on the ISTD dataset, and the results of our model fine-tuned on each testing video for 1 epoch. The top row shows an example where all methods perform relatively well. Our method seems to achieve a better color balance between the relit pixels and the non-shadow pixels, although there is a visible boundary artifact due to imperfect shadow masks; after 1 epoch of fine-tuning, these artifacts are greatly suppressed. The bottom row shows a challenging case where all methods fail to remove the shadow properly.

Methods Input Frame MaskShadow-GAN [15] SP+M-Net [22] Ours Ours+
RMSE 32.9 23.5 22.2 20.9 18.0
Table 3: Shadow removal results on our proposed Video Shadow Removal dataset. The metric is RMSE (the lower, the better), computed against the pseudo shadow-free frame on the moving-shadow mask. All methods were pre-trained on the ISTD dataset. Ours+ denotes our model fine-tuned for one epoch on each video using the shadow masks generated by a shadow detector [59] pre-trained on the SBU dataset [43].
Figure 7: Shadow removal on videos. We visualize the shadow removal results of different methods on two frames extracted from our video dataset. “Ours+” denotes the results of our model fine-tuned on each testing video for 1 epoch. The top row shows an example where all methods perform relatively well. The bottom row shows a challenging case where all methods fail to remove the shadow properly.

5 Conclusion

We presented a novel patch-based deep-learning model to remove shadows from images. This method can be trained on patches cropped directly from the shadow images, using the shadow segmentation mask as the only supervision signal. This obviates the dependency on paired training data and allows us to train the system on any kind of shadow image. The main contribution of this paper is a set of physics-based constraints that enables the training of this mapping. We have illustrated the effectiveness of our method on the standard ISTD dataset [47] and on our novel Video Shadow Removal dataset. As shadow detection methods mature with the aid of recently proposed shadow detection datasets [48, 16], our method can be trained to remove shadows at a very low annotation cost.

Acknowledgements. This work was partially supported by the Partner University Fund, the SUNY2020 ITSC, and a gift from Adobe. Computational support provided by IACS and a GPU donation from NVIDIA. We thank Kumara Kahatapitiya and Cristina Mata for assistance with the manuscript.

References

  • [1] E. Arbel and H. Hel-Or (2011) Shadow removal using intensity surfaces and texture anchor points. IEEE Transactions on Pattern Analysis and Machine Intelligence 33, pp. 1202–1216. Cited by: §2.
  • [2] Y. Choi, M. Choi, M. Kim, J. Ha, S. Kim, and J. Choo (2017) StarGAN: unified generative adversarial networks for multi-domain image-to-image translation. 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 8789–8797. Cited by: §1, §1.
  • [3] Y. Chuang, D. B. Goldman, B. Curless, D. H. Salesin, and R. Szeliski (2003-07) Shadow matting and compositing. ACM Transactions on Graphics 22 (3), pp. 494–500. Note: Special Issue of the SIGGRAPH 2003 Proceedings Cited by: §1, §2, §3, §4.4.
  • [4] P. Dare (2005-02) Shadow analysis in high-resolution satellite imagery of urban areas. Photogrammetric Engineering and Remote Sensing 71, pp. 169–177. External Links: Document Cited by: §1.
  • [5] B. Ding, C. Long, L. Zhang, and C. Xiao (2019) ARGAN: attentive recurrent generative adversarial network for shadow detection and removal. 2019 IEEE/CVF International Conference on Computer Vision (ICCV), pp. 10212–10221. Cited by: §1, §1, §2.
  • [6] M. S. Drew (2003) Recovery of chromaticity image free from shadows via illumination invariance. In In IEEE Workshop on Color and Photometric Methods in Computer Vision, ICCV’03, pp. 32–39. Cited by: §2.
  • [7] G. Finlayson and M. S. Drew (2001-07) 4-sensor camera calibration for image representation invariant to shading, shadows, lighting, and specularities. In iccv, Vol. 2, pp. 473–480 vol.2. External Links: Document, ISSN Cited by: §2.
  • [8] G. Finlayson, S.D. Hordley, C. Lu, and M.S. Drew (2006) On the removal of shadows from images. pami. Cited by: §1, §2.
  • [9] G. Finlayson, S. D. Hordley, and M. S. Drew (2002) Removing shadows from images. In eccv, ECCV ’02, London, UK, UK, pp. 823–836. External Links: ISBN 3-540-43748-7, Link Cited by: §1, §2.
  • [10] C. Fredembach and G. D. Finlayson (2005) Hamiltonian path based shadow removal. In BMVC, Cited by: §2.
  • [11] H. Gong and D. Cosker (2016) Interactive removal and ground truth for difficult shadow scenes. J. Opt. Soc. Am. A 33 (9), pp. 1798–1811. External Links: Link, Document Cited by: §1, §1, §4.2, Table 1.
  • [12] I. Gulrajani, F. Ahmed, M. Arjovsky, V. Dumoulin, and A. C. Courville (2017) Improved training of wasserstein gans. In Advances in Neural Information Processing Systems, pp. 5767–5777. Cited by: §1.
  • [13] R. Guo, Q. Dai, and D. Hoiem (2011) Single-image shadow detection and removal using paired regions. In cvpr, Cited by: §1.
  • [14] R. Guo, Q. Dai, and D. Hoiem (2012) Paired regions for shadow detection and removal. pami. Cited by: §1, §2, §4.2, §4.2, Table 1.
  • [15] X. Hu, Y. Jiang, C. Fu, and P. Heng (2019) Mask-ShadowGAN: learning to remove shadows from unpaired data. In ICCV, Note: to appear Cited by: §1, §1, Figure 5, §4.2, §4.4, Table 1, Table 3.
  • [16] X. Hu, T. Wang, C. Fu, Y. Jiang, Q. Wang, and P. Heng (2019) Revisiting shadow detection: a new benchmark dataset for complex world. ArXiv abs/1911.06998. Cited by: §5.
  • [17] X. Hu, L. Zhu, C. Fu, J. Qin, and P. Heng (2018) Direction-aware spatial context features for shadow detection. In cvpr, Cited by: §1, §2.
  • [18] P. KaewTrakulPong and R. Bowden (2002) An improved adaptive background mixture model for real- time tracking with shadow detection. Cited by: §1.
  • [19] D. P. Kingma and J. Ba (2015) Adam: A method for stochastic optimization. In iclr, Cited by: §4.1.
  • [20] H. Le, B. Goncalves, D. Samaras, and H. Lynch (2019-06) Weakly labeling the antarctic: the penguin colony case. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, Cited by: §1.
  • [21] H. Le, V. Nguyen, C. Yu, and D. Samaras (2016) Geodesic distance histogram feature for video segmentation. In ACCV, Cited by: §1.
  • [22] H. Le and D. Samaras (2019) Shadow removal via shadow image decomposition. In iccv, Cited by: §1, §1, §1, §2, §3, Figure 5, §4.2, §4.2, §4.4, Table 1, Table 3.
  • [23] H. Le, T. F. Y. Vicente, V. Nguyen, M. Hoai, and D. Samaras (2018) A+D Net: training a shadow detector with adversarial shadow attenuation. In eccv, Cited by: §1, §1.
  • [24] Le,Hieu, Yu,Chen-Ping, Zelinsky,Gregory, and Samaras,Dimitris (2017) Co-localization with category-consistent features and geodesic distance propagation. In ICCV 2017 Workshop on CEFRL: Compact and Efficient Feature Representation and Learning in Computer Vision, Cited by: §1.
  • [25] Y. Li, S. Tang, R. Zhang, Y. Zhang, J. Li, and S. Yan (2019) Asymmetric gan for unpaired image-to-image translation. IEEE Transactions on Image Processing 28, pp. 5881–5896. Cited by: §1.
  • [26] F. Liu and M. Gleicher (2008) Texture-consistent shadow removal. In ECCV, Cited by: §2.
  • [27] H. Liu, X. Gu, and D. Samaras (2019-10) Wasserstein gan with quadratic transport cost. In The IEEE International Conference on Computer Vision (ICCV), Cited by: §1.
  • [28] H. Liu, G. Xianfeng, and D. Samaras (2018) A two-step computation of the exact gan wasserstein distance. In International Conference on Machine Learning, pp. 3165–3174. Cited by: §1.
  • [29] M. Liu, T. Breuel, and J. Kautz (2017) Unsupervised image-to-image translation networks. ArXiv abs/1703.00848. Cited by: §1.
  • [30] L. Mescheder, S. Nowozin, and A. Geiger (2018) Which training methods for gans do actually converge?. In International Conference on Machine Learning, Cited by: §1.
  • [31] T. Miyato, T. Kataoka, M. Koyama, and Y. Yoshida (2018) Spectral normalization for generative adversarial networks. In International Conference on Machine Learning, Cited by: §1.
  • [32] T. Müller and B. Erdnüeß (2019) Brightness correction and shadow removal for video change detection with uavs. In Defense + Commercial Sensing, Cited by: §1.
  • [33] A. Panagopoulos, C. Wang, D. Samaras, and N. Paragios (2010) Estimating shadows with the bright channel cue. CRICV. Cited by: §1.
  • [34] A. Panagopoulos, C. Wang, D. Samaras, and N. Paragios (2013) Simultaneous cast shadows, illumination and geometry inference using hypergraphs. Pattern Analysis and Machine Intelligence, IEEE Transactions on 35 (2), pp. 437–449. External Links: Document, ISSN 0162-8828 Cited by: §1.
  • [35] T. Porter and T. Duff (1984-01) Compositing digital images. siggraph 18 (3). Cited by: §3.
  • [36] A. Prati, I. Mikic, M. M. Trivedi, and R. Cucchiara (2003) Detecting moving shadows: algorithms and evaluation. IEEE Trans. Pattern Anal. Mach. Intell. 25, pp. 918–923. Cited by: §1.
  • [37] L. Qu, J. Tian, S. He, Y. Tang, and R. W. H. Lau (2017) DeshadowNet: a multi-context embedding deep network for shadow removal. In cvpr, Cited by: §1, §1, §1, §2, §4.2, §4.2, Table 1.
  • [38] W. Shiting and Z. Hong (2013-12) Clustering-based shadow edge detection in a single color image. In International Conference on Mechatronic Sciences, Electric Engineering and Computer, pp. 1038–1041. External Links: Document Cited by: §1.
  • [39] Y. Shor and D. Lischinski (2008-04) The shadow meets the mask: pyramid-based shadow removal. Computer Graphics Forum 27 (2), pp. 577–586. Cited by: §1, §2, §3.
  • [40] A. R. Smith and J. F. Blinn (1996) Blue screen matting. In siggraph, Cited by: §3.
  • [41] N. Su, Y. Zhang, S. Tian, Y. Yan, and X. Miao (2016) Shadow detection and removal for occluded object information recovery in urban high-resolution panchromatic satellite images. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing 9, pp. 2568–2582. Cited by: §1.
  • [42] H. Thanh-Tung, T. Tran, and S. Venkatesh (2019) Improving generalization and stability of generative adversarial networks. In International Conference on Learning Representations, Cited by: §1.
  • [43] T. F. Y. Vicente, M. Hoai, and D. Samaras (2016) Noisy label recovery for shadow detection in unfamiliar domains. In cvpr, Cited by: Table 3.
  • [44] T. F. Y. Vicente, M. Hoai, and D. Samaras (2018) Leave-one-out kernel optimization for shadow detection and removal. IEEE Transactions on Pattern Analysis and Machine Intelligence 40 (3), pp. 682–695. Cited by: §1.
  • [45] T. F. Y. Vicente, L. Hou, C. Yu, M. Hoai, and D. Samaras (2016) Large-scale training of shadow detectors with noisily-annotated shadow examples. In eccv, Cited by: §1, §1.
  • [46] T. F. Y. Vicente and Samaras,Dimitris (2014) Single image shadow removal via neighbor-based region relighting. In Proceedings of the European Conference on Computer Vision Workshops, Cited by: §1.
  • [47] J. Wang, X. Li, and J. Yang (2018) Stacked conditional generative adversarial networks for jointly learning shadow detection and shadow removal. In cvpr, Cited by: §1, §1, §1, §2, Figure 5, §4.1, §4.2, §4.2, Table 1, Table 2, §5.
  • [48] T. Wang, X. Hu, Q. Wang, P. Heng, and C. Fu (2020) Instance shadow detection. CVPR. Cited by: §5.
  • [49] S. Wright (2001) Digital compositing for film and video. In Focal Press, Cited by: §3.
  • [50] Q. Wu, W. Zhang, and B. V. K. V. Kumar (2012) Strong shadow removal via patch-based shadow edge detection. 2012 IEEE International Conference on Robotics and Automation, pp. 2177–2182. Cited by: §2.
  • [51] T. Wu, C. Tang, M. S. Brown, and H. Shum (2007-06) Natural shadow matting. ACM Trans. Graph. 26 (2). External Links: ISSN 0730-0301, Link, Document Cited by: §2.
  • [52] T. Wu and C. Tang (2005) A bayesian approach for shadow extraction from a single image. Tenth IEEE International Conference on Computer Vision (ICCV’05) Volume 1 1, pp. 480–487 Vol. 1. Cited by: §2.
  • [53] Q. Yang, K. H. Tan, and N. Ahuja (2012) Shadow removal using bilateral filtering. IEEE Transactions on Image Processing 21, pp. 4361–4368. Cited by: §2, §4.2, Table 1.
  • [54] Z. Yi, H. Zhang, P. Tan, and M. Gong (2017) DualGAN: unsupervised dual learning for image-to-image translation. 2017 IEEE International Conference on Computer Vision (ICCV), pp. 2868–2876. Cited by: §1.
  • [55] L. Zhang, C. Long, X. Zhang, and C. Xiao (2020) RIS-gan: explore residual and illumination with generative adversarial networks for shadow removal. In AAAI Conference on Artificial Intelligence (AAAI). Cited by: §1, §2.
  • [56] W. Zhang, X. Zhao, J. Morvan, and L. Chen (2019) Improving shadow suppression for illumination robust face recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence 41, pp. 611–624. Cited by: §1.
  • [57] Q. Zheng, X. Qiao, Y. Cao, and R. W. H. Lau (2019) Distraction-aware shadow detection. 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 5162–5171. Cited by: §1.
  • [58] J. Zhu, T. Park, P. Isola, and A. A. Efros (2017) Unpaired image-to-image translation using cycle-consistent adversarial networks. In Computer Vision (ICCV), 2017 IEEE International Conference on, Cited by: §1.
  • [59] L. Zhu, Z. Deng, X. Hu, C. Fu, X. Xu, J. Qin, and P. Heng (2018) Bidirectional feature pyramid network with recurrent attention residual modules for shadow detection. In eccv, Cited by: §1, §3.2, §3, §4.2, §4.4, Table 3.