Temporal Feature Warping for Video Shadow Detection

07/29/2021
by   Shilin Hu, et al.

While single image shadow detection has been improving rapidly in recent years, video shadow detection remains a challenging task due to data scarcity and the difficulty of modelling temporal consistency. The current video shadow detection method achieves this goal via co-attention, which mostly exploits information that is temporally coherent but is not robust in detecting moving shadows and small shadow regions. In this paper, we propose a simple but powerful method to better aggregate information temporally. We use an optical flow based warping module to align and then combine features between frames. We apply this warping module across multiple deep-network layers to retrieve information from neighboring frames, including both local details and high-level semantic information. We train and test our framework on the ViSha dataset. Experimental results show that our model outperforms the state-of-the-art video shadow detection method by a 28% reduction in Balanced Error Rate (BER).



1 Introduction

Shadows appear in most natural images. Simply detecting shadows benefits many computer vision tasks such as image classification [Filippi and İnci Güneralp(2013)], image segmentation [Xu et al.(2019)Xu, Chen, Su, Ji, Xu, Memon, and Zhou] and object tracking [Saravanakumar et al.(2010)Saravanakumar, Vadivel, and Saneem Ahmed, Mohanapriya and Mahesh(2017)]. Therefore, shadow detection has drawn a lot of interest in recent years, especially with the rapid development of deep-learning-based methods [Nguyen et al.(2017)Nguyen, Yago Vicente, Zhao, Hoai, and Samaras, Zheng et al.(2019)Zheng, Qiao, Cao, and Lau, Wang et al.(2018)Wang, Li, and Yang, Ding et al.(2019)Ding, Long, Zhang, and Xiao, Le et al.(2018)Le, Vicente, Nguyen, Hoai, and Samaras]. However, recent shadow detection works mostly deal with shadows in single images, while video shadow detection remains an open question despite many potential applications [Wang et al.(2020)Wang, Curless, and Seitz, Le and Samaras(2020a), Le and Samaras(2020b), Le et al.(2016)Le, Nguyen, Yu, and Samaras].

A shadow video typically consists of hundreds of frames that contain shadows varying in shape and intensity. The detection problem is compounded by video-specific issues such as motion blur. Thus, simply applying image shadow detection methods frame-by-frame to a video often yields inconsistent predictions (see Fig 1.d). Instead, a common strategy for dealing with video data is to leverage temporal information across video frames [Zhu et al.(2017)Zhu, Xiong, Dai, Yuan, and Wei, Yan et al.(2019)Yan, Li, Xie, Li, Wang, Chen, and Lin]. Here the difficulty lies in how to incorporate information across frames when there is spatial misalignment between them due to the movements of objects and cameras.

In this paper, we propose a straightforward but powerful deep-learning based method to obtain a rich feature representation for video shadow detection. We focus on dealing with the temporal misalignment between frame representations. Our strategy is simple: we align the features across frames by optical flow and then linearly combine them to obtain the per-frame final feature representation. Optical flow is easy to obtain [Liu et al.(2020)Liu, Zhang, He, Liu, Wang, Tai, Luo, Wang, Li, and Huang] and can effectively align spatial content, including small details. As this warping is computationally efficient, we can apply it on multiple layers of our network.

We report that this simple optical-flow based feature aggregation scheme works surprisingly well for shadow detection in videos. We train our model in an end-to-end fashion on the ViSha dataset [Chen et al.(2021)Chen, Wan, Zhu, Shen, Fu, Liu, and Qin]. Our method achieves state-of-the-art video shadow detection performance, outperforming the previous method [Chen et al.(2021)Chen, Wan, Zhu, Shen, Fu, Liu, and Qin] by a 28% BER reduction. Fig 1 illustrates the effect of our proposed temporal feature warping method. As can be seen, the features of two visually similar frames can be wildly different (column c), which results in inconsistent outputs (column d). Our method warps and then combines the features, making them consistent across frames (column e), and finally outputs stable and temporally consistent results (last column).

Figure 1: Motivation of our work. (a) shows a pair of consecutive video frames, (b) shows the ground truth shadow masks, (c) shows the original extracted high level feature maps. Even though the difference between images is small, the difference between features is high, (d) shows shadow mask prediction using original features, (e) is combined features obtained by our method, (f) shows shadow prediction using combined features. Here we demonstrate the effectiveness of our method; the combined feature maps preserve correspondence between frames and produce more stable predictions.

2 Related Work

Single image shadow detection is a well-studied topic. Earlier research on image shadow detection mostly focuses on spectral or spatial features of images such as chromaticity, physical properties, geometry, and texture [Sanin et al.(2012)Sanin, Sanderson, and Lovell]. In contrast, recent shadow detection methods show tremendous success thanks to the rapid development of deep learning. Le et al. [Le et al.(2018)Le, Vicente, Nguyen, Hoai, and Samaras] propose to train a shadow detection network together with a shadow attenuation network that generates adversarial training examples. Hu et al. [Hu et al.(2018)Hu, Zhu, Fu, Qin, and Heng] propose a direction-aware feature extractor for aggregating spatial information. Zhu et al. [Zhu et al.(2018)Zhu, Deng, Hu, Fu, Xu, Qin, and Heng] utilize recurrent attention residual modules to fully aggregate the global and local contexts in different layers of the CNN to detect shadows. Chen et al. [Chen et al.(2020)Chen, Zhu, Wan, Wang, Feng, and Heng] further improve detection performance by introducing a multi-task mean teacher architecture that leverages unlabeled data. However, image methods trained on image datasets such as SBU [Vicente et al.(2016b)Vicente, Hou, Yu, Hoai, and Samaras] and CUHK-Shadow [Hu et al.(2021)Hu, Wang, Fu, Jiang, Wang, and Heng] do not generalize well to videos due to the lack of temporal consistency.

Video shadow detection is a classic problem in its own right. Earlier work [Jacques et al.(2005)Jacques, Jung, and Musse, Shi and Liu(2019)] focuses on spectral and spatial features, which depend heavily on the quality of the data. Without temporal constraints, these methods often output inconsistent predictions across frames. The first large-scale video shadow detection dataset was proposed by Chen et al. [Chen et al.(2021)Chen, Wan, Zhu, Shen, Fu, Liu, and Qin]. The dataset contains 120 fully-annotated videos with a total of 11,685 frames. They also proposed the first deep-learning-based method for video shadow detection, in which a dual gated co-attention module is used to focus on common high-level features between frames. This co-attention module allows their network to filter out temporally inconsistent information from each frame representation to obtain more stable and consistent results. However, this mechanism makes the method less sensitive to shadow areas that change substantially across frames due to temporal misalignment. By contrast, our method performs temporal alignment before combining features. Moreover, our temporal alignment module can be applied at all layers of the network, allowing us to pick up even small shadow areas, whereas co-attention can only be applied on high-level feature maps due to its computational cost.

Optical flow is used in various high-level video tasks [Shin et al.(2005)Shin, Kim, Kang, Lee, Paik, Abidi, and Abidi, Zhong et al.(2013)Zhong, Liu, Ren, Zhang, and Ren, Buades et al.(2016)Buades, Lisani, and Miladinović]. Recent deep-learning-based methods for optical flow estimation [Ilg et al.(2017)Ilg, Mayer, Saikia, Keuper, Dosovitskiy, and Brox, Sun et al.(2018)Sun, Yang, Liu, and Kautz] are fairly accurate and efficient at inference. However, the most popular datasets used to train optical flow estimation models, e.g., MPI Sintel [Butler et al.(2012)Butler, Wulff, Stanley, and Black] and KITTI 2015 [Menze and Geiger(2015)], differ substantially from shadow detection datasets [Wang et al.(2018)Wang, Li, and Yang, Vicente et al.(2016a)Vicente, Hoai, and Samaras, Hu et al.(2021)Hu, Wang, Fu, Jiang, Wang, and Heng]. To compensate for this domain shift, we use a simple module to refine the optical flow before using it to warp our features. Liu et al. [Liu et al.(2020)Liu, Zhang, He, Liu, Wang, Tai, Luo, Wang, Li, and Huang] propose an unsupervised dense optical flow estimation network with better cross-dataset generalization capability by learning from abundant augmentations of the training data.

3 Method

3.1 Overview

The overall structure of our framework is illustrated in Fig 2. Our model consists of two branches with identical architectures. The input of our model is an RGB video frame pair. The two images are fed to the two branches of the model to extract two sets of feature maps across three different layers of the network. Throughout the network, these features are progressively enriched by information from the features of the other image via temporal warping and linear combination.

In particular, we first obtain two dense optical flow fields from the two images using ARFlow [Liu et al.(2020)Liu, Zhang, He, Liu, Wang, Tai, Luo, Wang, Li, and Huang]. Following [Gadde et al.(2017)Gadde, Jampani, and Gehler, Li et al.(2021)Li, Zhao, He, Zhu, and Liu], we train a small module to refine these optical flow fields to better suit the feature warping task, depicted as FlowCNN in Fig 2. At multiple layers of the network, we combine the features of each branch with the aligned features from the other. A flow-guided warp (FGwarp) module is used to first warp the frame feature representation to spatially align it with the content of the other frame and then linearly combine the original features with the warped features from the other frame. This simple feature aggregation scheme ensures consistency between the two frame representations. The combined features of different levels are input through a shadow refinement module to generate the final shadow mask prediction.

Figure 2: Simplified network architecture. Our network consists of two branches with identical architecture. Each branch predicts the shadow mask for the corresponding input frame. Here we only show the data flow from the “Frame t” branch to the “Frame t+k” branch for simplicity. In practice, feature warping and combination goes both ways. At multiple layers of the network, we feed the outputs of both branches into a flow-guided warp (FGwarp) module (details in Fig.3) to obtain a temporally enriched feature (colored as orange). We use a shadow feature refinement module to predict the shadow mask from all combined features.

3.2 Shadow detector network

Our network consists of two branches with identical architecture. For each branch, we use MobileNet V2 [Sandler et al.(2018)Sandler, Howard, Zhu, Zhmoginov, and Chen] as the backbone feature extractor. Each branch consists of a series of inverted residual bottlenecks (IRBs) [Sandler et al.(2018)Sandler, Howard, Zhu, Zhmoginov, and Chen]. We feed the feature maps of an early, a middle, and the last block of each branch, which encode the low-, mid-, and high-level features respectively, into the FGwarp module to obtain the corresponding combined features. Finally, these three combined feature maps are input to a detail enhancement module [Hu et al.(2021)Hu, Wang, Fu, Jiang, Wang, and Heng] to refine the features and predict the final shadow mask.

3.3 Optical flow estimation

We use a pre-trained ARFlow [Liu et al.(2020)Liu, Zhang, He, Liu, Wang, Tai, Luo, Wang, Li, and Huang] model to produce optical flow for our two input frames. This network is trained on MPI Sintel [Butler et al.(2012)Butler, Wulff, Stanley, and Black], which differs in object types and occlusions from our shadow video data. Thus, we train a flow refinement module, FlowCNN, that adapts the output of ARFlow to our domain so that it better fits the feature warping task. The input to the FlowCNN consists of the optical flow from ARFlow, the two input frames, and the pixel-wise difference of the two frames. The network consists of 4 convolutional layers, in which the first two layers are each followed by a BatchNorm and a ReLU layer. The output of the third layer is then concatenated with the original flow and passed to the last convolution layer to obtain the refined optical flow. We train our proposed FlowCNN together with the shadow detector network in an end-to-end manner.
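The description above can be sketched as a small PyTorch module. This is a minimal sketch, not the authors' exact implementation: the hidden channel widths (32, 64, 16) and the 3x3 kernel size are assumptions not stated in the text; with RGB frames the input has 2 + 3 + 3 + 3 = 11 channels.

```python
import torch
import torch.nn as nn

class FlowCNN(nn.Module):
    """Sketch of the flow refinement module of Sec. 3.3.

    Input: ARFlow optical flow (2 ch), the two RGB frames (3 ch each),
    and their pixel-wise difference (3 ch) -> 11 channels total.
    Hidden widths 32/64/16 and 3x3 kernels are assumptions.
    """
    def __init__(self):
        super().__init__()
        # first two conv layers are followed by BatchNorm + ReLU
        self.conv1 = nn.Sequential(nn.Conv2d(11, 32, 3, padding=1),
                                   nn.BatchNorm2d(32), nn.ReLU(inplace=True))
        self.conv2 = nn.Sequential(nn.Conv2d(32, 64, 3, padding=1),
                                   nn.BatchNorm2d(64), nn.ReLU(inplace=True))
        self.conv3 = nn.Conv2d(64, 16, 3, padding=1)
        # last layer sees the third layer's output concatenated with the flow
        self.conv4 = nn.Conv2d(16 + 2, 2, 3, padding=1)

    def forward(self, flow, frame_a, frame_b):
        x = torch.cat([flow, frame_a, frame_b, frame_a - frame_b], dim=1)
        x = self.conv3(self.conv2(self.conv1(x)))
        return self.conv4(torch.cat([x, flow], dim=1))
```

The refined flow keeps the same 2-channel, full-resolution shape as the ARFlow input, so it can drop into the FGwarp module unchanged.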

3.4 Flow warping and combination

Figure 3: FGwarp module. The feature map from frame t is warped using the flow field computed from our FlowCNN. The warped feature is then linearly combined with the original feature of frame t+k to form the aggregated features. Note that we resize the flow field to the size of the input feature maps.

We enforce consistency between frame feature representations by mutually exchanging their intermediate features. Since motion in video causes spatial misalignment between the content of consecutive frames, we first need to apply an optical flow based feature warping to align the features. A flow-guided warp (FGwarp) module is defined for this task. Fig 3 illustrates this scheme for transferring features from frame t to frame t+k.

Given a pair of feature maps f_t and f_{t+k}, and the refined optical flow F from frame t+k to frame t, the feature f_t can be warped to spatially align with the contents of f_{t+k}. We perform this feature warping for each channel of the feature map separately. The value at a pixel p, channel c of the warped feature f_{t→t+k} can be computed as follows:

    f_{t→t+k}(p, c) = Σ_q G(q, p + F(p)) f_t(q, c)    (1)

where q enumerates all spatial locations in the feature map and G denotes the bi-linear interpolation kernel.

Given the refined optical flow F and the feature map f_t, we can propagate the features from frame t to get the aligned f_{t→t+k}:

    f_{t→t+k} = W(f_t, F)    (2)

The function W is implemented by applying Eq. 1 on the different feature map levels. Then, the combined feature map can be computed as follows:

    f̂_{t+k} = a ⊗ f_{t+k} + b ⊗ f_{t→t+k}    (3)

where a and b represent the per-channel coefficients of the linear combination that combines the two features, and ⊗ represents channel-wise scalar multiplication. The resulting f̂_{t+k} is then passed to the following layers in the feature extractor backbone.
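A minimal NumPy sketch of the warping and combination steps (Eqs. 1 and 3). The flow convention used here (for each target pixel, the flow gives the displacement to its source location, with channel 0 = x and channel 1 = y) is an assumption for illustration:

```python
import numpy as np

def warp_features(feat, flow):
    """Bilinear warp (Eq. 1): sample feat at p + flow(p), per channel.

    feat: (C, H, W) feature map of frame t
    flow: (2, H, W) refined flow; flow[0] = dx, flow[1] = dy (assumed)
    returns a (C, H, W) feature aligned with the other frame
    """
    C, H, W = feat.shape
    ys, xs = np.mgrid[0:H, 0:W].astype(np.float64)
    x = xs + flow[0]
    y = ys + flow[1]
    # integer corners of the bilinear kernel, clamped to the map border
    x0 = np.clip(np.floor(x).astype(int), 0, W - 1)
    y0 = np.clip(np.floor(y).astype(int), 0, H - 1)
    x1 = np.clip(x0 + 1, 0, W - 1)
    y1 = np.clip(y0 + 1, 0, H - 1)
    wx = np.clip(x, 0, W - 1) - x0
    wy = np.clip(y, 0, H - 1) - y0
    return ((1 - wy) * (1 - wx) * feat[:, y0, x0]
            + (1 - wy) * wx * feat[:, y0, x1]
            + wy * (1 - wx) * feat[:, y1, x0]
            + wy * wx * feat[:, y1, x1])

def combine(feat_own, feat_warped, a, b):
    """Per-channel linear combination (Eq. 3): a ⊗ own + b ⊗ warped."""
    return a[:, None, None] * feat_own + b[:, None, None] * feat_warped
```

With a zero flow field the warp is the identity, and setting a = 1 and b = 0 recovers the original feature, i.e., no temporal mixing.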

3.5 Training and Inference

We implement our framework in PyTorch. The feature extractor backbone is initialized with MobileNet V2 pre-trained on ImageNet [Deng et al.(2009)Deng, Dong, Socher, Li, Li, and Fei-Fei], while the other components are trained from scratch. We optimize the whole network using stochastic gradient descent (SGD) with momentum and weight decay. Training is end-to-end, with the objective of minimizing the mean squared error (MSE) between the ground-truth and predicted shadow masks. The initial learning rate is decayed with the poly strategy [Liu et al.(2015)Liu, Rabinovich, and Berg], and the model is trained for a fixed number of iterations with all input frames resized to a fixed resolution. The coefficient vectors a and b of Eq. 3 are initialized to all-ones and all-zeros respectively, i.e., the temporal warping is not enforced at the start. k is set to 1 in training, i.e., adjacent frames are used as input pairs to train our model.

For inference, we resize the inputs in the same manner. We predict shadow masks for each pair of adjacent frames. Thus, each frame, except the first and last one, is used in two different inference passes. The final shadow mask of each frame is the average of the frame's two output shadow masks.
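The per-frame averaging can be sketched as follows; `predict_pair` is a hypothetical stand-in for the two-branch network, returning one mask per input frame:

```python
import numpy as np

def infer_video(frames, predict_pair):
    """Run every adjacent pair through the network and average per frame.

    predict_pair(f_i, f_j) -> (mask_i, mask_j); interior frames appear
    in two passes, so their two masks are averaged, while the first and
    last frames appear in only one pass.
    """
    n = len(frames)
    sums = [None] * n
    counts = [0] * n
    for i in range(n - 1):
        m_i, m_j = predict_pair(frames[i], frames[i + 1])
        for idx, m in ((i, m_i), (i + 1, m_j)):
            sums[idx] = m if sums[idx] is None else sums[idx] + m
            counts[idx] += 1
    return [s / c for s, c in zip(sums, counts)]
```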

4 Experiments and Results

4.1 Evaluation datasets and metrics

Benchmark dataset We use the ViSha dataset [Chen et al.(2021)Chen, Wan, Zhu, Shen, Fu, Liu, and Qin] to evaluate our proposed method. The ViSha dataset has 4788 frames from 50 videos for training and 6897 frames from 70 videos for testing. All methods are trained on the training set and evaluated on the testing set for a fair comparison.

Evaluation metric We employ the commonly-used balanced error rate (BER) to evaluate shadow detection performance, which is defined as: BER = (1 - (1/2)(TP/(TP+FN) + TN/(TN+FP))) × 100, where TP, TN, FP, and FN are the total numbers of true positive, true negative, false positive, and false negative pixels respectively. Since shadow pixels are usually a minority in natural images, the BER is less biased than mean pixel accuracy. In general, a lower BER indicates better shadow detection performance. We also provide separate mean pixel error rates for the shadow and non-shadow classes.
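The metric is straightforward to compute from binary masks; a sketch that also returns the per-class error rates reported alongside BER:

```python
import numpy as np

def balanced_error_rate(pred, gt):
    """BER = (1 - 0.5 * (TP/(TP+FN) + TN/(TN+FP))) * 100.

    pred, gt: boolean arrays where True marks shadow pixels.
    Returns (BER, shadow error %, non-shadow error %).
    """
    tp = np.sum(pred & gt)
    tn = np.sum(~pred & ~gt)
    fp = np.sum(pred & ~gt)
    fn = np.sum(~pred & gt)
    shadow_err = 100.0 * (1.0 - tp / (tp + fn))      # missed shadow pixels
    non_shadow_err = 100.0 * (1.0 - tn / (tn + fp))  # false shadow pixels
    return 0.5 * (shadow_err + non_shadow_err), shadow_err, non_shadow_err
```

Because both class error rates are weighted equally, a detector that rarely predicts shadow (low non-shadow error, high shadow error) is still penalized, which is why BER is preferred over plain pixel accuracy here.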

4.2 Comparison with the state-of-the-art

There is only one prior CNN-based video shadow detection method, TVSD [Chen et al.(2021)Chen, Wan, Zhu, Shen, Fu, Liu, and Qin]. In addition, we compare our method with state-of-the-art image shadow detection methods including BDRAR [Zhu et al.(2018)Zhu, Deng, Hu, Fu, Xu, Qin, and Heng], MTMT [Chen et al.(2020)Chen, Zhu, Wan, Wang, Feng, and Heng] and FSDNet [Hu et al.(2021)Hu, Wang, Fu, Jiang, Wang, and Heng]. We obtain the results of TVSD from their provided pre-trained model. We train BDRAR, MTMT, and FSDNet on the ViSha dataset using their official source code with default settings.

Method BER Shadow Non-shadow
BDRAR[Zhu et al.(2018)Zhu, Deng, Hu, Fu, Xu, Qin, and Heng] 13.34 20.80 5.89
MTMT[Chen et al.(2020)Chen, Zhu, Wan, Wang, Feng, and Heng] 14.55 21.15 7.94
FSDNet[Hu et al.(2021)Hu, Wang, Fu, Jiang, Wang, and Heng] 14.59 18.88 10.30
TVSD[Chen et al.(2021)Chen, Wan, Zhu, Shen, Fu, Liu, and Qin] 16.76 31.78 1.75
Ours 12.02 11.88 12.17
Table 1: Comparison with the state-of-the-art shadow detection methods on the ViSha [Chen et al.(2021)Chen, Wan, Zhu, Shen, Fu, Liu, and Qin] dataset. Both Balanced Error Rate (BER) and per class error rates are shown. Best performances are printed in bold.

4.2.1 Quantitative results

Table 1 reports the performance of all methods on the ViSha dataset. Our method performs best on overall BER: we obtain a 10% error reduction and an 18% error reduction compared with BDRAR and FSDNet, respectively. TVSD has the best performance on non-shadow error. However, this is mainly because the method is insensitive to shadow areas, especially fast-changing shadows and shadows with small areas. For the shadow areas, our method achieves the lowest error rate. The results show that leveraging temporal information in intermediate representations can significantly boost shadow detection performance for videos.

Figure 4: Comparison of shadow predictions by selected methods. (a) shows the input images, (b) is the ground truth shadow masks and (c)-(g) are shadow masks predicted by our method and BDRAR [Zhu et al.(2018)Zhu, Deng, Hu, Fu, Xu, Qin, and Heng], MTMT [Chen et al.(2020)Chen, Zhu, Wan, Wang, Feng, and Heng], FSDNet [Hu et al.(2021)Hu, Wang, Fu, Jiang, Wang, and Heng] and TVSD [Chen et al.(2021)Chen, Wan, Zhu, Shen, Fu, Liu, and Qin].
Figure 5: Video shadow detection results. Comparison between our proposed method, FSDNet and TVSD. First row shows the input frames, second row shows the ground truth shadow masks and the bottom two are the results of comparison methods.

4.2.2 Qualitative results

In Fig 4, we show some shadow detection results of our method in comparison with other methods. In the first row, all methods produce fairly accurate shadow masks with clear boundaries between shadow and non-shadow areas. Compared with FSDNet, our method generates shadow masks with smoother boundaries and without disjoint areas. Compared with TVSD, our method captures more shadow details, as shown in the last two rows. Fig 5 shows the performance of our method on three consecutive frames of two example videos. As can be seen, TVSD is not robust in detecting moving shadows and not sensitive to small shadow areas. By contrast, our method predicts more accurate shadow masks in all cases. As shown in the right three columns, only our method is able to detect the fast-moving shadow regions.

4.2.3 Failure cases

Figure 6: Failure cases. Failure cases caused by non-shadow dark objects and limited training data diversity. (a) shows the input image, (b) shows the ground truth and (c) shows the shadow mask predicted by our method.

Some failure cases of our method are shown in Fig 6. Many of them are caused by dark objects being incorrectly classified as shadows. Dark objects are naturally difficult for shadow detection, especially when a dark object occupies most of the scene. Correctly classifying these cases requires contextual understanding of the scene as well as sufficient training data. Note that the training set of the ViSha dataset contains only 50 videos; improving the generalization of models trained on such a small dataset is challenging.

4.3 Application on Video Shadow Removal

Shadow masks can be further used to remove shadows and create shadow-free images. We test our predicted shadow masks with the model proposed in [Le and Samaras(2019), Le and Samaras(2020b)], which takes the original image and a shadow mask as input and produces the shadow-free version of the image. Fig 7 compares shadow removal driven by our predicted shadow masks with removal driven by the ground truth masks. The results show that our video shadow detection method yields reasonable shadow-free images and can be used downstream to create shadow-free videos.

Figure 7: Application on shadow removal model. Shadow-free image generation using shadow masks predicted by our method versus the ground truth. (a) shows the input image, (b) shows the ground truth shadow mask, (c) shows the shadow removal results using ground truth, (d) shows the shadow mask predicted by our method and (e) shows the results using our shadow mask.

5 Summary

Large-scale video shadow detection is still at an early stage. In this paper, we show that a basic and simple idea can be extremely effective for the task. We deploy an optical-flow based feature warping and combination scheme to enforce correspondence between representations. Highly correlated intermediate representations lead to improved shadow prediction accuracy and consistency in videos. In our experiments, the model is able to handle large shadow appearance changes, capture small shadow regions, and outperform the state-of-the-art methods on the existing video dataset. We believe our method will serve as an important stepping stone for future work.

References

  • [Buades et al.(2016)Buades, Lisani, and Miladinović] Antoni Buades, Jose-Luis Lisani, and Marko Miladinović. Patch-based video denoising with optical flow estimation. IEEE Transactions on Image Processing, 25(6):2573–2586, 2016.
  • [Butler et al.(2012)Butler, Wulff, Stanley, and Black] D. J. Butler, J. Wulff, G. B. Stanley, and M. J. Black. A naturalistic open source movie for optical flow evaluation. In A. Fitzgibbon et al., editors, European Conf. on Computer Vision (ECCV), Part IV, LNCS 7577, pages 611–625. Springer-Verlag, October 2012.
  • [Chen et al.(2020)Chen, Zhu, Wan, Wang, Feng, and Heng] Zhihao Chen, Lei Zhu, Liang Wan, Song Wang, Wei Feng, and Pheng-Ann Heng. A multi-task mean teacher for semi-supervised shadow detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), June 2020.
  • [Chen et al.(2021)Chen, Wan, Zhu, Shen, Fu, Liu, and Qin] Zhihao Chen, Liang Wan, Lei Zhu, Jia Shen, Huazhu Fu, Wennan Liu, and Jing Qin. Triple-cooperative video shadow detection. In CVPR, 2021.
  • [Deng et al.(2009)Deng, Dong, Socher, Li, Li, and Fei-Fei] Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. In 2009 IEEE Conference on Computer Vision and Pattern Recognition, pages 248–255, 2009.
  • [Ding et al.(2019)Ding, Long, Zhang, and Xiao] Bin Ding, Chengjiang Long, Ling Zhang, and Chunxia Xiao. ARGAN: Attentive recurrent generative adversarial network for shadow detection and removal. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), October 2019.
  • [Filippi and İnci Güneralp(2013)] Anthony M. Filippi and İnci Güneralp. Influence of shadow removal on image classification in riverine environments. Opt. Lett., 38(10):1676–1678, May 2013.
  • [Gadde et al.(2017)Gadde, Jampani, and Gehler] Raghudeep Gadde, Varun Jampani, and Peter V. Gehler. Semantic video cnns through representation warping. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), Oct 2017.
  • [Hu et al.(2018)Hu, Zhu, Fu, Qin, and Heng] Xiaowei Hu, Lei Zhu, Chi-Wing Fu, Jing Qin, and Pheng-Ann Heng. Direction-aware spatial context features for shadow detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2018.
  • [Hu et al.(2021)Hu, Wang, Fu, Jiang, Wang, and Heng] Xiaowei Hu, Tianyu Wang, Chi-Wing Fu, Yitong Jiang, Qiong Wang, and Pheng-Ann Heng. Revisiting shadow detection: A new benchmark dataset for complex world. IEEE Transactions on Image Processing, 30:1925–1934, 2021.
  • [Ilg et al.(2017)Ilg, Mayer, Saikia, Keuper, Dosovitskiy, and Brox] Eddy Ilg, Nikolaus Mayer, Tonmoy Saikia, Margret Keuper, Alexey Dosovitskiy, and Thomas Brox. Flownet 2.0: Evolution of optical flow estimation with deep networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), July 2017.
  • [Jacques et al.(2005)Jacques, Jung, and Musse] J.C.S. Jacques, C.R. Jung, and S.R. Musse. Background subtraction and shadow detection in grayscale video sequences. In XVIII Brazilian Symposium on Computer Graphics and Image Processing (SIBGRAPI’05), pages 189–196, 2005.
  • [Le and Samaras(2019)] Hieu Le and Dimitris Samaras. Shadow removal via shadow image decomposition. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), October 2019.
  • [Le and Samaras(2020a)] Hieu Le and Dimitris Samaras. From shadow segmentation to shadow removal. In ECCV, 2020a.
  • [Le and Samaras(2020b)] Hieu Le and Dimitris Samaras. Physics-based shadow image decomposition for shadow removal, 2020b.
  • [Le et al.(2016)Le, Nguyen, Yu, and Samaras] Hieu Le, Vu Nguyen, Chen-Ping Yu, and D. Samaras. Geodesic distance histogram feature for video segmentation. ACCV, 2016.
  • [Le et al.(2018)Le, Vicente, Nguyen, Hoai, and Samaras] Hieu Le, Tomas F. Yago Vicente, Vu Nguyen, Minh Hoai, and Dimitris Samaras. A+d net: Training a shadow detector with adversarial shadow attenuation. In Proceedings of the European Conference on Computer Vision (ECCV), September 2018.
  • [Li et al.(2021)Li, Zhao, He, Zhu, and Liu] Jiangyun Li, Yikai Zhao, Xingjian He, Xinxin Zhu, and Jing Liu. Dynamic Warping Network for Semantic Video Segmentation. Complexity, 2021:1–10, February 2021.
  • [Liu et al.(2020)Liu, Zhang, He, Liu, Wang, Tai, Luo, Wang, Li, and Huang] Liang Liu, Jiangning Zhang, Ruifei He, Yong Liu, Yabiao Wang, Ying Tai, Donghao Luo, Chengjie Wang, Jilin Li, and Feiyue Huang. Learning by analogy: Reliable supervision from transformations for unsupervised optical flow estimation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), June 2020.
  • [Liu et al.(2015)Liu, Rabinovich, and Berg] Wei Liu, Andrew Rabinovich, and Alexander C. Berg. Parsenet: Looking wider to see better, 2015.
  • [Menze and Geiger(2015)] Moritz Menze and Andreas Geiger. Object scene flow for autonomous vehicles. In Conference on Computer Vision and Pattern Recognition (CVPR), 2015.
  • [Mohanapriya and Mahesh(2017)] D. Mohanapriya and K. Mahesh. A video target tracking using shadow suppression and feature extraction. In 2017 International Conference on Information Communication and Embedded Systems (ICICES), pages 1–6, 2017.
  • [Nguyen et al.(2017)Nguyen, Yago Vicente, Zhao, Hoai, and Samaras] Vu Nguyen, Tomas F. Yago Vicente, Maozheng Zhao, Minh Hoai, and Dimitris Samaras. Shadow detection with conditional generative adversarial networks. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), Oct 2017.
  • [Sandler et al.(2018)Sandler, Howard, Zhu, Zhmoginov, and Chen] Mark Sandler, Andrew Howard, Menglong Zhu, Andrey Zhmoginov, and Liang-Chieh Chen. Mobilenetv2: Inverted residuals and linear bottlenecks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2018.
  • [Sanin et al.(2012)Sanin, Sanderson, and Lovell] Andres Sanin, Conrad Sanderson, and Brian C. Lovell. Shadow detection: A survey and comparative evaluation of recent methods. Pattern Recognition, 45(4):1684–1695, 2012.
  • [Saravanakumar et al.(2010)Saravanakumar, Vadivel, and Saneem Ahmed] S. Saravanakumar, A. Vadivel, and C.G Saneem Ahmed. Multiple human object tracking using background subtraction and shadow removal techniques. In 2010 International Conference on Signal and Image Processing, pages 79–84, 2010.
  • [Shi and Liu(2019)] Hang Shi and Chengjun Liu. Moving cast shadow detection in video based on new chromatic criteria and statistical modeling. In 2019 18th IEEE International Conference On Machine Learning And Applications (ICMLA), pages 196–201, 2019.
  • [Shin et al.(2005)Shin, Kim, Kang, Lee, Paik, Abidi, and Abidi] Jeongho Shin, Sangjin Kim, Sangkyu Kang, Seong-Won Lee, Joonki Paik, Besma Abidi, and Mongi Abidi. Optical flow-based real-time object tracking using non-prior training active feature model. Real-Time Imaging, 11(3):204–218, 2005. ISSN 1077-2014. Special Issue on Video Object Processing.
  • [Sun et al.(2018)Sun, Yang, Liu, and Kautz] Deqing Sun, Xiaodong Yang, Ming-Yu Liu, and Jan Kautz. Pwc-net: Cnns for optical flow using pyramid, warping, and cost volume. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2018.
  • [Vicente et al.(2016a)Vicente, Hoai, and Samaras] Tomas F. Yago Vicente, Minh Hoai, and Dimitris Samaras. Noisy label recovery for shadow detection in unfamiliar domains. In CVPR, 2016a.
  • [Vicente et al.(2016b)Vicente, Hou, Yu, Hoai, and Samaras] Tomás F Yago Vicente, Le Hou, Chen-Ping Yu, Minh Hoai, and Dimitris Samaras. Large-scale training of shadow detectors with noisily-annotated shadow examples. In European Conference on Computer Vision, pages 816–832. Springer, 2016b.
  • [Wang et al.(2018)Wang, Li, and Yang] Jifeng Wang, Xiang Li, and Jian Yang. Stacked conditional generative adversarial networks for jointly learning shadow detection and shadow removal. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2018.
  • [Wang et al.(2020)Wang, Curless, and Seitz] Yifan Wang, Brian Curless, and S. Seitz. People as scene probes. In ECCV, 2020.
  • [Xu et al.(2019)Xu, Chen, Su, Ji, Xu, Memon, and Zhou] Weiyue Xu, Huan Chen, Qiong Su, Changying Ji, Weidi Xu, Muhammad-Sohail Memon, and Jun Zhou. Shadow detection and removal in apple image segmentation under natural light conditions using an ultrametric contour map. Biosystems Engineering, 184:142–154, 2019. ISSN 1537-5110.
  • [Yan et al.(2019)Yan, Li, Xie, Li, Wang, Chen, and Lin] Pengxiang Yan, Guanbin Li, Yuan Xie, Zhen Li, Chuan Wang, Tianshui Chen, and Liang Lin. Semi-supervised video salient object detection using pseudo-labels. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), October 2019.
  • [Zheng et al.(2019)Zheng, Qiao, Cao, and Lau] Quanlong Zheng, Xiaotian Qiao, Ying Cao, and Rynson W.H. Lau. Distraction-aware shadow detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), June 2019.
  • [Zhong et al.(2013)Zhong, Liu, Ren, Zhang, and Ren] Sheng-hua Zhong, Yan Liu, Feifei Ren, Jinghuan Zhang, and Tongwei Ren. Video saliency detection via dynamic consistent spatio-temporal attention modelling. Proceedings of the AAAI Conference on Artificial Intelligence, 27(1), Jun. 2013.
  • [Zhu et al.(2018)Zhu, Deng, Hu, Fu, Xu, Qin, and Heng] Lei Zhu, Zijun Deng, Xiaowei Hu, Chi-Wing Fu, Xuemiao Xu, Jing Qin, and Pheng-Ann Heng. Bidirectional feature pyramid network with recurrent attention residual modules for shadow detection. In Proceedings of the European Conference on Computer Vision (ECCV), September 2018.
  • [Zhu et al.(2017)Zhu, Xiong, Dai, Yuan, and Wei] Xizhou Zhu, Yuwen Xiong, Jifeng Dai, Lu Yuan, and Yichen Wei. Deep feature flow for video recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), July 2017.