SILT: Shadow-aware Iterative Label Tuning for Learning to Detect Shadows from Noisy Labels

08/23/2023
by Han Yang, et al.

Existing shadow detection datasets often contain missing or mislabeled shadows, which can hinder the performance of deep learning models trained directly on such data. To address this issue, we propose SILT, the Shadow-aware Iterative Label Tuning framework, which explicitly models noise in shadow labels and trains the deep model in a self-training manner. Specifically, we incorporate strong data augmentations with shadow counterfeiting to help the network better recognize non-shadow regions and to alleviate overfitting. We also devise a simple yet effective label-tuning strategy with global-local fusion and shadow-aware filtering to encourage the network to make significant refinements to the noisy labels. We evaluate the performance of SILT by relabeling the test set of the SBU dataset and conducting various experiments. Our results show that even a simple U-Net trained with SILT can outperform all state-of-the-art methods by a large margin. When trained on SBU / UCF / ISTD, our network can successfully reduce the Balanced Error Rate by 25.2% / 36.9% / 21.3% over the best state-of-the-art method.
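
For reference, the Balanced Error Rate quoted above averages the error on shadow pixels and on non-shadow pixels, so the far larger non-shadow class cannot mask poor shadow recall. The sketch below shows the conventional BER computation together with a minimal outline of one self-training round in the spirit of SILT; balanced_error_rate follows the standard definition, while silt_round, train_fn, predict_fn, and tune_fn are illustrative names assumed here, not the authors' API.

    import numpy as np

    def balanced_error_rate(pred, gt):
        """Balanced Error Rate (BER) in percent; pred and gt are boolean
        masks in which True marks shadow pixels.
        BER = (1 - 0.5 * (TP / Np + TN / Nn)) * 100, where Np / Nn count
        shadow / non-shadow pixels in the ground truth."""
        tp = np.logical_and(pred, gt).sum()
        tn = np.logical_and(~pred, ~gt).sum()
        n_pos = max(int(gt.sum()), 1)      # shadow pixels (guard: empty mask)
        n_neg = max(int((~gt).sum()), 1)   # non-shadow pixels
        return float((1.0 - 0.5 * (tp / n_pos + tn / n_neg)) * 100.0)

    def silt_round(model, images, labels, train_fn, predict_fn, tune_fn):
        """Hypothetical single iteration of self-training with label tuning:
        train on the current noisy labels, predict on the training images,
        then let a shadow-aware filter decide which label pixels to keep
        or update."""
        model = train_fn(model, images, labels)             # noisy-label training
        preds = [predict_fn(model, img) for img in images]  # e.g. fused global/local predictions
        labels = [tune_fn(p, l) for p, l in zip(preds, labels)]
        return model, labels

Repeating such a round lets confident model predictions gradually replace mislabeled or missing shadow annotations, which is the core idea the abstract describes.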

