Truncate-Split-Contrast: A Framework for Learning from Mislabeled Videos

12/27/2022
by Zixiao Wang, et al.

Learning with noisy labels (LNL) is a classic problem that has been extensively studied for image tasks, but far less so for video. A straightforward migration from images to videos that ignores the properties of video, such as computational cost and redundant information, is not a sound choice. In this paper, we propose two new strategies for video analysis with noisy labels: 1) a lightweight channel selection method, dubbed Channel Truncation, for feature-based label noise detection, which selects the most discriminative channels to split clean and noisy instances in each category; and 2) a novel contrastive strategy, dubbed Noise Contrastive Learning, which constructs the relationship between clean and noisy instances to regularize model training. Experiments on three well-known benchmark datasets for video classification show that our proposed truNcatE-split-contrAsT (NEAT) significantly outperforms existing baselines. While reducing the feature dimension to 10% of the original, our method achieves a noise detection F1-score above 0.4 and a classification accuracy improvement of over 5% on the Mini-Kinetics dataset under severe noise (symmetric 80%). Thanks to Noise Contrastive Learning, the average classification accuracy improvement on Mini-Kinetics and Sth-Sth-V1 exceeds 1.6%.
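To make the two strategies concrete, below is a minimal sketch of a channel-truncation-style noise detector. The abstract only says that the most discriminative channels are selected to split clean from noisy instances per category; the specific choices here (ranking channels by class-mean activation magnitude, keeping 10% of them, and splitting with a two-component Gaussian mixture over cosine distances to the class prototype) are illustrative assumptions, not the paper's actual procedure.

```python
# Hypothetical sketch of channel truncation + clean/noisy splitting.
# Assumptions (not from the paper): channels ranked by |class-mean activation|,
# split done by a 2-component GMM over cosine distance to the class prototype.
import numpy as np
from sklearn.mixture import GaussianMixture

def truncate_and_split(features, labels, num_classes, keep_ratio=0.1):
    """features: (N, C) video features; labels: (N,) noisy labels.
    Returns a boolean mask marking instances estimated to be clean."""
    n, c = features.shape
    k = max(1, int(c * keep_ratio))              # e.g. keep 10% of channels
    clean_mask = np.zeros(n, dtype=bool)
    for y in range(num_classes):
        idx = np.where(labels == y)[0]
        if len(idx) < 2:                         # too few samples to split
            clean_mask[idx] = True
            continue
        class_feats = features[idx]
        proto = class_feats.mean(axis=0)
        # "Truncation": keep only the k most discriminative channels,
        # here approximated by class-mean activation magnitude.
        top = np.argsort(-np.abs(proto))[:k]
        f, p = class_feats[:, top], proto[top]
        # Cosine distance of each instance to the truncated prototype.
        d = 1 - (f @ p) / (np.linalg.norm(f, axis=1) * np.linalg.norm(p) + 1e-8)
        # "Split": low-distance GMM component is treated as clean.
        gmm = GaussianMixture(n_components=2, random_state=0).fit(d[:, None])
        clean_comp = np.argmin(gmm.means_.ravel())
        clean_mask[idx] = gmm.predict(d[:, None]) == clean_comp
    return clean_mask
```

And a sketch of how the detected split could feed a noise-contrastive regularizer. Again, the abstract only states that the relationship between clean and noisy instances regularizes training; the InfoNCE-style form below, with clean instances attracted to their class prototype and detected-noisy instances repelled from it, is one plausible instantiation.

```python
# Hypothetical PyTorch sketch of a noise-contrastive regularizer.
# Assumption (not from the paper): clean instances are positives against
# their class prototype; detected-noisy instances serve as negatives.
import torch
import torch.nn.functional as F

def noise_contrastive_loss(emb, labels, clean_mask, temperature=0.1):
    """emb: (N, D) embeddings; labels: (N,) noisy labels;
    clean_mask: (N,) bool mask from the detection step."""
    z = F.normalize(emb, dim=1)
    losses = []
    for y in labels.unique():
        cls = labels == y
        clean = cls & clean_mask
        noisy = cls & ~clean_mask
        if clean.sum() < 2:
            continue
        proto = F.normalize(z[clean].mean(dim=0), dim=0)
        pos = z[clean] @ proto / temperature     # clean-to-prototype similarity
        if noisy.any():
            neg = z[noisy] @ proto / temperature # noisy-to-prototype similarity
            logits = torch.cat([pos, neg])
        else:
            logits = pos
        # InfoNCE-style: clean similarities in the numerator, all in the denominator.
        log_prob = pos - torch.logsumexp(logits, dim=0)
        losses.append(-log_prob.mean())
    return torch.stack(losses).mean() if losses else emb.sum() * 0.0
```

In this sketch the loss would be added to the usual classification objective on the estimated-clean subset; the relative weight of the two terms, like every name and threshold above, is an assumption for illustration.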


