
PADDLES: Phase-Amplitude Spectrum Disentangled Early Stopping for Learning with Noisy Labels

12/07/2022
by Huaxi Huang, et al.

Convolutional Neural Networks (CNNs) excel at learning patterns, but they are sensitive to label noise and may overfit noisy labels during training. The early stopping strategy, which halts training before the network begins to memorize noisy labels, is widely employed in this setting. Motivated by biological findings that the amplitude spectrum (AS) and phase spectrum (PS) in the frequency domain play different roles in the animal visual system, we observe that PS, which captures more semantic information, makes CNNs more robust to label noise than AS does. We therefore propose stopping at different times for AS and PS, disentangling the features of selected layer(s) into AS and PS with the Discrete Fourier Transform (DFT) during training. Our Phase-AmplituDe DisentangLed Early Stopping (PADDLES) method proves effective on both synthetic and real-world label-noise datasets, outperforming other early stopping methods and achieving state-of-the-art performance.
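The abstract does not include the authors' implementation, so the sketch below is only a rough illustration of the core idea in PyTorch: split a feature map into amplitude and phase spectra via the 2-D DFT, then "stop" each spectrum at its own epoch. The function names (disentangle_as_ps, recombine, paddles_style_forward) and the thresholds stop_as/stop_ps are hypothetical, and detaching gradients is just one plausible way to freeze a single spectrum.

```python
import torch

def disentangle_as_ps(features: torch.Tensor):
    # 2-D DFT over the spatial dimensions of a feature map,
    # then split the complex spectrum into amplitude and phase.
    spectrum = torch.fft.fft2(features)
    amplitude = torch.abs(spectrum)    # amplitude spectrum (AS)
    phase = torch.angle(spectrum)      # phase spectrum (PS)
    return amplitude, phase

def recombine(amplitude: torch.Tensor, phase: torch.Tensor) -> torch.Tensor:
    # Reassemble spatial features as amplitude * exp(i * phase).
    spectrum = torch.polar(amplitude, phase)
    return torch.fft.ifft2(spectrum).real

def paddles_style_forward(features: torch.Tensor, epoch: int,
                          stop_as: int = 10, stop_ps: int = 40) -> torch.Tensor:
    # Hypothetical scheme: after its stopping epoch, a spectrum is detached
    # so gradients no longer flow through it. AS stops earlier than PS,
    # since PS is observed to be the more noise-robust component.
    amplitude, phase = disentangle_as_ps(features)
    if epoch >= stop_as:
        amplitude = amplitude.detach()
    if epoch >= stop_ps:
        phase = phase.detach()
    return recombine(amplitude, phase)
```

In this reading, "early stopping" a spectrum means freezing its contribution to further weight updates; the actual PADDLES procedure and its stopping criteria are described in the full paper.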


Related Research

11/19/2019 | Prestopping: How Does Early Stopping Help Generalization against Label Noise?
Noisy labels are very common in real-world training data, which lead to ...

06/30/2021 | Understanding and Improving Early Stopping for Learning with Noisy Labels
The memorization effect of deep neural network (DNN) plays a pivotal rol...

02/16/2020 | Learning Not to Learn in the Presence of Noisy Labels
Learning in the presence of label noise is a challenging yet important t...

08/19/2021 | Amplitude-Phase Recombination: Rethinking Robustness of Convolutional Neural Networks in Frequency Domain
Recently, the generalization behavior of Convolutional Neural Networks (...

12/02/2022 | PASTA: Proportional Amplitude Spectrum Training Augmentation for Syn-to-Real Domain Generalization
Synthetic data offers the promise of cheap and bountiful training data f...

06/30/2020 | Early-Learning Regularization Prevents Memorization of Noisy Labels
We propose a novel framework to perform classification via deep learning...

03/15/2023 | The Benefits of Mixup for Feature Learning
Mixup, a simple data augmentation method that randomly mixes two data po...