Regularizing Neural Network Training via Identity-wise Discriminative Feature Suppression

09/29/2022
by Avraham Chapman, et al.

It is well known that deep neural networks have a strong fitting capability and can easily reach a low training error even with randomly assigned class labels. When the number of training samples is small, or the class labels are noisy, networks tend to memorise patterns specific to individual instances in order to minimise the training error. This leads to overfitting and poor generalisation performance. This paper explores a remedy that suppresses the network's tendency to rely on instance-specific patterns for empirical error minimisation. The proposed method is based on an adversarial training framework: it suppresses features that can be used to identify individual instances among the samples within each class, so that the classifier relies only on features that are both discriminative across classes and common within each class. We call our method Adversarial Suppression of Identity Features (ASIF), and demonstrate its usefulness in boosting generalisation accuracy when faced with small datasets or noisy labels. Our source code is available.
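To make the adversarial-suppression idea concrete, below is a minimal PyTorch-style sketch: a shared backbone feeds both a class head and an "identity" head that tries to distinguish individual training samples, while a gradient-reversal layer pushes the backbone to suppress whatever features the identity head can exploit. The architecture, input size, loss weighting (`lambd`), and the use of gradient reversal are illustrative assumptions, not details taken from the ASIF paper itself.

```python
# Illustrative sketch only; names and architecture are assumptions, not the ASIF implementation.
import torch
import torch.nn as nn


class GradReverse(torch.autograd.Function):
    """Identity on the forward pass; reverses and scales gradients on the backward pass."""

    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None


class IdentitySuppressionModel(nn.Module):
    def __init__(self, feat_dim, num_classes, num_instances, lambd=1.0):
        super().__init__()
        # Toy backbone; assumes flattened 28x28 grayscale inputs for illustration.
        self.backbone = nn.Sequential(nn.Flatten(), nn.Linear(784, feat_dim), nn.ReLU())
        self.class_head = nn.Linear(feat_dim, num_classes)        # discriminative across classes
        self.identity_head = nn.Linear(feat_dim, num_instances)   # tries to identify individual samples
        self.lambd = lambd

    def forward(self, x):
        feats = self.backbone(x)
        class_logits = self.class_head(feats)
        # Gradient reversal: the identity head learns to recognise instances,
        # while the backbone is pushed to suppress instance-identifying features.
        identity_logits = self.identity_head(GradReverse.apply(feats, self.lambd))
        return class_logits, identity_logits


def training_step(model, x, class_labels, instance_ids, optimizer):
    """Hypothetical training step: minimise the class loss while adversarially
    suppressing features that reveal which individual sample produced them."""
    ce = nn.CrossEntropyLoss()
    class_logits, identity_logits = model(x)
    loss = ce(class_logits, class_labels) + ce(identity_logits, instance_ids)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

In this sketch the identity head plays the role of the adversary: because its gradients are reversed before reaching the backbone, any feature it can use to tell samples apart is driven out of the shared representation, leaving only class-level, instance-agnostic features for the classifier.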

