Class-Level Logit Perturbation

09/13/2022
by Mengyang Li, et al.

Features, logits, and labels are the three primary kinds of data produced as a sample passes through a deep neural network. Feature perturbation and label perturbation have received increasing attention in recent years and have proven useful in various deep learning approaches; for example, (adversarial) feature perturbation can improve the robustness and even the generalization capability of learned models. However, few studies have explicitly explored the perturbation of logit vectors. This work discusses several existing methods related to class-level logit perturbation. A unified viewpoint is established that connects positive/negative data augmentation with the loss variations incurred by logit perturbation. A theoretical analysis is provided to illuminate why class-level logit perturbation is useful. Accordingly, new methodologies are proposed to explicitly learn to perturb logits for both single-label and multi-label classification tasks. Extensive experiments on benchmark image classification data sets and their long-tail versions demonstrate the competitive performance of the proposed learning method. As it only perturbs the logits, it can be used as a plug-in and fused with any existing classification algorithm. All the code is available at https://github.com/limengyang1992/lpl.
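The following is a minimal sketch of the general idea, assuming a PyTorch-style setup; the module name ClassLevelLogitPerturbation, its learnable per-class offset delta, and the bound epsilon are illustrative assumptions, not the authors' implementation (see the linked repository for that). It only shows how a class-level perturbation can be added to the logits of any backbone before the loss, which is what makes the approach usable as a plug-in.

# Illustrative sketch only; not the paper's algorithm.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ClassLevelLogitPerturbation(nn.Module):
    """Adds a learnable per-class offset to logits (illustrative assumption)."""
    def __init__(self, num_classes: int, epsilon: float = 1.0):
        super().__init__()
        # One perturbation value per class, shared across all samples.
        self.delta = nn.Parameter(torch.zeros(num_classes))
        self.epsilon = epsilon  # assumed bound to keep the sketch well behaved

    def forward(self, logits: torch.Tensor) -> torch.Tensor:
        # Broadcast the per-class offset over the batch dimension.
        return logits + self.delta.clamp(-self.epsilon, self.epsilon)

# Usage: plug between any classifier's logits and its loss.
backbone = nn.Linear(512, 10)            # stand-in for a feature extractor + head
perturb = ClassLevelLogitPerturbation(num_classes=10)
features = torch.randn(8, 512)
labels = torch.randint(0, 10, (8,))
loss = F.cross_entropy(perturb(backbone(features)), labels)
loss.backward()

In the paper the perturbation is learned per class; the clamp above is only an assumption used to keep the sketch self-contained.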


Related research

10/03/2022  MultiGuard: Provably Robust Multi-label Classification against Adversarial Examples
Multi-label classification, which predicts a set of labels for an input,...

01/30/2023  Adversarial Style Augmentation for Domain Generalization
It is well-known that the performance of well-trained deep neural networ...

09/03/2022  Label Structure Preserving Contrastive Embedding for Multi-Label Learning with Missing Labels
Contrastive learning (CL) has shown impressive advances in image represe...

03/07/2023  GaussianMLR: Learning Implicit Class Significance via Calibrated Multi-Label Ranking
Existing multi-label frameworks only exploit the information deduced fro...

11/27/2017  Butterfly Effect: Bidirectional Control of Classification Performance by Small Additive Perturbation
This paper proposes a new algorithm for controlling classification resul...

07/16/2020  Learning perturbation sets for robust machine learning
Although much progress has been made towards robust deep learning, a sig...

06/19/2018  Maximally Invariant Data Perturbation as Explanation
While several feature scoring methods are proposed to explain the output...
