Supervised Contrastive Prototype Learning: Augmentation Free Robust Neural Network

11/26/2022
by   Iordanis Fostiropoulos, et al.

Transformations in the input space of Deep Neural Networks (DNNs) lead to unintended changes in the feature space. Nearly perceptually identical inputs, such as adversarial examples, can have significantly distant feature representations. Conversely, Out-of-Distribution (OOD) samples can have feature representations highly similar to those of training-set samples. Our theoretical analysis of DNNs trained with a categorical classification head suggests that the inflexible logit space, restricted by the size of the classification problem, is one of the root causes of the lack of robustness. Our second observation is that DNNs overfit to the training augmentation technique and do not learn nuisance-invariant representations. Inspired by the recent success of prototypical and contrastive learning frameworks for both improving robustness and learning nuisance-invariant representations, we propose a training framework, Supervised Contrastive Prototype Learning (SCPL). We use an N-pair contrastive loss with prototypes of the same and opposite classes, and replace the categorical classification head with a Prototype Classification Head (PCH). Our approach is sample efficient, does not require sample mining, can be applied to any existing DNN without modification to its architecture, and can be combined with other training augmentation techniques. We empirically evaluate the robustness of our method on out-of-distribution and adversarial samples. Our framework outperforms state-of-the-art contrastive and prototype learning approaches in robustness.
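Since the abstract describes the method only at a high level, the following minimal PyTorch sketch illustrates the two ingredients it names: a Prototype Classification Head holding one learnable prototype per class, and an N-pair-style contrastive loss whose positive is the same-class prototype and whose negatives are the opposite-class prototypes. The cosine-similarity metric, the temperature value, and the random prototype initialization are illustrative assumptions, not the authors' published implementation.

```python
# Hedged sketch of SCPL-style components; details are assumptions, not the paper's code.
import torch
import torch.nn as nn
import torch.nn.functional as F


class PrototypeClassificationHead(nn.Module):
    """Replaces a categorical (logit) head with one learnable prototype per class;
    samples are scored by similarity to the class prototypes."""

    def __init__(self, embed_dim: int, num_classes: int):
        super().__init__()
        self.prototypes = nn.Parameter(torch.randn(num_classes, embed_dim))

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        # Cosine similarity between embeddings and prototypes (assumed metric).
        z = F.normalize(z, dim=-1)
        p = F.normalize(self.prototypes, dim=-1)
        return z @ p.t()  # (batch, num_classes)


def n_pair_prototype_loss(z: torch.Tensor, labels: torch.Tensor,
                          prototypes: torch.Tensor, temperature: float = 0.1):
    """N-pair-style contrastive loss: the positive for each embedding is its own
    class prototype; the other class prototypes act as negatives.
    The temperature is an assumed hyperparameter."""
    z = F.normalize(z, dim=-1)
    p = F.normalize(prototypes, dim=-1)
    logits = z @ p.t() / temperature        # similarity to all prototypes
    return F.cross_entropy(logits, labels)  # softmax over prototype similarities


if __name__ == "__main__":
    # Toy usage: any backbone producing embeddings can be paired with the head.
    backbone = nn.Sequential(nn.Flatten(), nn.Linear(32 * 32 * 3, 128))
    head = PrototypeClassificationHead(embed_dim=128, num_classes=10)

    x = torch.randn(16, 3, 32, 32)
    y = torch.randint(0, 10, (16,))

    z = backbone(x)
    loss = n_pair_prototype_loss(z, y, head.prototypes)
    loss.backward()
    print(float(loss))
```

Because the positives and negatives are class prototypes rather than mined sample pairs, this formulation needs no sample mining and adds only a `num_classes x embed_dim` parameter matrix on top of an unmodified backbone, which is consistent with the sample-efficiency and architecture-agnostic claims in the abstract.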
