Infinite Class Mixup

05/17/2023
by Thomas Mensink, et al.

Mixup is a widely adopted strategy for training deep networks, in which additional samples are created by interpolating the inputs and labels of training pairs. Mixup has been shown to improve classification performance, network calibration, and out-of-distribution generalisation. While effective, a cornerstone of Mixup, namely that networks learn linear behaviour between classes, is only enforced indirectly, since the output interpolation is performed at the probability level. This paper addresses this limitation by mixing the classifiers directly instead of mixing the labels of each mixed pair. We propose to define the target of each augmented sample as a unique new classifier, whose parameters are a linear interpolation of the classifier vectors of the input pair. The space of all possible classifiers is continuous and spans all interpolations between classifier pairs. To make optimisation tractable, we propose a dual-contrastive Infinite Class Mixup loss, where we contrast the classifier of a mixed pair to both the classifiers and the predicted outputs of other mixed pairs in a batch. Infinite Class Mixup is generic in nature and applies to many variants of Mixup. Empirically, we show that it outperforms standard Mixup and variants such as RegMixup and Remix on balanced, long-tailed, and data-constrained benchmarks, highlighting its broad applicability.
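To make the mechanics concrete, below is a minimal PyTorch-style sketch of the idea as described in the abstract, not the paper's reference implementation. It assumes a single Beta-sampled interpolation coefficient per batch, a backbone `model` that returns embeddings, and `classifier_weights` standing in for the final linear layer's per-class weight vectors; all names and the exact weighting of the two contrastive terms are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def infinite_class_mixup_loss(model, x, y, classifier_weights, alpha=0.2):
    """Sketch of a dual-contrastive Infinite Class Mixup loss (assumed form).

    x: (B, ...) batch of inputs; y: (B,) integer labels.
    classifier_weights: (C, D) class vectors of the final linear layer.
    """
    # One interpolation coefficient per batch (assumption; per-sample
    # coefficients would be an equally plausible choice).
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    perm = torch.randperm(x.size(0), device=x.device)

    # Mix the inputs exactly as in standard Mixup.
    x_mix = lam * x + (1 - lam) * x[perm]

    # Instead of mixing labels, build one new classifier per mixed pair by
    # linearly interpolating the two class vectors: the "infinite class".
    w_mix = lam * classifier_weights[y] + (1 - lam) * classifier_weights[y[perm]]

    # Embeddings of the mixed inputs.
    z = model(x_mix)                      # (B, D)

    # Pairwise logits between every mixed feature and every mixed classifier.
    logits = z @ w_mix.t()                # (B, B)
    targets = torch.arange(x.size(0), device=x.device)

    # Dual-contrastive objective: each mixed feature must select its own
    # mixed classifier (rows), and each mixed classifier must select its own
    # mixed feature (columns).
    loss_f2c = F.cross_entropy(logits, targets)
    loss_c2f = F.cross_entropy(logits.t(), targets)
    return 0.5 * (loss_f2c + loss_c2f)
```

In this reading, the row-wise cross-entropy contrasts each mixed sample against all mixed classifiers in the batch, and the column-wise term contrasts each mixed classifier against all mixed samples, mirroring the dual-contrastive formulation sketched in the abstract.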


