i-Mix: A Strategy for Regularizing Contrastive Representation Learning
Contrastive representation learning has been shown to be an effective way of learning representations from unlabeled data. However, much of this progress has been made in vision domains, relying on data augmentations carefully designed using domain knowledge. In this work, we propose i-Mix, a simple yet effective regularization strategy for improving contrastive representation learning in both vision and non-vision domains. We cast contrastive learning as training a non-parametric classifier by assigning a unique virtual class to each data instance in a batch. Data instances are then mixed in both the input and virtual label spaces, providing more augmented data during training. In experiments, we demonstrate that i-Mix consistently improves the quality of self-supervised representations across domains, resulting in significant performance gains on downstream tasks. Furthermore, we confirm its regularization effect via extensive ablation studies across model and dataset sizes.
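To make the idea concrete, here is a minimal sketch of the mixing step described in the abstract: each instance in a batch gets its own one-hot virtual label, anchors are mixed in input space, and the virtual labels are mixed with the same coefficient before a contrastive cross-entropy loss. Names such as `encoder`, `alpha`, and `temperature` are illustrative assumptions, not the authors' reference implementation.

```python
# Sketch of i-Mix style regularization on a pairwise contrastive objective.
import torch
import torch.nn.functional as F


def i_mix_loss(encoder, x_anchor, x_positive, alpha=1.0, temperature=0.1):
    """Contrastive loss with input/virtual-label mixup (i-Mix sketch).

    Each instance in the batch is treated as its own virtual class; anchors
    are mixed in input space, and their one-hot virtual labels are mixed
    with the same coefficient.
    """
    n = x_anchor.size(0)
    virtual_labels = torch.eye(n, device=x_anchor.device)  # one virtual class per instance

    # Sample a mixing coefficient and a random permutation of the batch.
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    perm = torch.randperm(n, device=x_anchor.device)

    # Mix anchors in input space and their virtual labels in label space.
    x_mixed = lam * x_anchor + (1.0 - lam) * x_anchor[perm]
    y_mixed = lam * virtual_labels + (1.0 - lam) * virtual_labels[perm]

    # Embed mixed anchors and (unmixed) positives.
    z_mixed = F.normalize(encoder(x_mixed), dim=1)
    z_pos = F.normalize(encoder(x_positive), dim=1)

    # Similarities to the positives act as logits of the non-parametric classifier.
    logits = z_mixed @ z_pos.t() / temperature

    # Cross-entropy against the mixed (soft) virtual labels.
    loss = -(y_mixed * F.log_softmax(logits, dim=1)).sum(dim=1).mean()
    return loss
```

In this reading, mixing soft virtual labels plays the same role as label mixup in supervised learning, so the regularization does not depend on domain-specific augmentations.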