miCSE: Mutual Information Contrastive Learning for Low-shot Sentence Embeddings

11/09/2022
by Tassilo Klein, et al.

This paper presents miCSE, a mutual-information-based contrastive learning framework that significantly advances the state of the art in few-shot sentence embedding. The proposed approach imposes alignment between the attention patterns of different views during contrastive learning. Learning sentence embeddings with miCSE entails enforcing syntactic consistency across augmented views of every single sentence, making contrastive self-supervised learning more sample-efficient. As a result, the proposed approach shows strong performance in the few-shot learning domain: it achieves superior results compared to state-of-the-art methods on multiple few-shot benchmarks, while remaining comparable in the full-shot scenario. The approach is conceptually simple, easy to implement and optimize, yet empirically powerful. This study opens up avenues for efficient self-supervised learning methods that are more robust than current contrastive methods for sentence embedding.
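The abstract describes a contrastive objective augmented with an alignment term between the attention patterns of two views of the same sentence. The sketch below illustrates that general shape with a standard InfoNCE loss plus a simple mean-squared attention-consistency regularizer; the function names (`info_nce`, `attention_alignment`, `micse_style_loss`), the regularizer form, and the weight `lam` are illustrative assumptions, not the paper's actual formulation.

```python
import numpy as np

def info_nce(z1, z2, temperature=0.05):
    """InfoNCE contrastive loss over two batches of sentence embeddings.

    Row i of z1 and row i of z2 are two augmented views of the same
    sentence (the positive pair); all other rows serve as negatives.
    """
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)  # L2-normalize
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    sim = z1 @ z2.T / temperature                        # cosine similarities
    logits = sim - sim.max(axis=1, keepdims=True)        # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))                   # -log p(positive)

def attention_alignment(a1, a2):
    """Mean squared distance between the attention maps of the two views
    (a hypothetical stand-in for the paper's attention-consistency term)."""
    return np.mean((a1 - a2) ** 2)

def micse_style_loss(z1, z2, a1, a2, lam=0.1):
    """Contrastive loss plus an attention-alignment regularizer weighted by lam."""
    return info_nce(z1, z2) + lam * attention_alignment(a1, a2)
```

Under this reading, the alignment term rewards the encoder for attending to the same token structure in both augmented views, which is one way the sample efficiency claimed in the few-shot setting could arise.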


Related research

03/15/2022 · SCD: Self-Contrastive Decorrelation for Sentence Embeddings
In this paper, we propose Self-Contrastive Decorrelation (SCD), a self-s...

04/26/2021 · Mutual Contrastive Learning for Visual Representation Learning
We present a collaborative learning method called Mutual Contrastive Lea...

10/21/2021 · CLOOB: Modern Hopfield Networks with InfoLOOB Outperform CLIP
Contrastive learning with the InfoNCE objective is exceptionally success...

03/02/2022 · Integrating Contrastive Learning with Dynamic Models for Reinforcement Learning from Images
Recent methods for reinforcement learning from images use auxiliary task...

01/30/2022 · Contrastive Learning from Demonstrations
This paper presents a framework for learning visual representations from...

09/30/2022 · Contrastive Graph Few-Shot Learning
Prevailing deep graph learning models often suffer from label sparsity i...

08/07/2023 · Feature-Suppressed Contrast for Self-Supervised Food Pre-training
Most previous approaches for analyzing food images have relied on extens...
