Contrast with Reconstruct: Contrastive 3D Representation Learning Guided by Generative Pretraining

02/05/2023
by   Zekun Qi, et al.

Mainstream 3D representation learning approaches are built upon contrastive or generative modeling pretext tasks, where great improvements in performance on various downstream tasks have been achieved. However, by investigating the methods of these two paradigms, we find that (i) contrastive models are data-hungry and suffer from a representation over-fitting issue; (ii) generative models face a data-filling issue and show inferior data scaling capacity compared to contrastive models. This motivates us to learn 3D representations by sharing the merits of both paradigms, which is non-trivial due to the pattern difference between them. In this paper, we propose Contrast with Reconstruct (ReCon), which unifies these two paradigms. ReCon is trained to learn from both generative modeling teachers and cross-modal contrastive teachers through ensemble distillation, where the generative student guides the contrastive student. An encoder-decoder style ReCon-block is proposed that transfers knowledge through cross attention with stop-gradient, which avoids pretraining over-fitting and pattern difference issues. ReCon achieves a new state-of-the-art in 3D representation learning, e.g., 91.26% accuracy on ScanObjectNN. Code is available at https://github.com/qizekun/ReCon.
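To make the cross-attention-with-stop-gradient idea concrete, here is a minimal PyTorch sketch of a ReCon-style block. All names, dimensions, and the number of global queries are illustrative assumptions, not the authors' exact implementation (which is in the linked repo): a generative student encodes point tokens for reconstruction, while contrastive-side queries read those features through cross attention on detached keys and values, so contrastive gradients never flow back into the generative encoder.

```python
# Minimal sketch of a ReCon-style block (illustrative, not the official code).
import torch
import torch.nn as nn

class ReConBlockSketch(nn.Module):
    """Encoder-decoder style block: a generative (masked-modeling) student
    encodes local point tokens; a contrastive student reads them through
    cross attention with stop-gradient, so contrastive losses cannot
    back-propagate into the generative encoder."""

    def __init__(self, dim: int = 384, num_heads: int = 6, num_queries: int = 2):
        super().__init__()
        # Generative student: a standard transformer encoder layer over local tokens.
        self.encoder = nn.TransformerEncoderLayer(
            d_model=dim, nhead=num_heads, batch_first=True
        )
        # Contrastive student: learnable global queries (e.g., one per
        # cross-modal teacher) plus a cross-attention layer.
        self.queries = nn.Parameter(torch.zeros(1, num_queries, dim))
        self.cross_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, tokens: torch.Tensor):
        # tokens: (B, N, dim) embedded point patches (possibly masked).
        local = self.encoder(tokens)          # fed to the reconstruction head/loss
        kv = local.detach()                   # stop-gradient: contrastive side only reads
        q = self.queries.expand(tokens.size(0), -1, -1)
        global_feats, _ = self.cross_attn(q, kv, kv)  # fed to contrastive distillation losses
        return local, global_feats
```

The `detach()` call is the one-way valve the abstract describes: knowledge flows from the generative student to the contrastive student, but the contrastive objective cannot disturb the generative pretext task, which is how the pattern-difference and over-fitting issues are avoided in this reading.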


Related research

- 08/08/2023 · Prompted Contrast with Masked Motion Modeling: Towards Versatile 3D Action Representation Learning. Self-supervised learning has proved effective for skeleton-based human a...
- 07/26/2020 · Contrastive Visual-Linguistic Pretraining. Several multi-modality representation learning approaches such as LXMERT...
- 05/28/2022 · CyCLIP: Cyclic Contrastive Language-Image Pretraining. Recent advances in contrastive representation learning over paired image...
- 11/23/2022 · How do Cross-View and Cross-Modal Alignment Affect Representations in Contrastive Learning? Various state-of-the-art self-supervised visual representation learning ...
- 08/25/2022 · MaskCLIP: Masked Self-Distillation Advances Contrastive Language-Image Pretraining. This paper presents a simple yet effective framework MaskCLIP, which inc...
- 10/05/2020 · A Simple Framework for Uncertainty in Contrastive Learning. Contrastive approaches to representation learning have recently shown gr...
- 06/22/2022 · Prototypical Contrastive Language Image Pretraining. Contrastive Language Image Pretraining (CLIP) received widespread attent...
