Contrastive Learning from Demonstrations

01/30/2022
by André Correia, et al.

This paper presents a framework for learning visual representations from unlabeled video demonstrations captured from multiple viewpoints. We show that these representations are applicable for imitating several robotic tasks, including pick and place. We optimize a recently proposed self-supervised learning algorithm by applying contrastive learning to enhance task-relevant information while suppressing irrelevant information in the feature embeddings. We validate the proposed method on the publicly available Multi-View Pouring dataset and a custom Pick and Place dataset, and compare it with the TCN triplet baseline. We evaluate the learned representations using three metrics: viewpoint alignment, stage classification, and reinforcement learning. In all cases the results improve over state-of-the-art approaches, with the added benefit of requiring fewer training iterations.
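The abstract describes replacing a triplet objective with a contrastive one over multi-view embeddings, but does not spell out the loss. As a minimal sketch of the family of objectives involved, the following is an InfoNCE-style contrastive loss in which time-aligned frames from two synchronized viewpoints are positives and all other frames are negatives. The function name, temperature value, and toy data are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def info_nce_loss(view1, view2, temperature=0.1):
    """InfoNCE-style contrastive loss over time-aligned multi-view embeddings.

    view1, view2: (T, D) arrays of frame embeddings from two synchronized
    viewpoints; row t of each array corresponds to the same moment in time.
    Time-aligned pairs across views are positives; all other rows serve as
    negatives.
    """
    # L2-normalize so dot products become cosine similarities.
    v1 = view1 / np.linalg.norm(view1, axis=1, keepdims=True)
    v2 = view2 / np.linalg.norm(view2, axis=1, keepdims=True)
    logits = v1 @ v2.T / temperature              # (T, T) similarity matrix
    # Softmax cross-entropy with the diagonal (aligned frames) as targets.
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))

# Toy check: correctly aligned views should incur a lower loss than
# temporally shuffled ones.
rng = np.random.default_rng(0)
emb = rng.normal(size=(8, 32))
aligned_loss = info_nce_loss(emb, emb)
shuffled_loss = info_nce_loss(emb, emb[::-1])
print(aligned_loss < shuffled_loss)
```

A triplet loss (as in the TCN baseline) compares each anchor against a single positive and a single negative, whereas this formulation contrasts each anchor against every other frame in the batch at once, which is one common motivation for moving from triplet to InfoNCE-style objectives.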


Related research

08/24/2020 · Contrastive learning, multi-view redundancy, and linear models
Self-supervised learning is an empirically successful approach to unsupe...

11/09/2022 · miCSE: Mutual Information Contrastive Learning for Low-shot Sentence Embeddings
This paper presents miCSE, a mutual information-based Contrastive learni...

10/05/2022 · CCC-wav2vec 2.0: Clustering aided Cross Contrastive Self-supervised learning of speech representations
While Self-Supervised Learning has helped reap the benefit of the scale ...

04/27/2021 · Contrastive Spatial Reasoning on Multi-View Line Drawings
Spatial reasoning on multi-view line drawings by state-of-the-art superv...

06/08/2023 · Factorized Contrastive Learning: Going Beyond Multi-view Redundancy
In a wide range of multimodal tasks, contrastive learning has become a p...

03/17/2023 · On the Effects of Self-supervision and Contrastive Alignment in Deep Multi-view Clustering
Self-supervised learning is a central component in recent approaches to ...

08/02/2018 · Learning Actionable Representations from Visual Observations
In this work we explore a new approach for robots to teach themselves ab...
