Pretrained Encoders are All You Need

06/09/2021
by Mina Khan et al.

Data efficiency and generalization are key challenges in deep learning and deep reinforcement learning, as many models are trained on large-scale, domain-specific, and expensive-to-label datasets. Self-supervised models trained on large-scale uncurated datasets have shown successful transfer to diverse settings. We investigate using pretrained image representations and spatio-temporal attention for state representation learning in Atari. We also explore fine-tuning pretrained representations with self-supervised techniques, i.e., contrastive predictive coding, spatio-temporal contrastive learning, and augmentations. Our results show that pretrained representations are on par with state-of-the-art self-supervised methods trained on domain-specific data. Pretrained representations thus yield data- and compute-efficient state representations. Code: https://github.com/PAL-ML/PEARL_v1

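As a rough illustration of the core idea (a frozen, pretrained image encoder producing state representations from Atari-style frames), the sketch below uses a torchvision ResNet-50 with ImageNet preprocessing. This is a minimal, hedged example and not the authors' PEARL code: the choice of encoder, the preprocessing, and the frame shapes are illustrative assumptions, and the paper's spatio-temporal attention and contrastive fine-tuning are not shown. Assumes PyTorch with torchvision >= 0.13.

    # Minimal sketch: frozen pretrained encoder -> fixed-size state embeddings.
    # Not the paper's implementation; encoder and preprocessing are assumptions.
    import torch
    import torch.nn as nn
    from torchvision import models, transforms

    # Load a pretrained backbone and drop its classification head,
    # keeping the globally pooled feature extractor.
    backbone = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
    encoder = nn.Sequential(*list(backbone.children())[:-1])
    encoder.eval()  # keep the pretrained weights frozen

    preprocess = transforms.Compose([
        transforms.ToPILImage(),
        transforms.Resize((224, 224)),
        transforms.ToTensor(),
        transforms.Normalize(mean=[0.485, 0.456, 0.406],
                             std=[0.229, 0.224, 0.225]),
    ])

    def encode_frames(frames):
        """Map a list of HxWx3 uint8 frames to fixed-size state embeddings."""
        batch = torch.stack([preprocess(f) for f in frames])
        with torch.no_grad():
            feats = encoder(batch)             # (N, 2048, 1, 1)
        return feats.flatten(start_dim=1)      # (N, 2048) state representations

    # Usage: embed dummy frames standing in for Atari observations.
    dummy_frames = [torch.randint(0, 256, (210, 160, 3), dtype=torch.uint8).numpy()
                    for _ in range(4)]
    states = encode_frames(dummy_frames)
    print(states.shape)  # torch.Size([4, 2048])

In this setup the encoder is never updated; downstream components (e.g., a policy or an attention module) would consume the fixed embeddings, which is what makes the representation data- and compute-efficient to obtain.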
