Meta-Learning and Self-Supervised Pretraining for Real World Image Translation

12/22/2021
by Ileana Rugina, et al.

Recent advances in deep learning, enabled in particular by hardware improvements and big data, have produced impressive results across a wide range of computational problems such as computer vision, natural language processing, and reinforcement learning. Many of these improvements, however, are confined to problems with large-scale curated datasets that require substantial human labor to assemble. Moreover, these models tend to generalize poorly under slight distributional shifts and in low-data regimes. In recent years, emerging fields such as meta-learning and self-supervised learning have been closing the gap between proof-of-concept results and real-world applications of machine learning by extending deep learning to the semi-supervised and few-shot domains. We follow this line of work and exploit the spatio-temporal structure of a recently introduced image-to-image translation problem in order to: i) formulate a novel multi-task few-shot image generation benchmark and ii) explore data augmentations in contrastive pre-training for image translation downstream tasks. We present several baselines for the few-shot problem and discuss trade-offs between different approaches. Our code is available at https://github.com/irugina/meta-image-translation.
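As a concrete illustration of the contrastive pre-training step mentioned in the abstract, the sketch below shows a minimal SimCLR-style setup: two stochastic augmentations of the same image form a positive pair, and an NT-Xent loss pulls paired embeddings together while pushing apart all other pairs in the batch. The augmentation choices, image size, temperature, and function names are illustrative assumptions for exposition, not the paper's actual configuration.

```python
# Hypothetical sketch of contrastive pre-training with image augmentations.
# All hyperparameters and names here are illustrative, not the paper's setup.
import torch
import torch.nn.functional as F
from torchvision import transforms

# Two stochastic views of the same image form a positive pair.
contrastive_augment = transforms.Compose([
    transforms.RandomResizedCrop(128, scale=(0.5, 1.0)),
    transforms.RandomHorizontalFlip(),
    transforms.ColorJitter(0.4, 0.4, 0.4, 0.1),
    transforms.RandomGrayscale(p=0.2),
    transforms.ToTensor(),
])

def nt_xent_loss(z1, z2, temperature=0.5):
    """NT-Xent loss over paired embeddings z1, z2 of shape (batch, dim)."""
    batch = z1.shape[0]
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)   # (2B, d), unit norm
    sim = z @ z.t() / temperature                         # cosine similarities
    # Mask self-similarity so it never counts as a positive or a negative.
    mask = torch.eye(2 * batch, dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(mask, float("-inf"))
    # For index i, the positive is its augmented counterpart at i +/- batch.
    targets = torch.cat([torch.arange(batch, 2 * batch),
                         torch.arange(0, batch)]).to(z.device)
    return F.cross_entropy(sim, targets)
```

In use, each training image would be passed through `contrastive_augment` twice, both views encoded by the same backbone, and the resulting embedding batches fed to `nt_xent_loss`; the pretrained encoder would then be fine-tuned on the downstream image translation tasks.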


