Adversarial Mixup Resynthesizers

03/07/2019
by Christopher Beckham et al.

In this paper, we explore new approaches to combining the information encoded within the learned representations of autoencoders. We study models that combine the attributes of multiple inputs, such that the resynthesised output is trained to fool an adversarial discriminator for real versus synthesised data. Furthermore, we explore the use of such an architecture in the context of semi-supervised learning, where we learn a mixing function whose objective is to produce interpolations of hidden states, or masked combinations of latent representations, that are consistent with a conditioned class label. We present quantitative and qualitative evidence that this formulation is a promising avenue of research.
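To make the mixing idea concrete, here is a minimal PyTorch sketch of adversarial mixup resynthesis as the abstract describes it. The toy MLP modules, the 28x28 input size, the U(0,1) interpolation weight, and the Bernoulli(0.5) mask are illustrative assumptions, not the paper's exact architecture: two inputs are encoded, their latent codes are mixed, and the decoded mixture is pushed to fool a real-versus-synthesised discriminator.

```python
# Minimal sketch of adversarial mixup resynthesis (assumed toy architecture).
import torch
import torch.nn as nn
import torch.nn.functional as F

latent_dim = 64

encoder = nn.Sequential(nn.Flatten(), nn.Linear(784, 256), nn.ReLU(),
                        nn.Linear(256, latent_dim))
decoder = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(),
                        nn.Linear(256, 784), nn.Sigmoid())
# Discriminator scores images as real vs. resynthesised from mixed codes.
disc = nn.Sequential(nn.Flatten(), nn.Linear(784, 256), nn.LeakyReLU(0.2),
                     nn.Linear(256, 1))

def mix(h1, h2, mode="interp"):
    """Combine two latent codes: convex interpolation or a random binary mask."""
    if mode == "interp":
        alpha = torch.rand(h1.size(0), 1)             # per-example weight in U(0,1)
        return alpha * h1 + (1 - alpha) * h2
    mask = torch.bernoulli(torch.full_like(h1, 0.5))  # per-unit 0/1 mask
    return mask * h1 + (1 - mask) * h2

bce = nn.BCEWithLogitsLoss()
x1, x2 = torch.rand(8, 1, 28, 28), torch.rand(8, 1, 28, 28)  # stand-in batches

h1, h2 = encoder(x1), encoder(x2)
x_mix = decoder(mix(h1, h2)).view(-1, 1, 28, 28)

# Discriminator loss: real images labelled 1, mixed resyntheses labelled 0.
d_loss = bce(disc(x1), torch.ones(8, 1)) + bce(disc(x_mix.detach()), torch.zeros(8, 1))

# Autoencoder loss: reconstruct the inputs and make the mixture fool the discriminator.
recon = decoder(h1).view(-1, 1, 28, 28)
g_loss = F.mse_loss(recon, x1) + bce(disc(x_mix), torch.ones(8, 1))
```

The semi-supervised variant in the abstract would additionally condition the mixing function on a class label; that component is omitted here for brevity.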

Related research:

- Semi-Supervised Learning with Context-Conditional Generative Adversarial Networks (11/19/2016)
- Swapping Semantic Contents for Mixing Images (05/20/2022)
- i-MAE: Are Latent Representations in Masked Autoencoders Linearly Separable? (10/20/2022)
- An Adversarial Regularisation for Semi-Supervised Training of Structured Output Neural Networks (02/08/2017)
- Semi-Supervised Disentanglement of Class-Related and Class-Independent Factors in VAE (02/01/2021)
- Semi-supervised Learning via Conditional Rotation Angle Estimation (01/09/2020)
