A Perceived Environment Design using a Multi-Modal Variational Autoencoder for learning Active-Sensing

11/01/2019
by Timo Korthals, et al.

This contribution describes the interplay between a multi-modal variational autoencoder and an environment, which together form a perceived environment on which an agent can act. We conclude with a comparison to curiosity-driven learning.
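The abstract only sketches the architecture, so the following minimal Python sketch illustrates one plausible reading: a standard environment is wrapped so that its raw multi-modal observations are first encoded by a (pre-trained) multi-modal VAE, and the agent then acts on the resulting latent code. The class and method names (MultiModalVAE, PerceivedEnvironment, encode, the gym-style reset/step interface) are assumptions made for illustration, not the interface used in the paper.

```python
# Minimal sketch (not the authors' implementation) of a "perceived environment":
# an ordinary environment is wrapped so that its raw multi-modal observations
# are encoded by a multi-modal VAE before the agent sees them.
# All names below are illustrative assumptions, not the paper's API.

import numpy as np


class MultiModalVAE:
    """Stand-in for a trained multi-modal VAE; only the encoder side is sketched."""

    def __init__(self, latent_dim: int):
        self.latent_dim = latent_dim

    def encode(self, modalities: dict) -> np.ndarray:
        # A real multi-modal VAE would fuse whichever modalities are present
        # into a joint latent distribution; here we return a placeholder mean.
        return np.zeros(self.latent_dim, dtype=np.float32)


class PerceivedEnvironment:
    """Gym-style wrapper: the agent observes VAE latents instead of raw sensor data."""

    def __init__(self, env, vae: MultiModalVAE):
        self.env = env
        self.vae = vae

    def reset(self):
        raw_obs = self.env.reset()        # dict of raw modalities, e.g. {"camera": ..., "lidar": ...}
        return self.vae.encode(raw_obs)   # latent perception the agent acts on

    def step(self, action):
        raw_obs, reward, done, info = self.env.step(action)
        return self.vae.encode(raw_obs), reward, done, info
```

The design choice this sketch emphasizes is that the agent never touches raw sensor data: its state space is the VAE's latent space, which is what makes the wrapped system a "perceived" environment.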


Related research

02/07/2022 · Multi-modal data generation with a deep metric variational autoencoder
We present a deep metric variational autoencoder for multi-modal data ge...

03/18/2019 · M^2VAE - Derivation of a Multi-Modal Variational Autoencoder Objective from the Marginal Joint Log-Likelihood
This work gives an in-depth derivation of the trainable evidence lower b...

03/14/2020 · Perception of prosodic variation for speech synthesis using an unsupervised discrete representation of F0
In English, prosody adds a broad range of information to segment sequenc...

06/28/2021 · Dizygotic Conditional Variational AutoEncoder for Multi-Modal and Partial Modality Absent Few-Shot Learning
Data augmentation is a powerful technique for improving the performance ...

03/25/2019 · Learning a Multi-Modal Policy via Imitating Demonstrations with Mixed Behaviors
We propose a novel approach to train a multi-modal policy from mixed dem...

02/08/2021 · DEFT: Distilling Entangled Factors
Disentanglement is a highly desirable property of representation due to ...

12/20/2018 · Generating lyrics with variational autoencoder and multi-modal artist embeddings
We present a system for generating song lyrics lines conditioned on the ...
