Better Latent Spaces for Better Autoencoders

04/16/2021
by Barry M. Dillon et al.

Autoencoders used in anomaly searches at the LHC have a structural problem: they work in only one direction, flagging jets that are more complex than the background they were trained on, but not the other way around. To address this, we derive classifiers from the latent space of (variational) autoencoders, specifically with Gaussian mixture and Dirichlet latent spaces. In particular, the Dirichlet setup solves the problem and improves both the performance and the interpretability of the networks.
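The Dirichlet idea lends itself to a compact illustration. Below is a minimal PyTorch sketch of the general approach, not the paper's implementation: an encoder whose latent space approximates a Dirichlet via the common softmax-Gaussian (Laplace) approximation, with the anomaly score read directly off one latent mixture weight. The names DirichletVAEEncoder and anomaly_score, and all hyperparameters, are illustrative assumptions rather than anything taken from the paper.

import torch
import torch.nn as nn

class DirichletVAEEncoder(nn.Module):
    # Encoder whose latent space approximates a Dirichlet via the
    # softmax-Gaussian (Laplace) approximation: sample a Gaussian,
    # then push it through a softmax so the latent vector lives on
    # the probability simplex, like a Dirichlet draw.
    def __init__(self, input_dim, n_mix=3, hidden=128):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(input_dim, hidden), nn.ReLU())
        self.mu = nn.Linear(hidden, n_mix)      # Gaussian mean
        self.logvar = nn.Linear(hidden, n_mix)  # Gaussian log-variance

    def forward(self, x):
        h = self.body(x)
        mu, logvar = self.mu(h), self.logvar(h)
        # Reparameterised Gaussian sample ...
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        # ... mapped onto the simplex: each jet gets a vector of
        # mixture weights that sums to one.
        return torch.softmax(z, dim=-1)

def anomaly_score(theta, signal_component=0):
    # The classifier is read directly off the latent space: the
    # weight assigned to one latent component is the score.
    return theta[:, signal_component]

# Hypothetical usage on a batch of 64 jets with 40 input features:
enc = DirichletVAEEncoder(input_dim=40)
theta = enc(torch.randn(64, 40))
scores = anomaly_score(theta)   # shape (64,), values in [0, 1]

Because each latent vector lies on the simplex, a jet sitting near one corner reads directly as a class assignment, which is where the interpretability claim comes from.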



Related Research

06/19/2019 · Learning Disentangled Representations of Timbre and Pitch for Musical Instrument Sounds Using Gaussian Mixture Variational Autoencoders
In this paper, we learn disentangled representations of timbre and pitch...

11/06/2018 · Sets of autoencoders with shared latent spaces
Autoencoders receive latent models of input data. It was shown in recent...

12/18/2018 · Sparsity in Variational Autoencoders
Working in high-dimensional latent spaces, the internal encoding of data...

06/29/2021 · A Mechanism for Producing Aligned Latent Spaces with Autoencoders
Aligned latent spaces, where meaningful semantic shifts in the input spa...

10/29/2020 · Distance Invariant Sparse Autoencoder for Wireless Signal Strength Mapping
Wireless signal strength based localization can enable robust localizati...

10/12/2020 · Spacetime Autoencoders Using Local Causal States
Local causal states are latent representations that capture organized pa...

06/18/2020 · Constraining Variational Inference with Geometric Jensen-Shannon Divergence
We examine the problem of controlling divergences for latent space regul...