Better Latent Spaces for Better Autoencoders

04/16/2021
by Barry M. Dillon, et al.

Autoencoders as tools behind anomaly searches at the LHC have the structural problem that they only work in one direction, extracting jets with higher complexity but not the other way around. To address this, we derive classifiers from the latent space of (variational) autoencoders, specifically in Gaussian mixture and Dirichlet latent spaces. In particular, the Dirichlet setup solves the problem and improves both the performance and the interpretability of the networks.
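The core idea of the abstract, reading a classifier score directly off a Dirichlet latent space rather than off the reconstruction error, can be sketched in a few lines. The following is a minimal illustration, not the authors' implementation: the class name `JetDirichletVAE`, the network sizes, the symmetric prior `alpha_prior`, and the choice of the first latent component as the score are assumptions made for the example.

```python
# Minimal sketch of a Dirichlet-latent autoencoder whose latent mixture weights
# double as a classifier/anomaly score. All names and hyperparameters are
# illustrative assumptions, not taken from the paper.

import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.distributions import Dirichlet, kl_divergence


class JetDirichletVAE(nn.Module):
    def __init__(self, input_dim=1600, latent_dim=3, hidden=256, alpha_prior=1.0):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(input_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, latent_dim),
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, input_dim),
        )
        # Symmetric Dirichlet prior over the latent mixture weights.
        self.register_buffer("alpha_prior", torch.full((latent_dim,), alpha_prior))

    def forward(self, x):
        # Encoder outputs strictly positive Dirichlet concentrations alpha.
        alpha = F.softplus(self.encoder(x)) + 1e-3
        posterior = Dirichlet(alpha)
        z = posterior.rsample()          # point on the simplex, components sum to 1
        recon = self.decoder(z)
        kl = kl_divergence(posterior, Dirichlet(self.alpha_prior.expand_as(alpha)))
        return recon, z, kl

    def anomaly_score(self, x):
        # Classifier read directly from the latent space: the posterior-mean
        # weight of one latent direction, usable in either direction.
        alpha = F.softplus(self.encoder(x)) + 1e-3
        return (alpha / alpha.sum(dim=-1, keepdim=True))[:, 0]


# Usage sketch: one training step on a batch of flattened jet images.
model = JetDirichletVAE()
x = torch.rand(32, 1600)
recon, z, kl = model(x)
loss = F.mse_loss(recon, x) + 0.1 * kl.mean()
loss.backward()
scores = model.anomaly_score(x)   # per-jet score in [0, 1]
```

Because the score is a latent mixture weight rather than a reconstruction error, it can in principle be thresholded from either side, which is the symmetry the reconstruction-based setup lacks.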

Related research

06/19/2019: Learning Disentangled Representations of Timbre and Pitch for Musical Instrument Sounds Using Gaussian Mixture Variational Autoencoders
In this paper, we learn disentangled representations of timbre and pitch...

06/03/2020: Open-Set Recognition with Gaussian Mixture Variational Autoencoders
In inference, open-set classification is to either classify a sample int...

11/06/2018: Sets of autoencoders with shared latent spaces
Autoencoders receive latent models of input data. It was shown in recent...

06/29/2021: A Mechanism for Producing Aligned Latent Spaces with Autoencoders
Aligned latent spaces, where meaningful semantic shifts in the input spa...

12/18/2018: Sparsity in Variational Autoencoders
Working in high-dimensional latent spaces, the internal encoding of data...

10/29/2020: Distance Invariant Sparse Autoencoder for Wireless Signal Strength Mapping
Wireless signal strength based localization can enable robust localizati...

10/12/2020: Spacetime Autoencoders Using Local Causal States
Local causal states are latent representations that capture organized pa...
