Unsupervised Learning of Neurosymbolic Encoders

07/28/2021
by Eric Zhan, et al.

We present a framework for the unsupervised learning of neurosymbolic encoders, i.e., encoders obtained by composing neural networks with symbolic programs from a domain-specific language. Such a framework can naturally incorporate symbolic expert knowledge into the learning process and lead to more interpretable and factorized latent representations than fully neural encoders. Also, models learned this way can have downstream impact, as many analysis workflows can benefit from having clean programmatic descriptions. We ground our learning algorithm in the variational autoencoding (VAE) framework, where we aim to learn a neurosymbolic encoder in conjunction with a standard decoder. Our algorithm integrates standard VAE-style training with modern program synthesis techniques. We evaluate our method on learning latent representations for real-world trajectory data from animal biology and sports analytics. We show that our approach offers significantly better separation than standard VAEs and leads to practical gains on downstream tasks.
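Below is a minimal sketch, not the authors' implementation, of the architecture the abstract describes: an encoder whose latent code is partly produced by a symbolic program and partly by a neural network, trained with a standard neural decoder under a VAE-style objective. In the paper the program is synthesized from a domain-specific language; here a hand-written placeholder program (`symbolic_program`), the trajectory shape, and all class and parameter names are illustrative assumptions.

```python
# Toy neurosymbolic VAE sketch in PyTorch (illustrative, not the paper's code).
import torch
import torch.nn as nn
import torch.nn.functional as F


def symbolic_program(x):
    """Hypothetical DSL program: mean speed of a 2-D trajectory.

    x: (batch, T, 2) positions. Returns a (batch, 1) interpretable feature
    standing in for the program-defined part of the latent code.
    """
    velocities = x[:, 1:, :] - x[:, :-1, :]
    return velocities.norm(dim=-1).mean(dim=-1, keepdim=True)


class NeuroSymbolicEncoder(nn.Module):
    """Encoder = symbolic program (1 latent dim) + neural net (remaining dims)."""

    def __init__(self, traj_len=50, neural_latent=7):
        super().__init__()
        self.net = nn.Sequential(
            nn.Flatten(),
            nn.Linear(traj_len * 2, 128), nn.ReLU(),
            nn.Linear(128, 2 * neural_latent),  # mean and log-variance
        )

    def forward(self, x):
        z_sym = symbolic_program(x)                # interpretable, program-defined
        mu, logvar = self.net(x).chunk(2, dim=-1)  # neural part of the latent code
        return z_sym, mu, logvar


class Decoder(nn.Module):
    """Standard neural decoder reconstructing the trajectory from the full code."""

    def __init__(self, traj_len=50, neural_latent=7):
        super().__init__()
        self.traj_len = traj_len
        self.net = nn.Sequential(
            nn.Linear(neural_latent + 1, 128), nn.ReLU(),
            nn.Linear(128, traj_len * 2),
        )

    def forward(self, z_sym, z_neural):
        z = torch.cat([z_sym, z_neural], dim=-1)
        return self.net(z).view(-1, self.traj_len, 2)


def elbo_loss(x, encoder, decoder):
    """Reconstruction + KL term, with the reparameterization trick on the neural part."""
    z_sym, mu, logvar = encoder(x)
    z_neural = mu + torch.randn_like(mu) * (0.5 * logvar).exp()
    x_hat = decoder(z_sym, z_neural)
    recon = F.mse_loss(x_hat, x, reduction="mean")
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + kl


if __name__ == "__main__":
    # Usage sketch: one gradient step on random stand-in trajectories.
    enc, dec = NeuroSymbolicEncoder(), Decoder()
    opt = torch.optim.Adam(list(enc.parameters()) + list(dec.parameters()), lr=1e-3)
    x = torch.randn(16, 50, 2)
    loss = elbo_loss(x, enc, dec)
    loss.backward()
    opt.step()
```

The design choice mirrored here is that only the neural portion of the latent code is stochastic and regularized by the KL term, while the symbolic portion is a deterministic, human-readable program output; the paper's actual training loop additionally searches over candidate programs with program synthesis, which this sketch omits.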
