Hybrid Models with Deep and Invertible Features

02/07/2019
by Eric Nalisnick, et al.

We propose a neural hybrid model consisting of a linear model defined on a set of features computed by a deep, invertible transformation (i.e., a normalizing flow). An attractive property of our model is that both p(features), the features' density, and p(targets | features), the predictive distribution, can be computed exactly in a single feed-forward pass. We show that our hybrid model, despite the invertibility constraints, achieves accuracy similar to that of purely predictive models. Yet the generative component remains a good model of the input features despite the hybrid optimization objective. This offers additional capabilities such as out-of-distribution detection and semi-supervised learning. The availability of the exact joint density p(targets, features) also allows many quantities to be computed readily, making our hybrid model a useful building block for downstream applications of probabilistic deep learning.
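The joint factorization p(x, y) = p(x) · p(y | f(x)) is what makes the single-pass computation possible: the invertible map f yields features z = f(x) together with the exact log-density log p(x) via the change-of-variables formula, while a linear head on z yields log p(y | z). The sketch below illustrates this with a toy RealNVP-style affine-coupling flow in PyTorch; all names and hyperparameters (CouplingLayer, HybridModel, the 0.1 weight on the generative term) are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of a hybrid model: a toy RealNVP-style coupling flow plus a
# linear predictive head. Names and hyperparameters are illustrative
# assumptions, not the paper's actual architecture.
import math
import torch
import torch.nn as nn
import torch.nn.functional as F

class CouplingLayer(nn.Module):
    """Affine coupling: invertible, with a cheap log|det J|."""
    def __init__(self, dim, hidden, flip):
        super().__init__()
        self.flip = flip
        half = dim // 2
        self.net = nn.Sequential(
            nn.Linear(half, hidden), nn.ReLU(),
            nn.Linear(hidden, 2 * half),
        )

    def forward(self, x):
        x1, x2 = x.chunk(2, dim=-1)
        if self.flip:                       # alternate which half is transformed
            x1, x2 = x2, x1
        s, t = self.net(x1).chunk(2, dim=-1)
        s = torch.tanh(s)                   # bound the log-scales for stability
        y2 = x2 * torch.exp(s) + t
        y = torch.cat([y2, x1] if self.flip else [x1, y2], dim=-1)
        return y, s.sum(dim=-1)             # per-example log|det J|

class HybridModel(nn.Module):
    """z = f(x) from the flow; a linear model on z predicts the targets.
    One forward pass yields both log p(x) and log p(y | z)."""
    def __init__(self, dim, n_classes, hidden=64, n_layers=4):
        super().__init__()
        self.layers = nn.ModuleList(
            CouplingLayer(dim, hidden, flip=(i % 2 == 1)) for i in range(n_layers)
        )
        self.head = nn.Linear(dim, n_classes)   # linear model on flow features

    def forward(self, x):
        z, log_det = x, x.new_zeros(x.shape[0])
        for layer in self.layers:
            z, ld = layer(z)
            log_det = log_det + ld
        # Standard-normal base density + change of variables => exact log p(x).
        log_pz = -0.5 * (z ** 2).sum(-1) - 0.5 * z.shape[-1] * math.log(2 * math.pi)
        log_px = log_pz + log_det
        log_py_given_z = F.log_softmax(self.head(z), dim=-1)
        return log_px, log_py_given_z

# Hybrid objective: predictive log-likelihood plus a weighted generative term.
model = HybridModel(dim=8, n_classes=3)
x, y = torch.randn(16, 8), torch.randint(0, 3, (16,))
log_px, log_py = model(x)
loss = -(log_py[torch.arange(16), y] + 0.1 * log_px).mean()  # 0.1 is an arbitrary weight
loss.backward()
```

Because log p(x) is exact, thresholding it gives a simple out-of-distribution test, and the generative term alone can be optimized on unlabeled inputs, which is what enables the semi-supervised use mentioned above.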

