Neural Approximate Sufficient Statistics for Implicit Models

by Yanzhi Chen et al.

We consider the fundamental problem of automatically constructing summary statistics for implicit generative models, where evaluation of the likelihood function is intractable but sampling / simulating data from the model is possible. The idea is to frame the construction of sufficient statistics as learning a mutual-information-maximizing representation of the data. This representation is computed by a deep neural network trained with a joint statistic-posterior learning strategy. We apply our approach to both traditional approximate Bayesian computation (ABC) and recent neural likelihood methods, boosting their performance on a range of tasks.
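The recipe the abstract describes, learn a statistic that maximizes mutual information with the parameters, then run ABC in statistic space, can be sketched end to end on a toy problem. The snippet below is a minimal illustration, not the paper's method: the simulator, the quadratic InfoNCE critic, and the grid search over a 1-D linear statistic (standing in for neural-network training) are all assumptions made for this sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(theta, n=50):
    # Toy implicit model (stand-in): Gaussian with unknown mean theta.
    return rng.normal(theta, 1.0, size=n)

def features(x):
    # Hand-crafted candidate features; the statistic is a combination of these.
    return np.array([x.mean(), x.std()])

def infonce_bound(thetas, stats):
    # InfoNCE lower bound on I(theta; s(x)), using a fixed quadratic critic
    # f(theta, s) = -(theta - s)^2 (an assumption made for this sketch).
    f = -(thetas[:, None] - stats[None, :]) ** 2
    n = len(thetas)
    log_partition = np.log(np.exp(f).sum(axis=1)) - np.log(n)
    return float(np.mean(np.diag(f) - log_partition))

# 1) Draw (theta, x) pairs from prior * simulator.
prior = lambda m: rng.uniform(-5, 5, size=m)
thetas = prior(500)
phis = np.stack([features(simulate(t)) for t in thetas])

# 2) "Train" the statistic s(x) = w . phi(x) by maximizing the MI bound.
#    Grid search over directions w is a crude stand-in for the paper's
#    gradient-based neural-network training.
best_w, best_mi = None, -np.inf
for a in np.linspace(0, np.pi, 64):
    w = np.array([np.cos(a), np.sin(a)])
    mi = infonce_bound(thetas, phis @ w)
    if mi > best_mi:
        best_w, best_mi = w, mi

# 3) Rejection ABC using distance in the learned statistic space.
x_obs = simulate(2.0)                      # "observed" data, true theta = 2
s_obs = features(x_obs) @ best_w
cand = prior(20000)
s_sim = np.array([features(simulate(t)) @ best_w for t in cand])
accepted = cand[np.abs(s_sim - s_obs) < 0.1]
posterior_mean = accepted.mean()
```

Because the MI bound rewards statistics that are predictive of theta, the search settles on a direction dominated by the sample mean, and the ABC posterior concentrates near the true parameter.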



