Verification of deep probabilistic models

12/06/2018
by   Krishnamurthy Dvijotham, et al.

Probabilistic models are a critical part of the modern deep learning toolbox, ranging from generative models (VAEs, GANs) and sequence-to-sequence models used in machine translation and speech processing, to models over functional spaces (conditional neural processes, neural processes). Given the size and complexity of these models, safely deploying them in applications requires tools that analyze their behavior rigorously and guarantee that they are consistent with a list of desirable properties or specifications. For example, a machine translation model should produce semantically equivalent outputs for innocuous changes in its input, and a functional regression model that learns a distribution over monotonic functions should predict a larger value at a larger input. Verifying such properties requires a new framework that goes beyond the notions of verification studied for deterministic feedforward networks, since demanding worst-case guarantees in probabilistic models is likely to produce conservative or vacuous results. We propose a novel formulation of verification for deep probabilistic models that take in conditioning inputs and sample latent variables in the course of producing an output: we require that the output of the model satisfy a linear constraint with high probability over the sampling of latent variables, for every choice of conditioning input to the model. We show that rigorous lower bounds on the probability that the constraint is satisfied can be obtained efficiently. Experiments with neural processes show that several properties of interest when modeling functional spaces (monotonicity, convexity) can be expressed within this framework and verified efficiently using our algorithms.
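To make the formulation concrete: for a model y = f(x, z) with conditioning input x and latent z, the property is that a linear constraint on y holds with probability at least 1 - δ over the sampling of z, for every admissible x. The paper obtains rigorous analytical lower bounds on this probability; purely as an illustration of the quantity being bounded, the sketch below estimates a high-confidence lower bound by Monte Carlo sampling with a Hoeffding correction. The toy model `latent_model` and all parameter choices are hypothetical stand-ins, not the authors' method.

```python
import math
import random

def latent_model(x, z):
    # Hypothetical toy probabilistic model: the output depends on the
    # conditioning input x and a sampled latent variable z (a stand-in
    # for a VAE decoder or neural process).
    return 2.0 * x + z

def constraint_prob_lower_bound(x, threshold, n_samples=10000, delta=0.01, seed=0):
    """Monte Carlo lower bound on P(f(x, z) >= threshold) over z.

    By Hoeffding's inequality, with probability >= 1 - delta the true
    probability is at least the empirical frequency minus
    sqrt(log(1/delta) / (2 * n_samples)).
    """
    rng = random.Random(seed)
    hits = sum(
        1 for _ in range(n_samples)
        if latent_model(x, rng.gauss(0.0, 0.1)) >= threshold
    )
    p_hat = hits / n_samples
    slack = math.sqrt(math.log(1.0 / delta) / (2.0 * n_samples))
    return max(0.0, p_hat - slack)
```

For the toy model, `constraint_prob_lower_bound(1.0, 1.5)` certifies the constraint holds with probability close to 1, since the output 2.0 + z rarely drops below 1.5 when z is concentrated near zero. Note the contrast with the paper's approach: a sampling-based bound like this holds only for the specific x tested, whereas the verification algorithms in the paper aim to bound the probability for every choice of conditioning input.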

Related research

04/26/2019 · Knowing When to Stop: Evaluation and Verification of Conformity to Output-size Specifications
Models such as Sequence-to-Sequence and Image-to-Sequence are widely use...

10/07/2020 · Conditional Generative Modeling via Learning the Latent Space
Although deep learning has achieved appealing results on several machine...

10/09/2018 · A Tale of Three Probabilistic Families: Discriminative, Descriptive and Generative Models
The pattern theory of Grenander is a mathematical framework where the pa...

12/18/2018 · PROVEN: Certifying Robustness of Neural Networks with a Probabilistic Approach
With deep neural networks providing state-of-the-art machine learning mo...

05/28/2018 · A Stochastic Decoder for Neural Machine Translation
The process of translation is ambiguous, in that there are typically man...

03/07/2020 · A Safety Framework for Critical Systems Utilising Deep Neural Networks
Increasingly sophisticated mathematical modelling processes from Machine...

10/05/2018 · Towards a correct and efficient implementation of simulation and verification tools for probabilistic ntcc
We extended our simulation tool Ntccrt for probabilistic ntcc (pntcc) mo...
