Stabilizing Invertible Neural Networks Using Mixture Models

09/07/2020
by Paul Hagemann, et al.

In this paper, we analyze the properties of invertible neural networks, which provide a way of solving inverse problems. Our main focus lies on investigating and controlling the Lipschitz constants of the corresponding inverse networks. Without such control, numerical simulations are prone to errors and little is gained over traditional approaches. Fortunately, our analysis indicates that changing the latent distribution from a standard normal one to a Gaussian mixture model resolves the issue of exploding Lipschitz constants. Indeed, numerical simulations confirm that this modification leads to significantly improved sampling quality in multimodal applications.
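To make the proposed modification concrete, the following is a minimal sketch, assuming a PyTorch setting, of how a Gaussian mixture latent distribution can replace the standard normal latent of an invertible network. The latent dimension, number of components, and mixture parameters below are illustrative placeholders, not the configuration used in the paper.

```python
import torch
from torch.distributions import Categorical, Independent, MixtureSameFamily, Normal

# Hypothetical latent dimension and number of mixture components.
latent_dim, n_components = 2, 4

# Well-separated component means, so each latent mode can be mapped to one data mode.
means = torch.randn(n_components, latent_dim) * 3.0
stds = torch.ones(n_components, latent_dim)

# Gaussian mixture latent with uniform component weights,
# used in place of a standard normal N(0, I).
latent = MixtureSameFamily(
    Categorical(logits=torch.zeros(n_components)),
    Independent(Normal(means, stds), 1),
)

z = latent.sample((8,))        # latent samples, shape (8, latent_dim)
log_pz = latent.log_prob(z)    # latent log-density

# With an invertible network T (e.g. a coupling-based flow), samples are generated
# as x = T(z), and training maximizes log p_z(T^{-1}(x)) + log|det J_{T^{-1}}(x)|
# over the data x; only the latent term changes compared to a standard normal prior.
```

In this sketch, only the latent density is swapped; the invertible architecture and the change-of-variables training objective stay as in a standard normalizing-flow setup.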

