A Complete Characterisation of ReLU-Invariant Distributions

12/13/2021
by Jan Macdonald et al.

We give a complete characterisation of families of probability distributions that are invariant under the action of ReLU neural network layers. The need for such families arises during the training of Bayesian networks or the analysis of trained neural networks, e.g., in the context of uncertainty quantification (UQ) or explainable artificial intelligence (XAI). We prove that no invariant parametrised family of distributions can exist unless at least one of the following three restrictions holds: First, the network layers have a width of one, which is unreasonable for practical neural networks. Second, the probability measures in the family have finite support, which basically amounts to sampling distributions. Third, the parametrisation of the family is not locally Lipschitz continuous, which excludes all computationally feasible families. Finally, we show that these restrictions are individually necessary. For each of the three cases we can construct an invariant family exploiting exactly one of the restrictions but not the other two.
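To make the notion of invariance concrete, here is a minimal sketch (not taken from the paper; all variable names and parameter choices are hypothetical) of why a familiar locally Lipschitz parametrised family such as the Gaussians cannot be ReLU-invariant: pushing a Gaussian through a ReLU layer creates a point mass at the kink, which no non-degenerate Gaussian possesses.

```python
# Illustrative sketch (not from the paper): the Gaussian family is not
# closed under ReLU layers, because the pushforward acquires an atom at 0.
# All names and parameter choices below are hypothetical.
import numpy as np

rng = np.random.default_rng(0)

# A 2-dimensional Gaussian input distribution N(mean, cov).
mean = np.array([0.5, -0.5])
cov = np.array([[1.0, 0.3],
                [0.3, 1.0]])

# A random ReLU layer x -> max(W x + b, 0) of width 2.
W = rng.standard_normal((2, 2))
b = rng.standard_normal(2)

# Monte Carlo pushforward of the Gaussian through the layer.
x = rng.multivariate_normal(mean, cov, size=100_000)
y = np.maximum(x @ W.T + b, 0.0)

# A substantial fraction of samples lands exactly at 0 in some coordinate:
# the pushforward has a point mass, hence is not a non-degenerate Gaussian,
# so the Gaussian family is not ReLU-invariant.
frac_at_zero = np.mean(np.any(y == 0.0, axis=1))
print(f"fraction of samples with an exact zero coordinate: {frac_at_zero:.3f}")
```

This small experiment only rules out one particular family; the result summarised above is far stronger, excluding every family with infinite support and a locally Lipschitz parametrisation once the layer width exceeds one.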


