Hidden symmetries of ReLU networks

06/09/2023
by J. Elisenda Grigsby et al.

The parameter space for any fixed architecture of feedforward ReLU neural networks serves as a proxy during training for the associated class of functions - but how faithful is this representation? It is known that many different parameter settings can determine the same function. Moreover, the degree of this redundancy is inhomogeneous: for some networks, the only symmetries are permutation of neurons in a layer and positive scaling of parameters at a neuron, while other networks admit additional hidden symmetries. In this work, we prove that, for any network architecture where no layer is narrower than the input, there exist parameter settings with no hidden symmetries. We also describe a number of mechanisms through which hidden symmetries can arise, and empirically approximate the functional dimension of different network architectures at initialization. These experiments indicate that the probability that a network has no hidden symmetries decreases towards 0 as depth increases, while increasing towards 1 as width and input dimension increase.
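To make the symmetries described in the abstract concrete, below is a minimal numpy sketch (not the authors' code; the toy architecture, sample-input batch, and finite-difference Jacobian are illustrative assumptions). It verifies that permuting hidden neurons and positively rescaling a neuron's parameters leave the computed function unchanged, then approximates the functional dimension of the toy network as the rank of the Jacobian of a batch of outputs with respect to the parameters.

```python
# Minimal sketch for a one-hidden-layer ReLU network
# f(x) = W2 @ relu(W1 @ x + b1) + b2.
import numpy as np

rng = np.random.default_rng(0)

def relu(z):
    return np.maximum(z, 0.0)

def forward(params, x):
    W1, b1, W2, b2 = params
    return W2 @ relu(W1 @ x + b1) + b2

# Random network: input dim 3, hidden width 4, output dim 2 (26 parameters).
W1, b1 = rng.normal(size=(4, 3)), rng.normal(size=4)
W2, b2 = rng.normal(size=(2, 4)), rng.normal(size=2)
params = (W1, b1, W2, b2)
x = rng.normal(size=3)

# Symmetry 1: permute the hidden neurons (rows of W1, b1; columns of W2).
perm = rng.permutation(4)
permuted = (W1[perm], b1[perm], W2[:, perm], b2)
print(np.allclose(forward(params, x), forward(permuted, x)))   # True

# Symmetry 2: scale neuron 0's incoming weights and bias by c > 0 and its
# outgoing weights by 1/c; since relu(c*z) = c*relu(z), f is unchanged.
c = 2.5
W1s, b1s, W2s = W1.copy(), b1.copy(), W2.copy()
W1s[0] *= c; b1s[0] *= c; W2s[:, 0] /= c
print(np.allclose(forward(params, x), forward((W1s, b1s, W2s, b2), x)))  # True

# Functional dimension estimate: rank of d(outputs)/d(parameters) over a
# batch of sample inputs, via central finite differences.
def flatten(p):
    return np.concatenate([a.ravel() for a in p])

def unflatten(v):
    shapes, out, i = [(4, 3), (4,), (2, 4), (2,)], [], 0
    for s in shapes:
        n = int(np.prod(s))
        out.append(v[i:i + n].reshape(s))
        i += n
    return tuple(out)

X = rng.normal(size=(20, 3))          # batch of sample inputs
theta, eps, cols = flatten(params), 1e-6, []
for j in range(theta.size):
    tp, tm = theta.copy(), theta.copy()
    tp[j] += eps; tm[j] -= eps
    fp = np.concatenate([forward(unflatten(tp), xi) for xi in X])
    fm = np.concatenate([forward(unflatten(tm), xi) for xi in X])
    cols.append((fp - fm) / (2 * eps))
J = np.stack(cols, axis=1)            # shape (40, 26)
# Generically at most 22 = 26 - 4: each hidden neuron's scaling symmetry
# contributes a function-preserving direction in parameter space.
print(np.linalg.matrix_rank(J, tol=1e-4))
```

The rank printed at the end is at most the raw parameter count minus one per hidden neuron, since each scaling symmetry contributes a function-preserving direction; any further rank deficit would signal additional, hidden symmetries of the kind the paper studies.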
