Any Deep ReLU Network is Shallow

06/20/2023
by Mattia Jacopo Villani, et al.

We constructively prove that every deep ReLU network can be rewritten as a functionally identical three-layer network with weights valued in the extended reals. Based on this proof, we provide an algorithm that, given a deep ReLU network, finds the explicit weights of the corresponding shallow network. The resulting shallow network is transparent and is used to generate explanations of the model's behaviour.
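The abstract does not spell out the construction itself, but it rests on the standard fact that a deep ReLU network computes a continuous piecewise affine function. The sketch below is a toy illustration of that premise only, not the paper's algorithm: all names and the small architecture are hypothetical. It collapses a deep ReLU network to the single affine map valid on one activation region and checks numerically that the two agree there.

```python
# Minimal sketch (not the paper's construction): a deep ReLU network is
# piecewise affine, so on each activation region it coincides with one
# affine map A x + c obtained by composing the layers with their ReLU
# masks fixed. We verify this at a sample point.
import numpy as np

rng = np.random.default_rng(0)

# Toy deep ReLU network: three hidden layers, scalar output (hypothetical sizes).
widths = [2, 8, 8, 8, 1]
Ws = [rng.standard_normal((m, n)) for n, m in zip(widths[:-1], widths[1:])]
bs = [rng.standard_normal(m) for m in widths[1:]]

def forward(x):
    """Standard deep ReLU forward pass."""
    h = x
    for W, b in zip(Ws[:-1], bs[:-1]):
        h = np.maximum(W @ h + b, 0.0)
    return Ws[-1] @ h + bs[-1]

def local_affine(x):
    """Collapse the network to the affine map valid on x's activation region."""
    A = np.eye(widths[0])
    c = np.zeros(widths[0])
    h = x
    for W, b in zip(Ws[:-1], bs[:-1]):
        pre = W @ h + b
        mask = (pre > 0).astype(float)      # fixed ReLU activation pattern at x
        A = (mask[:, None] * W) @ A         # compose the masked linear parts
        c = mask * (W @ c + b)
        h = np.maximum(pre, 0.0)
    return Ws[-1] @ A, Ws[-1] @ c + bs[-1]

x = rng.standard_normal(widths[0])
A, c = local_affine(x)
assert np.allclose(forward(x), A @ x + c)   # exact agreement on this region
```

This only recovers the network region by region; the paper's three-layer network with extended-real weights is, per the abstract, functionally identical to the deep network everywhere, and the accompanying algorithm produces its explicit weights.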

Related research

Random ReLU Features: Universality, Approximation, and Composition (10/10/2018)
We propose random ReLU features models in this work. Its motivation is r...

Depth and Feature Learning are Provably Beneficial for Neural Network Discriminators (12/27/2021)
We construct pairs of distributions μ_d, ν_d on ℝ^d such that the quanti...

Identifying Weights and Architectures of Unknown ReLU Networks (10/02/2019)
The output of a neural network depends on its parameters in a highly non...

Neural tangent kernels, transportation mappings, and universal approximation (10/15/2019)
This paper establishes rates of universal approximation for the shallow ...

Function approximation by deep networks (05/30/2019)
We show that deep networks are better than shallow networks at approxima...

Initializing ReLU networks in an expressive subspace of weights (03/23/2021)
Using a mean-field theory of signal propagation, we analyze the evolutio...
