Spline Representation and Redundancies of One-Dimensional ReLU Neural Network Models

07/29/2022
by   Gerlind Plonka, et al.

We analyze the structure of one-dimensional deep ReLU neural networks (ReLU DNNs) in comparison with the model of continuous piecewise linear (CPL) spline functions with arbitrary knots. In particular, we give a recursive algorithm that converts the parameter set determining a ReLU DNN into the parameter set of a CPL spline function. Using this representation, we show that after removing the well-known parameter redundancies of the ReLU DNN, which are caused by the positive scaling property, all remaining parameters are independent. Moreover, we show that a ReLU DNN with one, two or three hidden layers can represent CPL spline functions with K arbitrarily prescribed knots (breakpoints), where K is the number of real parameters determining the normalized ReLU DNN (up to the output layer parameters). Our findings are useful for fixing a priori conditions on the ReLU DNN to achieve an output with prescribed breakpoints and function values.
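The two core observations of the abstract can be illustrated for the simplest case, a one-hidden-layer network. A function f(x) = c0 + Σ_i v_i·relu(w_i·x + b_i) is a CPL spline whose knots sit at x = -b_i/w_i (for w_i ≠ 0), and the positive scaling property relu(a·t) = a·relu(t) for a > 0 means the parameter triples (w_i, b_i, v_i) and (a·w_i, a·b_i, v_i/a) realize the same function. The sketch below uses hypothetical parameter values (not from the paper) to check both facts numerically; the paper's actual recursive algorithm for deeper networks is not reproduced here.

```python
import numpy as np

# Hypothetical parameters of a one-hidden-layer ReLU network
# f(x) = c0 + sum_i v_i * relu(w_i * x + b_i)
w = np.array([1.0, -2.0, 0.5])
b = np.array([-1.0, 1.0, 0.25])
v = np.array([2.0, 1.0, -3.0])
c0 = 0.5

def relu_net(x, w, b, v, c0):
    x = np.atleast_1d(np.asarray(x, dtype=float))
    return c0 + np.sum(v * np.maximum(w * x[:, None] + b, 0.0), axis=-1)

# Each neuron with w_i != 0 contributes a (potential) spline knot
# at the point where its pre-activation changes sign: x = -b_i / w_i.
knots = np.sort(-b[w != 0] / w[w != 0])

# Between consecutive knots f is affine, so the value at the midpoint
# of every segment equals the average of the endpoint values.
for x0, x1 in zip(knots[:-1], knots[1:]):
    mid = relu_net((x0 + x1) / 2, w, b, v, c0)
    avg = (relu_net(x0, w, b, v, c0) + relu_net(x1, w, b, v, c0)) / 2
    assert np.isclose(mid, avg)

# Positive scaling redundancy: relu(a*t) = a*relu(t) for a > 0, hence
# (a*w, a*b, v/a) parameterizes exactly the same function as (w, b, v).
a = 2.0
xs = np.linspace(-3.0, 3.0, 101)
assert np.allclose(relu_net(xs, w, b, v, c0),
                   relu_net(xs, a * w, a * b, v / a, c0))
```

Removing the scaling redundancy amounts to normalizing each hidden neuron, e.g. fixing |w_i| = 1; the remaining free parameters then correspond to the knot positions and slope changes of the spline, which is the independence statement of the abstract in the one-layer case.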


Related research

- ReLU Deep Neural Networks and Linear Finite Elements (05/18/2021)
- In Proximity of ReLU DNN, PWA Function, and Explicit MPC (06/09/2020)
- Spline parameterization of neural network controls for deep learning (02/27/2021)
- The effect of Target Normalization and Momentum on Dying ReLU (05/13/2020)
- Stable Parametrization of Continuous and Piecewise-Linear Functions (03/10/2022)
- Parameter identifiability of a deep feedforward ReLU neural network (12/24/2021)
- Exploring Gradient Flow Based Saliency for DNN Model Compression (10/24/2021)
