Linear discriminant initialization for feed-forward neural networks

07/24/2020
by Marissa Masden, et al.

Informed by the basic geometry underlying feed-forward neural networks, we initialize the weights of the first layer of a neural network using the linear discriminants that best distinguish individual classes. Networks initialized in this way require fewer training steps to reach the same level of training performance, and asymptotically achieve higher accuracy on training data.
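The abstract does not spell out the construction, but the idea can be sketched as follows: for each class, compute a one-vs-rest Fisher linear discriminant and use those directions as the rows of the first layer's weight matrix, tiling them across the hidden units. This is a minimal illustration under that assumption, not the paper's exact method; the helper name `lda_init_first_layer` is hypothetical.

```python
import numpy as np

def lda_init_first_layer(X, y, n_hidden):
    """Return (W, b) for a first dense layer, with rows set to
    one-vs-rest Fisher linear discriminants.

    A sketch of the idea only; the paper's precise construction
    may differ.
    """
    classes = np.unique(y)
    d = X.shape[1]
    dirs, biases = [], []
    for c in classes:
        Xc, Xr = X[y == c], X[y != c]
        mu_c, mu_r = Xc.mean(axis=0), Xr.mean(axis=0)
        # Within-class scatter of both groups; a small ridge term
        # keeps the linear solve well-posed.
        Sw = np.cov(Xc, rowvar=False) + np.cov(Xr, rowvar=False)
        w = np.linalg.solve(Sw + 1e-6 * np.eye(d), mu_c - mu_r)
        w /= np.linalg.norm(w)
        dirs.append(w)
        # Bias places the unit's decision boundary at the midpoint
        # between the class mean and the rest-of-data mean.
        biases.append(-w @ ((mu_c + mu_r) / 2.0))
    # Tile the per-class discriminants across the hidden units.
    W = np.stack([dirs[i % len(dirs)] for i in range(n_hidden)])
    b = np.array([biases[i % len(biases)] for i in range(n_hidden)])
    return W, b
```

Each hidden unit then starts out as a linear classifier that already separates one class from the rest, rather than a random hyperplane, which is what lets training start from a geometrically informed configuration.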


Related research:

- 11/18/2022: Global quantitative robustness of regression feed-forward neural networks. Neural networks are an indispensable model class for many complex learni...
- 06/13/2017: Transfer entropy-based feedback improves performance in artificial neural networks. The structure of the majority of modern deep neural networks is characte...
- 12/21/2021: NN2Poly: A polynomial representation for deep feed-forward artificial neural networks. Interpretability of neural networks and their underlying theoretical beh...
- 09/07/2021: On the space of coefficients of a Feed Forward Neural Network. We define and establish the conditions for `equivalent neural networks' ...
- 10/05/2021: On the Impact of Stable Ranks in Deep Nets. A recent line of work has established intriguing connections between the...
- 06/10/2022: An application of neural networks to a problem in knot theory and group theory (untangling braids). We report on our success on solving the problem of untangling braids up ...
- 03/23/2021: Contrastive Reasoning in Neural Networks. Neural networks represent data as projections on trained weights in a hi...
