Estimating Multiplicative Relations in Neural Networks

10/28/2020
by Bhaavan Goel, et al.

The universal approximation theorem suggests that a shallow neural network can approximate any function. The input to the neurons at each layer is a weighted sum of the previous layer's neurons, followed by an activation. These activation functions perform well when the output is a linear combination of the input data, but when learning a function that involves a product of the inputs, neural networks tend to overfit the data in order to approximate it. In this paper we use properties of logarithmic functions to propose a pair of activation functions that translate products into linear expressions, which can then be learned with backpropagation. We generalize this approach to some more complex arithmetic functions and test the accuracy on a distribution disjoint from the training set.
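The core identity behind a log/exp activation pair can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function names, the epsilon clamp, and the restriction to strictly positive inputs are all assumptions made for the example. With a log activation before a linear layer and an exp activation after it, the network computes exp(Σ wᵢ · log(xᵢ)) = Π xᵢ^wᵢ, so a product becomes a linear expression in log space:

```python
import numpy as np

def log_activation(x, eps=1e-12):
    # Hypothetical pre-activation: map positive inputs into log space,
    # clamping to eps to avoid log(0).
    return np.log(np.maximum(x, eps))

def exp_activation(z):
    # Hypothetical post-activation: map the weighted sum back to
    # product space.
    return np.exp(z)

# With unit weights, the pair recovers y = x1 * x2 exactly:
w = np.array([1.0, 1.0])
x = np.array([3.0, 4.0])
y = exp_activation(log_activation(x) @ w)
print(y)  # 12.0
```

Because the weights act as exponents in product space, ordinary gradient descent on the linear layer between the two activations can fit multiplicative relations without overfitting a sum-based approximation.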

