Understanding Sinusoidal Neural Networks

12/04/2022
by Tiago Novello, et al.

In this work, we investigate the representation capacity of multilayer perceptron networks that use the sine as activation function (sinusoidal neural networks). We show that the layer composition in such networks compacts information. To prove this, we show that the composition of sinusoidal layers expands as a sum of sines containing a large number of new frequencies given by linear combinations of the weights of the network's first layer. We provide expressions for the corresponding amplitudes in terms of Bessel functions and give an upper bound on them that can be used to control the resulting approximation.
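To see the expansion in its simplest form, consider a two-layer composition sin(a · sin(ωx)). The classical Jacobi–Anger identity writes it as a sum of sines at the harmonics kω (integer multiples of the first layer's frequency) with amplitudes 2·J_k(a), where J_k is the Bessel function of the first kind. The sketch below is written for this summary, not taken from the paper; the amplitude a, frequency ω, and truncation order are illustrative choices.

```python
import numpy as np
from scipy.special import jv  # Bessel function of the first kind, J_k

# Illustrative parameters (not from the paper): first-layer frequency and
# second-layer amplitude.
omega, a = 3.0, 2.0
x = np.linspace(-np.pi, np.pi, 2001)

# Two-layer sinusoidal composition: sin(a * sin(omega * x)).
composed = np.sin(a * np.sin(omega * x))

# Jacobi-Anger expansion: sin(a sin t) = 2 * sum over odd k of J_k(a) sin(k t).
# The composition is thus a sum of sines at the new frequencies k * omega,
# with Bessel-function amplitudes 2 * J_k(a).
K = 15  # truncation order; |J_k(a)| decays rapidly once k exceeds a
expansion = sum(2.0 * jv(k, a) * np.sin(k * omega * x)
                for k in range(1, K + 1, 2))

print("max truncation error:", np.max(np.abs(composed - expansion)))
print("amplitudes 2*J_k(a):  ",
      [round(2.0 * jv(k, a), 6) for k in range(1, K + 1, 2)])
```

The rapidly decaying Bessel amplitudes are what make an upper bound of the kind mentioned above useful: high harmonics contribute negligibly, so the expansion can be truncated with controlled error.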
