Discontinuous Piecewise Polynomial Neural Networks

05/15/2015
by John Loverich, et al.

An artificial neural network is presented based on connections between units that are active only over a specific range of input values and zero outside that range, so they need not be evaluated outside the active region. Each connection function is represented by a polynomial with compact support. The finite activation range makes activations in the network highly sparse and means that, in principle, computational power can be added to the network without increasing the time required to evaluate it for a given input. Polynomial orders from first through fifth are considered. Unit dropout is used for regularization, and a parameter-free weight update is employed. Moving from piecewise linear to piecewise quadratic connections improves performance, and higher-order polynomials improve it further. The algorithm is tested on the MAGIC Gamma ray data set as well as the MNIST data set.
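As a minimal sketch of the core idea (not the paper's implementation), a compact-support piecewise polynomial connection can be written as below. The segment layout, local coordinates, and function name are illustrative assumptions: the input range is split into equal segments, each with its own polynomial, and inputs outside the range contribute zero.

```python
import numpy as np

def piecewise_poly_connection(x, coeffs, x_min=-1.0, x_max=1.0):
    """Evaluate a compact-support piecewise polynomial connection on a 1-D array.

    The range [x_min, x_max] is split into len(coeffs) equal segments;
    segment i applies its own polynomial (coefficients highest-order
    first, as numpy.polyval expects) in a local coordinate t in [0, 1).
    Inputs outside the range yield zero, so the connection is inactive
    there and, in a full implementation, would not be evaluated at all.
    """
    x = np.asarray(x, dtype=float)
    out = np.zeros_like(x)
    n_seg = len(coeffs)
    width = (x_max - x_min) / n_seg
    inside = (x >= x_min) & (x < x_max)
    inside_idx = np.flatnonzero(inside)
    # Which segment each in-range input falls into.
    seg = np.floor((x[inside] - x_min) / width).astype(int)
    for i in range(n_seg):
        mask = seg == i
        # Local coordinate within segment i, scaled to [0, 1).
        t = (x[inside][mask] - (x_min + i * width)) / width
        out[inside_idx[mask]] = np.polyval(coeffs[i], t)
    return out
```

With two linear pieces, e.g. `coeffs=[[1.0, 0.0], [-1.0, 1.0]]`, this produces a triangular "hat" function that rises on the first segment, falls on the second, and is identically zero elsewhere.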


