Training-Free Artificial Neural Networks

09/30/2019
by Nikolaos P. Bakas, et al.

We present a numerical scheme for computing the weights of Artificial Neural Networks without a laborious iterative training procedure. The proposed algorithm adheres to the underlying theory, is very fast, and yields remarkably low errors in regression and classification of complex datasets, such as the Griewank function of multiple variables (x ∈ R^100) with added random noise, and the MNIST database of handwritten digits with 7×10^4 images. Notably, the same mathematical formulation proves capable of approximating highly nonlinear functions in multiple dimensions, with low test-set errors (e.g., 10^-10) for the unknown functions and their higher-order partial derivatives, as well as of numerically solving Partial Differential Equations. The method is based on calculating the weights of each neuron over small neighborhoods of the data, chosen such that the corresponding local approximation matrix is invertible. Accordingly, hyperparameter optimization is unnecessary, since the number of neurons stems directly from the dimensions of the data, which further improves the algorithmic speed. Overfitting is inherently eliminated, and the results are interpretable and reproducible. The complexity of the proposed algorithm is in class P, with O(mn^3) computing time, i.e., linear in the number of observations m and cubic in the number of features n, in contrast with the NP-complete class of standard algorithms for training ANNs. The method performs well on small as well as large datasets, and the test-set errors are similar to or smaller than the training errors, indicating generalization efficiency. The supplementary computer code, written in the Julia language, reproduces the validation examples and can be run on other datasets.
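To make the direct-solve idea concrete, here is a minimal sketch in Julia (the language of the paper's supplementary code). It is not the authors' algorithm: the contiguous neighborhood partitioning, the bias column, the neighborhood size k, and the small ridge term λ are assumptions introduced for illustration. The point is only that each neuron's weights come from solving a small, invertible linear system, with no iterative gradient-based training.

```julia
# Minimal sketch (NOT the paper's supplementary code): training-free weight
# computation by solving small linear systems on local neighborhoods of data.
# Neighborhood partitioning, bias column, and ridge term λ are assumptions.

using LinearAlgebra

# Solve for one neuron's weights w such that [X 1] * w ≈ y on a neighborhood.
# The backslash operator solves the regularized normal equations directly;
# the tiny ridge term λ keeps the local matrix invertible even for small or
# degenerate neighborhoods.
function local_weights(X::AbstractMatrix, y::AbstractVector; λ=1e-10)
    A = hcat(X, ones(size(X, 1)))      # design matrix with a bias column
    return (A' * A + λ * I) \ (A' * y)
end

# Partition the m observations into neighborhoods of k points and compute one
# weight vector (one "neuron") per neighborhood.
function train_free_fit(X::AbstractMatrix, y::AbstractVector, k::Integer)
    m = size(X, 1)
    ranges = [r:min(r + k - 1, m) for r in 1:k:m]
    return [local_weights(X[r, :], y[r]) for r in ranges], ranges
end

# Usage: fit a noisy quadratic in two variables, neighborhoods of 10 points.
X = rand(200, 2)
y = vec(sum(X .^ 2, dims=2)) .+ 1e-3 .* randn(200)
weights, ranges = train_free_fit(X, y, 10)
```

Solving each k-point system costs O(kn^2 + n^3); with roughly m/k neighborhoods the total is linear in the observations and cubic in the features, consistent with the O(mn^3) bound stated in the abstract.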


