The Concept of Forward-Forward Learning Applied to a Multi Output Perceptron

04/06/2023
by K. Fredrik Karlsson, et al.

The concept of a recently proposed Forward-Forward learning algorithm for fully connected artificial neural networks is applied to a single multi-output perceptron for classification. The parameters of the system are trained with respect to increased (decreased) "goodness" for correctly (incorrectly) labelled input samples. Basic numerical tests demonstrate that the trained perceptron effectively deals with data sets that have non-linear decision boundaries. Moreover, the overall performance is comparable to that of more complex neural networks with hidden layers. The benefit of the approach presented here is that it involves only a single matrix multiplication.
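The training rule described in the abstract can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the choice of goodness as the squared class activation, the logistic loss around a threshold theta, the learning rate, and the toy data are all assumptions made for the sketch.

```python
import numpy as np

# Sketch of a Forward-Forward-style update for a single multi-output
# perceptron (illustrative assumptions; not the paper's exact setup).
# "Goodness" is taken here as the squared activation of each class output:
# training raises the goodness of the correct class output and lowers it
# for the others, via a logistic loss around a threshold theta.

rng = np.random.default_rng(0)

def ff_step(W, x, label, lr=0.01, theta=1.0):
    a = W @ x                    # a single matrix multiplication
    g = a ** 2                   # per-class "goodness"
    p = 1.0 / (1.0 + np.exp(-(g - theta)))  # P(output sees "positive" data)
    t = np.zeros_like(p)
    t[label] = 1.0               # only the correct class should score high
    # Gradient of the per-class logistic loss; dg/dW_k = 2 * a_k * x
    grad = ((p - t) * 2.0 * a)[:, None] * x[None, :]
    return W - lr * grad

# Toy data: two Gaussian blobs; a bias feature is appended so that the
# squared activation can distinguish x from -x.
X = np.vstack([rng.normal(-1.0, 0.5, size=(50, 2)),
               rng.normal(+1.0, 0.5, size=(50, 2))])
y = np.array([0] * 50 + [1] * 50)
Xb = np.hstack([X, np.ones((100, 1))])

W = rng.normal(0.0, 0.1, size=(2, 3))   # one weight row per class
for _ in range(200):
    for x, label in zip(Xb, y):
        W = ff_step(W, x, label)

# Classify by the class output with the highest goodness.
pred = np.argmax((Xb @ W.T) ** 2, axis=1)
print("training accuracy:", (pred == y).mean())
```

Note that both training and inference reduce to one matrix-vector product per sample, consistent with the abstract's claim; no hidden layers or backpropagated error signals are involved.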
