One-class systems seamlessly fit in the forward-forward algorithm

06/27/2023
by Michael Hopwood, et al.

The forward-forward algorithm presents a new method of training neural networks by updating weights during the forward pass, performing parameter updates for each layer individually. This immediately reduces memory requirements during training and may enable further benefits, such as seamless online training. The method relies on a loss ("goodness") function that can be evaluated on the activations of each layer, which can vary in size depending on the hyperparameterization of the network. The seminal paper proposed a goodness function to fill this need; in a one-class problem context, however, there is no need to devise a new loss, because one-class objective functions innately handle activations of varying dimensionality. In this paper, we investigate the performance of deep one-class objective functions when trained in a forward-forward fashion. The code is available at <https://github.com/MichaelHopwood/ForwardForwardOneclass>.

