Generalised Perceptron Learning

12/07/2020
by Xiaoyu Wang, et al.

We present a generalisation of Rosenblatt's traditional perceptron learning algorithm to the class of proximal activation functions and demonstrate how this generalisation can be interpreted as an incremental gradient method applied to a novel energy function. This energy function is based on a generalised Bregman distance, whose gradient with respect to the weights and biases does not require differentiating the activation function. The interpretation as an energy-minimisation algorithm paves the way for many new algorithms; of these, we explore a novel variant of the iterative soft-thresholding algorithm for learning sparse perceptrons.
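The practical payoff of the abstract's claim is that, when the activation σ is a proximal map, the per-sample weight update can be written as w ← w − η (σ(w·x + b) − y) x, with no derivative of σ appearing. This specific form is our reading, motivated by the standard Moreau-envelope identity ∇M_{ψ*}(z) = prox_ψ(z); the paper's exact energy and update may differ. The minimal Python sketch below illustrates an incremental update of that form, plus an optional ISTA-style soft-thresholding step on the weights in the spirit of the sparse-perceptron variant mentioned above. All names and the concrete setup (clipped activation, synthetic data) are our own illustrative choices, not the paper's notation or experiments.

import numpy as np

def soft_threshold(z, t):
    # Proximal map of t * ||.||_1 (soft-thresholding).
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def generalised_perceptron(X, y, activation, lr=0.1, epochs=50, l1=0.0):
    # Incremental-gradient sketch (illustrative, not the paper's code):
    # for an activation that is a proximal map, the update uses the
    # residual (activation(pre) - y) directly, so the activation is
    # never differentiated. Setting l1 > 0 adds an ISTA-style
    # soft-thresholding step on the weights to promote sparsity.
    n_samples, n_features = X.shape
    w = np.zeros(n_features)
    b = 0.0
    for _ in range(epochs):
        for i in np.random.permutation(n_samples):
            pre = X[i] @ w + b
            err = activation(pre) - y[i]
            w -= lr * err * X[i]
            b -= lr * err
            if l1 > 0.0:
                w = soft_threshold(w, lr * l1)  # proximal step for the l1 penalty
    return w, b

# Example: the projection onto [0, 1] (a clipped linear unit) is the
# proximal map of the indicator function of [0, 1], so it belongs to
# the class of proximal activation functions.
clip01 = lambda z: np.clip(z, 0.0, 1.0)

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))
y = (X[:, 0] - X[:, 1] > 0).astype(float)  # synthetic labels in {0, 1}
w, b = generalised_perceptron(X, y, clip01, lr=0.05, epochs=100, l1=0.01)

With l1 = 0 this reduces to a delta-rule-like update without the usual derivative factor; with l1 > 0 each gradient step is followed by the proximal map of the ℓ1 penalty, which is the standard ISTA construction.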


Related research

Auto-Rotating Perceptrons (10/06/2019)
This paper proposes an improved design of the perceptron unit to mitigat...

Logistic Regression as Soft Perceptron Learning (08/24/2017)
We comment on the fact that gradient ascent for logistic regression has ...

Otimização de pesos e funções de ativação de redes neurais aplicadas na previsão de séries temporais [Optimisation of weights and activation functions of neural networks applied to time series forecasting] (07/29/2021)
Neural Networks have been applied for time series prediction with good e...

A Deep Learning Approach to Nonconvex Energy Minimization for Martensitic Phase Transitions (06/24/2022)
We propose a mesh-free method to solve nonconvex energy minimization pro...

Lifted Proximal Operator Machines (11/05/2018)
We propose a new optimization method for training feed-forward neural ne...

Polyhedrons and Perceptrons Are Functionally Equivalent (11/05/2013)
Mathematical definitions of polyhedrons and perceptron networks are disc...

A Perceptron-based Fine Approximation Technique for Linear Separation (09/12/2023)
This paper presents a novel online learning method that aims at finding ...
