Convergence Rates for Multi-class Logistic Regression Near Minimum

12/08/2020
by Dwight Nwaigwe, et al.

Training a neural network is typically done via variations of gradient descent. If a minimum of the loss function exists and gradient descent is used as the training method, we provide an expression that relates the learning rate to the rate of convergence to the minimum. We also discuss the existence of a minimum.
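As a rough illustration of the setting the abstract describes (not the paper's code, and not its exact expression), the sketch below trains a multi-class (softmax) logistic regression model with plain gradient descent on synthetic data. A standard local analysis relates the per-step contraction near a minimum to factors of the form |1 - lr * lambda|, where lambda ranges over the Hessian eigenvalues at the minimum; the paper's precise expression may differ, and all names and data here are illustrative.

```python
# Minimal sketch: gradient descent for softmax logistic regression (illustrative only).
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: n samples, d features, k classes (assumed values for illustration).
n, d, k = 200, 5, 3
X = rng.normal(size=(n, d))
y = rng.integers(0, k, size=n)
Y = np.eye(k)[y]                          # one-hot labels, shape (n, k)

def softmax(Z):
    Z = Z - Z.max(axis=1, keepdims=True)  # subtract row max for numerical stability
    E = np.exp(Z)
    return E / E.sum(axis=1, keepdims=True)

def loss_and_grad(W):
    """Mean cross-entropy loss and its gradient for weights W of shape (d, k)."""
    P = softmax(X @ W)
    loss = -np.mean(np.sum(Y * np.log(P + 1e-12), axis=1))
    grad = X.T @ (P - Y) / n
    return loss, grad

W = np.zeros((d, k))
lr = 0.5                                  # learning rate; governs the local contraction factor
for step in range(1000):
    loss, grad = loss_and_grad(W)
    W -= lr * grad
    if step % 200 == 0:
        print(f"step {step:4d}  loss {loss:.4f}  ||grad|| {np.linalg.norm(grad):.2e}")
```

With a sufficiently small learning rate the gradient norm shrinks steadily toward zero; taking the rate too large breaks the contraction, which is the trade-off the abstract's learning-rate/convergence-rate expression quantifies.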


Related research

06/26/2023 - Gradient Descent Converges Linearly for Logistic Regression on Separable Data
We show that running gradient descent with variable learning rate guaran...

08/15/2020 - Correspondence between neuroevolution and gradient descent
We show analytically that training a neural network by stochastic mutati...

03/29/2019 - A proof of convergence of multi-class logistic regression network
This paper revisits the special type of a neural network known under two...

02/10/2020 - Super-efficiency of automatic differentiation for functions defined as a minimum
In min-min optimization or max-min optimization, one has to compute the ...

03/16/2023 - Controlled Descent Training
In this work, a novel and model-based artificial neural network (ANN) tr...

03/10/2022 - neos: End-to-End-Optimised Summary Statistics for High Energy Physics
The advent of deep learning has yielded powerful tools to automatically ...

03/16/2021 - Learning without gradient descent encoded by the dynamics of a neurobiological model
The success of state-of-the-art machine learning is essentially all base...
