Convolutional networks and learning invariant to homogeneous multiplicative scalings

06/26/2015
by Mark Tygert, et al.

The conventional classification schemes -- notably multinomial logistic regression -- used in conjunction with convolutional networks (convnets) are classical in statistics, designed without consideration for the usual coupling with convnets, stochastic gradient descent, and backpropagation. In the specific application to supervised learning for convnets, a simple scale-invariant classification stage turns out to be more robust than multinomial logistic regression, appears to result in slightly lower errors on several standard test sets, has similar computational costs, and features precise control over the actual rate of learning. "Scale-invariant" means that multiplying the input values by any nonzero scalar leaves the output unchanged.
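The scale-invariance property described above can be illustrated with a small sketch: normalizing the input to unit Euclidean norm before a linear classification stage makes the class scores independent of any positive rescaling of the input. This is a hypothetical illustration of the property, not the paper's exact construction (full invariance to negative scalings, as the abstract claims, would additionally require a sign-insensitive formulation); the names `scale_invariant_scores`, `W`, and `x` are placeholders.

```python
import numpy as np

def scale_invariant_scores(x, W):
    """Class scores from a linear map applied to the direction of x.

    Dividing x by its Euclidean norm before the linear stage makes
    the scores invariant to multiplying x by any positive scalar.
    (Illustrative sketch only, not the paper's exact method.)
    """
    norm = np.linalg.norm(x)
    if norm == 0.0:
        raise ValueError("input must be nonzero")
    return W @ (x / norm)

rng = np.random.default_rng(0)
W = rng.standard_normal((10, 5))   # 10 classes, 5 input features
x = rng.standard_normal(5)

s1 = scale_invariant_scores(x, W)
s2 = scale_invariant_scores(1000.0 * x, W)  # rescaled input
print(np.allclose(s1, s2))  # scores are unchanged by the rescaling
```

A plain multinomial logistic regression stage lacks this property: scaling its input by a large constant saturates the softmax and distorts the gradients, which is one source of the robustness gap the abstract describes.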


Related research

- Gradient Descent Converges Linearly for Logistic Regression on Separable Data (06/26/2023)
- When Does Stochastic Gradient Algorithm Work Well? (01/18/2018)
- Affine-invariant ensemble transform methods for logistic regression (04/16/2021)
- Stochastic Gradient Descent for Relational Logistic Regression via Partial Network Crawls (07/24/2017)
- Condition Number Analysis of Logistic Regression, and its Implications for Standard First-Order Solution Methods (10/20/2018)
- Stochastic Approximation EM for Logistic Regression with Missing Values (05/11/2018)
