Be Careful What You Backpropagate: A Case For Linear Output Activations & Gradient Boosting

07/13/2017
by Anders Oland et al.

In this work, we show that saturating output activation functions, such as the softmax, impede learning on a number of standard classification tasks. Moreover, we present results showing that the utility of the softmax does not stem from the normalization, as some have speculated. In fact, the normalization makes things worse. Rather, the advantage lies in the exponentiation of error gradients. This exponential gradient boosting is shown to speed up convergence and improve generalization. To this end, we demonstrate faster convergence and better performance on diverse classification tasks: image classification using CIFAR-10 and ImageNet, and semantic segmentation using PASCAL VOC 2012. In the latter case, using a state-of-the-art neural network architecture, the model converged 33% faster than with the standard softmax activation, and with slightly better performance to boot.
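The abstract does not spell out the exact boosting rule, but the contrast it draws can be made concrete with a small sketch. The Python snippet below compares output-layer gradients for softmax with cross-entropy, a plain linear output with squared error, and a linear output whose error gradient is exponentially boosted. The function names and the specific sign-preserving exponential, sign(e) * (exp(|e|) - 1), are illustrative assumptions rather than the paper's formulation.

import numpy as np

def softmax(z):
    """Numerically stable softmax."""
    e = np.exp(z - z.max())
    return e / e.sum()

def output_gradients(z, y):
    """Compare output-layer error gradients for one example.

    z : raw network outputs (logits), shape (num_classes,)
    y : one-hot target, shape (num_classes,)
    """
    # (a) Softmax + cross-entropy: dL/dz = softmax(z) - y.
    grad_softmax_ce = softmax(z) - y

    # (b) Linear output + 0.5 * squared error: dL/dz = z - y.
    grad_linear_mse = z - y

    # (c) Illustrative "exponential gradient boosting" (an assumption;
    # the abstract does not give the exact formula). The raw error is
    # passed through a sign-preserving exponential, so small errors stay
    # roughly linear (exp(x) - 1 ~= x) while large errors are amplified.
    e = z - y
    grad_boosted = np.sign(e) * (np.exp(np.abs(e)) - 1.0)

    return grad_softmax_ce, grad_linear_mse, grad_boosted

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    z = rng.normal(size=5)   # hypothetical logits for 5 classes
    y = np.eye(5)[2]         # true class is index 2
    for name, g in zip(("softmax+CE", "linear+MSE", "linear+boosted"),
                       output_gradients(z, y)):
        print(f"{name:>15}: {np.round(g, 3)}")

For small errors, exp(|e|) - 1 is close to |e|, so the boosted gradient behaves like the plain linear one; large errors are amplified exponentially, which matches the intuition the abstract gives for faster convergence without the normalization that softmax imposes.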


Related research

11/23/2015 - Fast and Accurate Deep Network Learning by Exponential Linear Units (ELUs)
We introduce the "exponential linear unit" (ELU) which speeds up learnin...

05/15/2019 - Online Normalization for Training Neural Networks
Online Normalization is a new technique for normalizing the hidden activ...

06/20/2022 - Revisiting lp-constrained Softmax Loss: A Comprehensive Study
Normalization is a vital process for any machine learning task as it con...

08/13/2018 - Fast, Better Training Trick -- Random Gradient
In this paper, we will show an unprecedented method to accelerate traini...

01/14/2021 - A Multiple Classifier Approach for Concatenate-Designed Neural Networks
This article introduces a multiple classifier method to improve the perf...

01/01/2022 - The GatedTabTransformer. An enhanced deep learning architecture for tabular modeling
There is an increasing interest in the application of deep learning arch...

12/10/2017 - Gradient Normalization & Depth Based Decay For Deep Learning
In this paper we introduce a novel method of gradient normalization and ...
