
Be Careful What You Backpropagate: A Case For Linear Output Activations & Gradient Boosting

by Anders Oland, et al.
Carnegie Mellon University

In this work, we show that saturating output activation functions, such as the softmax, impede learning on a number of standard classification tasks. Moreover, we present results showing that the utility of the softmax does not stem from the normalization, as some have speculated. In fact, the normalization makes things worse. Rather, the advantage lies in the exponentiation of the error gradients. This exponential gradient boosting is shown to speed up convergence and improve generalization. To this end, we demonstrate faster convergence and better performance on diverse classification tasks: image classification using CIFAR-10 and ImageNet, and semantic segmentation using PASCAL VOC 2012. In the latter case, using the state-of-the-art neural network architecture, the model converged 33% faster than with the standard softmax activation, and with slightly better performance to boot.
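The core idea — replacing the softmax output with linear activations while keeping the exponential amplification that softmax implicitly applies to error gradients — can be illustrated with a minimal NumPy sketch. The sign-preserving boosting function `sign(e) * (exp(|e|) - 1)` used below is one plausible form chosen for illustration; the paper's exact boosting function and hyperparameters may differ.

```python
import numpy as np

def boosted_grad(err):
    """Exponentially amplify an error gradient while preserving its sign.
    Illustrative form only; the paper's exact boosting function may differ."""
    return np.sign(err) * (np.exp(np.abs(err)) - 1.0)

np.random.seed(0)
X = np.random.randn(64, 5)                        # toy inputs
y = np.eye(3)[np.random.randint(0, 3, size=64)]   # one-hot targets
W = np.zeros((5, 3))                              # single linear layer

def mse(W):
    # Squared error on raw linear outputs -- no softmax, no normalization.
    return float(np.mean((X @ W - y) ** 2))

loss_before = mse(W)
lr = 0.05
for _ in range(300):
    err = X @ W - y                               # gradient of MSE w.r.t. linear outputs
    # Backpropagate the exponentially boosted error instead of the raw one.
    W -= lr * (X.T @ boosted_grad(err)) / len(X)
loss_after = mse(W)
```

Because the boosted gradient keeps the sign of each error component and grows monotonically with its magnitude, large errors receive exponentially larger updates, mimicking the gradient-boosting effect the abstract attributes to the softmax's exponentiation.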



