The Z-loss: a shift and scale invariant classification loss belonging to the Spherical Family

04/29/2016
by   Alexandre de Brébisson, et al.

Despite being the standard loss function to train multi-class neural networks, the log-softmax has two potential limitations. First, it involves computations that scale linearly with the number of output classes, which can restrict the size of problems we are able to tackle with current hardware. Second, it remains unclear how closely it matches the task loss, such as the top-k error rate or other non-differentiable evaluation metrics which we ultimately aim to optimize. In this paper, we introduce an alternative classification loss function, the Z-loss, which is designed to address these two issues. Unlike the log-softmax, it has the desirable property of belonging to the spherical loss family (Vincent et al., 2015), a class of loss functions for which training can be performed very efficiently, with a complexity independent of the number of output classes. We show experimentally that it significantly outperforms the other spherical loss functions previously investigated. Furthermore, we show on a word language modeling task that it also outperforms the log-softmax with respect to certain ranking scores, such as top-k scores, suggesting that the Z-loss has the flexibility to better match the task loss. These qualities thus make the Z-loss an appealing candidate for efficiently training networks with very large output layers, such as word language models or other extreme classification problems. On the One Billion Word (Chelba et al., 2014) dataset, we are able to train a model with the Z-loss 40 times faster than with the log-softmax and more than 4 times faster than with the hierarchical softmax.
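The abstract does not reproduce the paper's definition of the Z-loss, so the sketch below only illustrates the two properties it highlights: shift and scale invariance, and membership in the spherical family (dependence on the output scores only through the target score o_c, the sum of scores, and the sum of squared scores). The helper names z_score_of_target and illustrative_z_loss, and the particular monotone function applied to the z-score, are hypothetical choices for illustration, not the formula from the paper.

```python
# Illustrative sketch only: the z-score statistic below captures the two
# properties emphasized in the abstract (shift/scale invariance, and
# dependence on the outputs only through o_c, sum(o) and sum(o**2));
# the actual Z-loss is defined in the paper and may differ.
import numpy as np

def z_score_of_target(o, c):
    """z-score of the target class score, computed only from the
    spherical-family statistics o_c, S1 = sum(o) and S2 = sum(o**2)."""
    D = o.shape[0]
    S1, S2 = o.sum(), (o ** 2).sum()
    mean = S1 / D
    var = S2 / D - mean ** 2
    return (o[c] - mean) / np.sqrt(var + 1e-12)

def illustrative_z_loss(o, c):
    # Hypothetical monotone decreasing function of the z-score; any such
    # choice inherits the shift and scale invariance of the z-score itself.
    return np.log1p(np.exp(-z_score_of_target(o, c)))

rng = np.random.default_rng(0)
o = rng.normal(size=10_000)   # raw network scores for a large output layer
c = 42                        # target class index

# Shift and scale invariance: o -> a * o + b with a > 0 leaves the loss unchanged.
a, b = 3.7, -1.2
print(illustrative_z_loss(o, c))
print(illustrative_z_loss(a * o + b, c))  # same value up to floating-point error
```

Because such a loss touches the D output scores only through a handful of summary statistics, the exact-gradient machinery of Vincent et al. (2015) can be applied to avoid materializing all D outputs during training, which is what yields the training complexity independent of the number of output classes claimed in the abstract.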

Related research

11/16/2015 · An Exploration of Softmax Alternatives Belonging to the Spherical Loss Family
  In a multi-class classification problem, it is standard to model the out...

11/23/2020 · Exploring Alternatives to Softmax Function
  Softmax function is widely used in artificial neural networks for multic...

12/23/2021 · Sparse-softmax: A Simpler and Faster Alternative Softmax Transformation
  The softmax function is widely used in artificial neural networks for th...

06/26/2016 · Exact gradient updates in time independent of output size for the spherical loss family
  An important class of problems involves training deep neural networks wi...

03/31/2023 · A two-head loss function for deep Average-K classification
  Average-K classification is an alternative to top-K classification in wh...

12/22/2014 · Efficient Exact Gradient Update for training Deep Networks with Very Large Sparse Targets
  An important class of problems involves training deep neural networks wi...

12/01/2015 · Loss Functions for Top-k Error: Analysis and Insights
  In order to push the performance on realistic computer vision tasks, the...
