Extreme Classification via Adversarial Softmax Approximation

02/15/2020
by Robert Bamler, et al.

Training a classifier over a large number of classes, known as 'extreme classification', has become a topic of major interest with applications in technology, science, and e-commerce. Traditional softmax regression induces a gradient cost proportional to the number of classes C, which is often prohibitively expensive. A popular scalable softmax approximation relies on uniform negative sampling, which suffers from slow convergence due to a poor signal-to-noise ratio. In this paper, we propose a simple training method for drastically enhancing the gradient signal by drawing negative samples from an adversarial model that mimics the data distribution. Our contributions are three-fold: (i) an adversarial sampling mechanism that produces negative samples at a cost only logarithmic in C, thus still resulting in cheap gradient updates; (ii) a mathematical proof that this adversarial sampling minimizes the gradient variance while any bias due to non-uniform sampling can be removed; (iii) experimental results on large-scale data sets that show a reduction of the training time by an order of magnitude relative to several competitive baselines.
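To make the setting concrete, the sketch below (Python/NumPy) shows the generic recipe the abstract builds on: instead of scoring all C classes, draw a handful of negative classes from a non-uniform proposal distribution and remove the resulting bias with the standard sampled-softmax log-correction of the logits. This is not the paper's adversarial sampler; the proposal here is a fixed skewed distribution, and all names, shapes, and constants are placeholder assumptions chosen only to keep the example self-contained and runnable.

```python
# Illustrative sketch (not the paper's exact algorithm): sampled softmax with
# negative samples drawn from a non-uniform proposal q, using the standard
# log-q logit correction to remove the bias introduced by non-uniform sampling.
import numpy as np

rng = np.random.default_rng(0)

C, D, K = 100_000, 64, 50                  # classes, feature dim, negatives per example
W = 0.01 * rng.standard_normal((C, D))     # class embedding matrix

# Fixed, skewed proposal standing in for a model of the class frequencies.
q = rng.random(C) ** 3
q /= q.sum()

def sampled_softmax_step(x, y, lr=0.1):
    """One SGD step on a single example (features x, true class y).

    Only the true class and at most K sampled negatives are touched,
    so the cost is O(K * D) per step instead of O(C * D).
    """
    neg = rng.choice(C, size=K, replace=False, p=q)
    neg = neg[neg != y]                    # drop accidental hits of the true class
    cls = np.concatenate(([y], neg))       # index 0 is the true class

    logits = W[cls] @ x
    logits -= np.log(q[cls])               # log-q correction: removes sampling bias
    logits -= logits.max()                 # numerical stability
    p = np.exp(logits)
    p /= p.sum()

    # Gradient of the approximate cross-entropy w.r.t. the touched rows of W.
    err = p.copy()
    err[0] -= 1.0
    W[cls] -= lr * np.outer(err, x)

x = rng.standard_normal(D)
sampled_softmax_step(x, y=42)
```

In the paper, the proposal is instead an adversarial model that mimics the data distribution and can be sampled at a cost logarithmic in C; the fixed proposal above is used only so the sketch runs on its own.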


Related research

07/24/2019
Sampled Softmax with Random Fourier Features
The computational cost of training with softmax cross entropy loss grows...

12/31/2020
A Constant-time Adaptive Negative Sampling
Softmax classifiers with a very large number of classes naturally occur ...

07/10/2018
A Simple Unified Framework for Detecting Out-of-Distribution Samples and Adversarial Attacks
Detecting test samples drawn sufficiently far away from the training dis...

04/16/2015
Non-Uniform Stochastic Average Gradient Method for Training Conditional Random Fields
We apply stochastic average gradient (SAG) algorithms for training condi...

09/23/2016
One-vs-Each Approximation to Softmax for Scalable Estimation of Probabilities
The softmax representation of probabilities for categorical variables pl...

12/11/2021
Determinantal point processes based on orthogonal polynomials for sampling minibatches in SGD
Stochastic gradient descent (SGD) is a cornerstone of machine learning. ...

01/24/2017
By chance is not enough: Preserving relative density through non uniform sampling
Dealing with visualizations containing large data set is a challenging i...
