
Fine-tune your Classifier: Finding Correlations With Temperature

by Benjamin Chamand, et al.

Temperature is a widely used hyperparameter in tasks involving neural networks, such as classification or metric learning, and its choice can directly affect model performance. Most existing works select its value with hyperparameter-optimization methods that require several runs to find the optimal value. We propose to analyze the impact of temperature on classification tasks by describing a dataset as a set of statistics computed on its representations, from which we can build a heuristic that provides a default temperature value. We study the correlation between these extracted statistics and the observed optimal temperatures. This preliminary study, covering more than a hundred combinations of datasets and feature extractors, highlights promising results towards the construction of a general heuristic for temperature.
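The abstract does not include code, but the role of the temperature hyperparameter in classification can be illustrated with a minimal sketch of temperature-scaled softmax (a standard formulation, not the paper's heuristic): dividing the logits by a temperature T before normalizing, where T < 1 sharpens the output distribution and T > 1 flattens it.

```python
import numpy as np

def softmax_with_temperature(logits, temperature=1.0):
    """Temperature-scaled softmax: p_i = exp(z_i / T) / sum_j exp(z_j / T)."""
    z = np.asarray(logits, dtype=float) / temperature
    z -= z.max()  # subtract the max for numerical stability
    e = np.exp(z)
    return e / e.sum()

logits = [2.0, 1.0, 0.1]
sharp = softmax_with_temperature(logits, temperature=0.5)  # more peaked
flat = softmax_with_temperature(logits, temperature=5.0)   # closer to uniform
```

Because every run of a hyperparameter search over T means retraining or re-evaluating the model, a data-driven default (as the paper proposes via dataset statistics) can save considerable compute.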



