Choosing the right loss matters. Many common losses arise from likelihoods, such as the squared error loss from the normal distribution and the cross-entropy loss from the softmax distribution. The same is true of regularizers, where the L2 penalty arises from a normal prior and the L1 penalty from a Laplace prior. Losses derived from likelihoods turn the problem into a choice of distribution, which can be informed by data noise and error tolerance. Standard losses and regularizers implicitly fix key distribution parameters, limiting flexibility. For instance, the squared error corresponds to fixing the normal variance at one. This work examines how to jointly optimize distribution parameters with the model to select losses and regularizers that encourage generalization, calibration, and robustness to outliers. We explore three key likelihoods: the normal, the softmax, and the robust regression likelihood (barron2019loss). Additionally, we cast adaptive priors in the same light and introduce adaptive regularizers. In summary:
We systematically survey and evaluate global, data, and predicted likelihood parameters for robustness, adaptive regularization, and calibration.
We propose adaptive normal and Laplace priors on the model parameters.
We show that predicted likelihood parameters are efficient and effective.
Notation We consider a dataset of points x_i and targets y_i indexed by i. Targets for regression are real numbers and targets for classification are one-hot vectors. The model f with parameters θ makes predictions ŷ_i = f(x_i; θ). A loss L measures the quality of the prediction ŷ_i given the target y_i. To learn model parameters we solve the following loss optimization: θ* = argmin_θ Σ_i L(ŷ_i, y_i).
A likelihood measures the quality of the prediction as a distribution over ŷ_i given the target y_i and likelihood parameters z. We use the negative log-likelihood (NLL) and the likelihood interchangeably since both have the same optima. We define the full likelihood optimization: θ*, z* = argmin_{θ, z} Σ_i −log p(y_i | ŷ_i, z)
to jointly learn model and likelihood parameters. “Full” indicates the inclusion of z, which controls the distribution and induced NLL loss. We focus on full likelihood optimization in this work. We note that the target, y_i, is the only supervision needed to optimize the model and likelihood parameters, θ and z respectively. Additionally, though the shape and scale vary with z, reducing the error always reduces the NLL for our distributions.
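As a minimal sketch of this joint optimization, consider fitting a scalar linear model and a global normal variance together by gradient descent on the NLL. This is our own illustration on synthetic data with made-up learning rates, not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=200)
y = 2.0 * x + rng.normal(scale=0.5, size=200)   # true slope 2, noise scale 0.5

theta = 0.0      # task model parameter (slope)
s = 0.0          # unconstrained likelihood parameter: variance = exp(s)
lr = 0.05
for _ in range(500):
    var = np.exp(s)
    r = y - theta * x                            # residuals
    # normal NLL per point: 0.5 * r^2 / var + 0.5 * log(var) (+ const)
    theta -= lr * np.mean(-r * x / var)          # gradient w.r.t. theta
    s -= lr * np.mean(-0.5 * r**2 / var + 0.5)   # gradient w.r.t. s
```

The learned slope approaches the true slope while the variance tracks the residual noise, so the loss adapts itself as the model improves.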
Distributions Under Investigation This work considers the normal likelihood with variance σ² (bishop2006pattern; hastie2009elements), the softmax likelihood with temperature τ (hinton2015distilling), and the robust likelihood (barron2019loss) with shape α and scale c. Figure 1 shows how σ² affects the normal distribution and its NLL.
The normal likelihood has terms for the residual and the variance; its NLL is
−log p(y | ŷ, σ²) = (y − ŷ)² / (2σ²) + (1/2) log σ² + (1/2) log 2π,
with σ² scaling the distribution. The squared error is recovered from the corresponding NLL with σ² = 1.
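To make the recovery concrete, a small check (our own sketch) that the normal NLL at σ² = 1 equals half the squared error up to an additive constant:

```python
import numpy as np

def normal_nll(y, f, var):
    # NLL of a normal with mean f and variance var
    return 0.5 * (y - f) ** 2 / var + 0.5 * np.log(2 * np.pi * var)

y, f = 3.0, 1.0
const = 0.5 * np.log(2 * np.pi)
# At var = 1, the NLL is the half squared error plus a constant.
assert np.isclose(normal_nll(y, f, 1.0) - const, 0.5 * (y - f) ** 2)
```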
The softmax defines a categorical distribution over scores s_k for each class k as
p(y = k | s, τ) = exp(s_k / τ) / Σ_j exp(s_j / τ),
with the temperature, τ, adjusting the entropy of the distribution. The softmax NLL is −log p(y | s, τ). The classification cross-entropy loss is recovered by substituting τ = 1 in the respective NLL. We state the gradients of these likelihoods with respect to their σ² and τ in Section A of the supplement.
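A minimal sketch of the tempered softmax NLL (our own implementation, with the usual max-subtraction for numerical stability): at τ = 1 it is the standard cross-entropy, and raising τ flattens the distribution.

```python
import numpy as np

def softmax_nll(scores, target, tau):
    # Tempered softmax NLL: -log softmax(scores / tau)[target]
    z = scores / tau
    z = z - z.max()                        # numerical stability
    log_p = z - np.log(np.exp(z).sum())
    return -log_p[target]

scores = np.array([2.0, 0.5, -1.0])
# A larger temperature flattens the distribution, raising the NLL of
# the top-scoring class.
assert softmax_nll(scores, 0, 10.0) > softmax_nll(scores, 0, 1.0)
```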
The robust loss and its likelihood are
ρ(x, α, c) = (|α − 2| / α) [((x/c)² / |α − 2| + 1)^{α/2} − 1],  p(x | α, c) = (1 / (c Z(α))) exp(−ρ(x, α, c)),
with shape α, scale c, and normalization function Z(α). This likelihood generalizes the normal, Cauchy, and Student’s t distributions.
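A hedged sketch of the general robust loss following (barron2019loss), with the α = 2 (normal) and α = 0 (Cauchy) limits handled explicitly since the general expression is undefined there:

```python
import numpy as np

def robust_loss(x, alpha, c):
    # General robust loss of (barron2019loss); alpha in {0, 2} are limits.
    if alpha == 2.0:                        # normal / squared error limit
        return 0.5 * (x / c) ** 2
    if alpha == 0.0:                        # Cauchy / Lorentzian limit
        return np.log(0.5 * (x / c) ** 2 + 1.0)
    b = abs(alpha - 2.0)
    return (b / alpha) * (((x / c) ** 2 / b + 1.0) ** (alpha / 2.0) - 1.0)

# Shapes near 2 approach the squared error case.
assert np.isclose(robust_loss(1.3, 1.999, 1.0), robust_loss(1.3, 2.0, 1.0), atol=1e-2)
```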
3 Related Work
Likelihood optimization follows from maximum likelihood estimation (hastie2009elements; bishop2006pattern), yet is uncommon in practice for fitting deep regressors and classifiers on discriminative tasks. However, (kendall2017uncertainties; kendall2018multi; barron2019loss; saxena2019data) optimize likelihood parameters to their advantage, yet differ in their tasks, likelihoods, and parameterizations. In this work we aim to systematically experiment, clarify usage, and encourage their wider adoption.
Early work on regressing means and variances (nix1994estimating) had the key insight that optimizing the full likelihood can fit these parameters and adapt the loss. Some recent works use likelihoods for loss adaptation, and interpret their parameters as the uncertainty (kendall2017uncertainties; kendall2018multi), robustness (kendall2017uncertainties; barron2019loss; saxena2019data), and curricula (saxena2019data) of losses. (barron2019loss) define a generalized robust regression loss, ρ, to jointly optimize the type and degree of robustness with global, data-independent parameters. (kendall2017uncertainties) predict variances for regression and classification to handle data-dependent uncertainty. (kendall2018multi) balance multi-task loss weights by optimizing variances for regression and temperatures for classification. These global parameters depend on the task but not the data, and are interpreted as inherent task uncertainty. (saxena2019data) define a differentiable curriculum for classification by assigning each training point its own temperature. These data parameters depend on the index of the data but not its value. We compare these different likelihood parameterizations across tasks and distributions.
4 Likelihood Parameter Types
We explore the space of likelihood parameter representations for model optimization and inference. We note two key choices to make for representing the likelihood parameters z: conditioning and dimensionality.
Conditioning We represent the likelihood parameters by three functional classes: global, data, and predicted. Global parameters, z, are independent of the data and model and define the same likelihood distribution for all points. Data parameters, z_i, are conditioned on the index, i, of the data, x_i, but not its value. Every training point is assigned an independent likelihood parameter, defining a different likelihood for each training point. Predicted parameters, z_i, are determined by a model with parameters φ (not to be confused with the task model parameters θ). Global and predicted parameters can be used during training and testing, but data parameters are only assigned to each training point and are undefined for testing. We show a simple example of predicted temperature in Figure 4, and an illustration of the parameter types in Figure 2.
Dimensionality The dimensionality, d, of likelihood parameters can vary with the dimension of the task prediction, ŷ. For example, for image regression one can use a single likelihood parameter for each image, for each RGB image channel, or even for every pixel. Dimensionality and conditioning of likelihood parameters can interact. For example, data parameters of dimension d would result in d · N additional parameters, where N is the size of the dataset. This can complicate implementations and slow down optimization due to disk I/O when their size exceeds memory. Table 7 in the appendix contrasts the computational requirements of different likelihood parameter types.
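A quick count illustrates how dimensionality interacts with data parameters (the dataset sizes here are illustrative, not from the paper's experiments):

```python
# Data-parameter counts at different dimensionalities for an
# n-image RGB dataset of H x W images.
n, C, H, W = 50_000, 3, 32, 32
per_image = n * 1             # one likelihood parameter per image
per_channel = n * C           # one per image channel
per_pixel = n * C * H * W     # one per pixel and channel
# Per-pixel data parameters reach 1.5e8 extra parameters here,
# which can easily exceed memory.
assert per_pixel == 153_600_000
```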
5.1 Adaptive Regularization with Prior Parameters
We propose adaptive regularizers for the model parameters that optimize prior distribution parameters and tune the degree of regularization. The Normal (Ridge, L2) and Laplace (LASSO, L1) priors, with scale parameters σ and b, regularize model parameters for small magnitude and sparsity respectively (hastie2009elements). The degree of regularization, λ, is a hyperparameter of the regularized loss function: θ* = argmin_θ Σ_i L(ŷ_i, y_i) + λ R(θ), where R is the squared L2 or L1 norm.
We note that λ cannot be chosen by optimization, because it admits a trivial minimum at λ = 0. In the linear case, one can select this weight efficiently using Least Angle Regression (efron2004least). However, in general λ is usually tuned through expensive cross-validation. Instead, we retain the prior with its scale parameter, and jointly optimize over the full likelihood: θ*, b* = argmin_{θ, b} Σ_i L(ŷ_i, y_i) + Σ_j |θ_j| / b + d log(2b), for the Laplace prior with scale b over the d model parameters.
This approach, the Dynamic LASSO (D-LASSO), admits no trivial solution for the prior parameter b: the optimization must balance the effective regularization strength 1/b against the normalization term d log(2b). D-LASSO allows the degree of regularization to be selected by gradient descent, rather than expensive black-box search. In Figure 3 (left) and (middle) we show that this approach, and its Ridge equivalent, yield ideal settings of the regularization strength. Figure 3 (right) shows D-LASSO converges to the best LASSO regularization strength for a variety of true-model sparsities. As a further extension, we replace the global σ or b with a σ_j or b_j for each θ_j to locally adapt regularization (Multi-LASSO). This consistently outperforms any global setting of the regularization strength and shields important weights from undue shrinkage (Figure 3, middle).
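A minimal D-LASSO sketch on a synthetic sparse linear regression (our own code; step sizes and the exp parameterization of the scale are illustrative, not the paper's implementation). The d log(2b) term rules out the trivial b → ∞ solution, and at stationarity the learned scale settles at the mean absolute weight:

```python
import numpy as np

rng = np.random.default_rng(1)
n, d = 200, 10
X = rng.normal(size=(n, d))
true = np.zeros(d)
true[:3] = [1.5, -1.0, 0.5]                 # sparse ground-truth weights
y = X @ true + rng.normal(scale=0.1, size=n)

theta = np.zeros(d)
s = 0.0                                     # b = exp(s) keeps the scale positive
lr, lr_s = 0.01, 0.1
for _ in range(5000):
    b = np.exp(s)
    r = X @ theta - y
    # data term: mean squared error; prior NLL: sum_j |theta_j|/b + d*log(2b)
    theta -= lr * (X.T @ r / n + np.sign(theta) / (b * n))
    s -= lr_s * (-np.abs(theta).sum() / b + d) / n
```

At the optimum of the prior NLL, b equals the mean |θ_j|, so the regularization strength 1/b adapts to the fitted weights by gradient descent alone.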
5.2 Robust Modeling
Data in the wild is noisy, and machine learning models need to cope with input and label corruption. The standard mean squared error (MSE) loss is highly susceptible to outliers due to its fixed variance (huber2004robust). Likelihood parameters transform standard methods into robust methods without the expensive outer loop of model fitting required by RANSAC (fischler1981random) and Theil-Sen (theil1992rank). Figure 4 demonstrates this on a simple classification dataset, and Figure 6 in the appendix compares our approach to other robust regressors. Likelihood parameters yield comparable results at a fraction of the cost. Furthermore, regressing the variance improves accuracy as measured by MSE and NLL on the test set, and reduces the miscalibration (CAL) of the resulting regressors. Tables 3 and 4 in the appendix show this for linear and deep regressors on a suite of datasets used in (kuleshov2018accurate).
Often one must consider broader classes of likelihoods, such as the general robust loss ρ (barron2019loss). We reproduce the variational auto-encoding (VAE) experiments (kingma2014adam) from (barron2019loss) on faces from the CelebA dataset (liu2015faceattributes), and compare with our model-based likelihood parameters and data parameters (saxena2019data). We explore the two natural choices of parameter dimensionality: a single set of parameters for the whole image, and a set of parameters for each pixel and channel. More details on experimental conditions, datasets, and models are provided in Sections J and K in the appendix.
The work of (guo2017calibration) shows that modern networks are accurate, yet systematically overconfident, a phenomenon called miscalibration. We investigate the role of optimizing likelihood parameters in re-calibrating models. (guo2017calibration) introduce Global Scaling (GS), which re-calibrates classifiers with a learned global temperature, τ, in the loss function: −log softmax(s/τ)_y. (kuleshov2018accurate) use an isotonic regressor to correct the confidence of regressors. Vector Scaling (VS) (guo2017calibration), a multivariate generalization of Platt scaling (platt1999probabilistic), learns a vector, w, to re-weight logits: softmax(w ⊙ s).
Using likelihood parameters we introduce three new re-calibration methods. Linear Scaling (LS) learns a linear mapping, W, to transform the logits s into a softmax temperature: τ = g(W s). Linear Feature Scaling (LFS) learns a linear mapping, W, to transform the features prior to the logits, h, into a softmax temperature: τ = g(W h). Finally, Deep Scaling (DS) learns a nonlinear network, M, to transform these features, h, into a temperature: τ = M(h).
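A sketch of how Linear Scaling might be realized (our own illustration: the weights are hand-picked rather than fitted, and a plain softplus stands in for the positivity constraint):

```python
import numpy as np

def softplus(x):
    return np.log1p(np.exp(x))

def linear_scaling_probs(logits, w, b):
    # One temperature per example, regressed linearly from its logits.
    tau = softplus(logits @ w + b)
    z = logits / tau[:, None]
    z = z - z.max(axis=1, keepdims=True)   # numerical stability
    p = np.exp(z)
    return p / p.sum(axis=1, keepdims=True)

logits = np.array([[4.0, 0.0, 0.0], [1.0, 0.5, 0.2]])
w = np.array([0.5, 0.0, 0.0])              # illustrative, not fitted
p = linear_scaling_probs(logits, w, 1.0)
```

With these illustrative weights, sharply peaked logits receive a higher temperature and are softened, which is the kind of per-example adjustment GS cannot express with one global τ.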
Table 2 evaluates the effectiveness of these approaches at recalibrating deep vision architectures across a variety of datasets, as measured by the Expected Calibration Error (ECE) (guo2017calibration). LS and LFS tend to outperform both GS and VS. This demonstrates that richer likelihood parametrizations can improve calibration, akin to how richer models can improve prediction. We choose GS as a baseline because (guo2017calibration) show it outperforms Vector Scaling, Bayesian Binning into Quantiles, Histogram Binning, and Isotonic Regression (zadrozny2001obtaining; zadrozny2002transforming; naeini2015obtaining). In Table 5 and Figure 8 in the appendix we show that these findings extend to regressors.
Optimizing the full likelihood can improve model quality by adapting losses and regularizers. Full likelihoods are agnostic to the architecture, optimizer, and task, which makes them simple substitutes for standard losses. Global, data, and predicted likelihood parameters offer different degrees of expressivity and efficiency. In particular, predicted parameters adapt the likelihood to each data point during training and testing without significant time and space overhead. More generally, we hope this work encourages joint optimization of model and likelihood parameters, and argue it is likely that your loss should be a likelihood.
Appendix A Gradient Optimization of Variance and Temperature
For completeness, we state the derivative of the normal NLL with respect to the variance (9) and the derivative of the softmax NLL with respect to the temperature (10). The normal NLL is well known, and its gradient w.r.t. the variance was first used to fit networks for heteroskedastic regression (nix1994estimating) and mixture modeling (bishop1994mixture). The softmax with temperature is less widely appreciated, and we are not aware of a reference for its gradient w.r.t. the temperature. For this derivative, recall that the softmax is the gradient of the log-sum-exp function, and see (4).
Appendix B Understanding Where Uncertainty is Modelled
For classifiers with a parametrized temperature, models have the “choice” to store uncertainty information in either the model, which can reduce the magnitude of the logit vector, s, or the likelihood parameters, which can scale the temperature, τ. Note that for regressors, this information can only be stored in the likelihood parameters. The fact that uncertainty information is split between the model and likelihood can sometimes make it difficult to interpret temperature as the sole detector of outliers. In particular, if the likelihood parameters train slower than the model parameters, the network might find it advantageous to move critical uncertainty information into the model. This effect is illustrated by Figure 5, which shows that with data parameters on a large dataset, the uncertainty required to detect outliers moves into the model.
Appendix C Robust Tabular Regression
In addition to investigating how parametrized likelihoods affect deep models, we also perform the same comparisons and experiments for linear models. In this domain, we find that the same properties still hold, and because of the limited number of parameters, these models often benefit significantly more from likelihood parameters. In Figure 6 we show that parametrizing the Gaussian scale leads to similar robustness properties as the Theil-Sen, RANSAC, and Huber methods at a fraction of the computational cost. Furthermore, we note that likelihood parameters could also be applied to Huber regression to adjust its scale, as is the case for the normal variance, σ². In Table 3 we show that adding a learned normal scale regressor to linear regression can improve calibration and generalization.
Appendix D Visualizing Predicted Shape and Scale for Auto-Encoding
In Section 5.2 we experiment with variational auto-encoding using the generalized robust loss ρ. The corresponding likelihood has parameters for shape (α) and scale (c). When these parameters are predicted, by regressing them as part of the model, they can vary with the input to locally adapt the loss. In Figure 7 we visualize the regressed shape and scale of each pixel for images in the CelebA validation set. Note that only predicted likelihood parameters can vary in this way, since data parameters are not defined during testing.
Appendix E Calibrating Regressors
The results of Table 5 also hold for linear models, though the improvements are less significant, as linear regressors often do not have the capacity to over-fit and miscalibrate. Table 6 shows these results.
Appendix F Deep Regressors Miscalibrate
The work of (kuleshov2018accurate) establishes isotonic regression as a natural baseline for regressors. In our experiments on classifiers in Table 2, and on regressors in Table 5, we found that additional likelihood modeling capacity for temperature and scale was beneficial for re-calibration across several datasets. This demonstrates that the space of deep network calibration methods is rich, and that likelihood parametrization provides a useful inductive bias. We also find that deep regressors suffer from the same over-confidence issue that plagues deep classifiers (guo2017calibration). Figure 8 in the supplement shows this effect. More details on experimental conditions, datasets, and models are provided in Sections J and K in the supplement.
We confirm that deep regressors miscalibrate as a function of layer size, which is analogous to the results for classifiers found by (guo2017calibration). We plot regression calibration error as a function of network layer size for a simple single-layer deep network on a synthetic regression dataset. As layer size grows, the network has the capacity to over-fit the training data and hence underestimates its own errors. We note that this effect appears on the other datasets reported, and across other network architectures. We posit that overconfidence will occur with any likelihood distribution, as it is a symptom of over-fitting.
Appendix G Improving Optimization Stability
When modelling softmax temperatures and Gaussian scales with dedicated networks, we encountered significant instability due to exploding gradients at low temperatures and vanishing gradients at high temperatures. We discovered that it was desirable to have a mapping to the temperature that is smooth, everywhere differentiable, and bounded slightly above zero to avoid exploding gradients. Though other works use the exponential function to map “temperature logits” to temperatures, we found that accidental exponential temperature growth could squash gradients. When using exponentiation, a canonical instability arose when the network overstepped towards a near-zero temperature due to momentum, then swung dramatically towards a very large one. Here, it lost all gradients and could not recover.
To counteract this behavior we employ a shifted softplus function, renormalized so that f(0) = 1:
f(x) = ε + log(1 + exp(x + c)),
where the constant c is chosen so that f(0) = 1 and ε is a small constant that serves as a smooth lower bound to the temperature. This function decays to ε as x → −∞, and increases linearly as x → ∞, hence avoiding the vanishing gradient problem. Figure 9 demonstrates the importance of the offset in Equation 11 with ResNet50 on CIFAR-100 calibration. We also found it important to normalize the features before processing with the layers dedicated to scales and temperatures. For shifted softplus offsets, we frequently employ a small value of ε.
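One concrete realization of such a shifted softplus, sketched below; the shift constant is solved so that f(0) = 1, and ε = 0.1 is an illustrative default rather than the paper's setting:

```python
import numpy as np

def shifted_softplus(x, eps=0.1):
    # Choose c so that softplus(c) = 1 - eps, making f(0) = 1.
    c = np.log(np.expm1(1.0 - eps))
    # f(x) = eps + softplus(x + c): bounded below by eps, linear for large x.
    return eps + np.log1p(np.exp(x + c))

assert np.isclose(shifted_softplus(0.0), 1.0)   # identity temperature at 0
assert shifted_softplus(-50.0) > 0.09           # decays to eps, never vanishes
```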
Appendix H Space and Time Characteristics of Likelihood Parameters
Appendix I Outlier Detection
Likelihood parameter prediction not only improves optimization, but can also serve useful purposes during inference. In particular, we show that the likelihood parameters can be thresholded to detect outliers at both training and testing time. Auditing the model’s temperature or noise parameters can help practitioners spot erroneous labels and poor-quality examples. Figure 10 in the supplement shows that temperature correlates strongly with blurry, dark, and difficult examples on the Street View House Numbers (SVHN) dataset. If the likelihood parameter network has access to both the data x and the label y, it can also detect labelling errors, and we found several mislabelled examples in the SVHN dataset with this approach.
The choice of conditioning determines how likelihood parameters can be used at test time. Data parameters are only defined for the training data, and do not apply to test data. This can cause poor calibration and other artifacts during inference. Furthermore, classifiers have a “choice” to store uncertainty in the likelihood parameters, z, or the task model, f. On large mini-batched datasets, data parameters receive fewer updates, and uncertainty information moves into the model. This can render data parameters less effective for outlier detection, and we show this effect in Figure 5 of the supplement.
We quantitatively compare the ability of likelihood parameters to separate outliers from un-corrupted data against several existing outlier detection methods. We evaluate the Elliptic Envelope (EE) (rousseeuw1999fast), 1-Class Support Vector Machine (1SVM) (scholkopf2000support), Isolation Forest (liu2008isolation), and the Local Outlier Factor (LOF) (breunig2000lof), trained on both features and labels. Our introduced methods include model-predicted (MP) and data (DP) parametrized normal scales with linear (L) and deep (D) regressors. Likelihood parameters outperform baselines across these datasets.
For “ground truth” outlier labels we corrupt datasets without changing high-level dataset statistics. More specifically, we append noisy rows to the dataset, formed by perturbing a subset of the scaled features and randomly re-sampling new labels from the full dataset. This jointly corrupts the data, x, and target, y, for a small fraction of the dataset. We found that our conclusions held with other types of corruptions, such as randomly scaling inputs and labels, or adding noise. We treat outlier detection as a supervised classification problem by reporting the area under the precision-recall curve for outlier scores.
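A hedged sketch of this corruption protocol (the exact feature perturbation is dataset-specific and not reproduced here; only the appended rows with re-sampled labels and the outlier mask are shown):

```python
import numpy as np

def corrupt(X, y, frac=0.05, seed=0):
    # Append frac*N rows whose labels are re-sampled from the full label
    # set, and return a mask marking the appended rows as outliers.
    rng = np.random.default_rng(seed)
    n_bad = int(frac * len(X))
    idx = rng.choice(len(X), size=n_bad, replace=False)
    X_bad = X[idx]                          # feature perturbation omitted
    y_bad = rng.choice(y, size=n_bad)       # labels drawn from the full set
    is_outlier = np.r_[np.zeros(len(X)), np.ones(n_bad)]
    return np.vstack([X, X_bad]), np.r_[y, y_bad], is_outlier

X = np.random.default_rng(0).normal(size=(100, 3))
y = np.arange(100.0)
Xc, yc, flags = corrupt(X, y)
```

The returned mask serves as the supervised label for the precision-recall evaluation described above.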
Appendix J Datasets
The regression datasets used are sourced from the UCI Machine Learning repository (dua2017uci), (soil), and (fried). Inputs and targets are scaled to unit norm and variance prior to fitting for all regression experiments, and missing values are imputed using scikit-learn’s “SimpleImputer” (pedregosa2011scikit). Large-scale image classification experiments leverage Tensorflow’s Dataset APIs, which include the SVHN (netzer2011reading), ImageNet (deng2009imagenet), CIFAR-100, CIFAR-10 (krizhevsky2009learning), and CelebA (liu2015faceattributes) datasets. The datasets used in ridge and LASSO experiments are 500 samples of 500-dimensional normal distributions mapped through linear functions with additive Gaussian noise. Ridge experiments use dense linear weights and LASSO experiments use sparse transformations.
Appendix K Models
Many likelihood parameters have constrained domains, such as the normal variance σ² > 0. To evade the complexity of constrained optimization, we define unconstrained parameters and choose a transformation g, with inverse g⁻¹, to map to and from the constrained domain. For positivity, the log/exp parameterization is standard (kendall2017uncertainties; kendall2018multi; saxena2019data). However, this parameterization can lead to instabilities, and we use the softplus, log(1 + exp(·)), instead. Shifting the softplus further improves stability (see Figure 9). For constrained intervals we use affine transformations of the sigmoid (barron2019loss).
Like model parameters, likelihood parameters can be regularized. For unsupervised experiments we consider weight decay and learning rate scaling, which multiplies the base learning rate by a constant factor for the likelihood parameters. We inherit the existing setting of weight decay for model parameters and clip gradient norms. The learning rate scaling multiplier is fixed across experiments.
The regressor can take the data, x, the prediction, ŷ, or an intermediate representation from the task model. Choosing x makes the regressor independent of the task model, and using ŷ makes the regressor independent of the data given the task model. The latter is often referred to as meta-recognition (scheirer2011meta). Using an intermediate representation strikes a balance between the data and the task.
We focus on the deepest feature representation as input, from the penultimate layer of the network. This representation has a linear relationship to the prediction, in addition to a higher dimension, and should yield information on both prediction and data.
The regressor can be a shallow, linear function or a deep, nonlinear function. To keep the likelihood prediction closely related to the task prediction, we only consider linear regressors and one to two-layer nonlinear regressors. Given the task model representation, the likelihood regressor is still a nonlinear function of the data, even if the regressor itself is linear.
Normalization can help control for differences in the dimension and magnitude of data and features. We normalize the input to the regressor to simplify application to different tasks, stabilize convergence, and promote invariance to the dimension and magnitude of the input. While there are a number of normalization schemes to choose from, we select L2 normalization because it is efficient and deterministic.
The regressor parameters, φ, are optimized by backpropagation through the regressed likelihood parameters. The weights in φ are initialized by the standard Glorot (glorot2010understanding) or He (he2015delving) techniques with mean zero. The biases in φ are initialized by the inverse parameter constraint function, g⁻¹, to the desired initial setting of the likelihood parameter. The default for variance and temperature is 1, for equality with the usual squared error and softmax cross-entropy.
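The bias initialization can be sketched as follows (our own minimal code, assuming the softplus constraint function):

```python
import numpy as np

def softplus(x):
    return np.log1p(np.exp(x))

def softplus_inv(y):
    # Inverse of softplus: x such that log(1 + exp(x)) = y.
    return np.log(np.expm1(y))

# Initialize the bias so the initial likelihood parameter equals 1,
# recovering squared error / standard cross-entropy at initialization.
bias = softplus_inv(1.0)
assert np.isclose(softplus(bias), 1.0)
```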
Regressor learning can be end-to-end or isolated. In end-to-end learning, the gradient w.r.t. the likelihood is backpropagated to the regressor’s input. Whereas in isolated learning, the gradient is stopped at the input of the likelihood parameter model. Isolated learning of predicted parameters is closer to learning global and data parameters, which are independent of the task model, and do not affect model parameters.
K.6 Experimental Details
Regression experiments run for 3000 steps without minibatching. Deep regressors have a single hidden layer with 10 neurons, and recalibrated regressors have 2 hidden layers. We constrain normal variances and softmax temperatures using affine softplus transformations. Adaptive regularizer scales use the same positivity-constrained parameterization. We run experiments on Ubuntu 16.04 Azure Standard NV24 virtual machines (24 CPUs, 224 GB memory, and 4 M60 GPUs) with Tensorflow 1.15 (tensorflow).
In our VAE experiments, our likelihood parameter model is a convolution on the last hidden layer of the decoder, which has the same resolution as the output. The low- and high-dimensional losses use the same convolutional regressor, but the 1-dimensional case averages over pixels. In the high-dimensional case, the output has three channels (for RGB), with six channels total for shape and scale regression. We use the same non-linearities to constrain the shape and scale outputs to reasonable ranges as in (barron2019loss): an affine sigmoid for the shape and the softplus for the scale. Table 1 gives the results of evaluating each method by MSE on the validation set, while training each method with its respective loss parameters.