# Analysis and Optimization of Loss Functions for Multiclass, Top-k, and Multilabel Classification

Top-k error is currently a popular performance measure on large scale image classification benchmarks such as ImageNet and Places. Despite its wide acceptance, our understanding of this metric is limited as most of the previous research is focused on its special case, the top-1 error. In this work, we explore two directions that shed more light on the top-k error. First, we provide an in-depth analysis of established and recently proposed single-label multiclass methods along with a detailed account of efficient optimization algorithms for them. Our results indicate that the softmax loss and the smooth multiclass SVM are surprisingly competitive in top-k error uniformly across all k, which can be explained by our analysis of multiclass top-k calibration. Further improvements for a specific k are possible with a number of proposed top-k loss functions. Second, we use the top-k methods to explore the transition from multiclass to multilabel learning. In particular, we find that it is possible to obtain effective multilabel classifiers on Pascal VOC using a single label per image for training, while the gap between multiclass and multilabel methods on MS COCO is more significant. Finally, our contribution of efficient algorithms for training with the considered top-k and multilabel loss functions is of independent interest.

## 1 Introduction

Modern computer vision benchmarks are large scale [1, 2, 3], and are only likely to grow further both in terms of the sample size as well as the number of classes. While simply collecting more data may be a relatively straightforward exercise, obtaining high quality ground truth annotation is hard. Even when the annotation is just a list of image level tags, collecting a consistent and exhaustive list of labels for every image requires significant effort. Instead, existing benchmarks often offer only a single label per image, albeit the images may be inherently multilabel. The increased number of classes then leads to ambiguity in the labels as classes start to overlap or exhibit a hierarchical structure. The issue is illustrated in Figure 1, where it is difficult even for humans to guess the ground truth label correctly on the first attempt [2, 4].

Allowing $k$ guesses instead of one leads to what we call the top-$k$ error, which is one of the main subjects of this work. While previous research is focused on minimizing the top-$1$ error, we consider $k \geq 1$. We are mainly interested in two cases: (i) achieving small top-$k$ error for all $k$ simultaneously; and (ii) minimization of a specific top-$k$ error. These goals are pursued in the first part of the paper, which is concerned with single label multiclass classification. We propose extensions of the established multiclass loss functions to address top-$k$ error minimization and derive appropriate optimization schemes based on stochastic dual coordinate ascent (SDCA) [5]. We analyze which of the multiclass methods are calibrated for the top-$k$ error and perform an extensive empirical evaluation to better understand their benefits and limitations. An earlier version of this work appeared in [6].

Moving forward, we see top-$k$ classification as a natural transition step between multiclass learning with a single label per training example and multilabel learning with a complete set of relevant labels. Multilabel learning forms the second part of this work, where we introduce a smoothed version of the multilabel SVM loss [7], and contribute two novel projection algorithms for efficient optimization of multilabel losses in the SDCA framework. Furthermore, we compare all multiclass, top-$k$, and multilabel methods in a novel experimental setting, where we want to quantify the utility of multilabel annotation. Specifically, we want to understand if it is possible to obtain effective multilabel classifiers from single label annotation.

The contributions of this work are as follows.

• In § 2, we provide an overview of the related work and establish connections to a number of related research directions. In particular, we point to an intimate link that exists between top-$k$ classification, label ranking, and learning to rank in information retrieval.

• In § 3, we introduce the learning problem for multiclass and multilabel classification, and discuss the respective performance metrics. We also propose novel loss functions for minimizing the top-$k$ error and a novel smooth multilabel SVM loss. A brief summary of the methods that we consider is given in Table I.

• In § 4, we introduce the notion of top-$k$ calibration and analyze which of the multiclass methods are calibrated for the top-$k$ error. In particular, we highlight that the softmax loss is uniformly top-$k$ calibrated for all $k \geq 1$.

• In § 5, we develop efficient optimization schemes based on the SDCA framework. Specifically, we contribute a set of algorithms for computing the proximal maps that can be used to train classifiers with the specified multiclass, top-$k$, and multilabel loss functions.

• In § 6, the methods are evaluated empirically in three different settings: on synthetic data (§ 6.1), on multiclass datasets (§ 6.2), and on multilabel datasets (§ 6.3).

• In § 6.2, we perform a set of experiments on multiclass benchmarks including the ImageNet 2012 [1] and the Places 205 [2] datasets. Our evaluation reveals, in particular, that the softmax loss and the proposed smooth loss are competitive uniformly in all top-$k$ errors, while improvements for a specific $k$ can be obtained with the new top-$k$ losses.

• In § 6.3, we evaluate the multilabel methods on datasets following [11], where our smooth multilabel SVM shows particularly encouraging results. Next, we perform experiments on Pascal VOC 2007 [12] and Microsoft COCO [3], where we train multiclass and top-$k$ methods using only a single label of the most prominent object per image, and then compare their multilabel performance on test data to that of multilabel methods trained with full annotation. Surprisingly, we observe only a small mAP gap on Pascal VOC between the best multiclass and multilabel methods.

We release our implementation of SDCA-based solvers for training models with the loss functions considered in this work. We also publish code for the corresponding proximal maps, which may be of independent interest.

## 2 Related Work

In this section, we place our work in a broad context of related research directions. First, we draw connections to the general problem of learning to rank. While it is mainly studied in the context of information search and retrieval, there are clear ties to multiclass and multilabel classification. Second, we briefly review related results on consistency and classification calibration. These form the basis for our theoretical analysis of top- calibration. Next, we focus on the technical side including the optimization method and the algorithms for efficient computation of proximal operators. Finally, we consider multiclass and multilabel image classification, which are the main running examples in this paper.

Learning to rank. Learning to rank is a supervised learning problem that arises whenever the structure in the output space admits a partial order [13]. The classic example is ranking in information retrieval (IR); see [14] for a recent review. There, a feature vector is computed for every query-document pair, and the task is to learn a model that ranks the relevant documents for the given query before the irrelevant ones. Three main approaches are recognized within that framework: the pointwise, the pairwise, and the listwise approach. Pointwise methods cast the problem of predicting document relevance as a regression [15] or a classification [16] problem. Instead, the pairwise approach is focused on predicting the relative order between documents [17, 18, 19]. Finally, the listwise methods attempt to optimize a given performance measure directly on the full list of documents [20, 21, 22], or propose a loss function on the predicted and the ground truth lists [23, 24].

Different from ranking in IR, our main interest in this work is label ranking which generalizes the basic binary classification problem to multiclass, multilabel, and even hierarchical classification, see [25] for a survey. A link between the two settings is established if we consider queries to be examples (images) and documents to be class labels. The main contrast, however, is in the employed loss functions and performance evaluation at test time (§ 3).

Most related to our work is a general family of convex loss functions for ranking and classification introduced by Usunier [26]. One of the loss functions that we consider (top-$k$ SVM$^\beta$ [9]) is a member of that family. Another example is Wsabie [27, 28], which learns a joint embedding model optimizing an approximation of a loss from [26].

Top-$k$ classification in our setting is directly related to label ranking, as the task is to place the ground truth label in the set of top $k$ labels as measured by their prediction scores. An alternative approach is suggested by [29] who use structured learning to aggregate the outputs of pre-trained one-vs-all binary classifiers and directly predict a set of labels, where the labels missing from the annotation are modelled with latent variables. That line of work is pursued further in [30]. The task of predicting a set of items is also considered in [31], who frame it as a problem of maximizing a submodular reward function. A probabilistic model for ranking and top-$k$ classification is proposed by [32], while [33, 34] use metric learning to train a nearest neighbor model. An interesting setting related to top-$k$ classification is learning with positive and unlabeled data [35, 36], where the absence of a label does not imply it is a negative label, and also learning with label noise [37, 38].

Label ranking is closely related to multilabel classification [11, 39], which we consider later in this paper, and to tag ranking [40]. Ranking objectives have also been considered for training convolutional architectures [41], most notably with a loss on triplets [42, 43] that considers both positive and negative examples. Many recent works focus on the top of the ranked list [44, 45, 46, 47, 48]. However, they are mainly interested in search and retrieval, where the number of relevant documents by far exceeds what users are willing to consider. That setting suggests a different trade-off for recall and precision compared to our setting with only a few relevant labels. This is correspondingly reflected in performance evaluation, as mentioned above.

Consistency and calibration. Classification is a discrete prediction problem where minimizing the expected (0-1) error is known to be computationally hard. Instead, it is common to minimize a surrogate loss that leads to efficient learning algorithms. An important question, however, is whether the minimizers of the expected surrogate loss also minimize the expected error. Loss functions which have that property are called calibrated or consistent with respect to the given discrete loss. Consistency in binary classification is well understood [49, 50, 51], and significant progress has been made in the analysis of multiclass [52, 53, 54], multilabel [10, 55], and ranking [56, 57, 58] methods. In this work, we investigate calibration of a number of surrogate losses with respect to the top-$k$ error, which generalizes previously established results for multiclass methods.

Optimization. To facilitate experimental evaluation of the proposed loss functions, we also implement the corresponding optimization routines. We choose the stochastic dual coordinate ascent (SDCA) framework of [5] for its ease of implementation, strong convergence guarantees, and the possibility to compute certificates of optimality via the duality gap. While [5] describe the general SDCA algorithm that we implement, their analysis is limited to scalar loss functions (both Lipschitz and smooth) with $\ell_2$ regularization, which is only suitable for binary problems. A more recent work [8] extends the analysis to vector valued smooth (or Lipschitz) functions and general strongly convex regularizers, which is better suited to our multiclass and multilabel loss functions. A detailed comparison of recent coordinate descent algorithms is given in [8, 59].

Following [60] and [8], the main step in the optimization algorithm updates the dual variables by computing a projection or, more generally, the proximal operator [61]. The proximal operators that we consider here can be equivalently expressed as instances of a continuous nonlinear resource allocation problem, which has a long research history; see [62] for a recent survey. Most related to our setting is the Euclidean projection onto the unit simplex or the $\ell_1$-ball in $\mathbb{R}^n$, which can be computed approximately via bisection [63], or exactly via breakpoint searching [64] and variable fixing [65]. The former can be done in $O(n \log n)$ time with a simple implementation based on sorting, or in $O(n)$ time with an efficient median finding algorithm. In this work, we choose the variable fixing scheme, which does not require sorting and is easy to implement. Although its complexity is $O(n^2)$ on pathological inputs with elements growing exponentially [66], the observed complexity in practice is linear and is competitive with breakpoint searching algorithms [66, 65].
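As an illustration of the projections discussed above (a sketch of the sort-based $O(n \log n)$ variant, not the variable fixing implementation used in this work; the function name is ours), the Euclidean projection onto $\{x \geq 0,\ \langle \mathbf{1}, x \rangle = r\}$ can be written as:

```python
import numpy as np

def project_simplex(v, r=1.0):
    """Euclidean projection of v onto {x : x >= 0, sum(x) = r}.

    Sort-based O(n log n) variant; breakpoint searching with median
    finding or variable fixing gives the same point in (expected)
    linear time.
    """
    u = np.sort(v)[::-1]                      # components in descending order
    css = np.cumsum(u)                        # running sums of the sorted vector
    # largest index rho with u_rho * rho > css_rho - r (1-based rho)
    rho = np.nonzero(u * np.arange(1, len(v) + 1) > (css - r))[0][-1]
    theta = (css[rho] - r) / (rho + 1.0)      # optimal shift
    return np.maximum(v - theta, 0.0)
```

The variable fixing scheme chosen in the paper avoids the sort by repeatedly guessing the shift and fixing the coordinates that are provably zero.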

While there exist efficient projection algorithms for optimizing the SVM hinge loss and its descendants, the situation is a bit more complicated for logistic regression, both binary and multiclass. There exists no analytical solution for an update with the logistic loss, and [8] suggest a formula in the binary case which computes an approximate update in closed form. The multiclass logistic (softmax) loss is optimized in the SPAMS toolbox [67], which implements FISTA [68]. Alternative optimization methods are considered in [69], who also propose a two-level coordinate descent method in the multiclass case. Different from these works, we propose to follow closely the same variable fixing scheme that is used for SVM training and use the Lambert $W$ function [70] in the resulting entropic proximal map. Our runtime compares favourably with SPAMS, as we show in § 6.2.
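To illustrate how the Lambert $W$ function enters an entropic proximal map, consider a one-dimensional sketch (an assumption for exposition; the actual SDCA update involves sums and simplex constraints, and the function names here are ours). Minimizing $x \log x - a x + \tfrac{1}{2}(x - v)^2$ over $x > 0$ gives the stationarity condition $\log x + x = v + a - 1$, i.e. $x = W(e^{v + a - 1})$:

```python
import math

def lambert_w(z, tol=1e-12):
    """Solve w * exp(w) = z for z > 0 by Newton's method."""
    w = math.log1p(z)  # crude initial guess, adequate for z > 0
    for _ in range(100):
        ew = math.exp(w)
        step = (w * ew - z) / (ew * (w + 1.0))
        w -= step
        if abs(step) < tol:
            break
    return w

def entropic_prox_1d(v, a):
    """argmin_{x > 0} of x*log(x) - a*x + 0.5*(x - v)**2."""
    return lambert_w(math.exp(v + a - 1.0))
```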

Image classification. Multiclass and multilabel image classification are the main applications that we consider in this work to evaluate the proposed loss functions. We employ a relatively simple image recognition pipeline following [71], where feature vectors are extracted from a convolutional neural network (ConvNet), such as the VGGNet [71] or the ResNet [72], and are then used to train a linear classifier with the different loss functions. The ConvNets that we use are pre-trained on the large scale ImageNet [1] dataset, which has a large number of object categories but relatively little variation in scale and location of the central object. For scene recognition, we also use a VGGNet-like architecture [73] that was trained on the Places 205 [2] dataset.

Despite the differences between the benchmarks [74], image representations learned by ConvNets on large datasets have been observed to transfer well [75, 76]. We follow that scheme in single-label experiments, when recognizing birds [77] and flowers [78] using a network trained on ImageNet, or when transferring knowledge in scene recognition [79, 4]. However, moving on to multilabel classification on Pascal VOC [12] and Microsoft COCO [3], we need to account for increased variation in scale and object placement.

While the earlier works ignore explicit search for object location [80, 81], or require bounding box annotation [82, 83, 84], recent results indicate that effective classifiers for images with multiple objects in cluttered scenes can be trained from weak image-level annotation by explicitly searching over multiple scales and locations [85, 86, 87, 88, 89]. Our multilabel setup follows closely the pipeline of [87] with a few exceptions detailed in § 6.3.

## 3 Loss Functions for Classification

When choosing a loss function, one may want to consider several aspects. First, at the basic level, the loss function depends on the available annotation and the performance metric one is interested in; we distinguish between (single label) multiclass and multilabel losses in this work. Next, there are two fundamental factors that control the statistical and the computational behavior of learning. For computational reasons, we work with convex surrogate losses rather than with the performance metric directly. In that context, a relevant distinction is between the nonsmooth Lipschitz losses and the smooth losses with strongly convex conjugates that lead to faster convergence rates. From the statistical perspective, it is important to understand if the surrogate loss is classification calibrated, as it is an attractive asymptotic property that leads to Bayes consistent classifiers. Finally, one may exploit duality and introduce modifications to the conjugates of existing functions that have desirable effects on the primal loss (e.g., the top-$k$ entropy loss below).

The rest of this section covers the technical background that is used later in the paper. We discuss our notation, introduce multiclass and multilabel classification, recall the standard approaches to classification, and introduce our recently proposed methods for top- error minimization.

In § 3.1, we discuss multiclass and multilabel performance evaluation measures that are used later in our experiments. In § 3.2, we review established multiclass approaches and introduce our novel top- loss functions; we also recall Moreau-Yosida regularization as a smoothing technique and compute convex conjugates for SDCA optimization. In § 3.3, we discuss multilabel classification methods, introduce the smooth multilabel SVM, and compute the corresponding convex conjugates. To enhance readability, we defer all the proofs to the appendix.

Notation. We consider classification problems with a predefined set of $m$ classes. We begin with multiclass classification, where every example $x_i \in \mathcal{X}$ has exactly one label $y_i \in \mathcal{Y} = \{1, \dots, m\}$, and later generalize to the multilabel setting, where each example is associated with a set of labels $Y_i \subset \mathcal{Y}$. In this work, a classifier is a function $f : \mathcal{X} \to \mathbb{R}^m$ that induces a ranking of class labels via the prediction scores $f(x) = (f_1(x), \dots, f_m(x))$. In the linear case, each predictor has the form $f_j(x) = \langle w_j, x \rangle$, where $w_j$ is the parameter to be learned. We stack the individual parameters into a weight matrix $W$, so that $f(x) = W^\top x$. While we focus on linear classifiers with $f(x) = W^\top x$ in the exposition below and in most of our experiments, all loss functions are formulated in the general setting where the kernel trick [90] can be employed to construct nonlinear decision surfaces. In fact, we have a number of experiments with the RBF kernel as well.

At test time, prediction depends on the evaluation metric and generally involves sorting / producing the top-$k$ highest scoring class labels in the multiclass setting, and predicting the labels that score above a certain threshold in multilabel classification. We come back to performance metrics shortly.

We use $\pi$ and $\tau$ to denote permutations of (indexes) $\{1, \dots, m\}$. Unless stated otherwise, $\pi$ reorders components of a vector $x$ in descending order, $x_{\pi_1} \geq x_{\pi_2} \geq \dots \geq x_{\pi_m}$. Therefore, for example, $x_{\pi_1} = \max_j x_j$. If necessary, we make it clear which vector is being sorted by attaching it to the permutation, as in $\pi(a)$. We also use the Iverson bracket $[P]$, defined as $[P] = 1$ if $P$ is true and $0$ otherwise; and introduce a shorthand $p_y(x) := \Pr(Y = y \mid X = x)$ for the conditional probability. Finally, we let $a^{\backslash y}$ be the vector obtained by removing the $y$-th coordinate from $a$.

We consider $\ell_2$-regularized objectives in this work: if $L$ is a multiclass loss and $\lambda > 0$ is a regularization parameter, classifier training amounts to solving
$$\min_{W} \ \frac{1}{n} \sum_{i=1}^{n} L\big(y_i, f(x_i)\big) + \lambda \,\|W\|_F^2.$$
Binary and multilabel classification problems only differ in the loss $L$.

### 3.1 Performance Metrics

Here, we briefly review performance evaluation metrics employed in multiclass and multilabel classification.

Multiclass. A standard performance measure for classification problems is the zero-one loss, which simply counts the number of classification mistakes [91, 92]. While that metric is well understood and inspired such popular surrogate losses as the SVM hinge loss, it naturally becomes more stringent as the number of classes increases. An alternative to the standard zero-one error is to allow $k$ guesses instead of one. Formally, the top-$k$ zero-one loss (top-$k$ error) is

$$\mathrm{err}_k(y, f(x)) := \big[\, f_{\pi_k}(x) > f_y(x) \,\big]. \tag{1}$$

That is, we count a mistake if the ground truth label scores below $k$ other class labels. Note that for $k = 1$ we recover the standard zero-one error. Top-$k$ accuracy is defined as one minus the top-$k$ error, and performance on the full test sample is computed as the mean across all test examples.
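The top-$k$ error of Eq. (1) is straightforward to compute from raw scores; a minimal sketch (function name ours):

```python
import numpy as np

def top_k_error(scores, y, k):
    """Top-k zero-one error of Eq. (1): 1 iff the ground-truth score
    f_y is strictly below the k-th largest prediction score."""
    pi = np.argsort(scores)[::-1]          # indices in descending score order
    return int(scores[pi[k - 1]] > scores[y])
```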

Multilabel. Several groups of multilabel evaluation metrics are established in the literature, and it is generally suggested that multiple contrasting measures should be reported to avoid skewed results. Here, we give a brief overview of the metrics that we report and refer the interested reader to [11, 39, 55], where multilabel metrics are discussed in more detail.

Ranking based. This group of performance measures compares the ranking of the labels induced by $f$ to the ground truth ranking. We report the rank loss defined as
$$\mathrm{RLoss}(f) = \frac{1}{n} \sum_{i=1}^{n} |D_i| \,/\, \big( |Y_i| \, |\bar{Y}_i| \big),$$
where $D_i = \{(j, j') : j \in Y_i,\ j' \in \bar{Y}_i,\ f_j(x_i) \leq f_{j'}(x_i)\}$ is the set of reversely ordered pairs, and $\bar{Y}_i$ is the complement of $Y_i$. This is the loss that is implicitly optimized by all multiclass / multilabel loss functions that we consider, since they induce a penalty when a non-relevant label scores higher than a relevant one.

Ranking class labels for a given image is similar to ranking documents for a user query in information retrieval [14]. While there are many established metrics [93], a popular measure that is relevant to our discussion is precision-at-$k$ (P@$k$), which is the fraction of relevant items within the top $k$ retrieved [94, 95]. Although this measure makes perfect sense in retrieval, where there are many more relevant documents than we possibly want to examine, it is not very useful when there are only a few correct labels per image – once all the relevant labels are in the top $k$ list, P@$k$ starts to decrease as $k$ increases. A better alternative in our multilabel setting is a complementary measure, recall-at-$k$, defined as
$$\mathrm{R@}k(f) = \frac{1}{n} \sum_{i=1}^{n} \big| \pi_{1:k}(f(x_i)) \cap Y_i \big| \,/\, |Y_i|,$$
which measures the fraction of relevant labels in the top $k$ list. Note that R@$k$ is a natural generalization of the top-$k$ error to the multilabel setting and coincides with that multiclass metric whenever $Y_i$ is a singleton.

Finally, we report the standard Pascal VOC [12] performance measure, mean average precision (mAP), which is computed as the one-vs-all AP averaged over all classes.

Partition based. In contrast to ranking evaluation, partition based measures assess the quality of the actual multilabel prediction, which requires a cut-off threshold $\delta$. Several threshold selection strategies have been proposed in the literature: (i) setting a constant threshold prior to experiments [96]; (ii) selecting a threshold a posteriori by matching label cardinality [97]; (iii) tuning the threshold on a validation set [55, 98]; (iv) learning a regression function [99]; (v) bypassing threshold selection altogether by introducing a (dummy) calibration label [100]. We have experimented with options (ii) and (iii), as discussed in § 6.3.

Let $h(x) = \{ j : f_j(x) > \delta \}$ be the set of predicted labels for a given threshold $\delta$, and let
$$\mathrm{tp}_{i,j} = [\, j \in h(x_i),\ j \in Y_i \,], \quad \mathrm{tn}_{i,j} = [\, j \notin h(x_i),\ j \notin Y_i \,], \quad \mathrm{fp}_{i,j} = [\, j \in h(x_i),\ j \notin Y_i \,], \quad \mathrm{fn}_{i,j} = [\, j \notin h(x_i),\ j \in Y_i \,]$$
be a set of primitives defined as in [55]. Now, one can use any performance measure $\Psi$ that is based on the binary confusion matrix, but, depending on where the averaging occurs, the following three groups of metrics are recognized.

Instance-averaging. The binary metrics are computed on the averages over labels and then averaged across examples:

$$\Psi_{\mathrm{inst}}(h) = \frac{1}{n} \sum_{i=1}^{n} \Psi\Big( \frac{1}{m} \sum_{j=1}^{m} \mathrm{tp}_{i,j}, \ \dots, \ \frac{1}{m} \sum_{j=1}^{m} \mathrm{fn}_{i,j} \Big).$$

Macro-averaging. The metrics are averaged across labels:

$$\Psi_{\mathrm{mac}}(h) = \frac{1}{m} \sum_{j=1}^{m} \Psi\Big( \frac{1}{n} \sum_{i=1}^{n} \mathrm{tp}_{i,j}, \ \dots, \ \frac{1}{n} \sum_{i=1}^{n} \mathrm{fn}_{i,j} \Big).$$

Micro-averaging. The metric is applied on the averages over both labels and examples:

$$\Psi_{\mathrm{mic}}(h) = \Psi\Big( \frac{1}{mn} \sum_{i,j} \mathrm{tp}_{i,j}, \ \dots, \ \frac{1}{mn} \sum_{i,j} \mathrm{fn}_{i,j} \Big).$$
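Since $F_1$ is invariant to a common scaling of its confusion-matrix arguments, the three averaging schemes can be sketched with plain sums (an illustrative implementation under that assumption, not the evaluation code used in the paper; names are ours):

```python
import numpy as np

def f1(tp, fp, fn):
    """Binary F1 from confusion-matrix counts (tn is not used by F1)."""
    d = 2.0 * tp + fp + fn
    return 2.0 * tp / d if d > 0 else 0.0

def averaged_f1(H, Y):
    """Instance-, macro-, and micro-averaged F1 for 0/1 prediction and
    ground-truth matrices H, Y of shape (n examples, m labels)."""
    tp = (H == 1) & (Y == 1)
    fp = (H == 1) & (Y == 0)
    fn = (H == 0) & (Y == 1)
    inst = np.mean([f1(tp[i].sum(), fp[i].sum(), fn[i].sum())
                    for i in range(H.shape[0])])       # average over examples
    mac = np.mean([f1(tp[:, j].sum(), fp[:, j].sum(), fn[:, j].sum())
                   for j in range(H.shape[1])])        # average over labels
    mic = f1(tp.sum(), fp.sum(), fn.sum())             # pooled counts
    return inst, mac, mic
```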

Following [11], we consider the $F_1$ score as the binary metric $\Psi$ with all three types of averaging. We also report multilabel accuracy, subset accuracy, and the hamming loss, defined respectively as

$$\mathrm{Acc}(h) = \frac{1}{n} \sum_{i=1}^{n} |h(x_i) \cap Y_i| \,/\, |h(x_i) \cup Y_i|, \qquad \mathrm{SAcc}(h) = \frac{1}{n} \sum_{i=1}^{n} [\, h(x_i) = Y_i \,], \qquad \mathrm{HLoss}(h) = \frac{1}{mn} \sum_{i=1}^{n} |h(x_i) \,\triangle\, Y_i|,$$

where $\triangle$ is the symmetric set difference.
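These three partition-based metrics can be sketched directly on label sets (assuming $h(x_i) \cup Y_i$ is never empty; the function name is ours):

```python
def multilabel_metrics(H, Y, m):
    """Multilabel accuracy, subset accuracy, and hamming loss for lists
    of predicted (H) and ground-truth (Y) label sets over m classes."""
    n = len(H)
    acc = sum(len(h & y) / len(h | y) for h, y in zip(H, Y)) / n
    sacc = sum(1 for h, y in zip(H, Y) if h == y) / n
    hloss = sum(len(h ^ y) for h, y in zip(H, Y)) / (m * n)  # ^ = symmetric difference
    return acc, sacc, hloss
```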

### 3.2 Multiclass Methods

In this section, we switch from performance evaluation at test time to how the quality of a classifier is measured during training. In particular, we introduce the loss functions used in established multiclass methods as well as our novel loss functions for optimizing the top-$k$ error (1).

OVA. A multiclass problem is often solved using the one-vs-all (OVA) reduction to $m$ independent binary classification problems. Every class is trained versus the rest, which yields $m$ classifiers $f_1, \dots, f_m$. Typically, each classifier $f_j$ is trained with a convex margin-based loss function $L(y f_j(x))$, where $y = \pm 1$ indicates whether the example belongs to class $j$. Simplifying the notation, we consider

$$L(y f(x)) = \max\{0,\ 1 - y f(x)\}, \qquad (\mathrm{SVM}^{\mathrm{OVA}})$$
$$L(y f(x)) = \log\big(1 + e^{-y f(x)}\big). \qquad (\mathrm{LR}^{\mathrm{OVA}})$$

The hinge ($\mathrm{SVM}^{\mathrm{OVA}}$) and logistic ($\mathrm{LR}^{\mathrm{OVA}}$) losses correspond to the SVM and logistic regression methods respectively.
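The two binary losses, in a minimal sketch (function names ours):

```python
import math

def hinge_ova(y, fx):
    """SVM-OVA loss: max{0, 1 - y*f(x)} with y in {-1, +1}."""
    return max(0.0, 1.0 - y * fx)

def logistic_ova(y, fx):
    """LR-OVA loss: log(1 + exp(-y*f(x)))."""
    return math.log1p(math.exp(-y * fx))
```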

Multiclass. An alternative to the OVA scheme above is to use a multiclass loss directly. All multiclass losses that we consider only depend on pairwise differences between the ground truth score $f_y(x)$ and all the other scores $f_j(x)$. Loss functions from the SVM family additionally require a margin $c_j$, which can be interpreted as a distance in the label space [13] between $y$ and $j$. To simplify the notation, we use vectors $a$ (for the differences) and $c$ (for the margin) defined for a given $(x, y)$ pair as
$$a_j := f_j(x) - f_y(x), \qquad c_j := 1 - [\, y = j \,], \qquad j = 1, \dots, m.$$
We also write $L(a)$ instead of the full $L(y, f(x))$.

We consider two generalizations of $\mathrm{SVM}^{\mathrm{OVA}}$ and $\mathrm{LR}^{\mathrm{OVA}}$:

$$L(a) = \max_{j \in \mathcal{Y}} \{ a_j + c_j \}, \qquad (\mathrm{SVM}^{\mathrm{Multi}})$$
$$L(a) = \log \Big( \sum_{j \in \mathcal{Y}} \exp(a_j) \Big). \qquad (\mathrm{LR}^{\mathrm{Multi}})$$

Both the multiclass SVM loss ($\mathrm{SVM}^{\mathrm{Multi}}$) of [101] and the softmax loss ($\mathrm{LR}^{\mathrm{Multi}}$) are common in multiclass problems. The latter is particularly popular in deep architectures [102, 103, 71], while $\mathrm{SVM}^{\mathrm{Multi}}$ is also competitive in large-scale image classification [104].
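Using the vectors $a$ and $c$ defined above, both multiclass losses can be sketched as follows (note that the $j = y$ term contributes $a_y + c_y = 0$, which realizes the hinge at zero; function names ours):

```python
import numpy as np

def multiclass_svm_loss(f, y):
    """SVM-Multi: max_j {a_j + c_j}, a_j = f_j - f_y, c_j = 1 - [y == j]."""
    a = f - f[y]
    c = np.ones_like(f)
    c[y] = 0.0
    return float(np.max(a + c))

def softmax_loss(f, y):
    """LR-Multi: log sum_j exp(a_j), i.e. the cross-entropy -log softmax_y(f)."""
    a = f - f[y]
    return float(np.log(np.sum(np.exp(a))))
```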

The OVA and multiclass methods were designed with the goal of minimizing the standard zero-one loss. Now, if we consider the top-$k$ error (1), which does not penalize $(k-1)$ mistakes, we discover that convexity of the above losses leads to phenomena where the top-$k$ error is zero, yet the loss is still positive. That happens, for example, when the ground truth label is ranked within the top $k$ but below the highest scoring label, and it creates a bias if we are working with rigid function classes such as linear classifiers. Next, we introduce loss functions that are modifications of the above losses with the goal of alleviating that phenomenon.

Top-$k$ SVM. Recently, we introduced Top-$k$ Multiclass SVM [9], where two modifications of the multiclass hinge loss ($\mathrm{SVM}^{\mathrm{Multi}}$) were proposed. The first version (top-$k$ SVM$^\alpha$) is motivated directly by the top-$k$ error, while the second version (top-$k$ SVM$^\beta$) falls into a general family of ranking losses introduced earlier by Usunier [26]. The two top-$k$ SVM losses are

$$L(a) = \max\Big\{ 0,\ \frac{1}{k} \sum_{j=1}^{k} (a + c)_{\pi_j} \Big\}, \qquad (\text{top-}k\ \mathrm{SVM}^{\alpha})$$
$$L(a) = \frac{1}{k} \sum_{j=1}^{k} \max\big\{ 0,\ (a + c)_{\pi_j} \big\}, \qquad (\text{top-}k\ \mathrm{SVM}^{\beta})$$

where $\pi$ reorders the components of $a + c$ in descending order. We show in [9] that top-$k$ SVM$^\alpha$ offers a tighter upper bound on the top-$k$ error than top-$k$ SVM$^\beta$. However, both losses perform similarly in our experiments, with only a small advantage in some settings. Therefore, when the distinction is not important, we simply refer to them as the top-$k$ hinge or the top-$k$ SVM loss. Note that they both reduce to $\mathrm{SVM}^{\mathrm{Multi}}$ for $k = 1$.
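A sketch of both top-$k$ hinge losses from the formulas above (illustrative only; function names ours):

```python
import numpy as np

def _ac_sorted(f, y):
    """The vector a + c, with components reordered descending (as by pi)."""
    a = f - f[y]
    c = np.ones_like(f)
    c[y] = 0.0
    return np.sort(a + c)[::-1]

def topk_svm_alpha(f, y, k):
    """top-k SVM (alpha): hinge of the mean of the k largest a_j + c_j."""
    return max(0.0, float(np.mean(_ac_sorted(f, y)[:k])))

def topk_svm_beta(f, y, k):
    """top-k SVM (beta): mean of hinges of the k largest a_j + c_j."""
    return float(np.mean(np.maximum(_ac_sorted(f, y)[:k], 0.0)))
```

Since the mean of hinges dominates the hinge of the mean, top-$k$ SVM$^\beta$ upper-bounds top-$k$ SVM$^\alpha$ pointwise, and for $k = 1$ both coincide with the multiclass hinge loss.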

Top-$k$ SVM losses are not smooth, which has implications for their optimization (§ 5) and top-$k$ calibration (§ 4.1). Following [8], who employed Moreau-Yosida regularization [105, 106] to obtain a smoothed version of the binary hinge loss ($\mathrm{SVM}^{\mathrm{OVA}}$), we applied the same technique in [6] and introduced the smooth top-$k$ SVM.

Moreau-Yosida regularization. We follow [61] and give the main points here for completeness. The Moreau envelope or Moreau-Yosida regularization of a function $f$ is
$$M_f(v) := \inf_x \big( f(x) + \tfrac{1}{2} \|x - v\|^2 \big).$$
It is a smoothed or regularized form of $f$ with the following nice properties: it is continuously differentiable, even if $f$ is not, and the sets of minimizers of $f$ and $M_f$ are the same. (That does not imply that we get the same classifiers, since we are minimizing a regularized sum of individually smoothed loss terms.) To compute a smoothed top-$k$ hinge loss, we use
$$M_f = \big( f^* + \tfrac{1}{2} \|\cdot\|^2 \big)^*,$$
where $f^*(v) := \sup_x \{ \langle x, v \rangle - f(x) \}$ is the convex conjugate of $f$. A classical result in convex analysis [107] states that the conjugate of a strongly convex function has a Lipschitz smooth gradient, therefore $M_f$ is indeed a smooth function.
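For the binary hinge loss, the Moreau envelope with smoothing parameter $\gamma$ has a simple closed form (the smoothed hinge of [8]); a one-dimensional sketch, where the case split comes from minimizing over the two linear pieces of the hinge (function name ours):

```python
import numpy as np

def smoothed_hinge(v, gamma):
    """Moreau envelope of max(0, 1 - x) with parameter gamma, i.e.
    inf_x  max(0, 1 - x) + (x - v)**2 / (2 * gamma)."""
    if v >= 1.0:
        return 0.0                            # hinge already zero at x = v
    if v <= 1.0 - gamma:
        return 1.0 - v - gamma / 2.0          # minimizer x = v + gamma on the slope
    return (1.0 - v) ** 2 / (2.0 * gamma)     # minimizer at the kink x = 1
```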

Top-$k$ hinge conjugate. Here, we compute the conjugates of the top-$k$ hinge losses $\alpha$ and $\beta$. As we show in [9], their effective domains (the effective domain of $f$ is $\mathrm{dom}\, f = \{x : f(x) < +\infty\}$) are given by the top-$k$ simplex ($\Delta_k^\alpha$ and $\Delta_k^\beta$ respectively) of radius $r$ defined as
$$\Delta_k^\alpha(r) := \big\{ x \mid \langle \mathbf{1}, x \rangle \leq r,\ 0 \leq x_i \leq \tfrac{1}{k} \langle \mathbf{1}, x \rangle,\ \forall i \big\}, \tag{2}$$
$$\Delta_k^\beta(r) := \big\{ x \mid \langle \mathbf{1}, x \rangle \leq r,\ 0 \leq x_i \leq \tfrac{1}{k} r,\ \forall i \big\}. \tag{3}$$
We let $\Delta_k^\alpha := \Delta_k^\alpha(1)$, $\Delta_k^\beta := \Delta_k^\beta(1)$, and note the relation $\Delta_k^\alpha(r) \subset \Delta_k^\beta(r) \subset \Delta(r)$, where $\Delta(r)$ is the unit simplex of radius $r$; the inclusions are proper for $k > 1$, while for $k = 1$ all three sets coincide.

([9]) The convex conjugate of top-$k$ SVM$^\alpha$ is
$$L^*(v) = \begin{cases} -\sum_{j \neq y} v_j & \text{if } \langle \mathbf{1}, v \rangle = 0 \text{ and } v^{\backslash y} \in \Delta_k^\alpha, \\ +\infty & \text{otherwise.} \end{cases}$$
The conjugate of top-$k$ SVM$^\beta$ is defined in the same way, but with the set $\Delta_k^\beta$ instead of $\Delta_k^\alpha$.

Note that the conjugates of both top-$k$ SVM losses coincide and are equal to the conjugate of the $\mathrm{SVM}^{\mathrm{Multi}}$ loss with the exception of their effective domains, which are $\Delta_k^\alpha$, $\Delta_k^\beta$, and $\Delta$ respectively. As becomes evident in § 5, the effective domain of the conjugate is the feasible set for the dual variables. Therefore, as we move from $\mathrm{SVM}^{\mathrm{Multi}}$ to top-$k$ SVM$^\beta$, to top-$k$ SVM$^\alpha$, we introduce more and more constraints on the dual variables, thus limiting the extent to which a single training example can influence the classifier.

Smooth top-$k$ SVM. We apply the smoothing technique introduced above to top-$k$ SVM$^\alpha$. Smoothing of top-$k$ SVM$^\beta$ is done similarly, but the set $\Delta_k^\alpha$ is replaced with $\Delta_k^\beta$.

Let $\gamma > 0$ be the smoothing parameter. The smooth top-$k$ hinge loss and its conjugate are
$$L_\gamma(a) = \tfrac{1}{\gamma} \big( \langle (a + c)^{\backslash y}, p \rangle - \tfrac{1}{2} \|p\|^2 \big), \qquad (\text{top-}k\ \mathrm{SVM}^{\alpha}_{\gamma})$$
$$L_\gamma^*(v) = \begin{cases} \tfrac{\gamma}{2} \|v^{\backslash y}\|^2 - \langle v^{\backslash y}, c^{\backslash y} \rangle & \text{if } \langle \mathbf{1}, v \rangle = 0 \text{ and } v^{\backslash y} \in \Delta_k^\alpha, \\ +\infty & \text{otherwise,} \end{cases}$$
where $p = \mathrm{proj}_{\Delta_k^\alpha(\gamma)} (a + c)^{\backslash y}$ is the Euclidean projection of $(a + c)^{\backslash y}$ onto $\Delta_k^\alpha(\gamma)$. Moreover, $L_\gamma$ is $(1/\gamma)$-smooth.

While there is no analytic formula for the loss, it can be computed efficiently via the projection onto the top- simplex [9]. We can also compute its gradient as

 ∇Lγ(a) =(1/γ)(\Idy−ey\ones⊤y)\projΔαk(γ)\wo(a+c)y,

where $I^{\setminus y}$ is the identity matrix without the $y$-th column, $e_y$ is the $y$-th standard basis vector, and $\mathbf{1}$ is the $(m-1)$-dimensional vector of all ones. This follows from the definition of $L_\gamma$, the fact that $(a+c)^{\setminus y}$ can be written as $(I^{\setminus y} - e_y \mathbf{1}^\top)^\top a + c^{\setminus y}$, and a known result [108] which says that $\nabla_x \tfrac12\, d^2(x, C) = x - \mathrm{proj}_C(x)$ for any closed convex set $C$.

Smooth multiclass SVM (SVM$^{\text{Multi}}_\gamma$). We also highlight an important special case of top-$k$ SVM$^\alpha_\gamma$ that performed remarkably well in our experiments. It is a smoothed version of the multiclass SVM$^{\text{Multi}}$ and is obtained with $k = 1$ and $\gamma > 0$.
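As an illustration of this $k = 1$ special case, the sketch below (our code, with hypothetical names) computes the smooth multiclass SVM loss and its gradient. For $k = 1$ the top-$k$ simplex constraint $x_i \le \langle \mathbf{1}, x\rangle$ is vacuous for nonnegative $x$, so the feasible set reduces to $\{x \ge 0, \sum_i x_i \le \gamma\}$ and the classical sort-based simplex projection applies:

```python
import numpy as np

def proj_simplex_leq(v, r=1.0):
    """Euclidean projection of v onto {x >= 0, sum(x) <= r} (sort-based)."""
    w = np.maximum(v, 0.0)
    if w.sum() <= r:
        return w
    # active case: project onto {x >= 0, sum(x) = r}
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u * np.arange(1, len(v) + 1) > css - r)[0][-1]
    theta = (css[rho] - r) / (rho + 1.0)
    return np.maximum(v - theta, 0.0)

def smooth_multiclass_svm(scores, y, gamma=1.0):
    """Smooth multiclass SVM (k = 1, smoothing gamma): loss and gradient."""
    m = len(scores)
    c = np.ones(m); c[y] = 0.0               # margin: 1 for j != y, 0 for y
    a = scores - scores[y] + c               # a_j = s_j - s_y + c_j
    a_noy = np.delete(a, y)                  # drop the y-th coordinate
    p = proj_simplex_leq(a_noy, gamma)       # dual variables (projection)
    loss = (a_noy @ p - 0.5 * p @ p) / gamma
    grad = np.zeros(m)
    grad[np.arange(m) != y] = p / gamma      # chain rule through a_j
    grad[y] = -p.sum() / gamma               # and through the -s_y term
    return loss, grad
```

With a large correct margin the projection is zero and so are loss and gradient; as $\gamma \to 0$ the loss approaches the non-smooth multiclass hinge.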

Softmax conjugate. Before we introduce a top-$k$ version of the softmax loss (LR), we need to recall its conjugate.

The convex conjugate of the softmax (LR) loss is

 $L^*(v) = \begin{cases} \sum_{j \ne y} v_j \log v_j + (1 + v_y)\log(1 + v_y) & \text{if } \langle \mathbf{1}, v \rangle = 0 \text{ and } v^{\setminus y} \in \Delta, \\ +\infty & \text{otherwise,} \end{cases}$ (4)

where $\Delta$ is the unit simplex.

Note that the conjugates of both the SVM$^{\text{Multi}}$ and the LR losses share the same effective domain, the unit simplex $\Delta$, and differ only in their functional form: a linear function for SVM$^{\text{Multi}}$ and a negative entropy for LR. While we motivated top-$k$ SVM directly from the top-$k$ error, we see that the only change compared to SVM$^{\text{Multi}}$ was in the effective domain of the conjugate loss. This suggests a general way to construct novel losses with specific properties: take the conjugate of an existing loss function and modify its effective domain in a way that enforces the desired properties. The motivation for doing so comes from the interpretation of the dual variables as forces with which every training example pushes the decision surface in the direction given by the ground truth label. Therefore, by reducing the feasible set we can limit the maximal contribution of any given training example.

Top-$k$ entropy. As hinted above, we first construct the conjugate of the top-$k$ entropy loss (top-$k$ Ent) by taking the conjugate of LR and replacing $\Delta$ in (4) with $\Delta_k^\alpha$, and then take the conjugate again to obtain the primal loss. A $\beta$ version can be constructed using the set $\Delta_k^\beta$ instead.

The top-$k$ entropy loss is defined as

 $L(a) = \max\big\{\langle a^{\setminus y}, x \rangle - (1 - s)\log(1 - s) - \langle x, \log x \rangle \;\big|\; x \in \Delta_k^\alpha,\; \langle \mathbf{1}, x \rangle = s\big\}.$ (top-$k$ Ent)

Moreover, we recover the LR loss when $k = 1$.

While there is no closed-form solution for the top-$k$ Ent loss when $k > 1$, we can compute and optimize it efficiently as we discuss later in § 5.

Truncated top-$k$ entropy. A major limitation of the softmax loss for top-$k$ error optimization is that it cannot ignore the $k - 1$ highest scoring predictions. This can lead to a situation where the loss is high even though the top-$k$ error is zero. To see that, let us rewrite the loss as

 $L(y, f(x)) = \log\big(1 + \textstyle\sum_{j \ne y} \exp(f_j(x) - f_y(x))\big).$ (5)

If there is only a single $j$ such that $f_j(x) - f_y(x) \gg 0$, then $L(y, f(x)) \gg 0$ even though the top-$2$ error is zero.

This problem is also present in all top-$k$ hinge losses considered above and is an inherent limitation due to their convexity. The origin of the problem is the fact that ranking based losses [26] are based on functions such as

 $\phi(f(x)) = \tfrac{1}{m} \textstyle\sum_{j \in \mathcal{Y}} \alpha_j f_{\pi_j}(x) - f_y(x).$

The function $\phi$ is convex if the sequence $(\alpha_j)$ is monotonically non-increasing [109]. This implies that convex ranking based losses have to put more weight on the highest scoring classifiers, while we would like to put less weight on them. To that end, we drop the $k - 1$ highest scoring predictions from the sum in (5), sacrificing convexity of the loss, and define the truncated top-$k$ entropy loss as follows:

 $L(a) = \log\big(1 + \textstyle\sum_{j \in \mathcal{J}_y^k} \exp(a_j)\big),$ (top-$k$ Ent$_{\text{tr}}$)

where $\mathcal{J}_y^k$ are the indexes corresponding to the $m - k$ smallest components of $(a_j)_{j \ne y}$. This loss can be seen as a smooth version of the top-$k$ error (1), as it is small whenever the top-$k$ error is zero. We show a synthetic experiment in § 6.1, where the advantage of discarding the highest scoring classifier in the truncated top-$k$ entropy becomes apparent.
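A minimal sketch of this loss (our naming; scores are assumed indexed by class, with $a_j = f_j(x) - f_y(x)$) makes the effect of dropping the $k - 1$ highest wrong-class scores explicit:

```python
import numpy as np

def truncated_topk_entropy(scores, y, k=2):
    """Truncated top-k entropy: a softmax-style loss over the wrong
    classes, excluding the (k-1) highest-scoring ones (non-convex)."""
    a = np.delete(scores - scores[y], y)      # a_j = f_j - f_y for j != y
    keep = np.sort(a)[: len(a) - (k - 1)]     # drop the k-1 largest
    return np.log1p(np.exp(keep).sum())
```

If a single wrong class has a huge score, the $k = 2$ loss ignores it and stays small (matching a zero top-2 error), whereas the $k = 1$ case reduces to the ordinary softmax loss and blows up.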

### 3.3 Multilabel Methods

In this section, we introduce natural extensions of the classic multiclass methods discussed above to the setting where there is a set $Y \subseteq \mathcal{Y}$ of ground truth labels for each example $x$. We focus on the loss functions that produce a ranking of labels and optimize a multilabel loss $L(Y, f(x))$. We let $u = f(x)$ and use the simplified notation $L(u)$. A more complete overview of multilabel classification methods is given in [110, 11, 39].

Binary relevance (BR). Binary relevance is the standard one-vs-all scheme applied to multilabel classification. It is the default baseline for direct multilabel methods as it does not consider possible correlations between the labels.

Multilabel SVM. We follow the line of work by [7] and consider the Multilabel SVM loss below:

 $L(u) = \max_{y \in Y} \max_{j \in \bar{Y}} \max\{0,\; 1 + u_j - u_y\}$ (SVM$^{\text{ML}}$)
 $\;\;\;\;\;\;\;= \max\{0,\; 1 + \max_{j \in \bar{Y}} u_j - \min_{y \in Y} u_y\}.$

This method is also known as the multiclass multilabel perceptron (MMP) [100] and the separation ranking loss [111]. It can be contrasted with another extension, the RankSVM of Elisseeff and Weston [99], which optimizes the pairwise ranking loss:

 $\tfrac{1}{|Y|\,|\bar{Y}|} \textstyle\sum_{(y,j) \in Y \times \bar{Y}} \max\{0,\; 1 + u_j - u_y\}.$

Note that both the SVM$^{\text{ML}}$ loss that we consider and RankSVM avoid expensive enumeration of all the possible labellings by considering only pairwise label ranking. A principled large margin approach that accounts for all possible label interactions is structured output prediction [13].
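The contrast between the two pairwise losses can be sketched in a few lines of NumPy (our function names; `u` is the score vector, `Y` the list of ground-truth label indices): the multilabel SVM penalizes only the worst violating positive/negative pair, while RankSVM averages the hinge over all such pairs.

```python
import numpy as np

def svm_ml(u, Y):
    """Multilabel SVM: hinge on the worst (positive, negative) score pair."""
    neg = np.setdiff1d(np.arange(len(u)), Y)
    return max(0.0, 1.0 + u[neg].max() - u[np.asarray(Y)].min())

def rank_svm(u, Y):
    """RankSVM: hinge averaged over all positive/negative label pairs."""
    neg = np.setdiff1d(np.arange(len(u)), Y)
    H = np.maximum(0.0, 1.0 + u[neg][None, :] - u[np.asarray(Y)][:, None])
    return float(H.mean())
```

For instance, with $u = (1, 2, 0, -1)$ and $Y = \{0, 2\}$ the worst pair gives a loss of $3$, while the average over all four pairs is $1.25$.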

Multilabel SVM conjugate. Here, we compute the convex conjugate of the SVM$^{\text{ML}}$ loss, which is used later to define a smooth multilabel SVM. Note that the loss depends on the partitioning of $\mathcal{Y}$ into $Y$ and $\bar{Y}$ for every given $(x, Y)$ pair. This is reflected in the definition of the set $S_Y$ below, which is the effective domain of the conjugate:

 $S_Y \bydef \big\{x \;\big|\; -\textstyle\sum_{y \in Y} x_y = \textstyle\sum_{j \in \bar{Y}} x_j \le 1,\; x_y \le 0 \;\forall y \in Y,\; x_j \ge 0 \;\forall j \in \bar{Y}\big\}.$

In the multiclass setting, the set $Y$ is a singleton, therefore $\sum_{y \in Y} x_y$ has no degrees of freedom and we recover the unit simplex over $\bar{Y}$, as in (4). In the true multilabel setting, on the other hand, there is freedom to distribute the weight across all the classes in $Y$.

The convex conjugate of the SVM$^{\text{ML}}$ loss is

 $L^*(v) = \begin{cases} -\sum_{j \in \bar{Y}} v_j & \text{if } v \in S_Y, \\ +\infty & \text{otherwise.} \end{cases}$ (6)

Note that when $Y = \{y\}$, (6) naturally reduces to the conjugate of SVM$^{\text{Multi}}$ given in Proposition 3.2 with $k = 1$.

Smooth multilabel SVM. Here, we apply the smoothing technique, which worked very well for multiclass problems [8, 6], to the multilabel loss.

As with the smooth top-$k$ SVM, there is no analytic formula for the smoothed loss. However, we can both compute and optimize it within our framework by solving a Euclidean projection problem onto what we call the bipartite simplex, a convenient modification of the set $S_Y$ above:

 $B(r) \bydef \big\{(x, y) \;\big|\; \langle \mathbf{1}, x \rangle = \langle \mathbf{1}, y \rangle \le r,\; x \in \mathbb{R}^m_+,\; y \in \mathbb{R}^n_+\big\}.$ (7)

Let $\gamma > 0$ be the smoothing parameter. The smooth multilabel SVM loss (SVM$^{\text{ML}}_\gamma$) and its conjugate are

 $L_\gamma(u) = \tfrac{1}{\gamma}\big(\langle b, p \rangle - \tfrac12\|p\|^2 + \langle \bar{b}, \bar{p} \rangle - \tfrac12\|\bar{p}\|^2\big),$ (SVM$^{\text{ML}}_\gamma$)
 $L^*_\gamma(v) = \begin{cases} \tfrac12\big(\sum_{y \in Y} v_y - \sum_{j \in \bar{Y}} v_j\big) + \tfrac{\gamma}{2}\|v\|^2 & \text{if } v \in S_Y, \\ +\infty & \text{otherwise,} \end{cases}$

where $(p, \bar{p})$ is the Euclidean projection of $(b, \bar{b})$ onto $B(\gamma)$. Moreover, $L_\gamma$ is $(1/\gamma)$-smooth.

Note that the smooth SVM$^{\text{ML}}_\gamma$ loss is a natural generalization of the smooth multiclass loss, and we recover the latter when $Y$ is a singleton. In § 5, we extend the variable fixing algorithm of [65] and obtain an efficient method to compute Euclidean projections onto $B(r)$.

Multilabel cross-entropy. Here, we discuss an extension of the softmax (LR) loss to multilabel learning. We use the softmax function to model the distribution over the class labels, which recovers the well-known multinomial logistic regression [112] and the maximum entropy [69] models.

Assume that all the classes given in the ground truth set $Y$ are equally likely. We define an empirical distribution for a given $(x, Y)$ pair as $\hat{p}_y = \tfrac{1}{|Y|}\,\mathbb{1}[y \in Y]$, and model the conditional probability $p_y(x)$ via the softmax:

 $p_y(x) = \exp(u_y) \big/ \textstyle\sum_{j \in \mathcal{Y}} \exp(u_j), \quad \forall y \in \mathcal{Y}.$

The cross-entropy of the distributions $\hat{p}$ and $p(x)$ is given by

 $H(\hat{p}, p(x)) = -\tfrac{1}{|Y|} \textstyle\sum_{y \in Y} \log\Big(\tfrac{\exp(u_y)}{\sum_j \exp(u_j)}\Big),$

and the corresponding multilabel cross entropy loss is:

 $L(u) = \tfrac{1}{|Y|} \textstyle\sum_{y \in Y} \log\Big(\textstyle\sum_{j \in \mathcal{Y}} \exp(u_j - u_y)\Big).$ (LR$^{\text{ML}}$)
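The loss above is just the ordinary softmax loss averaged over the ground-truth labels, which a short sketch makes explicit (our naming; the log-sum-exp is stabilized by subtracting the maximum score, a standard trick not specific to this paper):

```python
import numpy as np

def lr_ml(u, Y):
    """Multilabel cross-entropy: softmax loss averaged over labels in Y."""
    z = u - u.max()                          # stabilized log-sum-exp
    lse = np.log(np.exp(z).sum()) + u.max()
    return float(np.mean([lse - u[y] for y in Y]))
```

When $Y$ is a singleton this reduces to the standard LR loss; for uniform scores over $m$ classes it equals $\log m$ regardless of $Y$.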

Multilabel cross-entropy conjugate. Next, we compute the convex conjugate of the LR$^{\text{ML}}$ loss, which is used later in our optimization framework.

The convex conjugate of the LR$^{\text{ML}}$ loss is

 $L^*(v) = \begin{cases} \sum_{y \in Y}\big(v_y + \tfrac{1}{k}\big)\log\big(v_y + \tfrac{1}{k}\big) + \sum_{j \in \bar{Y}} v_j \log v_j & \text{if } v \in D_Y, \\ +\infty & \text{otherwise,} \end{cases}$ (8)

where $k = |Y|$ and $D_Y$ is the effective domain defined as:

 $D_Y \bydef \big\{v \;\big|\; \textstyle\sum_{y \in Y}\big(v_y + \tfrac{1}{k}\big) + \textstyle\sum_{j \in \bar{Y}} v_j = 1,\; v_y + \tfrac{1}{k} \ge 0 \;\forall y \in Y,\; v_j \ge 0 \;\forall j \in \bar{Y}\big\}.$

The conjugates of the multilabel losses SVM$^{\text{ML}}$ and LR$^{\text{ML}}$ no longer share the same effective domain, which was the case for the multiclass losses. However, we still recover the conjugate of the LR loss when $Y$ is a singleton.

## 4 Bayes Optimality and Top-k Calibration

This section is devoted to the theoretical analysis of multiclass losses in terms of their top-$k$ performance. We establish the best top-$k$ error in the Bayes sense, determine when a classifier achieves it, define the notion of top-$k$ calibration, and investigate which loss functions possess this property.

Bayes optimality. Recall that the Bayes optimal zero-one loss in binary classification is simply the probability of the less likely class [91]. Here, we extend this notion to the top-$k$ error (1) introduced in § 3.1 for multiclass classification and provide a description of the top-$k$ Bayes optimal classifier.

The Bayes optimal top-$k$ error at $x$ is

 $\min_{g \in \mathbb{R}^m} \mathbb{E}_{Y \mid X}\big[\mathrm{err}_k(Y, g) \mid X = x\big] = 1 - \textstyle\sum_{j=1}^{k} p_{\tau_j}(x),$

where the permutation $\tau$ sorts the conditional class probabilities in descending order, $p_{\tau_1}(x) \ge p_{\tau_2}(x) \ge \dots \ge p_{\tau_m}(x)$.
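In other words, the Bayes optimal top-$k$ error at $x$ is one minus the total mass of the $k$ most likely classes under the conditional distribution, which takes one line to compute (a sketch with our naming, given the vector of conditional class probabilities):

```python
import numpy as np

def bayes_topk_error(p, k):
    """Bayes optimal top-k error at x: one minus the summed probability
    of the k most likely classes under the conditional distribution p."""
    return 1.0 - np.sort(np.asarray(p))[::-1][:k].sum()
```

For $p = (0.5, 0.3, 0.2)$ the Bayes top-1 error is $0.5$ while the top-2 error drops to $0.2$, showing how quickly the optimal error decays with $k$.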