The PAV algorithm optimizes binary proper scoring rules

04/08/2013 ∙ Niko Brümmer, et al. ∙ Stellenbosch University

There has been much recent interest in the application of the pool-adjacent-violators (PAV) algorithm for the purpose of calibrating the probabilistic outputs of automatic pattern recognition and machine learning algorithms. Special cost functions, known as proper scoring rules, form natural objective functions with which to judge the goodness of such calibration. We show that for binary pattern classifiers, the non-parametric optimization of calibration, subject to a monotonicity constraint, can be solved by PAV and that this solution is optimal for all regular binary proper scoring rules. This extends previous results, which were limited to convex binary proper scoring rules. We further show that this result holds not only for calibration of probabilities, but also for calibration of log-likelihood-ratios, in which case optimality holds independently of the prior probabilities of the pattern classes.


1 Introduction

There has been much recent interest in using the pool-adjacent-violators (PAV) algorithm (also known as pair-adjacent-violators) for the purpose of calibration of the outputs of machine learning or pattern recognition systems [31, 7, 24, 30, 17, 15]. Our contribution is to point out and prove some previously unpublished results concerning the optimality of using the PAV algorithm for such calibration.

In the rest of the introduction, §1.1 defines calibration; §1.2 introduces regular binary proper scoring rules, the class of objective functions which we use to judge the goodness of calibration; and §1.3 gives more specific details of how this calibration problem forms the non-parametric, monotonic optimization problem which is the subject of this paper.

The rest of the paper is organized as follows: In §2 we state the main optimization problem under discussion; §3 summarizes previous work related to this problem; §4, the bulk of this paper, presents our proof that PAV solves this problem; and finally §5 shows that the PAV can be adapted to a closely related calibration problem, which has the goal of assigning calibrated log-likelihood-ratios, rather than probabilities. We conclude in §6 with a short discussion about applying PAV calibration in pattern recognition.

The results of this paper can be summarized as follows: The PAV algorithm, when used for supervised, monotonic, non-parametric calibration is (i) optimal for all regular binary proper scoring rules and is moreover (ii) optimal at any prior when calibrating log-likelihood-ratios.

1.1 Calibration

In this paper, we are interested in the calibration of binary pattern classification systems, which are designed to discriminate between two classes by outputting a scalar confidence score. (The reader is cautioned not to confuse score as defined here with proper scoring rule as defined in the next subsection.) Let $x$ denote a to-be-classified input pattern; the nature of $x$ is unimportant here, it can be an image, a sound recording, a text document etc. The pattern is known to belong to one of two classes: the target class, or the non-target class. The pattern classifier under consideration performs a mapping $x \mapsto s$, where $s$ is a real number, which we call the uncalibrated confidence score. The only assumption that we make about $s$ is that it has the following sense: the greater the score, the more it favours the target class; and the smaller, the more it favours the non-target class.

In order for the pattern classifier output to be more generally useful, it can be processed through a calibration transformation. We assume here that the calibrated output will be used to make a minimum-expected-cost Bayes decision [12, 29]. This requires that the score be transformed to act as a posterior probability for the target class, given the score. We denote the transformation of the uncalibrated score to the calibrated target posterior as $s \mapsto p$, where $p$ is the posterior probability assigned to the target class. In the first (and largest) part of this paper, we consider this calibration transformation as an atomic step and show in what sense the PAV algorithm is optimal for this transformation.
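To make the intended use of the calibrated output concrete, here is a minimal sketch (ours, not from the paper) of a minimum-expected-cost Bayes decision made from a calibrated target posterior; the function name and the cost values are hypothetical placeholders.

```python
# Sketch: minimum-expected-cost Bayes decision from a calibrated posterior
# p = P(target | score). The two costs below are hypothetical placeholders.

def bayes_decision(p_target, cost_miss=10.0, cost_false_alarm=1.0):
    """Return 'accept' (decide target) or 'reject' (decide non-target),
    whichever has the lower expected cost under the posterior p_target."""
    expected_cost_accept = (1.0 - p_target) * cost_false_alarm  # wrong if truth is non-target
    expected_cost_reject = p_target * cost_miss                 # wrong if truth is target
    return 'accept' if expected_cost_accept <= expected_cost_reject else 'reject'

# With these costs the decision threshold on p_target is
# cost_false_alarm / (cost_miss + cost_false_alarm) = 1/11.
print(bayes_decision(0.05), bayes_decision(0.2))   # -> reject accept
```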

In most machine-learning contexts, it is assumed that the object of calibration is (as discussed above) to assign posterior probabilities [26, 31, 24]. However, the calibration of log-likelihood-ratios may be more appropriate in some pattern recognition fields such as automatic speaker recognition [14, 7]. This is important in particular for forensic speaker recognition, in cases where a Bayesian framework is used to represent the weight of the speech evidence in likelihood-ratio form [17]. With this purpose in mind, in §5 we decompose the transformation into two consecutive steps, $s \mapsto \ell \mapsto p$, where the intermediate quantity $\ell$ is known as the log-likelihood-ratio for the target, relative to the non-target. The first stage, $s \mapsto \ell$, is now the calibration transform and it is performed by an adapted PAV algorithm (denoted PAV-LLR), while the second stage, $\ell \mapsto p$, is just a standard application of Bayes' rule. One of the advantages of this decomposition is that the log-likelihood-ratio is independent of the prior probability $\pi$ for the target class, and that therefore the pattern classifier (which does $x \mapsto s$) and the calibrator (which does $s \mapsto \ell$) can both be independent of the prior. The target prior need only be available for the final step of applying Bayes' rule. Our important contribution here is to show that the PAV-LLR calibration is optimal independently of the prior $\pi$.

1.2 Regular Binary Proper Scoring Rules

We have introduced calibration as a tool to map uncalibrated scores to posterior probabilities, which may then be used to make minimum-expected-cost Bayes decisions. We next ask how the quality of a given calibrator may be judged. Since the stated purpose of calibration is to make cost-effective decisions, the goodness of calibration may indeed be judged by decision cost. For this purpose, we consider a class of special cost functions known as proper scoring rules to quantify the cost-effective decision-making ability of posterior probabilities, see e.g. [18, 12, 13, 11, 9, 16], or our previous work [7]. Since this paper is focused on the PAV algorithm, a detailed introduction to proper scoring rules is out of scope. Here we just need to define the class of regular binary proper scoring rules in a way that is convenient to our purposes. (Appendix A gives some notes to link this definition to previous work.)

We define a regular binary proper scoring rule (RBPSR) to be a cost function, $C(q \mid \theta)$, of a probability $q \in [0,1]$ assigned to the target class and of the true class $\theta$ (target or non-target), expressible in the integral form

(1)

for which the following conditions must hold:

These integrals exist and are finite, except possibly at the endpoints of the probability interval, where the cost may assume the value $+\infty$. (This exception accommodates cases like the logarithmic scoring rule; see [11, 16].)

The weighting function in (1) is a probability distribution over the unit interval, i.e. it is non-negative and integrates to one. (It is easily shown that if the weighting function cannot be normalized, then one or both of the costs in (1) must be infinite for every argument, so that a useful proper scoring rule is not obtained.) In other words, the RBPSRs are a family of cost functions parametrized by this weighting distribution. If the weighting function is strictly positive almost everywhere, then the RBPSR is called strict; otherwise it is non-strict. We list some examples, which will be relevant later:

If the weighting function is a Dirac delta concentrated at a single threshold value, then the resulting RBPSR represents the misclassification cost of making binary decisions by comparing the probability to that threshold. Note that this proper scoring rule is non-strict. Moreover, it is discontinuous and therefore not convex as a function of the probability. This is but one example of many non-convex proper scoring rules. A more general example is obtained by a convex combination (with non-negative coefficients that sum to one) of multiple Dirac deltas.

Another choice of weighting function gives the (strict) quadratic proper scoring rule, also known as the Brier scoring rule [6]. (In this context the average of the Brier proper scoring rule is just a mean-squared error.)

A further choice of weighting function gives the (strict) logarithmic scoring rule, originally proposed by [18]. The salient property of a binary proper scoring rule is that, for any probability assigned to the target class, its expectation with respect to that probability is minimized when the rule's argument equals that probability. For a strict RBPSR, this minimum is unique. We show below in lemma 12 how this property derives from (1).
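As a concrete check of this salient property, the following short numerical illustration (ours, not from the paper) uses the standard closed forms of the Brier and logarithmic rules; the function names are ours.

```python
# Numerical check: for a binary proper scoring rule C(q | class), the expected
# cost  E(q) = eta*C(q|target) + (1-eta)*C(q|non-target)  is minimized at q = eta.
import numpy as np

def brier(q, is_target):
    return (1.0 - q) ** 2 if is_target else q ** 2

def logarithmic(q, is_target):
    return -np.log(q) if is_target else -np.log(1.0 - q)

def expected_cost(rule, q, eta):
    return eta * rule(q, True) + (1.0 - eta) * rule(q, False)

qs = np.linspace(0.001, 0.999, 999)          # grid of candidate arguments q
for eta in (0.1, 0.5, 0.8):
    for rule in (brier, logarithmic):
        q_best = qs[np.argmin([expected_cost(rule, q, eta) for q in qs])]
        assert abs(q_best - eta) < 2e-3      # the minimizer sits at q = eta
print("expected cost of each rule is minimized at q = eta")
```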

1.3 Supervised, monotonic, non-parametric calibration

We have thus far established that we want to find a calibration method to map scores to probabilities and that we then want to judge the goodness of these probabilities via RBPSR. We can now be more specific about the calibration problem that is optimally solvable by PAV:

Firstly, we constrain the calibration transformation, from score to probability, to be a monotonic non-decreasing function. This is to preserve the above-defined sense of the score $s$. This monotonicity constraint is discussed further in §6. See also [7, 31, 24, 17].

Secondly, we assume that we are given a finite number, $N$, of trials, for each of which the to-be-calibrated pattern classifier has produced a score. We denote these scores $s_1, s_2, \dots, s_N$. We need only map each of these scores to a probability. In other words, we do not have to find the calibration function itself; we only have to non-parametrically assign the function output values $p_1, p_2, \dots, p_N$, while respecting the above monotonicity constraint. To simplify notation, we assume, without loss of generality, that $s_1 \le s_2 \le \dots \le s_N$. (In practice one has to sort the scores to make it so.) This now means that monotonicity is satisfied if $p_1 \le p_2 \le \dots \le p_N$. Notice that the input scores now serve only to define the order. Once this order is fixed, one does not need to refer back to the scores. The output probabilities can now be independently assigned, as long as they respect the above chain of inequalities.

Finally, we assume that the problem is supervised: for every one of the $N$ trials the true class is known; we denote the label of trial $i$ as $\theta_i$, which is either the target or the non-target class. This allows evaluation of the RBPSR for every trial as the cost of the assigned probability $p_i$, given the label $\theta_i$. A weighted combination of the RBPSR costs over all trials can now be used as the objective function which needs to be minimized. In summary, the problem which is solved by PAV is that of finding $p_1, p_2, \dots, p_N$, subject to the monotonicity constraint, so that the RBPSR objective is minimized. This problem is succinctly restated in the following section:

2 Main optimization problem statement

The problem of interest may be stated as follows:

We are given as input:

A sequence of indices, $i = 1, 2, \dots, N$, with a corresponding sequence of labels $\theta_1, \theta_2, \dots, \theta_N$, each of which is either the target class or the non-target class.

A pair of positive weights, one for the target class and one for the non-target class.

We use this pair to assign a weight $w_i$ to every index, by letting $w_i$ equal the target weight when $\theta_i$ is the target class, and the non-target weight otherwise.

The problem is now to find the sequence of probabilities, denoted $p_1, p_2, \dots, p_N$, which minimizes the following objective:

$\sum_{i=1}^{N} w_i\, C(p_i \mid \theta_i) \qquad (2)$

subject to the monotonicity constraint:

$p_1 \le p_2 \le \dots \le p_N \qquad (3)$

We require the solution to be a feasible minimum simultaneously for every RBPSR. We already know that if such a solution exists, it must be unique, because the original PAV algorithm, as published in [4] in 1955, was shown to give a unique optimal solution for the special case of the logarithmic RBPSR. See theorem 4.1 and corollary 4.1 below for details.
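For readers who prefer code, the following sketch (ours) evaluates an objective of the form (2) and checks the monotonicity constraint (3), under the notation assumed above; the function names, example labels and per-class weights are illustrative assumptions.

```python
# Sketch: weighted RBPSR objective (2) and monotonicity check (3).
def objective(p, labels, rule, w_target=1.0, w_nontarget=1.0):
    """p: candidate probabilities (ordered by score); labels: True = target."""
    total = 0.0
    for p_i, is_tar in zip(p, labels):
        w_i = w_target if is_tar else w_nontarget   # weight assigned to this index
        total += w_i * rule(p_i, is_tar)
    return total

def is_feasible(p):
    """Constraint (3): p_1 <= p_2 <= ... <= p_N."""
    return all(a <= b for a, b in zip(p, p[1:]))

brier = lambda q, is_target: (1 - q) ** 2 if is_target else q ** 2
labels = [False, True, False, True, True]       # ordered by increasing score
candidate = [0.1, 0.2, 0.2, 0.6, 0.9]
print(is_feasible(candidate), objective(candidate, labels, brier))
```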

3 Relationship of our proof to previous work

Although not stated explicitly in terms of a proper scoring rule, the first publication of the PAV algorithm [4] was already a proof that it optimizes the logarithmic proper scoring rule. It is also known that PAV optimizes the quadratic (Brier) scoring rule [31], and indeed that it optimizes combinations of more general convex functions [5, 2]. However, as pointed out above, there are proper scoring rules that are not convex.

In our previous work [7], where we made use of calibration with the PAV algorithm, we did mention the same results presented here, but without proof. This paper therefore complements that work, by providing proofs.

We also note that independently, in [15], it was stated that “it can be proved that the same [PAV algorithm] is obtained when using any proper scoring function”, but this was also without proof or further references. (Notes to reviewers: Note 1: We contacted Fawcett and Niculescu-Mizil to ask if they had a proof. They replied that their statement was based on the assumption that proper scoring rules are convex, which by [5] is then optimized by PAV. Since we include here also non-convex proper scoring rules, our results are more general. Note 2: The paper [28] has the word ‘quasi-convex’ in the title and employs the PAV algorithm for a solution. This could suggest that our problem was solved in that paper, but a different problem was solved there, namely: “the approximation problem of fitting n data points by a quasi-convex function using the least squares distance function.”)

We construct a proof that the PAV algorithm solves the problem as stated in §2, by roughly following the pattern of the unpublished document [1], where the optimality of PAV was proved for the case of strictly convex cost functions. That proof is not applicable as is for our purposes, because as pointed out above, some RBPSR’s are not convex. We will show however in lemma 12 below, that all RBPSR’s and their expectations are quasiconvex and that the proof can be based on this quasiconvexity, rather than on convexity. Note that when working with convex cost functions, one can use the fact that positively weighted combinations of convex functions are also convex, but this is not true in general for quasiconvex functions. For our case it was therefore necessary to prove explicitly that expectations of RBPSR’s are also quasiconvex. A further complication that we needed to address was that non-strict RBPSR’s lead to unidirectional implications, in places where the strictly convex cost functions of the proof in [1] gave if and only if relationships.

Finally, we note that although the more general case of PAV for non-strict convex cost functions was treated in [5], we could not base our proof on theirs, because they used properties of convex functions, such as subgradients, which are not applicable to our quasiconvex RBPSR’s.

4 Proof of optimality of PAV

This section forms the bulk of this paper and is dedicated to proving that a version of the PAV algorithm solves the optimization problem stated in §2.

Figure 1: Proof structure: PAV is optimal for all RBPSR’s and PAV-LLR is optimal for all RBPSR’s and priors.

See figure 1 for a roadmap of the proof: Theorem 4.1 and corollary 4.1 give the closed-form solution for the logarithmic RBPSR. For the PAV algorithm, we use corollary 4.1 just to show that there is a unique solution, but we re-use it later to prove the prior-independence of the PAV-LLR algorithm. Inside the dashed box, theorem 4.2 shows how multiple optimal subproblem solutions can constitute the optimal solution to the whole problem. Theorems 4.3 and 4.4 respectively show how to find and combine optimal subproblem solutions, so that the PAV algorithm can use them to meet the requirements of theorem 4.2.

4.1 Unique solution

In this section, we use the work of Ayer et al., reproduced here as theorem 4.1, to show via corollary 4.1 that, if our problem does have a solution for every RBPSR, then it must be unique, because the special case of the logarithmic scoring rule does have a unique solution.

Theorem 4.1 [Ayer et al., 1955]. Given non-negative real numbers satisfying the conditions of [4], the maximization of their objective, subject to the monotonicity constraint (3), has a unique solution, given by:

(4)

where

(5)

Proof. See [4] (available online, with open access, at http://projecteuclid.org/euclid.aoms/1177728423), theorem 2.2 and its corollary 2.1. In that work, the monotonicity constraint was non-increasing, rather than the non-decreasing constraint (3) that we use here. The solution that they give therefore has to be transformed by letting the index run in reverse order, which means exchanging the roles of the subsequence endpoints, which in turn exchanges the corresponding roles in the solution.

We now show that this theorem supplies the solution for the special case of the logarithmic RBPSR.

Corollary 4.1. If the RBPSR in (2) is the logarithmic scoring rule, then the problem of minimizing objective (2), subject to constraint (3), has the unique solution given by:

(6)

where

(7)

where the two counts are the respective numbers of target and non-target labels in the indicated subsequence.

Proof. Observe that if we identify the non-negative numbers of theorem 4.1 with the per-trial weights (the target weight for target trials and the non-target weight for non-target trials), then the logarithm of the objective maximized in theorem 4.1 equals the negative of objective (2) for the logarithmic RBPSR, so that the constrained maximization of theorem 4.1 and the constrained minimization of this corollary have the same solution.

This corollary gives a closed-form solution, (6), to the problem, and from [4] we know that this is the same solution which is calculated by the iterative PAV algorithm. (The PAV algorithm, if efficiently implemented, is known [25, 2, 30] to have linear computational load, of order $N$, which is superior to a straightforward implementation of the explicit form (6).) As noted above, it has so far [4, 1, 5] only been shown that this solution is valid for the logarithmic and other RBPSRs which have convex expectations. In the following sections we show that this solution is also optimal for all other RBPSRs.
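The closed-form solution can also be evaluated by brute force. The sketch below (ours) assumes the standard max-min characterization of the monotonic non-decreasing fit, with each output equal to the maximum over starting indices of the minimum over ending indices of the weighted target proportion of the enclosed subsequence; it is meant to mirror (6) and (7) in spirit, not to reproduce their exact notation, and the function name is ours.

```python
# Deliberately naive evaluation of the max-min closed form (assumed form).
import numpy as np

def pav_closed_form(labels, w_target=1.0, w_nontarget=1.0):
    """labels: booleans (True = target), ordered by increasing score."""
    labels = np.asarray(labels, dtype=bool)
    N = len(labels)
    p = np.zeros(N)
    for i in range(N):
        best = -np.inf
        for k in range(i + 1):                   # subsequence start k <= i
            worst = np.inf
            for j in range(i, N):                # subsequence end j >= i
                n1 = w_target * labels[k:j + 1].sum()          # weighted target count
                n2 = w_nontarget * (~labels[k:j + 1]).sum()    # weighted non-target count
                worst = min(worst, n1 / (n1 + n2))
            best = max(best, worst)
        p[i] = best
    return p

print(pav_closed_form([False, True, False, True, True]))   # -> [0.  0.5 0.5 1.  1. ]
```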

4.2 Decomposition into subproblems

We need to consider subsequences of the full index sequence: for any pair of indices $i \le j$, we consider the subsequence which starts at index $i$ and ends at index $j$. We may compute a partial objective function over such a subsequence as:

(8)

where the sum runs over the indices of the subsequence. We can now define the subproblem on this subsequence as the problem of minimizing this partial objective, simultaneously for every RBPSR, and subject to the monotonicity constraint restricted to the subsequence. In what follows, we shall use the following notational conventions:

The subproblem on the full sequence, from index 1 to index $N$, is equivalent to the original problem.

We shall call a subproblem solution feasible when the monotonicity constraint is met and non-feasible otherwise.

By subproblem solution we mean just a sequence of probabilities, one per index of the subsequence, feasible or not.

Since any subproblem is isomorphic to the original problem, corollary 4.1 also shows that if it has a feasible minimizing solution for every RBPSR, then that solution must be unique. (The object of this whole exercise is to prove that the optimal solution exists for every subproblem and is given by the PAV algorithm, but until we have proved this, we cannot assume that the optimal solution exists for every subproblem.) Hence, by the optimal subproblem solution, we mean the unique feasible solution that minimizes the partial objective, for every RBPSR.

By a partitioning of the problem into a set of adjacent, non-overlapping subproblems, we mean that every index occurs in exactly one of the subproblems, so that:

(9)

Our first important step is to show with theorem 4.2, proved via the two lemmas below, how the optimal total solution may be constituted from optimal subproblem solutions.

Lemma. For a given RBPSR and for a given partitioning of the whole problem into subproblems, let:

a feasible solution to the whole problem, with minimum total objective, be given; and

for every subproblem, a feasible subproblem solution with minimum partial objective be given; and

let the concatenation of all these subproblem solutions, in order, form a (not necessarily feasible) solution to the whole problem; then

(10)

Proof. Follows by recalling (9) and by noting that, for every subproblem, the minimum partial objective is at most the partial objective of the corresponding segment of the whole-problem solution, because that segment is feasible for the subproblem (and, except at the ends of the whole sequence, is even subject to extra constraints at the subproblem boundaries).

Lemma. For a given RBPSR and for a given partitioning of the whole problem into subproblems, let one sequence be a feasible solution to the whole problem with minimum total objective, and let another be any feasible solution to the whole problem, with its own total objective. Then

(11)

Proof. Follows directly from (9) and the premise.

Theorem 4.2. Let a feasible solution for the whole problem be given, together with a partitioning into subproblems, such that for every subproblem the corresponding segment of this solution is the optimal solution to that subproblem; then this solution is the optimal solution to the whole problem.

Proof. The premises make the two lemmas above applicable, for every RBPSR. Since both inequalities (10) and (11) are satisfied, they hold with equality, so that the given solution attains the minimum total objective for each RBPSR. Hence it is optimal for every RBPSR and is, by corollary 4.1, the unique optimal solution.

4.3 Constant subproblem solutions

In what follows, constant subproblem solutions will be of central importance. A solution is constant if all of its components are equal to some common value. In this case, we use a short-hand notation for the subproblem objective, regarded as a function of the common value, and this may be expressed as:

(12)

where the two counts appearing here are the numbers of target and non-target labels in the subsequence. Note:

A constant subproblem solution is always feasible.

If it exists, the optimal solution to an arbitrary subproblem may or may not be constant. Whether optimal or not, it is important to examine the behaviour of subproblem solutions that are constrained to be constant. This behaviour is governed by the quasiconvex properties of the constant-solution objective, as summarized in the following lemma. (A real-valued function defined on a real interval is quasiconvex if every sublevel set of the function is convex, i.e. a real interval [3]. Lemma 12 shows that the constant-solution objective is quasiconvex.)

Lemma 12. Let $\sigma$ denote the weighted proportion of target labels in the subsequence, that is, the weighted count of target labels divided by the total weighted count of target and non-target labels, and consider the objective for the constant subproblem solution as a function of the constant value; then the following properties hold, where the cost function is any RBPSR, and where we also note the specialization for strict RBPSRs:

1. To the left of $\sigma$, the constant-solution objective is non-increasing in the constant value;

strict case: it is strictly decreasing there.

2. To the right of $\sigma$, the constant-solution objective is non-decreasing in the constant value;

strict case: it is strictly increasing there.

3. The constant-solution objective attains a minimum at the constant value $\sigma$;

strict case: $\sigma$ is the unique minimum.

(This is the salient property of binary proper scoring rules, which was mentioned above.)

Proof. For convenience in this proof, we drop the subsequence subscripts. The expected value of the RBPSR cost, with respect to a probability assigned to the target class, regarded as a function of the cost's probability argument, is:

(13)

Clearly, if the above properties hold for this expectation (with the probability taken to be $\sigma$), then they will also hold for the constant-solution objective, which is proportional to it. We prove these properties by examining the sign of the difference in expectation between a smaller and a larger value of the argument: if the two values coincide, the difference is zero; if they differ, then (1) gives:

(14)

The non-strict versions of properties 1, 2 and 3 now follow from the following observation: since the weighting function in (1) is non-negative, the sign of the integrand, and therefore of the difference, depends solely on which side of the expectation's probability the two argument values lie, giving:

the difference is non-negative if both argument values lie at or above that probability;

the difference is non-positive if both argument values lie at or below it.

If, more specifically, the weighting function is strictly positive almost everywhere, then the difference is non-zero whenever the two argument values differ. In this case, the RBPSR is called strict and the above two statements hold with strict inequalities, which concludes the proof also for the strict cases.
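The following numerical check (ours) illustrates lemma 12 for the Brier rule, under the assumption that the constant-solution objective (12) takes the weighted form used in the code; the counts, weights and names are illustrative.

```python
# Check: A(q) = w1*n1*C(q|target) + w2*n2*C(q|non-target) is quasiconvex,
# with its minimum at sigma = w1*n1 / (w1*n1 + w2*n2)   (assumed form of (12)).
import numpy as np

def brier(q, is_target):
    return (1.0 - q) ** 2 if is_target else q ** 2

def constant_objective(q, n1, n2, w1, w2, rule):
    return w1 * n1 * rule(q, True) + w2 * n2 * rule(q, False)

n1, n2, w1, w2 = 3, 7, 2.0, 1.0                      # illustrative counts and weights
sigma = w1 * n1 / (w1 * n1 + w2 * n2)
qs = np.linspace(0.0, 1.0, 1001)
A = np.array([constant_objective(q, n1, n2, w1, w2, brier) for q in qs])

assert abs(qs[A.argmin()] - sigma) < 1e-3            # minimum at sigma
assert np.all(np.diff(A[qs <= sigma]) <= 1e-12)      # non-increasing left of sigma
assert np.all(np.diff(A[qs >= sigma]) >= -1e-12)     # non-decreasing right of sigma
print("constant-solution objective is quasiconvex, minimized at sigma =", round(sigma, 4))
```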

For now, we need only the minimum property (property 3) of lemma 12 to proceed. We use the other properties later. The optimal constant subproblem solution is characterized in the following theorem.

Theorem 4.3. If the optimal solution to a subproblem is constant, then:

1. The constant is $\sigma$, the weighted proportion of target labels in the subsequence.

2. For any index that splits the subsequence into a non-empty left part and a non-empty right part, the following are both true: (i) the weighted target proportion of the left part is at least $\sigma$; and (ii) the weighted target proportion of the right part is at most $\sigma$,

where these left and right proportions are defined in a similar way to $\sigma$, but for the left-part and right-part subproblems.

Proof. Property 1 of this theorem follows directly from the minimum property of lemma 12. To prove property 2, we use contradiction: if the negation of 2(i) were true, namely that the left-part proportion were smaller than $\sigma$, then the non-constant solution which assigns that smaller proportion to the left part and $\sigma$ to the right part would be feasible and, by lemma 12, would have a lower objective, for any strict RBPSR, than that of the constant solution. This contradicts the premise that the optimal solution is constant, so that 2(i) must be true. Property 2(ii) is proved by a similar contradiction.

4.4 Pooling adjacent constant solutions

This section shows (using two pooling lemmas to prove theorem 4.4) when and how optimal constant subproblem solutions may be assembled by pooling smaller adjacent constant solutions.

First pooling lemma. Given a subproblem for which the optimal solution is constant (at its weighted target proportion $\sigma$), we can form an augmented subproblem by adding the constraint that the solution may not exceed a given bound which is at most $\sigma$. Then the augmented subproblem is optimized, for every RBPSR, by the constant solution at that bound.

Proof. Feasible solutions to the augmented subproblem either (i) are constant at the bound, or (ii) are not. We need to show that there is no feasible solution of type (ii) which has a lower objective value, for any RBPSR, than solution (i).

For a given solution, consider any index at which the solution steps from one constant value up to a larger one. By combining the premises of this lemma with property 2(i) of theorem 4.3, we find that the weighted target proportion of every prefix of the subsequence is at least $\sigma$, and hence at least the bound. The monotonicity property of lemma 12 then shows that the value of the solution on such a prefix, to be optimal for all RBPSRs, must be as large as allowed by the constraints. Starting from the end of the subsequence, the value there is optimized at the bound itself; the preceding segment is then optimized at the next constraint, namely equality with the segment that follows it; and continuing in this way, we find the optimum for the augmented subproblem at the constant solution equal to the bound.

Second pooling lemma. Given a subproblem for which the optimal solution is constant (at its weighted target proportion $\sigma$), we can form an augmented subproblem by adding the constraint that the solution may not fall below a given bound which is at least $\sigma$. Then the augmented subproblem is optimized, for every RBPSR, by the constant solution at that bound.

Proof. The proof is similar to that of the first pooling lemma, but here we invoke property 2(ii) of theorem 4.3, to find that the weighted target proportion of every suffix of the subsequence is at most the bound, and we use the monotonicity property of lemma 12 to show that the values of the solution, to be optimal for all RBPSRs, must be as small as allowed by the constraints.

Theorem 4.4. Given two adjacent subproblems for which the optimal subproblem solutions are both constant and therefore (by theorem 4.3) equal to their respective weighted target proportions, then, whenever the left proportion is greater than or equal to the right proportion, the optimal solution for the pooled subproblem is also constant, with value equal to the weighted target proportion of the pooled subsequence.

Proof. First consider the case where the two constants are equal. Since their concatenation forms a constant, feasible solution to the pooled subproblem which is optimal on each part, it is the optimal solution of the pooled subproblem, and by theorem 4.3 its value is the pooled weighted target proportion.

Next consider the case where the left constant is strictly greater than the right constant. The concatenation of the two optimal constant solutions is then not feasible. A feasible solution must keep all values of the left part at or below some junction value, and all values of the right part at or above it. There are three possibilities for this junction value: (i) it is at most the right constant; (ii) it lies between the right and left constants; or (iii) it is at least the left constant. We examine each in turn:

In case (i), the left subproblem is augmented by an upper bound below its optimum, so that the first pooling lemma applies and the left part is optimized at the constant solution equal to the junction value, while the right subproblem is not further constrained and is still optimized at its own constant. We can now optimize the total solution by adjusting the junction value: by the monotonicity property of lemma 12, the left subproblem objective, and therefore also the total objective, is optimized at the upper boundary of this case, where the junction value equals the right constant. In other words, in this case, the optimum for the pooled subproblem is a constant solution.

In case (ii), the first pooling lemma applies to the left subproblem and the second pooling lemma applies to the right subproblem, so that both subproblems, and therefore also the total objective, are optimized at the constant junction value. In this case also we have a constant solution for the pooled subproblem.

In case (iii), the right subproblem is augmented while the left subproblem is not further constrained. We can now use the second pooling lemma and the monotonicity property of lemma 12, in a similar way to case (i), to show that in this case also the optimum solution is constant. Since the three cases exhaust the possibilities for the junction value, the optimal solution is indeed constant and, by theorem 4.3, its value is the weighted target proportion of the pooled subsequence.

4.5 The PAV algorithm

We can now use theorems 4.2, 4.3 and 4.4 to construct a proof that a version of the pool-adjacent-violators (PAV) algorithm solves the whole problem.

Theorem. The PAV algorithm solves the problem stated in §2.

Proof. The proof is constructive. The strategy is to satisfy the conditions of theorem 4.2 by starting with optimal constant subproblem solutions of length 1 and then iteratively combining them, via theorem 4.4, into longer optimal constant solutions until the total solution is feasible. The algorithm proceeds as follows:

input:

labels, $\theta_1, \theta_2, \dots, \theta_N$.

weights, one per index, as defined in §2.

variables:

a partitioning of the whole problem into adjacent, non-overlapping subproblems.

a tentative (not necessarily feasible) solution, $p_1, p_2, \dots, p_N$, for the whole problem.

loop invariant: For every subproblem in the partitioning:

The optimal subproblem solution is constant.

The partial solution is equal to the optimal subproblem solution, i.e. constant, with value equal to the weighted target proportion of the subproblem (by theorem 4.3).

initialization: Let the partitioning be the finest one, so that there are $N$ subproblems, each spanning a single index. Clearly every such subproblem has a constant solution, optimized at its weighted target proportion, which is 1 if the label is a target and 0 if it is a non-target. This initial solution respects the loop invariant, but is most probably not feasible.

iteration: While the tentative solution is not feasible:

Find any pair of adjacent subproblems for which the solutions are equal or violate monotonicity, i.e. the value on the left is greater than or equal to the value on the right.

Pool this pair into one subproblem, by adjusting the partitioning and by assigning to the pooled subproblem the constant solution equal to its weighted target proportion, which by theorem 4.4 is optimal for it, thus maintaining the loop invariant.

termination: Clearly the iteration must terminate after at most $N-1$ pooling steps, at which time the tentative solution is feasible and is still optimal for every subproblem. By theorem 4.2, it is then the unique optimal solution to the whole problem.
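A minimal implementation sketch (ours) of the iteration just described: blocks are initialized one per index and pooled, scanning left to right, whenever adjacent block values are equal or violate monotonicity, each pooled block taking its weighted target proportion as its constant value.

```python
# Sketch of the weighted PAV iteration described above.
def pav(labels, w_target=1.0, w_nontarget=1.0):
    """labels: booleans (True = target), ordered by increasing score.
    Returns the monotone sequence of calibrated probabilities."""
    blocks = []   # each block: [weighted target count, total weighted count, length]
    for is_tar in labels:
        w = w_target if is_tar else w_nontarget
        blocks.append([w if is_tar else 0.0, w, 1])
        # pool while the last two blocks are equal or violate monotonicity
        while len(blocks) > 1:
            t1, s1, len1 = blocks[-2]
            t2, s2, len2 = blocks[-1]
            if t1 / s1 >= t2 / s2:                       # left value >= right value
                blocks[-2:] = [[t1 + t2, s1 + s2, len1 + len2]]
            else:
                break
    out = []
    for t, s, length in blocks:
        out.extend([t / s] * length)                     # constant = weighted proportion
    return out

print(pav([False, True, False, True, True]))             # -> [0.0, 0.5, 0.5, 1.0, 1.0]
```

This left-to-right variant pools greedily as each new index arrives; since the optimal solution is unique, it computes the same result as pooling adjacent violators in any order.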

5 The PAV-LLR algorithm

The PAV algorithm as presented above finds solutions in the form of probabilities. Here we show how to use it to find solutions in terms of log-likelihood-ratios. It will be convenient here to express Bayes' rule in terms of the logit function, $\operatorname{logit}(p) = \log\frac{p}{1-p}$. Note that logit is a monotonic rising bijection between the unit interval and the extended real line. Its inverse is the sigmoid function, $\operatorname{sigmoid}(x) = \frac{1}{1+e^{-x}}$. Bayes' rule is now [19]:

$\operatorname{logit} P(\mathrm{target} \mid s) \;=\; \ell + \operatorname{logit}\pi \qquad (15)$

where the LHS is the posterior log-odds, $\ell$ is the log-likelihood-ratio, and $\operatorname{logit}\pi$ is the prior log-odds.
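In code form, the logit and sigmoid functions and Bayes' rule (15) look as follows (a sketch in our own notation; the function names are ours):

```python
# logit / sigmoid and Bayes' rule in log-odds form:
# posterior log-odds = log-likelihood-ratio + prior log-odds.
import math

def logit(p):
    return math.log(p / (1.0 - p))

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def posterior_from_llr(llr, prior_target):
    """Apply (15): logit(posterior) = llr + logit(prior), then invert with sigmoid."""
    return sigmoid(llr + logit(prior_target))

# Example: a log-likelihood-ratio of 2 nats for the target, with prior 0.1.
print(round(posterior_from_llr(2.0, 0.1), 3))   # -> about 0.451
```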

The problem that is solved by the PAV-LLR algorithm can now be described as follows:

We are given:

Labels, $\theta_1, \dots, \theta_N$. We denote by $N_1$ and $N_2$ the respective numbers of target and non-target labels in this sequence, so that $N_1 + N_2 = N$.

A prior log-odds value, determined by a target prior $\pi$, with $0 < \pi < 1$. This determines a prior probability distribution for the two classes, namely $(\pi, 1-\pi)$, which may be different from the label proportions $(N_1/N, N_2/N)$.

An RBPSR.

Required is a solution in log-likelihood-ratio form, $\ell_1, \ell_2, \dots, \ell_N$, which minimizes the following objective:

(16)
(17)
(18)
(19)

(The weights are chosen thus to cancel the influence of the proportions of label types, and to re-weight the optimization objective with the given prior probabilities for the two classes, but we show below that this re-weighting is irrelevant when optimizing with PAV. This kind of class-conditional weighting has been used in several formal evaluations of the technologies of automatic speaker recognition and automatic language recognition, to weight the error-rates of hard recognition decisions [20, 22] and more recently to also weight logarithmic proper scoring of recognition outputs in log-likelihood-ratio form [7, 27, 23].)

The minimization is subject to the monotonicity constraint:

$\ell_1 \le \ell_2 \le \dots \le \ell_N \qquad (20)$

which, by the monotonicity of (15) and of the logit transformation, is equivalent to (3). This problem is solved by first finding the probabilities via the PAV algorithm and then inverting (17) to find the log-likelihood-ratios. We already know that the solution is independent of the RBPSR, but, remarkably, it is also independent of the prior $\pi$. This is shown in the following theorem.

Theorem. Let the probabilities be given by (6); then the problem of minimizing objective (16), subject to the monotonicity constraint (20), has the unique solution:

(21)

This solution is simultaneously optimal for every RBPSR and for any prior log-odds.

Proof. By the properties of the PAV as proved in §4.5, and since logit is a strictly monotonic rising bijection, it is clear that for all RBPSRs and for a given prior, this minimization is solved as

(22)

where the prior determines the weights via (18) and (19). By corollary 4.1, we can write each component of this solution in closed form:

(23)

Now observe that:

(24)

which shows that the solution is independent of $\pi$. Now the prior may be conveniently chosen to equal the label proportion, $\pi = N_1/N$, to give an un-weighted PAV, with equal weights for the two classes.
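Putting the pieces together, the prior-independence result suggests the following PAV-LLR recipe, sketched here in our own notation: run an unweighted PAV on the sorted labels (e.g. with the sketch in §4.5) and subtract the logit of the label proportion from the logit of each PAV output; endpoint outputs of 0 or 1 map to infinite log-likelihood-ratios. Treat this as an assumption-laden illustration rather than the paper's exact prescription.

```python
# Sketch: convert unweighted-PAV posteriors to log-likelihood-ratios by
# removing the 'prior' implied by the training label proportion.
import math

def llr_from_pav_posterior(p, target_proportion):
    if p <= 0.0:
        return -math.inf
    if p >= 1.0:
        return math.inf
    logit = lambda q: math.log(q / (1.0 - q))
    return logit(p) - logit(target_proportion)

# Example: unweighted PAV output for labels F,T,F,T,T (3 targets out of 5 trials).
posteriors = [0.0, 0.5, 0.5, 1.0, 1.0]
print([llr_from_pav_posterior(p, 3 / 5) for p in posteriors])
# -> [-inf, -0.405..., -0.405..., inf, inf]
```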

6 Discussion

We have shown that the problem of monotonic, non-parametric calibration of binary pattern recognition scores is optimally solved by PAV, for all regular binary proper scoring rules. This is true for calibration in posterior probability form and also in log-likelihood-ratio form.

We conclude by addressing some concerns that readers may have about whether the optimization problem solved here is actually useful in real pattern recognition practice, where a calibration transform is trained in a supervised way (as here) on some training data, but is then utilized later on new unsupervised data.

The first concern we address is about the non-parametric nature of the PAV mapping: for general real scores there will be new, unmapped score values. An obvious solution is to map new values by interpolating between the (input, output) pairs in the PAV solution, and this was indeed done in several of the references cited in this paper (see e.g. [30] for an interpolation algorithm).
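A minimal sketch (ours) of this interpolation idea; the linear interpolation and the clipping outside the training score range are our choices, not a prescription from [30]:

```python
# Sketch: turn the supervised (score, PAV output) pairs into a calibration map
# for new scores, using linear interpolation with clipping at the ends.
import numpy as np

def make_calibration_map(sorted_scores, pav_outputs):
    """sorted_scores: training scores in increasing order;
    pav_outputs: the corresponding PAV probabilities, in the same order."""
    xs = np.asarray(sorted_scores, dtype=float)
    ys = np.asarray(pav_outputs, dtype=float)
    return lambda new_scores: np.interp(new_scores, xs, ys)

# Hypothetical training scores and their PAV outputs:
cal = make_calibration_map([-2.0, -0.5, 0.1, 1.3, 2.2], [0.0, 0.5, 0.5, 1.0, 1.0])
print(cal([-3.0, 0.0, 5.0]))   # -> [0.  0.5 1. ]
```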

Another concern is that the PAV mapping from scores to calibrated outputs has flat regions (all those constant subproblem solutions) and is therefore not an invertible transformation. Invertible transformations are information-preserving, but non-invertible transformations may lose some of the relevant information contained in the input score. This concern is answered by noting that expectations of proper scoring rules are generalized information measures [12, 11] and that in particular the expectation of the logarithmic scoring rule is equivalent to Shannon’s cross-entropy information measure [10]. So by optimizing proper scoring rules, we are indeed optimizing the information relevant to discriminating between the two classes. Also note that a strictly monotonic (i.e. invertible) transformation can be formed by adding an arbitrarily small strictly monotonic perturbation to the PAV solution. The PAV solution can be viewed as the argument of the infimum of the RBPSR objective, over all strictly rising monotonic transformations.

In our own work on calibration of speaker recognition log-likelihood-ratios [8], we have chosen to use strictly monotonic rising parametric calibration transformations, rather than PAV. However, we then do use the PAV calibration transformation in the supporting role of evaluating how well our parametric calibration strategies work. In this role, the PAV forms a well-defined reference against which other calibration strategies can be compared, since it is the best possible monotonic transformation that can be found on a given set of supervised evaluation data. It is in this evaluation role that we consider the optimality properties of the PAV to be particularly important.

For details on how we employ PAV as an evaluation tool, see [7, 21]. (Our PAV-based evaluation tools are available as a free MATLAB toolkit at http://www.dsp.sun.ac.za/~nbrummer/focal/.)

Acknowledgments

We wish to thank Daniel Ramos for hours of discussing PAV and calibration, and without whose enthusiastic support this paper would not have been written.

Appendix A Note on RBPSR family

Some notes follow, to place our definition of the RBPSR family, as defined in §1.2, in the context of previous work. Our regularity condition (i), directly below (1), is adapted from [11, 16]. General families of binary proper scoring rules have been represented in a variety of ways (see [16] and references therein), including integral representations that are very similar (but not identical in form) to our (1); see for example [13] and [9, 16], where closely related forms were used, and equivalence to (1) is established by a suitable re-parametrization of the weighting function. The advantage of the form (1) which we adopt here is that the weighting function is always in the form of a normalized probability density, which gives the natural interpretation of expectation to these integrals.

The reader may notice that it is easy (e.g. by applying an affine transform to (1)) to find a binary proper scoring rule which satisfies the properties of lemma 12, but which is not in the family defined by (1). There are however equivalence classes of proper scoring rules, where the members of a class are all equivalent for making minimum-expected-cost Bayes decisions [12, 11]. Elimination of this redundancy allows normalization of arbitrary proper scoring rules in such a way that the family (1) becomes representative for the members of these equivalence classes [7].

References

  • [1] R.K. Ahuja and J.B. Orlin, “Solving the Convex Ordered Set Problem with Applications to Isotone Regression”, Sloan School of Management, MIT, SWP#3988, February 1998, retrieved online from http://www.mit.edu/bitstream.
  • [2] R.K. Ahuja and J.B. Orlin, “A fast scaling algorithm for minimizing separable convex functions subject to chain constraints,” Operations Research, 49, 2001, pp. 784–789.
  • [3] M. Avriel, W.E. Diewert, S. Schaible and I. Zang, Generalized Concavity, Plenum Press, 1988.
  • [4] Miriam Ayer, H.D. Brunk, G.M. Ewing, W.T. Reid and Edward Silverman, “An Empirical Distribution Function for Sampling with Incomplete Information”, Ann. Math. Statist. Volume 26, Number 4, 1955, pp.641–647.
  • [5] M.J. Best et al., “Minimizing Separable Convex Functions Subject to Simple Chain Constraints”, SIAM J. Optim., Vol. 10, No. 3, pp. 658–672, 2000.
  • [6] G.W. Brier, “Verification of forecasts expressed in terms of probability.”, Monthly Weather Review, 78, 1950, pp.1–3.
  • [7] N. Brümmer and J.A. du Preez, “Application-independent evaluation of speaker detection”, Computer Speech & Language, Volume 20, Issues 2-3, April–July 2006, pp.230–275.
  • [8] N. Brümmer et al., “Fusion of heterogeneous speaker recognition systems in the STBU submission for the NIST speaker recognition evaluation 2006”, IEEE Transactions on Audio, Speech, and Language Processing, vol.15, no.7, 2007, pp.2072–2084.
  • [9] A. Buja, W. Stuetzle, Yi Shen, “Loss Functions for Binary Class Probability Estimation and Classification: Structure and Applications”, 2005, online at www.wharton.upenn.edu/buja.
  • [10] T.M. Cover and J.A. Thomas, Elements of information theory, 1st Edition. New York: Wiley-Interscience, 1991.
  • [11] A.P. Dawid, “Coherent Measures of Discrepancy, Uncertainty and Dependence, with Applications to Bayesian Predictive Experimental Design”, Technical Report, online at http://www.ucl.ac.uk/Stats/research/Resrprts/abs94.html#139, 1998.
  • [12] M.H. DeGroot, Optimal Statistical Decisions. New York: McGraw-Hill, 1970.
  • [13] M.H. DeGroot and S. Fienberg, “The Comparison and Evaluation of Forecasters”, The Statistician 32, 1983.
  • [14] G. Doddington, “Speaker recognition—a research and technology forecast”, in Proceedings Odyssey 2004: The ISCA Speaker and Language Recognition Workshop, Toledo, 2004.
  • [15] T. Fawcett and A. Niculescu-Mizil, “PAV and the ROC Convex Hull”, Machine Learning, Volume 68, Issue 1, July 2007, pp. 97–106.
  • [16] T. Gneiting and A.E. Raftery, “Strictly Proper Scoring Rules, Prediction, and Estimation”, Journal of the American Statistical Association, Volume 102, Number 477, March 2007 , pp. 359–378.
  • [17] J. Gonzalez-Rodriguez, P. Rose, D. Ramos, D. T. Toledano and J.Ortega-Garcia, “Emulating DNA: Rigorous Quantification of Evidential Weight in Transparent and Testable Forensic Speaker Recognition”, IEEE Transactions on Audio, Speech and Language Processing, Vol. 15, no.7, September 2007, pp. 2104–2115.
  • [18] I.J. Good, “Rational Decisions”, Journal of the Royal Statistical Society, 14, 1952, pp.107–114.
  • [19] E.T. Jaynes, Probability Theory: The Logic of Science, Cambridge University Press, 2003.
  • [20] D.A. van Leeuwen, A.F. Martin, M.A. Przybocki and J.S. Bouten, “NIST and NFI-TNO evaluations of automatic speaker recognition”, Computer Speech and Language, Volume 20, Numbers 2–3, April–July 2006, pp. 128–158.
  • [21] D.A. van Leeuwen and N. Brümmer, “An Introduction to Application-Independent Evaluation of Speaker Recognition Systems”, in Christian Müller (Ed.): Speaker Classification I: Fundamentals, Features, and Methods. Lecture Notes in Computer Science 4343, Springer 2007, pp.330–353.
  • [22] A.F. Martin and A.N. Le, “The Current State of Language Recognition: NIST 2005 Evaluation Results”, in Proceedings of IEEE Odyssey 2006: The Speaker and Language Recognition Workshop, June 2006.
  • [23] A.F. Martin and A.N. Le, “NIST 2007 Language Recognition Evaluation”, to appear Proceedings of Odyssey 2008: The Speaker and Language Recognition Workshop, January 2008.
  • [24] A. Niculescu-Mizil and R. Caruana, “Predicting Good Probabilities With Supervised Learning”, in Proceedings of the 22nd International Conference on Machine Learning, Bonn, Germany, 2005.

  • [25] P.M. Pardalos and G. Xue, “Algorithms for a class of isotonic regression problems”, Algorithmica, 23, 1999, pp.211–222.
  • [26] J. Platt, “Probabilistic outputs for support vector machines and comparison to regularized likelihood methods”, in Advances in Large Margin Classifiers, A. Smola, P. Bartlett, B. Schölkopf, D. Schuurmans, eds., MIT Press, 1999, pp.61–74.
  • [27] M.A. Przybocki and A.N. Le, “NIST Speaker Recognition Evaluation Chronicles—Part 2”, in Proceedings of IEEE Odyssey 2006: The Speaker and Language Recognition Workshop, June 2006.
  • [28] V.A. Ubhaya, “An O(n) algorithm for least squares quasi-convex approximation”, Computers & Mathematics with Applications, Volume 14, Issue 8, 1987, pp.583–590.
  • [29] A. Wald, Statistical Decision Functions. Wiley, New York, 1950.
  • [30] W.J. Wilbur, L. Yeganova and Won Kim, “The Synergy Between PAV and AdaBoost”, Machine Learning, Volume 61, Issue 1–3, November 2005, pp.71–103.
  • [31] B. Zadrozny and C. Elkan, “Transforming classifier scores into accurate multiclass probability estimates”, In: Proceedings of the Eighth International Conference on Knowledge Discovery and Data Mining (KDD 02), 2002.