
Conformal Prediction: a Unified Review of Theory and New Challenges

In this work we provide a review of basic ideas and novel developments about Conformal Prediction — an innovative distribution-free, non-parametric forecasting method, based on minimal assumptions — that is able to yield, in a very straightforward way, prediction sets that are valid in a statistical sense also in the finite sample case. The in-depth discussion provided in the paper covers the theoretical underpinnings of Conformal Prediction, and then proceeds to list the more advanced developments and adaptations of the original idea.


1 Introduction

At the beginning of the third millennium, a new method of prediction with confidence, called Conformal Prediction (CP), was introduced and developed. It allows one to produce prediction sets with a guaranteed error rate, under the sole assumption that the sample is i.i.d. Reliable estimation of prediction confidence is a significant challenge in both machine learning and statistics, and the promising results generated by CP have prompted further extensions of the original conformal framework. The increasing number of real-world problems where robust predictions are needed has yielded a plethora of new articles where CP is used.

In a nutshell, conformal prediction uses past experience in order to determine precise levels of confidence in new predictions. Using Gammerman et al. (1998)’s words in the very first work on the topic, it is “a practical measure of the evidence found in support of that prediction”. In order to do this, it estimates how “unusual” a potential example looks with respect to the previous ones. Prediction regions are generated simply by including the examples that look quite ordinary, or rather those that are not too unlikely. Conformal algorithms are proven to be always valid: the actual confidence level is the nominal one, without requiring any specific assumption on the distribution of the data except for the i.i.d. assumption. There are many conformal predictors for any particular prediction problem, whether it is a classification problem or a regression problem. Indeed, we can construct a conformal predictor from any method for scoring the similarity (conformity, as it is called) of a new example with respect to the old ones. For this reason, it can be used with any statistical and machine learning algorithm. Its efficient performance explains the growing interest in the topic over the last few years.

The milestone in the related literature is the book entitled Algorithmic learning in a random world, written by Vovk et al. (2005) and published in 2005. It explains thoroughly all the theoretical fundamentals. There is only one other work giving an overview of the topic, a more recent one: the book Conformal prediction for reliable machine learning, by Balasubramanian et al. (2014). The mentioned book addresses primarily applied researchers, showing them the practical results that can be achieved and allowing them to embrace the possibilities CP is able to give. Therefore, its focus is almost entirely on adaptations of conformal methods and the connected real-world applications.

In recent years, an extensive research effort has been pursued with the aim of extending the framework, and several novel findings have been made. We have no knowledge of in-depth publications that aim to capture these developments and to give a picture of recent theoretical breakthroughs. Moreover, the extensive literature that has developed is plagued by notational inconsistencies. The need for such an up-to-date review is then evident, and the aim of this work is to address this need for comprehensiveness and homogeneity.

As in recent papers, our discussion is focused on CP in the batch mode. Nonetheless, properties and results concerning the online setting, where it was initially proposed, are not omitted.

The paper is divided into two parts: Part 2 gives an introduction to CP for non-specialists, explaining the main algorithms and describing their scope and limitations, while Part 3 discusses more advanced methods and developments. Section 2.1 introduces comprehensively the original version of the conformal algorithm, letting the reader get familiar with the topic and the notation. In Section 2.2, a simple generalization is introduced: each example is provided with a vector of covariates, as in classification or regression problems, which are tackled in the two related subsections. Section 2.3 compares CP with alternative ways of producing confidence predictions, namely the Bayesian framework and statistical learning theory, and shows how CP is able to overcome their weak points. Moreover, we refer to an important result concerning the optimality of conformal predictors among valid predictors. Section 2.4 deals with the online framework, where examples arrive one by one and so predictions are based on an accumulating data set.

In Part 3, we focus on three important methodological themes. The first one is the concept of statistical validity: Section 3.1 is entirely devoted to this subject and introduces a class of conformal methods, namely Mondrian conformal predictors, suitable to gain partial object conditional validity. The second is computational: Section 3.2 presents a different approach to conformal prediction — inductive inference — devised to overcome the transductive nature of the basic algorithm. Even in the inductive formulation, the application of conformal prediction in the case of regression is still complicated, but there are ways to face this problem (Section 3.3). The third is the randomness assumption: conformal prediction is valid if examples are sampled independently from a fixed but unknown probability distribution. It actually works also under the slightly weaker assumption that examples are probabilistically exchangeable, and under other online compression models, such as the widely used Gaussian linear model (Section 3.4).

The last section (Section 3.5) addresses interesting directions of further development and research. We describe extensions of the framework that improve the interpretability and applicability of conformal inference. CP has been applied to a variety of real-world tasks and problems. For this reason it is not possible to refer to all of them here: the interested reader can find an exhaustive selection in Balasubramanian et al. (2014).

2 Foundations of Conformal Prediction

2.1 Conformal Predictors

We will now show how the basic version of CP works. In the basic setting, successive values $z_1, z_2, \ldots$, called examples, are observed. $\mathbf{Z}$ is a measurable space, called the examples space. We also assume that $\mathbf{Z}$ contains more than one element, and that each singleton is measurable. Before the $(n+1)$th value is announced, the training set (from a mathematical point of view, a sequence rather than a set) consists of $(z_1, \ldots, z_n)$, and our goal is to predict the new example $z_{n+1}$.

To be precise, we are concerned with a prediction algorithm that outputs a set of elements of $\mathbf{Z}$, implicitly meant to contain $z_{n+1}$. Formally, a prediction set is a measurable function $\Gamma$ that maps a sequence $(z_1, \ldots, z_n) \in \mathbf{Z}^n$ to a set $\Gamma(z_1, \ldots, z_n) \subseteq \mathbf{Z}$, where the measurability condition reads as follows: the set $\{(z_1, \ldots, z_n, z) : z \in \Gamma(z_1, \ldots, z_n)\}$ is measurable in $\mathbf{Z}^{n+1}$. A trade-off between reliability and informativeness has to be faced by the algorithm while giving the prediction sets as output. Indeed, giving the whole examples space $\mathbf{Z}$ as a prediction set is neither appealing nor useful: it is absolutely reliable but not informative.

Rather than a single set predictor, we are going to deal with nested families of set predictors depending on a parameter $\alpha \in [0,1]$, the significance level or miscoverage level, reflecting the required reliability of the prediction. The smaller $\alpha$ is, the bigger the reliability of our guess. So, the quantity $1-\alpha$ is usually called the confidence level. As a consequence, we define a confidence predictor to be a nested family of set predictors $(\Gamma^\alpha)_{\alpha \in [0,1]}$ such that, given $\alpha_1$ and $\alpha_2$ with $0 \le \alpha_1 \le \alpha_2 \le 1$,

$$\Gamma^{\alpha_1}(z_1, \ldots, z_n) \supseteq \Gamma^{\alpha_2}(z_1, \ldots, z_n). \qquad (2.1)$$

Confidence predictors from old examples alone, without knowing anything else about them, may seem relatively uninteresting. But the simplicity of the setting makes it advantageous to explain and understand the rationale of the conformal algorithm, and, as we will see, it is then straightforward to take into account also features related to the examples.

In the greatest part of the literature concerning conformal prediction, from the beginning and the very first works of Vovk et al. (1998), the symbol $\epsilon$ stands for the significance level. Nonetheless, we prefer to adopt the symbol $\alpha$, as in Lei et al. (2013), to be faithful to the statistical tradition and its classical notation. For the same reason, we want to predict the $(n+1)$th example, relying on the previous experience given by $(z_1, \ldots, z_n)$, still like Lei et al. and conversely to Vovk et al., who are interested in the $n$th value given the previous $n-1$ ones.

2.1.1 The Randomness Assumption

We will make two main kinds of assumptions about the way examples are generated. The standard assumption is the randomness one (to be clear, the usual i.i.d. assumption commonly employed in the statistical setting): the examples we observe are sampled independently from some unknown probability distribution $P$ on $\mathbf{Z}$. Equivalently, the infinite sequence $z_1, z_2, \ldots$ is drawn from the power probability distribution $P^\infty$ in $\mathbf{Z}^\infty$.

Under the exchangeability assumption, instead, the sequence $(z_1, \ldots, z_n)$ is generated from a probability distribution that is exchangeable: for any permutation $\pi$ of the set $\{1, \ldots, n\}$, the joint probability distribution of the permuted sequence $(z_{\pi(1)}, \ldots, z_{\pi(n)})$ is the same as the distribution of the original sequence. Equivalently, the $n!$ different orderings are equally likely. It is possible to extend the definition of exchangeability to the case of an infinite sequence of variables: $z_1, z_2, \ldots$ are exchangeable if $z_1, \ldots, z_n$ are exchangeable for every $n$.

Exchangeability implies that the variables have the same distribution. On the other hand, exchangeable variables need not be independent: successive draws from an urn without replacement, for instance, are exchangeable but dependent. It is immediately evident that the exchangeability assumption is much weaker than the randomness one. As we will see in Section 2.4, in the online setting the difference between the two assumptions almost disappears. For further discussion about exchangeability, including various definitions, a game-theoretic approach and a law of large numbers, refer to Section 3 of Shafer and Vovk (2008).

The randomness assumption is a standard assumption in machine learning. Conformal prediction, however, usually requires only the sequence $(z_1, \ldots, z_{n+1})$ to be exchangeable. In addition, other models which do not require exchangeability can also use conformal prediction (Section 3.4).

2.1.2 Bags and Nonconformity Measures

First, the concept of a nonconformity (or strangeness) measure has to be introduced. In a few words, it estimates how unusual an example looks with respect to the previous ones. The order in which the old examples $(z_1, \ldots, z_n)$ appear should not make any difference. To underline this point, we will use the term bag (in short, $B$) and the notation $\lbag z_1, \ldots, z_n \rbag$. A bag is defined exactly as a multiset. Therefore, $\lbag z_1, \ldots, z_n \rbag$ is the bag we get from $(z_1, \ldots, z_n)$ when we ignore which value comes first, which second, and so on.

As mentioned, a nonconformity measure $A(B, z)$ is a way of scoring how different an example $z$ is from a bag $B$. There is not just one nonconformity measure. For instance, once the sequence of old examples is at hand, a natural choice is to take their average $\bar{z}_n$ as the simple predictor of the new example, and then compute the nonconformity score as the absolute value of the difference from the average, $A(B, z) = |z - \bar{z}_n|$. In more general terms, the distance from the central tendency of the bag might be considered. As pointed out in Vovk et al. (2005), whether a particular function is an appropriate way of measuring nonconformity will always be open to discussion, as it greatly depends on contextual factors.

We have previously remarked that $\alpha$ represents our miscoverage level. Now, for a given nonconformity measure $A$, we set $R_i$ to stand for the nonconformity score of the $i$th example — where $R$ recalls the word “residual”. On the contrary, most of the literature uses $\alpha_i$ and $\epsilon$, respectively. We still prefer Lei et al.’s notation.

Instead of a nonconformity measure, a conformity one might be chosen. The line of reasoning does not change at all: we could compute the conformity scores and return to the first framework simply by changing the sign, or by taking the reciprocal. However, conformity measures are not a common choice.

2.1.3 Conformal Prediction

The idea behind conformal methods is extremely simple. Consider $n$ i.i.d. (or even exchangeable) observations of a scalar random variable, let's say $z_1, \ldots, z_n$. The rank of another i.i.d. observation $z_{n+1}$ among $z_1, \ldots, z_{n+1}$ is uniformly distributed over the set $\{1, \ldots, n+1\}$ due to exchangeability.

Back to the nonconformity framework, under the assumption that the $z_i$ are exchangeable, we define, for a given $z \in \mathbf{Z}$:

$$\delta_z := \frac{\#\{i = 1, \ldots, n+1 : R_i \ge R_{n+1}\}}{n+1}, \qquad (2.2)$$

where

$$R_i := A(\lbag z_1, \ldots, z_n, z \rbag \setminus \lbag z_i \rbag,\; z_i), \quad i = 1, \ldots, n, \qquad (2.3)$$

and

$$R_{n+1} := A(\lbag z_1, \ldots, z_n \rbag,\; z). \qquad (2.4)$$

It is straightforward to see that $\delta_z$ stands for the fraction of examples that are at least as different from all the others as $z$ is. This fraction, which lies between $1/(n+1)$ and $1$, is defined as the p-value for $z$. If $\delta_z$ is small, then $z$ is very nonconforming with respect to the past experience, represented by $\lbag z_1, \ldots, z_n \rbag$. On the contrary, if it is large, then $z$ is very conforming and likely to appear as the next observation. Hence, it is reasonable to include it in the prediction set.

As a result, we define the prediction set by including all the $z$s that conform with the previous examples. In a formula:

$$\Gamma^\alpha(z_1, \ldots, z_n) := \{z \in \mathbf{Z} : \delta_z > \alpha\}.$$

To summarize, the algorithm tells us to form a prediction region consisting of all the $z$s that are not among the fraction $\alpha$ most out of place with respect to the bag of old examples. Shafer and Vovk (2008) also give a clear interpretation of $\Gamma^\alpha$ as an application of the Neyman-Pearson theory for hypothesis testing and confidence intervals.
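To make the procedure concrete, here is a minimal sketch in Python (NumPy only). It is not prescribed by the theory: the mean-based nonconformity measure of Section 2.1.2 and the finite grid of candidate values are illustrative assumptions.

```python
import numpy as np

def conformal_pvalue(z_old, z_new):
    """p-value (2.2) for a candidate z_new, using the nonconformity
    measure A(B, z) = |z - mean(B)|, with each example scored against
    the bag of the remaining ones, as in (2.3)-(2.4)."""
    aug = np.append(z_old, z_new)
    n1 = len(aug)
    R = np.array([abs(aug[i] - np.delete(aug, i).mean()) for i in range(n1)])
    return np.sum(R >= R[-1]) / n1

def conformal_set(z_old, alpha, grid):
    """Prediction set: all candidate values whose p-value exceeds alpha."""
    return np.array([z for z in grid if conformal_pvalue(z_old, z) > alpha])

rng = np.random.default_rng(0)
z = rng.normal(size=50)
region = conformal_set(z, alpha=0.1, grid=np.linspace(-4, 4, 801))
print(region.min(), region.max())   # endpoints of the prediction region
```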

2.1.4 Validity and Efficiency

The two main indicators of how good confidence predictors behave are validity and efficiency, respectively an index of reliability and of informativeness. A set predictor is exactly valid at a significance level $\alpha$ if the probability of making an error — namely the event $z_{n+1} \notin \Gamma^\alpha(z_1, \ldots, z_n)$ — is $\alpha$, under any probability distribution on $\mathbf{Z}$. If that probability does not exceed $\alpha$, under the same conditions, the set predictor is defined as conservatively valid. If the properties hold at each significance level $\alpha$, the confidence predictor is, respectively, valid and conservatively valid. The following result, concerning conformal prediction, holds (Vovk et al., 2005):

Proposition 2.1.

Under the exchangeability assumption, the probability of error, $z_{n+1} \notin \Gamma^\alpha(z_1, \ldots, z_n)$, will not exceed $\alpha$, for any $\alpha$ and any conformal predictor $\Gamma$.

In an intuitive way, due to exchangeability, the distribution of $(z_1, \ldots, z_{n+1})$, and so the distribution of the nonconformity scores $(R_1, \ldots, R_{n+1})$, is invariant under permutations; in particular, all permutations are equiprobable. This simple concept is the bulk of the proof and the key of conformal methods.

From a practical point of view, the conservativeness of the validity is often not a real concern: especially when $n$ is large, the long-run frequency of errors is very close to $\alpha$. From a theoretical perspective, Lei et al. (2018) indeed prove, under minimal assumptions on the residuals, that conformal prediction intervals are accurate, meaning that they do not substantially over-cover. Therefore, the coverage of conformal intervals is highly concentrated around $1 - \alpha$.

A conformal predictor is always conservatively valid. Is it possible to achieve exact validity in some way? Adding a bit of randomization into the algorithm, actually, it is. The smoothed conformal predictor is defined in the same way as before, except that the p-values (2.2) are replaced by the smoothed p-values:

$$\delta_z := \frac{\#\{i : R_i > R_{n+1}\} + \tau\, \#\{i : R_i = R_{n+1}\}}{n+1}, \qquad (2.5)$$

where the tie-breaking random variable $\tau$ is uniformly distributed on $[0,1]$ ($\tau$ can be the same for all $z$s). For a smoothed conformal predictor, as wished, the probability of a prediction error is exactly $\alpha$ (Vovk et al. (2005), Proposition 2.4).
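Continuing the previous snippet, a minimal sketch of the smoothed p-value (2.5), where R is the vector of scores (R[-1] being the candidate's score) and tau the uniform tie-breaker:

```python
def smoothed_pvalue(R, tau):
    """Smoothed p-value (2.5): strict count plus a randomized
    fraction of the ties, divided by n + 1."""
    return (np.sum(R > R[-1]) + tau * np.sum(R == R[-1])) / len(R)
```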

Alongside validity, prediction algorithms should be efficient too, that is to say, the uncertainty related to predictions should be as small as possible. Validity is the priority: without it, the meaning of predictive regions is lost, and it becomes easy to achieve the best possible performance. Without restrictions, indeed, the trivial predictor that always outputs the empty set is the most efficient one. Efficiency may appear a vague notion, and in any case it can be meaningful only if we impose some restrictions on the predictors that we consider.

Among the main problems solved by machine learning and statistics we find two types: classification, when predictions deal with a small finite set of labels (often binary), and regression, when instead the real line is considered. In classification problems, two criteria for efficiency have been used most often in the literature. One criterion takes account of whether the prediction is a singleton (the ideal case), multiple (an inefficient prediction), or empty (a superefficient prediction) at a given significance level $\alpha$. Alternatively, the confidence and credibility of the prediction — which do not depend on the choice of a significance level — are considered. The former is the greatest $1-\alpha$ for which $\Gamma^\alpha$ is a single label, while the latter, helpful to avoid overconfidence when the object is unusual, is the smallest $\alpha$ for which the prediction set is empty. Vovk et al. (2016) show several other criteria, giving a detailed depiction of the framework. In regression problems, instead, the prediction set is often an interval of values, and a natural measure of efficiency of such a prediction is simply the length of the interval: the smaller it is, the better the performance.

We will be looking for the most efficient confidence predictors in the class of valid — or, equivalently, well-calibrated — confidence predictors; different notions of validity (including conditional validity, examined in Section 3.1) and different formalizations of the notion of efficiency will lead to different solutions to the problem.

2.2 Objects and Labels

In this section, we introduce a generalization of the basic CP setting. A sequence of successive examples is still observed, but each example now consists of an object $x_i$ and its label $y_i$, i.e. $z_i = (x_i, y_i)$. The objects are elements of a measurable space $\mathbf{X}$ called the object space, and the labels of a measurable space $\mathbf{Y}$ called the label space (both in the classification and the regression contexts). As before, we take for granted that these spaces contain more than one element. In a more compact way, let $z_i$ stand for $(x_i, y_i)$, and $\mathbf{Z} := \mathbf{X} \times \mathbf{Y}$ be the example space.

At the $(n+1)$th trial, the object $x_{n+1}$ is given, and we are interested in predicting its label $y_{n+1}$. The general scheme of reasoning is unchanged. Under the randomness assumption, examples, i.e. $(x_i, y_i)$ couples, are assumed to be i.i.d. First, we need to choose a nonconformity measure in order to compute nonconformity scores. Then, p-values are computed, too. Last, the prediction set turns out to be defined as follows:

$$\Gamma^\alpha(z_1, \ldots, z_n, x_{n+1}) := \{y \in \mathbf{Y} : \delta_{(x_{n+1}, y)} > \alpha\}. \qquad (2.6)$$

In most cases, the way to proceed, when defining how much a new example conforms with the bag of old examples, is to rely on a simple predictor $\hat{\mu}$. The only condition is that $\hat{\mu}$ must be invariant to permutations of its arguments — equivalently, its output does not depend on the order in which they are presented. The method $\hat{\mu}$ defines a prediction rule. It is natural then to measure the nonconformity of $(x, y)$ by looking at the deviation of the predicted label $\hat{\mu}(x)$ from the true one. For instance, in regression problems, we can just take the absolute value of the difference between $y$ and $\hat{\mu}(x)$. That is exactly what we suggested in the previous (unstructured) case (Section 2.1), when we proposed to take the mean or the median as the simple predictor of the next observation.

Following these steps, any simple predictor, combined with a suitable measure of deviation of $\hat{\mu}(x)$ from $y$, leads to a nonconformity measure and, therefore, to a conformal predictor. The algorithm will always produce valid nested prediction regions. But the prediction regions will be efficient (i.e. small) only if the nonconformity measure captures well how different $z$ is from the examples in $B$ — and consequently only if the underlying algorithm is appropriate. Conformal prediction ends up being a powerful meta-algorithm, built on top of any point predictor: very powerful and yet extremely simple in its rationale.

A useful remark in Shafer and Vovk (2008) points out that the prediction regions produced by the conformal algorithm do not change when the nonconformity measure is transformed monotonically. For instance, if $A$ is positive, choosing $A$ or its square will make no difference. While comparing the scores to compute $\delta_z$, indeed, the interest is in the relative values and their reciprocal positions — whether one is bigger than another or not — not in their single absolute values. As a result, the choice of the deviation measure is relatively unimportant. The really crucial step in determining the nonconformity measure, again, is choosing the point predictor $\hat{\mu}$.
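As an illustration of this meta-algorithm, the following is a hedged sketch of full conformal regression built on top of an arbitrary point predictor. The fit_predict interface and the finite grid of candidate labels are assumptions made for the example (scanning the whole real line is infeasible, as discussed in Section 3.3); the scores include the candidate example in the training bag, the variant adopted in Section 2.2.2.

```python
import numpy as np

def full_conformal_interval(fit_predict, X, y, x_new, alpha, y_grid):
    """Prediction set (2.6) for the label of x_new: keep every candidate
    label whose conformal p-value exceeds alpha. The underlying model is
    refit for each candidate (the transductive, brute-force approach)."""
    X_aug = np.vstack([X, x_new])
    kept = []
    for y_cand in y_grid:
        y_aug = np.append(y, y_cand)
        mu_hat = fit_predict(X_aug, y_aug)     # refit on the augmented data
        R = np.abs(y_aug - mu_hat(X_aug))      # scores A = |y - mu(x)|
        if np.mean(R >= R[-1]) > alpha:        # p-value of the candidate
            kept.append(y_cand)
    return (min(kept), max(kept)) if kept else None
```

Here fit_predict is any permutation-invariant learner that returns a prediction function; it could wrap, say, a ridge regression.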

2.2.1 Classification

In the broader literature, CP has been proposed and implemented with different nonconformity measures for classification — i.e., when $\mathbf{Y}$ is a finite set of labels. As an illustration, given the sequence of old examples representing past experience, nonconformity scores can be computed as follows:

$$R_i := \frac{\min_{j \ne i \,:\, y_j = y_i} d(x_i, x_j)}{\min_{j \ne i \,:\, y_j \ne y_i} d(x_i, x_j)}, \qquad (2.7)$$

where $d$ is a metric on $\mathbf{X}$, usually the Euclidean distance in a Euclidean setting. The rationale behind the scores (2.7) — in the spirit of the nearest neighbor algorithm — is that an example is considered nonconforming to the sequence if it is close to examples labeled in a different way and far from the ones with the same label. In a different way, we could use a nonconformity measure that takes account of the average values for the different labels, the score being simply the distance to the average of the example's label.

As an alternative, nonconformity scores can be extracted from a support vector machine trained on the data sequence. We consider in particular the case of binary classification, as the first works facing this problem actually did (Gammerman et al., 1998; Saunders et al., 1999), but there are also ways to adapt it to multi-label classification problems (Balasubramanian et al., 2014). A plain approach is to define the nonconformity scores as the values of the Lagrange multipliers, which stand somehow for the margins of the probability estimating model. If an example's true class is not clearly separable from the other classes, then its score $R_i$ is higher and, as desired, we tend to classify it as strange.

Another example of a nonconformity measure for classification problems is given by Devetyarov and Nouretdinov (2010), who rely on random forests. For instance, a random forest is constructed from the data sequence, and the conformity score of an example $z_i$ is simply the percentage of correct predictions for its features $x_i$ given by the decision trees.
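A hedged sketch of a conformal classifier based on the nearest-neighbor scores (2.7); the Euclidean distance is an illustrative choice of metric.

```python
import numpy as np

def nn_score(X, y, i):
    """Nonconformity (2.7): distance to the nearest same-label example
    over distance to the nearest different-label example."""
    d = np.linalg.norm(X - X[i], axis=1)
    d[i] = np.inf                          # exclude the example itself
    return d[y == y[i]].min() / d[y != y[i]].min()

def conformal_label_set(X, y, x_new, alpha):
    """Prediction set: every candidate label with p-value above alpha."""
    kept = []
    for lab in np.unique(y):
        X_aug = np.vstack([X, x_new])
        y_aug = np.append(y, lab)
        R = np.array([nn_score(X_aug, y_aug, i) for i in range(len(y_aug))])
        if np.mean(R >= R[-1]) > alpha:
            kept.append(lab)
    return kept
```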

2.2.2 Regression

In regression problems, a very natural nonconformity measure is:

$$R_i := \Delta(y_i, \hat{\mu}(x_i)), \qquad (2.8)$$

where $\Delta$ is a measure of difference between two labels (usually a metric) and $\hat{\mu}$ is a prediction rule (for predicting the label given the object) trained on the data sequence at hand.

It is evident that there is a fundamental problem in implementing conformal prediction for regression tasks: to form the prediction set (2.6), each potential label $y$ has to be examined. Nonetheless, there is often a feasible way to compute (2.6) which does not require examining infinitely many cases; in particular, this happens when the underlying simple predictor is ridge regression or nearest neighbors regression. We are going to provide a sketch of how it works, to give an idea of the way used to circumvent the unfeasible brute-force, test-all approach. Besides, a slightly different approach to conformal prediction has been developed to overcome this difficulty (Section 3.2, Section 3.3).

In the case where $\mathbf{Y} = \mathbb{R}$ and $\hat{\mu}$ is the ridge regression procedure, the conformal predictor is called the ridge regression confidence machine (RRCM). The initial attempts to apply conformal prediction in the case of regression involve exactly ridge regression (Melluish et al. (1999), and soon after, in a much better version, Nouretdinov et al. (2001a)). Suppose that objects are vectors consisting of $p$ attributes in a Euclidean space, say $\mathbf{X} \subseteq \mathbb{R}^p$, and let $a \ge 0$ be the non-negative constant called the ridge parameter — least squares being the special case corresponding to $a = 0$. The explicit representation, in matrix form, of this nonconformity measure is:

$$R_i := |y_i - x_i^T (X^T X + a I_p)^{-1} X^T Y|, \qquad (2.9)$$

where $X$ is the object matrix whose rows are $x_1^T, \ldots, x_{n+1}^T$, $Y$ is the label vector $(y_1, \ldots, y_{n+1})^T$, and $I_p$ is the unit matrix. Hence, the vector of nonconformity scores can be written as $(R_1, \ldots, R_{n+1})^T = |Y - HY| = |(I_{n+1} - H)Y|$, where $H = X(X^T X + a I_p)^{-1} X^T$ is the hat matrix.

Let $y$ be a possible label for $x_{n+1}$, and $(z_1, \ldots, z_n, (x_{n+1}, y))$ the augmented data set. Now, $Y = (y_1, \ldots, y_n, y)^T$. Note that $Y = (y_1, \ldots, y_n, 0)^T + y\,(0, \ldots, 0, 1)^T$, and so the vector of nonconformity scores can be represented as $|u + y\,v|$, where $u := (I_{n+1} - H)(y_1, \ldots, y_n, 0)^T$ and $v := (I_{n+1} - H)(0, \ldots, 0, 1)^T$. Therefore, each $R_i = |u_i + y\,v_i|$ has a linear dependence on $y$. As a consequence, since the p-value simply counts how many scores are greater than or equal to $R_{n+1}$, it can only change at points where $R_i - R_{n+1}$ changes sign for some $i$. This means that we can calculate the set of points on the real line whose corresponding p-value exceeds $\alpha$, rather than trying all possible $y$, leading to a feasible prediction. Precise computations can be found in Vovk et al. (2005), chap. 2.
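A hedged numeric sketch of the RRCM idea. For clarity it still scans a grid of candidate values (the exact algorithm instead works out the sign-change points analytically), but thanks to the linear representation $|u + y v|$ the hat matrix is computed only once, with no refitting per candidate.

```python
import numpy as np

def rrcm_interval(X, Y, x_new, a, alpha, y_grid):
    """Ridge regression confidence machine, exploiting R = |u + y*v|."""
    Xa = np.vstack([X, x_new])                    # augmented object matrix
    n1, p = Xa.shape
    H = Xa @ np.linalg.solve(Xa.T @ Xa + a * np.eye(p), Xa.T)
    M = np.eye(n1) - H
    u = M @ np.append(Y, 0.0)                     # scores at y = 0
    v = M @ np.append(np.zeros(len(Y)), 1.0)      # linear coefficient in y
    kept = [y for y in y_grid
            if np.mean(np.abs(u + y * v) >= abs(u[-1] + y * v[-1])) > alpha]
    return (min(kept), max(kept)) if kept else None
```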

Before going on with the discussion, a clarification is required. The point is whether to include the new example in the bag with which we are comparing it or not — a delicate question, which Shafer and Vovk (2008) do not overlook in their precise work. In the statement of the conformal algorithm, we define the nonconformity score for the $i$th example by (2.3), apparently specifying that we do not want to include $z_i$ in the bag to which it is compared. But then, in the RRCM, we use the nonconformity scores (2.9), as if $R_i := A(\lbag z_1, \ldots, z_n, z \rbag, z_i)$. First of all, it is noteworthy that both choices are valid — that is the most important thing. Moreover, the two ways of defining nonconformity scores are equivalent, to the extent that whatever we can get with one of them we can get from the other by changing the nonconformity measure. For example, if $A(B, z)$ is the absolute value of the difference between $z$ and the mean value of the bag, including $z$ in the bag or not is absolutely equivalent: simple computations show that the two scores are the same, except for a scale factor $n/(n+1)$. But we know that conformal prediction makes no difference for a monotone transformation of the scores; it does not indeed change the prediction regions. An analogous result holds, in regression problems, when the distance from the least squares line or some other regression line is chosen.

There are cases where (2.3) might be more convenient, and cases where it is not. We have introduced conformal prediction with the formula (2.3), as the reference book of Vovk et al. (2005) and the first works did. Moreover, in this form conformal prediction generalizes to online compression models (Section 3.4). In general, however, the inclusion of the $(n+1)$th example simplifies the implementation, or at least the explanation, of the conformal algorithm. From now on, we rely on this approach when using conformal prediction, and define instead the methods relying on (2.3) as jackknife procedures.

Conformal predictors can also be implemented in a feasible and at the same time particularly simple way for nonconformity measures based on the nearest neighbors algorithm. Recently, an efficient method to compute conformal prediction with the Lasso in an exact way, i.e. considering the quadratic loss function and the $\ell_1$ norm penalty, has been provided by Lei (2017). A straightforward extension to the elastic net — which considers both an $\ell_1$ and an $\ell_2$ penalty — is also given.

2.3 Novelty of Conformal Prediction

The problem of prediction sets is well studied in the context of linear regression, where they are usually constructed under linear and Gaussian assumptions. The Gaussian assumption can be relaxed by using, for example, quantile regression. These linear-model-based methods usually have reasonable finite sample performance. However, the coverage is valid only when the regression model is correctly specified. In contrast, non-parametric methods have the potential to work for any smooth distribution, but only asymptotic results are available and the finite sample behaviour remains unclear. To sum up, none of these methods yields prediction bands with distribution-free, finite sample validity. Furthermore, the output is a prediction set in the form of an interval, which may not be optimal to catch the structure of the data (Figure 1). Conformal prediction, instead, is a general approach to construct valid and distribution-free prediction sets (and to do so sequentially, in the online setting).

Figure 1: A comparison between conformal prediction bands, on the left, and quantile regression bands, on the right, for a selected confidence level. There are clear gaps in the data, indicating that the high density regions of the conditional density of $y$ given $x$ are not connected. The quantile regression approach obscures these features. Source: Lei and Wasserman (2014).

There are two other areas in statistics and machine learning that produce some kind of confidence information — a guarantee of the prediction error: the Bayesian framework and the theory of Probably Approximately Correct learning — PAC theory, in short (Valiant, 1984). Specifically, the Bayesian framework is able to complement individual predictions with probabilistic measures of their quality. These measures are, however, based on some a priori assumption about the underlying distribution. Burnaev and Vovk (2014) show that when the (artificial) data set satisfies the prior, the intervals produced are valid, and slightly tighter than the corresponding intervals produced by conformal algorithms. The problem is that for real-world data, the required knowledge is typically not available and as a result, one is forced to assume the existence of some arbitrarily chosen prior. In this case, since the assumed prior is most probably violated, the outputs of Bayesian methods may become quite misleading, due to the loss of validity (Melluish et al., 2001).

If we measure the efficiency of a prediction interval by its length, we can see that there is a certain dualism between Bayes and conformal prediction intervals: as the Bayesian assumption becomes less and less satisfied, the Bayes prediction intervals lose their validity while maintaining their efficiency, and, on the contrary, the conformal ones lose their efficiency while maintaining their validity. However, validity is more important than efficiency. Hence, if we believe the examples to be generated by a certain model, then we may want to use a nonconformity measure based on a method of prediction that is optimal for that model. This will be efficient if the proposed model is right, but valid in any case. Conformal prediction only assumes exchangeability. In the extreme case, paradoxically, even a function that returns a random nonconformity score (like rand) for all examples will be valid, but the prediction regions will be very wide. The dependence of the validity of prediction intervals on the Bayesian assumption is particularly serious in nonparametric statistics (Diaconis and Freedman, 1986).

On the other hand, PAC-learning can be applied to an algorithm in order to produce upper bounds on the probability of its error with respect to some confidence level. It only assumes that examples are generated independently by some unknown distribution, but for the resulting bounds to be interesting in practice, the data set must be particularly clean. As this is rarely the case, the bounds are typically very loose and therefore not particularly useful for real-world applications (Nouretdinov et al., 2001b). In addition, PAC theory has two more drawbacks: the majority of relevant results either involve large explicit constants or do not specify the relevant constants at all, and the obtained bounds are for the overall error, not for individual predictions. Nevertheless, there are less theoretical and more effective ways of estimating the confidence in predictions, like the hold-out estimates. They are attained by randomly dividing examples into two separate partitions, one used for obtaining the prediction model and the other for testing it. The observed rate of errors on the test set then allows one to assess the confidence to place in the prediction rule when new examples are considered. Conformal methods turn out to be a different way of producing hedged predictions.

Aside from the elegance of conformal prediction methods, at least in comparison with the procedure that relies on a hold-out sample, other features constitute important advantages (Vovk et al., 2005). First, there is no rigid separation between learning and prediction, which is instead a distinctive feature of the traditional approaches to hedged prediction. Moreover, the hedged predictions produced by conformal algorithms are more accurate, without involving variable transformations or specifying a model. In addition, the confidence with which the label of a new object is predicted is always tailored not only to the previously seen examples but also to that object. Hence, rather than just providing a bound on the prediction error for the entire distribution, it allows one to get different bounds for different instances, something which may be very valuable in many practical applications. For instance, in the medical domain, it is clearly more important to be able to evaluate the confidence in predictions related to individual patients instead of groups of patients.

To sum up, in contrast to Bayesian techniques, CP produces well-calibrated outputs, as they are based only on the general randomness assumption, and no a priori assumptions about the distribution generating the data are needed. Moreover, unlike the PAC theory, it produces confidence measures that are useful in practice and are associated with individual predictions.

2.3.1 Optimality

The current literature highlights that conformal predictors are essentially the best confidence predictors (in the sense we are going to specify), when not the only ones, in a very natural class that satisfies the strong non-asymptotic property of validity. A couple of definitions are required. A confidence predictor $\Gamma$ is invariant if $\Gamma(z_1, \ldots, z_n) = \Gamma(z_{\pi(1)}, \ldots, z_{\pi(n)})$ for any permutation $\pi$ of the indices $1, \ldots, n$, i.e. it does not depend on the order in which the examples are listed. Under the exchangeability assumption, this is a very natural class of confidence predictors. Later, however, we will also study confidence predictors that are not invariant, such as Mondrian and inductive conformal predictors, respectively in Section 3.1.1 and 3.2. In the second place, given a couple of confidence predictors $\Gamma_1$ and $\Gamma_2$, we say that $\Gamma_1$ is at least as good as $\Gamma_2$ if, for any significance level $\alpha$, $\Gamma_1^\alpha(z_1, \ldots, z_n) \subseteq \Gamma_2^\alpha(z_1, \ldots, z_n)$ holds for almost all $(z_1, \ldots, z_n)$ generated by any exchangeable distribution on $\mathbf{Z}^n$.

It turns out that any valid invariant confidence predictor is a conformal predictor or can be improved to become a conformal predictor (Shafer and Vovk, 2008).

Proposition 2.2.

Assume $\mathbf{Z}$ is a Borel space. Let $\Gamma$ be an invariant confidence predictor that is conservatively valid under exchangeability. Then there is a conformal predictor that is at least as good as $\Gamma$.

2.4 The Online Framework

Conformal algorithms were originally introduced in the online framework, where examples arrive one by one and so predictions are based on an accumulating data set. The predictions these algorithms make are hedged: they incorporate a valid indication of their own accuracy and reliability. Vovk et al. (2005) claim that most existing algorithms for hedged prediction first learn from a training data set and then predict without ever learning again. The few algorithms that do learn and predict simultaneously, instead, do not provide confidence information.

Moreover, the property of validity of conformal predictors can be stated in an especially strong form in the online framework. Classically, a method for finding prediction regions is considered valid if it has a probability $1-\alpha$ of containing the label to be predicted, because by the law of large numbers it would then be correct $100(1-\alpha)\%$ of the times when repeatedly applied to independent data sets. However, in the online picture, we repeatedly apply a method not to independent data sets, but to an accumulating data set. After using $(z_1, \ldots, z_n)$ to predict $z_{n+1}$, we use $(z_1, \ldots, z_{n+1})$ to predict $z_{n+2}$, and so on. For an online method to be valid, $100(1-\alpha)\%$ of these predictions must be correct. Under minimal assumptions, conformal prediction is valid in this new and powerful sense.

The intermediate step behind this result is that successive errors are probabilistically independent. For the sake of comparison, consider $n$ i.i.d. random variables $z_1, \ldots, z_n$ drawn from a Gaussian distribution. In a classical framework, Fisher's well-known prediction interval reads as:

$$\bar{z}_n \pm t_{n-1}^{1-\alpha/2}\, s_n \sqrt{1 + \frac{1}{n}}, \qquad (2.10)$$

where

$$\bar{z}_n := \frac{1}{n} \sum_{i=1}^n z_i, \qquad s_n^2 := \frac{1}{n-1} \sum_{i=1}^n (z_i - \bar{z}_n)^2, \qquad (2.11)$$

and $t_{n-1}^{1-\alpha/2}$ is the $(1-\alpha/2)$-quantile of the $t$ distribution with $n-1$ degrees of freedom. The formula defined in (2.10) is meant to be used several times, but in entirely separate problems. The online story may seem more complicated, because the experiment involved in predicting $z_{n+1}$ from $z_1, \ldots, z_n$ is not entirely independent of the experiment involved in predicting, say, $z_{n+2}$ from $z_1, \ldots, z_{n+1}$. But this overlap does not actually matter. As shown for the first time in Shafer and Vovk (2008), the following holds:
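A hedged sketch of the Fisher interval (2.10)-(2.11) in Python (SciPy's Student-t quantile assumed):

```python
import numpy as np
from scipy import stats

def fisher_interval(z, alpha=0.1):
    """Prediction interval (2.10) for the next observation of an
    i.i.d. Gaussian sample z."""
    n = len(z)
    zbar, s = z.mean(), z.std(ddof=1)            # (2.11)
    half = stats.t.ppf(1 - alpha / 2, df=n - 1) * s * np.sqrt(1 + 1 / n)
    return zbar - half, zbar + half
```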

Proposition 2.3.

Under the exchangeability assumption, in the online mode, predictors make errors at different steps independently.

Going back to conformal predictors, we already know that the probability of error, $z_{n+1} \notin \Gamma^\alpha(z_1, \ldots, z_n)$, is below the miscoverage level $\alpha$. In addition to that, the error events for successive $n$ are probabilistically independent, notwithstanding the overlap. Hence, $100(1-\alpha)\%$ of consecutive predictions must be correct. In other words, the indicators of errors at successive steps are independent Bernoulli variables with parameter $\alpha$. Vovk et al. (2009) focus on the prediction of consecutive responses, especially when the number of observations does not exceed the number of parameters.

It should be noted that the assumption of exchangeability rather than randomness makes Proposition 2.3 stronger: it is very easy to give examples of exchangeable distributions on $\mathbf{Z}^\infty$ that are not of the form $P^\infty$ — where it is worth recalling that $P$ is the unknown distribution of examples. Nonetheless, in the infinite-horizon case (which is the standard setting for the online mode of prediction) the difference between the exchangeability and randomness assumptions essentially disappears: according to a well-known theorem by de Finetti, each exchangeable probability distribution on $\mathbf{Z}^\infty$ is a mixture of power probability distributions $P^\infty$, provided $\mathbf{Z}$ is a Borel space (Hewitt and Savage, 1955). In particular, using the assumption of randomness rather than exchangeability for the infinite sequence $z_1, z_2, \ldots$ hardly weakens it: the two forms are equivalent when $\mathbf{Z}$ is a Borel space.

3 Recent Advances in Conformal Prediction

3.1 Different Notions of Validity

An appealing property of conformal predictors is their automatic validity under the exchangeability assumption:

$$\mathbb{P}\left(z_{n+1} \in \Gamma^\alpha(z_1, \ldots, z_n)\right) \ge 1 - \alpha, \qquad (3.1)$$

where $\mathbb{P}$ is the joint measure of $(z_1, \ldots, z_{n+1})$. A major focus of this section will be on conditional versions of the notion of validity.

The idea of conditional inference in statistics concerns the wish to draw conclusions that are as conditional on the available information as possible. Although the finite sample coverage defined in (3.1) is a desirable property, it might not be enough to guarantee good prediction bands, even in very simple cases. We refer to (3.1) as marginal coverage, which is different from (in fact, weaker than) the conditional coverage usually sought in prediction problems. As a result, a good estimator must satisfy something more than marginal coverage. A natural criterion would be conditional coverage.

However, distribution-free conditional coverage, that is:

$$\mathbb{P}\left(y_{n+1} \in \Gamma^\alpha(x_{n+1}) \mid x_{n+1} = x\right) \ge 1 - \alpha \quad \text{for all } x, \qquad (3.2)$$

is impossible to achieve with a finite sample for rich object spaces, such as $\mathbf{X} = \mathbb{R}$ (Lei and Wasserman (2014), Lemma 1). Indeed, the requirement of precise object conditional validity cannot be satisfied in a nontrivial way, unless we know the true probability distribution generating the data (or we are willing to use a subjective or postulated probability distribution, as in Bayesian theory), or unless the test object is an atom of the data-generating distribution. If we impose that requirement, the prediction interval is expected to have infinite length (Vovk (2012); for general background related to distribution-free inference, see Bahadur and Savage (1956), Donoho (1988)).

As a remark, it has been said that the distribution-free coverage offered by conformal intervals is marginal. The conditional coverage may be larger than $1-\alpha$ at some values and smaller than $1-\alpha$ at other values. This should not be considered a disadvantage of conformal inference, because the statistical accuracy of conformal prediction bands crucially depends on the base estimator. In a sense, conformal inference broadens the scope and the value of any point estimator at nearly no cost: if the estimator is accurate (which usually requires an approximately correctly specified model, and a proper choice of tuning parameters), then the conformal prediction band is near-optimal; if the estimator is bad, we still have valid marginal coverage. As a result, it makes sense to use a conformal prediction band as a diagnostic and comparison tool for regression function estimators.

The negative result — that conditional coverage cannot be achieved by finite-length prediction intervals without regularity and consistency assumptions on the model and the estimator — does not prevent set predictors from being (object) conditionally valid in a partial and asymptotic sense, and simultaneously asymptotically efficient.

Therefore, as an alternative solution, Lei and Wasserman (2014) develop a new notion, called local validity, that naturally interpolates between marginal and conditional validity, and is achievable in the finite sample case. Formally, given a partition $\mathcal{A} = \{A_j\}_{j \ge 1}$ of the support of $P_X$, a prediction band $\Gamma$ is locally valid with respect to $\mathcal{A}$ if:

$$\mathbb{P}\left(y_{n+1} \in \Gamma^\alpha(x_{n+1}) \mid x_{n+1} \in A_j\right) \ge 1 - \alpha, \quad \text{for all } j. \qquad (3.3)$$

Then, their work focuses on defining a method that shows both finite sample (marginal and local) coverage and asymptotic conditional coverage (i.e., when the sample size goes to $\infty$, the prediction band gives arbitrarily accurate conditional coverage). At the same time, they prove it to be asymptotically efficient. The finite sample marginal and local validity is distribution-free: no assumptions on $P$ are required. Then, under mild regularity conditions, local validity implies asymptotic conditional validity.

The way Lei and Wasserman (2014) build the prediction bands to achieve local validity can be seen as a particular case of a bigger class of predictors, which we now introduce and explain: the so-called Mondrian conformal predictors. Still on validity, Barber et al. (2019) recently reflect again on the idea of a proper intermediate definition.

3.1.1 Mondrian Conformal Predictors

We start from an example. In handwritten digit recognition problems, some digits (such as “5”) are more difficult to recognize correctly than other digits (such as “0”), and it is natural to expect that at the confidence level 95% the error rate will be significantly greater than 5% for the difficult digits; our usual, unconditional, notion of validity only ensures that the average error rate over all digits will be close to 5%.

We might not be satisfied by the way conformal predictors work. If our set predictor is valid at the significance level 5% but makes an error with probability 10% for men and 0% for women, both men and women can be unhappy with calling 5% the probability of error. It is clear that whenever the size of the training set is sufficient for making conditional claims, we should aim for this. The requirement of object conditional validity is a little more than what we can ask of a predictor, but it can be considered as a special case: for somehow important events $E$ in the object space, we do not want the conditional probability of error given $x_{n+1} \in E$ to be very different from the given significance level $\alpha$.

We are going to deal with a natural division of examples into several categories: e.g., different categories can correspond to different labels, or kinds of objects, or just be determined by the ordinal number of the example. As pointed out in the examples above, conformal predictors — as we have seen so far — do not guarantee validity within categories: the fraction of errors can be much larger than the nominal significance level for some categories, if this is compensated by a smaller fraction of errors for other categories. A stronger kind of validity, validity within categories, which is especially relevant in the situation of asymmetric classification, is the main property of Mondrian conformal predictors (MCPs), first introduced in Vovk et al. (2003). The exchangeable framework is the assumption under which MCPs are proved to be valid; in Section 3.4, again, we will have a more general setting, relaxing the hypothesis.

When the term categories comes into play, we are referring to a given division of the example space $\mathbf{Z}$: a measurable function $\kappa$ maps each example $z_i$ to its category $\kappa_i$, belonging to the (usually finite) measurable space $K$ of all categories. In many instances, it is a kind of classification of the examples. The category might depend on the other examples in the data sequence, but disregarding their order. Such a function is called a Mondrian taxonomy, as a tribute to the Dutch painter Piet Mondrian. Indeed, the taxonomy that $\kappa$ defines in the space $\mathbf{Z}$ recalls the grid-based paintings and the style for which the artist is renowned.

To underline the dependence of the categories on the bag of the entire dataset, Balasubramanian et al. (2014) introduce the $n$-taxonomy $K_n$, which maps a vector of examples $(z_1, \ldots, z_n)$ to the vector of corresponding categories $(\kappa_1, \ldots, \kappa_n)$. Using this notation, it is required that the $n$-taxonomy be equivariant with respect to permutations, that is, permuting the examples permutes the categories accordingly. We prefer, however, to leave the dependence implicit and stick to the simpler notation of Vovk et al. (2005).

Given a Mondrian taxonomy $\kappa$, to use conformal prediction we have to modify slightly some of the definitions seen in the previous chapter. To be precise, a Mondrian nonconformity measure might take into account also the categories $\kappa_1, \ldots, \kappa_{n+1}$, while the p-values (2.2) should be computed as:

$$\delta_z := \frac{\#\{i = 1, \ldots, n+1 : \kappa_i = \kappa_{n+1} \text{ and } R_i \ge R_{n+1}\}}{\#\{i = 1, \ldots, n+1 : \kappa_i = \kappa_{n+1}\}}, \qquad (3.4)$$

where $\kappa_i = \kappa(z_i)$. As a remark, we would like to stress what exactly we are doing in the formula just defined. Although one can choose any conformity measure, in order to have local validity the ranking must be based on a local subset of the sample. Hence, the algorithm selects only the examples among the past experience that have the same category as the new one, and makes its decision based on them.
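A hedged sketch of the Mondrian p-value (3.4) for the common label-conditional case, in which the category of an example is simply its label:

```python
import numpy as np

def mondrian_pvalue(R, kappa):
    """Mondrian p-value (3.4): the candidate's score (R[-1], category
    kappa[-1]) is ranked only among examples of its own category."""
    same = (kappa == kappa[-1])
    return np.sum(R[same] >= R[-1]) / np.sum(same)
```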

At this point, the reader is able to write by himself the smoothed version of the MCP, which satisfies the required level of reliability in an exact way. Indeed,

Proposition 3.1.

If examples are generated from an exchangeable probability distribution on $\mathbf{Z}^{n+1}$, any smoothed MCP based on a Mondrian taxonomy $\kappa$ is category-wise exact with respect to $\kappa$.

Moreover, we might want to have different significance levels $\alpha_k$ for the different categories $k \in K$. In some contexts, certain kinds of errors are more costly than others. For example, it may be more costly to classify a high-risk credit applicant as low risk (one kind of error) than it is to classify a low-risk applicant as high risk (a different kind of error). In an analogous way, we could be required to distinguish between useful messages and spam in the problem of mail filtering: classifying a useful message as spam is a more serious error than vice versa. We do not have misclassification costs to take into account, but setting the miscoverage levels in a proper way allows us to specify the relative importance of different kinds of prediction errors. And MCPs still do the job (Vovk et al., 2005).

Last, a brief discussion of an important question: how to select a good taxonomy? While choosing the partitions that determine a Mondrian taxonomy $\kappa$, a dilemma arises that is often called the “problem of the reference class”. We want the categories into which we divide the examples to be large, in order to have a reasonable sample size for estimating the probabilities. But we also want them to be small and homogeneous, to make the inferences as specific as possible. Balasubramanian et al. (2014) point out a possible strategy for conditional conformal predictors in the problem of classification in the online setting. The idea is to adapt the method as the process goes on. At first, the conformal predictor should not be conditional at all. Then, as the number of examples grows, it should be label conditional. As the number of examples grows further, we could split the objects into clusters (using a label independent taxonomy) and make the prediction sets conditional on them as well.

3.2 Inductive Prediction

A relevant problem of conformal predictors is their computational inefficiency. Over time, an extensive literature has developed to address this issue. In particular, inductive conformal predictors (ICPs) have been proposed.

ICPs were first proposed by Papadopoulos et al. (2002a) for regression and by Papadopoulos et al. (2002b) for classification, and in the online setting by Vovk (2002). Before the appearance of inductive conformal predictors, several other possibilities had been studied, but without great success. To speed up computations in a multi-class pattern recognition problem using support vector machines in its implementation, Saunders et al. (2000) used a hashing function to split the training set into smaller subsets, of roughly equal size, which are then used to construct a number of support vector machines. In a different way, just to mention a few, Ho and Wechsler (2004) exploit the adiabatic version of the incremental support vector machine, and later Vovk (2013) introduces Bonferroni predictors, a simple modification based on the idea of the Bonferroni adjustment of p-values.

Figure 2: Inductive and transductive approach to prediction.

We now spend a few words recalling the concepts of transduction and induction (Figure 2), as introduced in Vapnik (1998). In inductive prediction we first move from the training data to some general rule: a prediction or decision rule, a model, or a theory (inductive step). When a new object comes in, we derive a prediction based on the general rule (deductive step). On the contrary, in transductive prediction, we take a shortcut, going directly from the old examples to the prediction for the new object. The practical distinction between them is whether we extract the general rule or not. A side effect of using a transductive method is computational inefficiency: computations need to be started from scratch every time.

Combining the inductive approach with conformal prediction, the data sequence $(z_1, \ldots, z_n)$ is split into two parts: the proper training set $(z_1, \ldots, z_m)$, of size $m < n$, and the calibration set $(z_{m+1}, \ldots, z_n)$. We use the proper training set to feed the underlying algorithm, and, using the derived rule, we compute the nonconformity score $R_i$ for each example in the calibration set. For every potential label $y$ of the new unlabelled object $x_{n+1}$, its score $R_{n+1}$ is calculated and compared to the ones of the calibration set. Therefore the p-value is:

$$\delta_{(x_{n+1}, y)} := \frac{\#\{i = m+1, \ldots, n : R_i \ge R_{n+1}\} + 1}{n - m + 1}. \qquad (3.5)$$
Inductive conformal predictors can be smoothed in exactly the same way as conformal predictors. As in the transductive approach, under the exchangeability assumption, (3.5) is a valid p-value. Everything works as before. For a discussion of conditional validity and various ways to achieve it using inductive conformal predictors, see Vovk (2012).
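A hedged sketch of split (inductive) conformal regression, assuming a scikit-learn-style estimator and absolute residuals as nonconformity scores:

```python
import numpy as np

def split_conformal(model, X, y, X_test, alpha, m):
    """Inductive conformal regressor: train on the first m examples,
    calibrate on the remaining ones, then return intervals for X_test."""
    model.fit(X[:m], y[:m])                        # inductive step, done once
    R = np.abs(y[m:] - model.predict(X[m:]))       # calibration scores
    k = int(np.ceil((1 - alpha) * (len(R) + 1)))   # conformal quantile index
    q = np.sort(R)[k - 1] if k <= len(R) else np.inf
    mu = model.predict(X_test)
    return mu - q, mu + q
```

The quantile index k implements the p-value (3.5): a candidate label is kept exactly when its residual does not exceed the k-th smallest calibration residual.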

The greater computational efficiency of inductive conformal predictors is now evident. The computational overhead of ICPs is light: they are almost as efficient as the underlying algorithm. The decision rule is computed from the proper training set only once, and it is applied to the calibration set also only once. Several studies related to this fact are reported in the literature. For instance, a computational complexity analysis can be found in the work of Papadopoulos (2008), where conformal prediction on top of neural networks for classification is closely examined.

With such a dramatically reduced computational cost, it is possible to easily combine conformal algorithms with computationally heavy estimators. While validity is taken for granted in the conformal framework, efficiency is related to the underlying algorithm. Taking advantage of the bargain that ICPs represent, we can, metaphorically, invest the computational savings in the choice of the underlying algorithm.

Moreover, this computational effectiveness can be exploited further to establish conformal prediction as a tool in Big Data frameworks, where the increasing size of datasets represents a challenge for machine learning and statistics. The inductive approach makes the task feasible, but can we ask for anything more? Actually, the (trivially parallelizable) serial code might be run on multiple CPUs. Capuccini et al. (2015) propose and analyze a parallel implementation of the conformal algorithm, where multiple processors are employed simultaneously in the Apache Spark framework.

Achieving computational efficiency does not come for free. A drawback of inductive conformal predictors is their potential prediction inefficiency. In actual fact, we waste the calibration set when developing the prediction rule, and we do not use the proper training set when computing the p-values. An interesting attempt to cure this disadvantage is made in Vovk (2015). Cross-conformal prediction, a hybrid of inductive conformal prediction and cross-validation, consists, in a nutshell, in dividing the data sequence into $K$ folds and constructing a separate ICP using the $k$th fold as the calibration set and the rest of the training set as the proper training set. Then the different p-values, which are the outcome of the procedure, are merged in a proper way.
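A hedged sketch of cross-conformal prediction for regression, pooling the fold-wise calibration counts into a single p-value (one natural merging rule, in the spirit of Vovk (2015); scikit-learn-style estimator assumed):

```python
import numpy as np
from sklearn.base import clone

def cross_conformal_pvalue(model, X, y, x_new, y_cand, K=5):
    """p-value for candidate label y_cand at x_new: each fold serves
    once as calibration set for a model trained on the other folds."""
    folds = np.array_split(np.random.permutation(len(y)), K)
    count = 0
    for fold in folds:
        rest = np.setdiff1d(np.arange(len(y)), fold)
        m = clone(model).fit(X[rest], y[rest])
        R_cal = np.abs(y[fold] - m.predict(X[fold]))
        R_new = abs(y_cand - m.predict(x_new.reshape(1, -1))[0])
        count += np.sum(R_cal >= R_new)
    return (count + 1) / (len(y) + 1)
```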

Of course, it is also possible to use an uneven split, with a larger portion of data for model fitting and a smaller set for the inference step. This will produce sharper prediction intervals, but the method will have higher variance; this trade-off is unavoidable for data splitting methods. Common choices found in the applied literature reserve a fixed fraction of the dataset for the calibration set, providing a good balance between underlying model performance and calibration accuracy. The problem of how many examples the calibration set should contain is faced meticulously in Linusson et al. (2014). To maximize the efficiency of inductive conformal classifiers, they suggest keeping it small relative to the amount of available data. At the same time, at least a few hundred examples should be used for calibration (to make it granular enough), unless this leaves too few examples in the proper training set. Techniques that try to handle the problems associated with small calibration sets are suggested and evaluated in both Johansson et al. (2015) and Carlsson et al. (2015), using interpolation of calibration instances and a different notion of (approximate) p-value, respectively.

Splitting improves dramatically on the speed of conformal inference, but it introduces additional noise into the procedure. One way to reduce this extra randomness is to combine inferences from several splits, each of them — using a Bonferroni-type argument — built at level $\alpha/K$, where $K$ is the number of splits. Multiple splitting on one hand decreases the variability, as expected, but on the other hand it may produce, as a side effect, prediction regions whose width grows with $K$. As described in Shafer and Vovk (2008), under rather general conditions, the Bonferroni effect is dominant and hence intervals get larger and larger with $K$. For this reason, they suggest using a single split.

Linusson et al. (2014) even raise doubts about the commonly accepted claim that transductive conformal predictors are by default more efficient than inductive ones. It is known indeed that an unstable nonconformity function — one that is heavily influenced by an outlier example, e.g., an erroneously labeled new example — can cause (transductive) conformal confidence predictors to become inefficient. They compare the efficiency of transductive and inductive conformal classifiers using decision tree, random forest and support vector machine models as the underlying algorithms, finding that the full approach is not always the most efficient one. Their position is actually the same as that of Papadopoulos (2008), where the loss of accuracy introduced by induction is claimed to be small, and usually negligible — and not only for large data sets, which clearly contain enough training examples so that the removal of the calibration examples does not make any difference to the training of the algorithm.

From another perspective, lying between the computational complexities of the full and split conformal methods is jackknife prediction. This method aims to make better use of the training data than the split approach does, curing as much as possible the loss of informational efficiency caused by partitioning the old examples into two parts when constructing the absolute residuals, without resorting to the extensive computations of full conformal prediction. With this intention, it uses leave-one-out residuals to define prediction intervals. That is to say, for each example it trains a model on the rest of the data sequence and computes the nonconformity score of the held-out example with respect to this leave-one-out fit.
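To make the leave-one-out construction explicit, the following is a minimal sketch of a jackknife prediction interval, assuming a scikit-learn-style estimator and absolute-residual scores; the helper name is ours and this is a sketch of the general recipe, not the cited authors' code.

```python
import numpy as np

def jackknife_interval(X, y, x_new, model, alpha=0.1):
    """Jackknife interval: leave-one-out residuals set the half-width."""
    n = len(y)
    loo_scores = np.empty(n)
    for i in range(n):
        mask = np.arange(n) != i            # drop the i-th example
        model.fit(X[mask], y[mask])
        loo_scores[i] = abs(y[i] - model.predict(X[i:i + 1])[0])
    d = np.quantile(loo_scores, 1 - alpha)  # empirical (1 - alpha) quantile
    model.fit(X, y)                         # final fit on all the data
    center = model.predict(x_new.reshape(1, -1))[0]
    return center - d, center + d
```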

The advantage of the jackknife method over the split conformal method is that it can often produce regions of shorter size. However, in regression problems it is not guaranteed to have valid coverage in finite samples. As Lei et al. (2018) observe, the jackknife method has the finite sample in-sample coverage property:

\[ \mathbb{P}\big( y_i \in C^{\text{jack}}(x_i) \big) \ge 1 - \alpha, \qquad i = 1, \dots, n, \tag{3.6} \]

but when dealing with out-of-sample coverage (that is, true predictive inference), its properties are much more fragile. In fact, even asymptotically, its coverage properties do not hold without nontrivial conditions on the base estimator $\hat\mu$. This is due to the approximation required to avoid the infeasible enumeration approach, which we tackle in the next section. The predictive accuracy of the jackknife under assumptions of algorithmic stability is explored by Steinberger and Leeb (2016) for the linear regression setting, and in a more general setting by Steinberger and Leeb (2018). Hence, while the full and split conformal intervals are valid under essentially no assumptions, the same is not true for the jackknife ones.

Although not theoretically valid, the jackknife procedures are shown to be empirically valid and informationally efficient. The key to speeding up the learning process is to employ a fast and accurate learning method as the underlying algorithm. This is exactly what Wang et al. (2018) do, proposing a novel, fast and efficient conformal regressor which combines locally-weighted (see Section 3.5) jackknife prediction with the regularized extreme learning machine. Extreme learning machine (ELM) addresses the task of training feed-forward neural networks quickly, without losing learning ability or predictive performance. The underlying learning process and the outstanding learning ability of ELM make the conformal regressor very fast and informationally efficient.

Recently, a slight but crucial modification to the algorithm has given rise to the jackknife+ method, which restores rigorous coverage guarantees (Barber et al., 2019b).

3.3 Regression and Approximations

While examining the CP algorithm, the reader may notice that for each possible label $y$ (that is, for each potential value of the response at the test point $x_{n+1}$), we must refit the model $\hat\mu$. Depending on the setting, each run may be fairly expensive; but even disregarding cost, in general we cannot hope to run it infinitely many times, once for each $y \in \mathbb{R}$.

In some settings, this problem can be circumvented using specific regularities within the model fitting algorithm (as in the RRCM, Section 2.2). In nearly any other setting, however, we must instead turn to approximations of the full conformal prediction method.

Efficient approximations are available for the kernel density estimator, as in Lei et al. (2013), and for kernel nonparametric regression (Lei and Wasserman, 2014). They exploit a result, known as the "sandwich lemma", which provides a simple characterization of the conformal prediction set in terms of plug-in estimators of density level sets. Indeed, the set predictor, whose analytical form may be intractable, is "sandwiched" between two kernel density level sets, with carefully tuned cut-off parameters, that can be computed quickly while maintaining finite sample validity.

Apart from these situations, two approaches are available. A straightforward way to approximate the algorithm is to run it only for a finite set of values $y$, for instance on a fine grid over an interval that includes the empirical range of the observed response values. That is exactly how the conformalInference R package, developed in Lei et al. (2018), is implemented: in order to compute the conformal confidence predictor at a new covariate vector $x_{n+1}$, it scans a set of grid points in the label space. Chen et al. (2018) formalize this rounding procedure, proving that rounding can be done without losing the coverage guarantee of the method.
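As an illustration of the grid strategy (in Python rather than R, with names of our own choosing, and absolute residuals as nonconformity scores), a minimal sketch might look as follows.

```python
import numpy as np

def full_conformal_grid(X, y, x_new, model, alpha=0.1, n_grid=100):
    """Approximate full conformal set: refit once per candidate y on a grid."""
    grid = np.linspace(y.min(), y.max(), n_grid)  # empirical response range
    X_aug = np.vstack([X, x_new.reshape(1, -1)])
    n = len(y) + 1
    accepted = []
    for y_cand in grid:
        y_aug = np.append(y, y_cand)
        model.fit(X_aug, y_aug)                   # refit with the trial pair
        scores = np.abs(y_aug - model.predict(X_aug))
        # p-value: rank of the trial point's score among all scores.
        if np.sum(scores >= scores[-1]) / n > alpha:
            accepted.append(y_cand)
    return (min(accepted), max(accepted)) if accepted else None
```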

The second approach, commonly used in the inductive setting, relies instead on the quantiles of the fitted residual distribution. Let $d$ be the $k$th smallest value among the nonconformity scores $R_1, \dots, R_m$ of the calibration examples, where $k = \lceil (m+1)(1-\alpha) \rceil$. Then $d$ forms a probabilistic bound for the residuals at significance level $\alpha$; that is, with probability $1-\alpha$, the nonconformity score of a new example will be at most $d$. The conformal set predictor is then:

\[ C^{\text{split}}(x_{n+1}) = \big[\, \hat\mu(x_{n+1}) - d, \; \hat\mu(x_{n+1}) + d \,\big]. \tag{3.7} \]

It is self-evident that, as we improve the estimate $\hat\mu$ of the underlying regression function, the residuals get smaller and the resulting prediction interval decreases in length.
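A minimal sketch of (3.7), assuming a scikit-learn-style estimator and data that have already been shuffled; the function name and the split proportion are ours.

```python
import numpy as np

def split_conformal_interval(X, y, x_new, model, alpha=0.1, train_frac=0.5):
    """Split conformal interval (3.7): one fit, then a residual quantile."""
    n = len(y)
    m = n - int(train_frac * n)                 # calibration set size
    idx = np.arange(n)
    train, cal = idx[:n - m], idx[n - m:]
    model.fit(X[train], y[train])
    scores = np.sort(np.abs(y[cal] - model.predict(X[cal])))
    k = int(np.ceil((m + 1) * (1 - alpha)))     # index of the k-th smallest
    d = scores[min(k, m) - 1]
    center = model.predict(x_new.reshape(1, -1))[0]
    return center - d, center + d
```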

3.4 Online Compression Models

The idea of conformal prediction can be generalized from learning under randomness, where examples are independent and identically distributed, to online compression models. These models include, besides the exchangeability model, the Gaussian model, the Markov model, and many others.

In an online compression model (OCM), it is assumed that the data can be summarized in a way that can be updated as new examples show up, and the only probabilities given are backward probabilities, which explain how the updated summary might have been obtained. It is usually impossible to restore all the statistical information from the OCM's summary (the compression is lossy), but it can be argued that the only information lost is noise, and that the summary is a sufficient statistic, efficiently storing the knowledge in the data that is useful for predicting future examples.

In general, an online compression model for an example space consists of a space $\Sigma$, whose elements we call summaries, and:

  • a sequence of updating functions $U_1, U_2, \dots$, to bring the summary statistics up to date as new examples come up. At the $n$th trial, the function $U_n$, given a summary $\sigma_{n-1}$ and a new example $z_n$, outputs the new summary $\sigma_n = U_n(\sigma_{n-1}, z_n)$;

  • a sequence of one-step kernels $P_1, P_2, \dots$. For each summary $\sigma$, the kernel $P_n$ defines a joint probability distribution $P_n(\cdot \mid \sigma)$ for an unknown summary $\sigma'$ and an unknown example $z$. It is required that the set of pairs $(\sigma', z)$ such that $U_n(\sigma', z) = \sigma$ has probability one.

The intuition behind the concept of OCM is that it is a way of summarizing statistical information. At the beginning we do not have any information, which is represented by the empty summary $\square$. When the first example $z_1$ arrives, we update our summary to $\sigma_1 = U_1(\square, z_1)$, and so on, as depicted in Figure 3.

Figure 3: Updating summaries in online compression models.

Moreover, we can also define the sequence of summarizing functions $\Sigma_n$ and of full kernels $R_n$. $\Sigma_n$ maps an $n$-tuple of examples $(z_1, \dots, z_n)$ to the summary $\sigma_n$, and can be derived from the updating functions just by composition, while $R_n$ is equivalent to looking back all the way, and so can be carried out by combining, backwards from $\sigma_n$, one-step look-backs. Actually, $R_n$ is a Markov kernel of the form $R_n(\cdot \mid \sigma_n)$. Such a kernel, and this is the relevant detail, gives probabilities for the different tuples $(z_1, \dots, z_n)$ that could have produced $\sigma_n$. Usually, online compression models are initially specified in terms of their summarizing functions $\Sigma_n$ and their full kernels $R_n$, since these are in most cases easy to describe.

A more careful look at the exchangeability model is sufficient to identify the general structure of an online compression model. Indeed, we summarize examples simply by omitting information about their ordering; the ordered examples are summarized by the bag containing them. With the notation introduced above, $\sigma_n = \Sigma_n(z_1, \dots, z_n) = \{\!\{ z_1, \dots, z_n \}\!\}$. The backward-looking probabilities are equally simple: given the bag, the different possible orderings all have equal probability, as if the ordering resulted from drawing the examples successively at random from the bag without replacement. The probability of the event $\{Z_1 = z_1, \dots, Z_n = z_n\}$, given the bag $B$, is:

\[ \mathbb{P}\big( Z_1 = z_1, \dots, Z_n = z_n \mid \sigma_n = B \big) = \frac{n_1! \, \cdots \, n_k!}{n!} \quad \text{if } \{\!\{ z_1, \dots, z_n \}\!\} = B, \tag{3.8} \]

and 0 otherwise, where the bag $B$ consists of $k$ distinct elements with multiplicities $n_1, \dots, n_k$ (so that $n_1 + \cdots + n_k = n$). Other OCMs compress more or less drastically but have a similar structure.
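As a quick check of (3.8): if $B = \{\!\{a, a, b\}\!\}$, then $n = 3$, $k = 2$, $n_1 = 2$ and $n_2 = 1$, so each of the three distinguishable orderings $(a, a, b)$, $(a, b, a)$, $(b, a, a)$ receives probability $2! \, 1! / 3! = 1/3$, exactly as if the examples were drawn from the bag without replacement.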

As usual, to use conformal prediction, the starting point is a nonconformity measure, which in this case must be a function $A(\sigma, z)$ whose value is small if $z$ seems very similar to the examples that might be summarized by $\sigma$, and large otherwise. In the base case, without labels (as in Section 2.1), we have to decide whether or not to include a candidate example $z$ in the prediction region. Let $\sigma$ and $Z$ stand for the random summary and example drawn from the one-step kernel. The p-value is computed as:

\[ p(z) = P_n\big( \{ (\sigma, Z) : A(\sigma, Z) \ge A(\sigma, z) \} \,\big|\, \sigma_n \big), \qquad \sigma_n = U_n(\sigma_{n-1}, z). \tag{3.9} \]

Hence, as always, the prediction region collects the candidates whose p-value exceeds the significance level: $\Gamma^{\alpha} = \{ z : p(z) > \alpha \}$. In the structured case, as presented in Section 2.2, the algorithm is exactly the same as in the base case, once we set $z = (x, y)$. Like under the randomness (or exchangeability) assumption, a law of large numbers for backward-looking probabilities holds too, and again we use it to justify confidence in conformal prediction regions. Nevertheless, in this general setting, there is no longer any guarantee that conformal prediction regions are optimal.

3.4.1 Exchangeability-Within-Label

The first example of OCM we introduce is still connected to the exchangeability assumption, but is actually a relaxation of that hypothesis. Suppose only that the examples of each label are exchangeable with each other; the appearance of one label might then change the probabilities for the next label. For instance, as in the work of Riabko (2005), aimed at relaxing the randomness assumption in online pattern recognition, consider the problem of hand-written character recognition in a text. The stream of characters is far from exchangeable (we strongly expect to meet "u" after "q"). However, the model presented here can be close to correct: different instances of the character "a", for example, can be almost exchangeable.

As explained in the book of Vovk et al. (2005), chapter 8, the exchangeability-within-label model is a Mondrian model where the category of an example is the label itself. Mondrian models are really interesting when we are not willing to assume exchangeability across the categories, because the conformal predictions they produce will always be calibrated within categories.

3.4.2 Online Gaussian Linear Model

The online Gaussian linear model overlaps the exchangeability model, in the sense that the assumptions of both models can hold at the same time, but the assumptions of either one can hold without those of the other. It is closely related to the classical Gaussian linear model. The strong result we report in the following is that conformal prediction, under these more general assumptions, leads to the same prediction regions that are used for the classical model.

Consider examples of the form $z_i = (x_i, y_i)$, with the label space being the real line and the object space being the $p$-dimensional Euclidean space $\mathbb{R}^p$. The OCM introduced here is defined by the sequence of summarizing functions:

\[ \Sigma_n(z_1, \dots, z_n) = \Big( \sum_{i=1}^{n} x_i y_i, \; \sum_{i=1}^{n} y_i^2 \Big), \tag{3.10} \]

and the full kernel $R_n$ is the uniform probability distribution over the set of vectors consistent with the summary. Let $\sigma_n$ be $(C, r^2)$, in short. A vector $(y_1', \dots, y_n')$ is consistent with $\sigma_n$ if it belongs to the corresponding consistency set, namely if $\sum_{i=1}^{n} x_i y_i' = C$ and $\sum_{i=1}^{n} (y_i')^2 = r^2$.

This is the intersection of a hyperplane with a sphere (a lower-dimensional sphere or, if they are tangent, a point), and the kernel distributes all its probability uniformly over it.

It is interesting, as Vovk et al. (2005) make clear, that the probability distribution of $(y_1, \dots, y_n)$ under the linear regression statistical model $y_i = \beta^{\top} x_i + \varepsilon_i$, where $\beta$ is the constant vector of regression coefficients and the $\varepsilon_i$ are independent random errors with the same zero-mean normal distribution, does agree with the online Gaussian linear model. Still in the classical framework, it is useful to recall the following theoretical result: given an object $x_{n+1}$, once $\hat y_{n+1}$, the least squares prediction of its label based on the examples summarized in $\sigma_n$, has been computed, the interval containing $y_{n+1}$ with probability $1 - \alpha$ reads:

\[ \hat y_{n+1} \pm t_{n-p}^{(1-\alpha/2)} \, \hat\sigma \, \sqrt{1 + x_{n+1}^{\top} (X_n^{\top} X_n)^{-1} x_{n+1}}, \tag{3.11} \]

with $X_n$ the $n \times p$ matrix of the observed objects, $t_{n-p}^{(1-\alpha/2)}$ the corresponding quantile of Student's $t$-distribution with $n - p$ degrees of freedom, and $\hat\sigma^2 = \frac{1}{n-p} \sum_{i=1}^{n} (y_i - \hat y_i)^2$ the (standard) unbiased estimate of the noise variance. For details, refer to any statistics textbook.
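As a quick numerical companion to (3.11), here is a minimal sketch computing the classical interval with numpy and scipy; it implements the textbook formula above, not code from the cited works.

```python
import numpy as np
from scipy import stats

def gaussian_prediction_interval(X, y, x_new, alpha=0.1):
    """Classical interval (3.11) for the label of x_new under the Gauss linear model."""
    n, p = X.shape
    XtX_inv = np.linalg.inv(X.T @ X)
    beta_hat = XtX_inv @ X.T @ y                 # least squares coefficients
    resid = y - X @ beta_hat
    sigma2_hat = resid @ resid / (n - p)         # unbiased noise variance
    y_hat = x_new @ beta_hat
    half = stats.t.ppf(1 - alpha / 2, n - p) * np.sqrt(
        sigma2_hat * (1 + x_new @ XtX_inv @ x_new))
    return y_hat - half, y_hat + half
```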

The online Gaussian linear model is tightly connected to the classical Gaussian linear model. We now give some results about this close relation (Shafer and Vovk, 2008). First, as just mentioned but still worth repeating, if the examples fulfill the assumptions of the classical Gaussian linear model, then they satisfy the assumptions of the online model. That is, assuming the errors to be i.i.d., with mean zero, a common variance and a normal distribution, implies that, conditional on the summary $\sigma_n$, i.e. on $\sum_i x_i y_i$ and $\sum_i y_i^2$, the vector $(y_1, \dots, y_n)$ is distributed uniformly over the set cut out by the hyperplane and the sphere above. Second, the assumption of the online Gaussian linear model is sufficient to guarantee that

\[ \frac{y_{n+1} - \hat y_{n+1}}{\hat\sigma \, \sqrt{1 + x_{n+1}^{\top} (X_n^{\top} X_n)^{-1} x_{n+1}}} \tag{3.12} \]

has the $t$-distribution with $n - p$ degrees of freedom. Third, suppose $z_1, z_2, \dots$ is an infinite sequence of random examples. Then they satisfy the assumptions of the online Gaussian linear model for every integer $n$ if and only if their joint distribution is a mixture of distributions given by the classical Gaussian linear model, each model in the mixture possibly having a different coefficient vector $\beta$ and a different variance for the errors.

Therefore, it can be proved that, when the nonconformity measure is $A(\sigma_n, z_{n+1}) = |y_{n+1} - \hat y_{n+1}|$, a natural choice, the related conformal prediction region is exactly the classical interval (3.11)! Moreover, it has to be kept in mind that in the online setting these intervals are valid, in the sense that they are right a fraction $1 - \alpha$ of the time even when used on accumulating data (Section 2.4).

3.5 Other Interesting Developments

Full conformal and split conformal methods, combined with basically any fitting procedure in regression, provide finite sample distribution-free predictive inference. We are now going to introduce generalizations and further explorations of the possibilities of CP along different directions.

In the pure online setting, we get immediate feedback (the true label) for every example that we predict. While this scenario is convenient for theoretical studies, in practice one rarely gets the true label for every object immediately. Weak teachers are therefore allowed, which provide the true label with a delay, or sometimes not at all. In this case, we have to accept a weaker (actually, an asymptotic) notion of validity, but conformal confidence predictors adapt and keep at it (Ryabko et al., 2003; Nouretdinov and Vovk, 2006).

Moreover, we may want something more than p-values associated with the various labels to which a new observation could belong. We might be interested in the problem of probability forecasting: we observe pairs of objects and labels and, after observing the $n$th object $x_n$, the goal is to give a probability distribution for its label. This is clearly a more challenging task (Vovk et al. (2005), chapter 5), so a suitable method is necessary to handle the reliability-resolution trade-off carefully. A class of algorithms called Venn predictors (Vovk et al., 2004) satisfies the criterion for validity when the label space is finite, while adaptations to the context of regression, i.e. with continuous labels, have appeared only among recent developments, namely Nouretdinov et al. (2018) and, in a different way, following the work of Shen et al. (2018), Vovk et al. (2017). For many underlying algorithms, Venn predictors (like conformal methods in general) are computationally inefficient. Therefore Lambrou et al. (2012), and as an extension Lambrou et al. (2015), combine Venn predictors and the inductive approach, while Vovk et al. (2018) introduce cross-conformal predictive systems.

Online compression models are not the only framework where CP does not require examples to be exchangeable. Dunn and Wasserman (2018) extend the conformal method to construct valid distribution-free prediction sets when there are random effects, and Barber et al. (2019a) to handle weighted exchangeable data, as in the setting of covariate shift (Shimodaira, 2000; Chen et al., 2016b). Dashevskiy and Luo (2011) robustify the conformal inference method by extending its validity to settings with dependent data. They propose an interesting blocking procedure for time series data, whose theoretical performance guarantees are provided in Chernozhukov et al. (2018).

We now describe in more detail a couple of other recent advances.

3.5.1 Normalized Nonconformity Scores

In the conformal algorithms seen so far, the width of the prediction interval is roughly immune to the test object $x_{n+1}$ (Figure 4, left). This property is desirable if the spread of the residual $y - \hat\mu(x)$ does not vary substantially as $x$ varies. In some scenarios, however, this will not be true, and we wish the conformal bands to adapt correspondingly. It is indeed possible to obtain individual bounds for the new example which take into account the difficulty of predicting a certain $y_{n+1}$. The rationale for this, from a conformal prediction standpoint, is that if two examples have the same nonconformity scores using (2.8), but one is expected to be predicted more accurately than the other, then the former is actually stranger (more nonconforming) than the latter. We want the resulting prediction intervals to be smaller for objects that are deemed easy to predict and larger for harder ones.

To reach this goal, normalized nonconformity functions come into play (Figure 4, right), that is:

\[ R_i = \frac{|y_i - \hat\mu(x_i)|}{\hat\sigma_i}, \tag{3.13} \]

where the absolute error on the $i$th example is scaled by $\hat\sigma_i$, an estimate of the expected accuracy of the underlying model; see, e.g., Papadopoulos and Haralambous (2011), and Papadopoulos et al. (2011). Choosing (3.13), the confidence predictor (3.7) becomes:

\[ C^{\text{split}}(x_{n+1}) = \big[\, \hat\mu(x_{n+1}) - \hat\sigma_{n+1} d, \; \hat\mu(x_{n+1}) + \hat\sigma_{n+1} d \,\big]. \tag{3.14} \]

As a consequence, the resulting predictive regions are in most cases much tighter than those produced by the simple conformal methods.

Figure 4: Standard conformal predictors do not account for heteroskedasticity in the data distribution. In such a case, one would expect the length of the output interval to be an increasing function of the variance of the response, which carries more information about the target label. To tackle this problem, locally-weighted conformal inference has been introduced. Source: Lei et al. (2018).

Using locally-weighted residuals, as in (3.13), the validity and accuracy properties of the conformal methods, both finite sample and asymptotic, again carry over.

As said, $\hat\sigma_i$ is an estimate of the difficulty of predicting the label $y_i$, and a wide choice of such accuracy estimates is available in the literature. A common practice is to train another model to predict errors, as in Papadopoulos and Haralambous (2010). In more detail, once $\hat\mu$ has been trained and the residual errors computed, a different model $\hat\rho$ is fit using the objects and the (log) residuals. Then, $\hat\sigma_i$ can be set equal to $\exp(\hat\rho(x_i)) + \beta$, where $\beta$ is a sensitivity parameter that regulates the impact of normalization.
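The following minimal sketch combines (3.13)-(3.14) with the error-model normalization just described, in the spirit of Papadopoulos and Haralambous (2010); the helper names, the 50/50 split, and the small constant guarding the logarithm are our own choices, not the authors'.

```python
import numpy as np

def normalized_split_conformal(X, y, x_new, model, err_model,
                               alpha=0.1, beta=0.1):
    """Split conformal with normalized scores (3.13)-(3.14)."""
    n = len(y)
    train, cal = np.arange(n // 2), np.arange(n // 2, n)
    model.fit(X[train], y[train])
    # Second model predicts the log absolute residuals of the first.
    log_res = np.log(np.abs(y[train] - model.predict(X[train])) + 1e-8)
    err_model.fit(X[train], log_res)
    sigma_cal = np.exp(err_model.predict(X[cal])) + beta
    scores = np.sort(np.abs(y[cal] - model.predict(X[cal])) / sigma_cal)
    m = len(cal)
    k = int(np.ceil((m + 1) * (1 - alpha)))
    d = scores[min(k, m) - 1]
    center = model.predict(x_new.reshape(1, -1))[0]
    sigma_new = np.exp(err_model.predict(x_new.reshape(1, -1))[0]) + beta
    return center - sigma_new * d, center + sigma_new * d
```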

Other approaches use, in a more direct way, properties of the underlying model $\hat\mu$; this is the case of Papadopoulos et al. (2008). In that paper, they consider conformal prediction with the $k$-NNR method, which computes the weighted average of the $k$ nearest examples, and as a measure of expected accuracy they simply use the distance of the examined example from its nearest neighbours. Namely,

\[ \hat\sigma_i = \sum_{x_j \in N_k(x_i)} d(x_i, x_j), \tag{3.15} \]

where $N_k(x_i)$ denotes the set of the $k$ nearest neighbours of $x_i$.

The nearer an example is to its neighbours, the more accurate its prediction is indeed expected to be.

3.5.2 High-Dimensional Regression

Only a few works in the literature deal with prediction sets in high-dimensional regression, where the number of covariates $p$ is large, possibly much larger than the sample size $n$. Current high-dimensional inference methods make strong assumptions, while little is known about their robustness against model misspecification. Common approaches in this setting include greedy methods, like forward stepwise regression, and $\ell_1$-penalty based methods, like the lasso. There is an enormous amount of work dedicated to studying various properties of these methods but, to our knowledge, much less on set predictors.

In high-dimensional problems, estimators are inevitably more complicated, and so the corresponding conformal prediction sets are much harder to characterize. On the other hand, conformal prediction is arguably most useful in exactly these scenarios: model assumptions such as sparsity and low intrinsic dimensionality are often not true, and the inferential tools developed under such hypotheses are often invalid under model misspecification.

Without any doubt, the most common way to proceed is based on combining the principle of conformal prediction with the $\ell_1$-penalized least squares estimator, and an extensive literature has developed on the topic over time. Hebiri (2010) describes an approximation of the conformalized lasso estimator, a partial conformalization indeed. This approximation leads to a big speedup over the original conformal prediction method built on top of the lasso, but loses the key property of conformal inference: the model-free coverage guarantee. Steinberger and Leeb (2016) analyze the jackknife conformal method in the high-dimensional setting, but asymptotic validity does not come for free and requires some assumptions on the base estimator (of the regression parameters). Meanwhile, Chen et al. (2016) propose a method that explores a smaller search space. Computational costs are thereby reduced by a constant fraction, but it still evaluates the prediction set on a grid of points. Lastly, as already mentioned, Lei (2017) develops an algorithm that efficiently and exactly computes the conformal prediction set for the lasso, in a way analogous, to a certain extent, to the RRCM (Section 2.2.2).

More generally, Lei et al. (2018) regard the simple, computationally efficient, and yet powerful split conformal method as the main way to approach high-dimensional problems. In their work, the empirical properties of conformal methods are examined under different simulated data settings, from a simple (linear and classical) setup to a heteroskedastic and heavy-tailed one with correlated features. In particular, they compare conformal prediction based on the ordinary linear regression estimator with classical parametric prediction intervals for linear models. In high-dimensional problems, the full conformal interval outperforms the parametric one in terms of both length and coverage across all settings, due to the poor accuracy of linear regression estimators when $p$ is large; even the use of ridge regression does not change things. Moreover, looking at the different implementations of conformal prediction, the split method exhibits a clear computational advantage over the full one, while guaranteeing similar performance. With such a dramatically reduced computational cost, as already mentioned but even more precious here, adopting split conformal in combination with computationally heavy estimators that involve cross-validation or the bootstrap is considered the best approach.

In the same work, they cast light on an interesting topic: how conformal inference can help with model-free variable selection. The aim is to construct model-free, prediction-based inferential statements about the importance of each covariate in the prediction model for $Y$ given $X$. To do so, they propose a notion of variable importance called leave-one-covariate-out (or LOCO) inference. A random variable $\Delta_j$, one for each covariate $j$, is suitably defined to measure the increase in prediction error due to not having access to that covariate in the data set. Inferential statements about variable importance are then carried out based on these variables, as in the sketch below.
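As a rough illustration of the idea (a simplified split-sample version of the excess-error measure, not the authors' exact procedure), the variables $\Delta_j$ can be approximated as follows.

```python
import numpy as np

def loco_importance(X_train, y_train, X_test, y_test, model, j):
    """Excess test error when covariate j is withheld (LOCO-style measure)."""
    model.fit(X_train, y_train)
    err_full = np.abs(y_test - model.predict(X_test))
    model.fit(np.delete(X_train, j, axis=1), y_train)
    err_drop = np.abs(y_test - model.predict(np.delete(X_test, j, axis=1)))
    # Positive values indicate that covariate j helps prediction.
    return err_drop - err_full      # one Delta_j value per test example
```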

3.5.3 Functional Prediction Bands

Functional Data Analysis (FDA) is a branch of statistics that analyses data existing over a continuous domain, broadly speaking functions. Functional data are intrinsically infinite dimensional. This is a rich source of information, which brings many opportunities for research and data analysis, and a powerful modeling tool. At the same time, however, the high or infinite dimensional structure of the data poses challenges both for theory and for computation. FDA has therefore been the focus of much research effort in the statistics and machine learning communities over the last decade.

There are few publications in the conformal prediction literature that deal with functional data. We give just some details about a simple scenario that could reasonably be considered typical, with the work of Lei et al. (2015) as our guide. The sequence of examples now consists of functions $y_1, y_2, \dots$. The definition of validity for a confidence predictor becomes:

\[ \mathbb{P}\big( y_{n+1} \in C_n \big) \ge 1 - \alpha, \tag{3.16} \]

where $C_n$ is the prediction set built from the first $n$ functions.

Then, as always, to apply conformal prediction, a nonconformity measure is needed. A fair choice might be:

\[ A\big( \{\!\{ y_1, \dots, y_n \}\!\}, y \big) = \lVert y - \bar y \rVert^2 = \int \big( y(t) - \bar y(t) \big)^2 \, dt, \tag{3.17} \]

where $\bar y$ is the average of the augmented data set. Due to the dimension of the problem, an inductive approach is more desirable. Therefore, once the nonconformity scores are computed for the example functions of the calibration set, the conformal prediction set is given by all the functions whose score is smaller than the suitable quantile $d$ of the calibration scores.
Then, one more step is mandatory. Given a conformal prediction set $C$, the inherent prediction bands are defined in terms of lower and upper bounds:

\[ \ell(t) = \inf_{y \in C} y(t), \qquad u(t) = \sup_{y \in C} y(t). \tag{3.18} \]

Consequently, thanks to provable conformal properties,

\[ \mathbb{P}\big( \ell(t) \le y_{n+1}(t) \le u(t) \ \text{for all } t \big) \ge 1 - \alpha. \tag{3.19} \]

However, $C$ could contain very disparate elements, hence no closed form for $\ell$ and $u$ is available in general, and these bounds may be hard to compute.

To sum up, the key features needed to handle functional data efficiently are the nonconformity measure and a proper way to make use of the prediction set in order to extract useful information. The question is still an open challenge, but the topic stands out as a natural direction for conformal prediction to grow and face bigger problems.

An intermediate work in this sense is Lei et al. (2015), which studies prediction and visualization of functional data, paying specific attention to finite sample guarantees. As far as we know, it is the only analysis so far that applies conformal prediction to the functional setting. In particular, its focal point is exploratory analysis, exploiting conformal techniques to compute clustering trees and simultaneous prediction bands, that is, for a given confidence level $1 - \alpha$, bands that cover a random curve drawn from the underlying process (as in (3.16)).

However, satisfying (this formulation of) validity can be a really tough task in the functional setting. Since their focus is on the main structural features of the curve, they lower the bar and restate the concept in a revised form, that is:

\[ \mathbb{P}\big( \pi(y_{n+1}) \in C_n \big) \ge 1 - \alpha, \tag{3.20} \]

where $\pi$ is a mapping into a finite dimensional function space.

The prediction bands they propose are constructed, as (3.20) lets us know in advance, adopting a finite dimensional projection approach. Once a basis of $p$ functions $\phi_1, \dots, \phi_p$ is chosen, be it a fixed one, like the Fourier basis, or a data-driven one, such as functional principal components, the vector of projection coefficients $\xi_i \in \mathbb{R}^p$ is computed for each of the examples in the proper training set. Then, the scores measure how different the projection coefficients of a calibration example are with respect to the ones of the training set; that is, for the $i$th calibration example, $R_i = A(\{\xi_j\}_j, \xi_i)$. Let:

\[ \Xi^{\alpha} = \big\{ \xi \in \mathbb{R}^{p} : A(\{\xi_j\}_j, \xi) \le d \big\}, \tag{3.21} \]

where $d$ is the usual calibration quantile of the scores,

and

\[ C_n = \Big\{ y = \sum_{j=1}^{p} \xi_j \phi_j : \xi \in \Xi^{\alpha} \Big\}. \tag{3.22} \]

As a consequence, $C_n$ is valid, i.e. (3.20) holds.

Exploiting the finite dimensional projection, the nonconformity measure handles vectors, so all the experience gathered in these two chapters gives a hand. A density estimator is usually selected to assess conformity. Nevertheless, the choice of the nonconformity measure is critical, in the sense that an unsuitable one may give a lot of trouble when computing the prediction set; this is the case, for instance, of kernel density estimators. In their work, the first $p$ elements of the eigenbasis, i.e. the eigenfunctions of the autocovariance operator, constitute the basis, while the nonconformity measure is (the inverse of) a Gaussian mixture density estimator. In this setup, approximations are available and lead to the results they obtain.
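To fix ideas, here is a minimal sketch of the projection approach for curves sampled on a common grid; for brevity it extracts the eigenbasis via an SVD and replaces the Gaussian mixture density with the distance from the mean coefficient vector, and all names are ours, not from Lei et al. (2015).

```python
import numpy as np

def functional_projection_conformal(curves, p=3, alpha=0.1):
    """Conformal acceptance rule for curves via projection on p eigenfunctions.

    `curves` is an (n, T) array: n functions sampled on a common grid.
    """
    n_train = len(curves) // 2
    train, cal = curves[:n_train], curves[n_train:]
    centered = train - train.mean(axis=0)
    # Eigenfunctions of the empirical autocovariance via SVD of the data.
    _, _, Vt = np.linalg.svd(centered, full_matrices=False)
    basis = Vt[:p]                              # (p, T) eigenbasis
    xi_train = train @ basis.T                  # projection coefficients
    xi_cal = cal @ basis.T
    center = xi_train.mean(axis=0)
    scores = np.sort(np.linalg.norm(xi_cal - center, axis=1))
    m = len(xi_cal)
    k = int(np.ceil((m + 1) * (1 - alpha)))
    d = scores[min(k, m) - 1]                   # coefficient-space radius
    # A new curve's projection xi is accepted iff ||xi - center|| <= d.
    return basis, center, d
```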

Though their work can be deemed remarkable, the approach simplifies the scenario considerably. It is a step forward in extending conformal prediction to functional data, but not a complete solution. The extension of CP to FDA is therefore still considered an important open question.

Acknowledgments

The authors acknowledge financial support from: ACCORDO Quadro ASI-POLIMI “Attività di Ricerca e Innovazione” n. 2018-5-HH.0, collaboration agreement between the Italian Space Agency and Politecnico di Milano; the European Research Council, ERC grant agreement no 336155-project COBHAM “The role of consumer behaviour and heterogeneity in the integrated assessment of energy and climate policies”; the “Safari Njema Project - From informal mobility to mobility policies through big data analysis”, funded by Polisocial Award 2018 - Politecnico di Milano.

References

  • Bahadur and Savage (1956) Bahadur, R. R. and Savage, L. J. (1956). The nonexistence of certain statistical procedures in nonparametric problems. The Annals of Mathematical Statistics, 27(4):1115–1122.
  • Balasubramanian et al. (2014) Balasubramanian, V., Ho, S.-S., and Vovk, V. (2014). Conformal prediction for reliable machine learning: theory, adaptations and applications. Newnes.
  • Barber et al. (2019a) Barber, R. F., Candes, E. J., Ramdas, A., and Tibshirani, R. J. (2019a). Conformal prediction under covariate shift. arXiv preprint arXiv:1904.06019.
  • Barber et al. (2019) Barber, R. F., Candes, E. J., Ramdas, A., and Tibshirani, R. J. (2019). The limits of distribution-free conditional predictive inference. arXiv preprint arXiv:1903.04684.
  • Barber et al. (2019b) Barber, R. F., Candes, E. J., Ramdas, A., and Tibshirani, R. J. (2019b). Predictive inference with the jackknife+. arXiv preprint arXiv:1905.02928.
  • Burnaev and Vovk (2014) Burnaev, E. and Vovk, V. (2014). Efficiency of conformalized ridge regression. In Conference on Learning Theory, pages 605–622.
  • Capuccini et al. (2015) Capuccini, M., Carlsson, L., Norinder, U., and Spjuth, O. (2015). Conformal prediction in Spark: large-scale machine learning with confidence. In 2015 IEEE/ACM 2nd International Symposium on Big Data Computing (BDC), pages 61–67. IEEE.
  • Carlsson et al. (2015) Carlsson, L., Ahlberg, E., Boström, H., Johansson, U., and Linusson, H. (2015). Modifications to p-values of conformal predictors. In International Symposium on Statistical Learning and Data Sciences, pages 251–259. Springer.
  • Chen et al. (2018) Chen, W., Chun, K.-J., and Barber, R. F. (2018). Discretized conformal prediction for efficient distribution-free inference. Stat, 7(1):e173.
  • Chen et al. (2016) Chen, W., Wang, Z., Ha, W., and Barber, R. F. (2016). Trimmed conformal prediction for high-dimensional models. arXiv preprint arXiv:1611.09933.
  • Chen et al. (2016b) Chen, X., Monfort, M., Liu, A., and Ziebart, B. D. (2016b). Robust covariate shift regression. In Artificial Intelligence and Statistics, pages 1270–1279.
  • Chernozhukov et al. (2018) Chernozhukov, V., Wuthrich, K., and Zhu, Y. (2018). Exact and robust conformal inference methods for predictive machine learning with dependent data. arXiv preprint arXiv:1802.06300.
  • Dashevskiy and Luo (2011) Dashevskiy, M. and Luo, Z. (2011). Time series prediction with performance guarantee. IET Communications, 5(8):1044–1051.
  • Devetyarov and Nouretdinov (2010) Devetyarov, D. and Nouretdinov, I. (2010). Prediction with confidence based on a random forest classifier. In IFIP International Conference on Artificial Intelligence Applications and Innovations, pages 37–44. Springer.
  • Diaconis and Freedman (1986) Diaconis, P. and Freedman, D. (1986). On the consistency of Bayes estimates. The Annals of Statistics, pages 1–26.
  • Donoho (1988) Donoho, D. L. (1988). One-sided inference about functionals of a density. The Annals of Statistics, 16(4):1390–1420.
  • Dunn and Wasserman (2018) Dunn, R. and Wasserman, L. (2018). Distribution-free prediction sets with random effects. arXiv preprint arXiv:1809.07441.
  • Gammerman et al. (1998) Gammerman, A., Vovk, V., and Vapnik, V. (1998). Learning by transduction. In Proceedings of the Fourteenth Conference on Uncertainty in Artificial Intelligence, pages 148–155. Morgan Kaufmann Publishers Inc.
  • Hebiri (2010) Hebiri, M. (2010). Sparse conformal predictors. Statistics and Computing, 20(2):253–266.
  • Hewitt and Savage (1955) Hewitt, E. and Savage, L. J. (1955). Symmetric measures on Cartesian products. Transactions of the American Mathematical Society, 80(2):470–501.
  • Ho and Wechsler (2004) Ho, S.-S. and Wechsler, H. (2004). Learning from data streams via online transduction. In Ma et al., pages 45–52.
  • Johansson et al. (2015) Johansson, U., Ahlberg, E., Boström, H., Carlsson, L., Linusson, H., and Sönströd, C. (2015). Handling small calibration sets in Mondrian inductive conformal regressors. In International Symposium on Statistical Learning and Data Sciences, pages 271–280. Springer.
  • Johansson et al. (2014) Johansson, U., Boström, H., Löfström, T., and Linusson, H. (2014). Regression conformal prediction with random forests. Machine Learning, 97(1-2):155–176.
  • Lambrou et al. (2015) Lambrou, A., Nouretdinov, I., and Papadopoulos, H. (2015). Inductive Venn prediction. Annals of Mathematics and Artificial Intelligence, 74(1-2):181–201.
  • Lambrou et al. (2012) Lambrou, A., Papadopoulos, H., Nouretdinov, I., and Gammerman, A. (2012). Reliable probability estimates based on support vector machines for large multiclass datasets. In IFIP International Conference on Artificial Intelligence Applications and Innovations, pages 182–191. Springer.
  • Lei (2017) Lei, J. (2017). Fast exact conformalization of lasso using piecewise linear homotopy. arXiv preprint arXiv:1708.00427.
  • Lei et al. (2018) Lei, J., G'Sell, M., Rinaldo, A., Tibshirani, R. J., and Wasserman, L. (2018). Distribution-free predictive inference for regression. Journal of the American Statistical Association, 113(523):1094–1111.
  • Lei et al. (2015) Lei, J., Rinaldo, A., and Wasserman, L. (2015). A conformal prediction approach to explore functional data. Annals of Mathematics and Artificial Intelligence, 74(1-2):29–43.
  • Lei et al. (2013) Lei, J., Robins, J., and Wasserman, L. (2013). Distribution-free prediction sets. Journal of the American Statistical Association, 108(501):278–287.
  • Lei and Wasserman (2014) Lei, J. and Wasserman, L. (2014). Distribution-free prediction bands for non-parametric regression. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 76(1):71–96.
  • Linusson et al. (2014) Linusson, H., Johansson, U., Boström, H., and Löfström, T. (2014). Efficiency comparison of unstable transductive and inductive conformal classifiers. In IFIP International Conference on Artificial Intelligence Applications and Innovations, pages 261–270. Springer.
  • Melluish et al. (2001) Melluish, T., Saunders, C., Nouretdinov, I., and Vovk, V. (2001). Comparing the Bayes and typicalness frameworks. In European Conference on Machine Learning, pages 360–371. Springer.
  • Melluish et al. (1999) Melluish, T., Vovk, V., and Gammerman, A. (1999). Transduction for regression estimation with confidence. In Neural Information Processing Systems, NIPS'99.
  • Nouretdinov et al. (2001a) Nouretdinov, I., Melluish, T., and Vovk, V. (2001a). Ridge regression confidence machine. In ICML, pages 385–392.
  • Nouretdinov et al. (2018) Nouretdinov, I., Volkhonskiy, D., Lim, P., Toccaceli, P., and Gammerman, A. (2018). Inductive Venn-Abers predictive distribution. Proceedings of Machine Learning Research, 91:1–22.
  • Nouretdinov and Vovk (2006) Nouretdinov, I. and Vovk, V. (2006). Criterion of calibration for transductive confidence machine with limited feedback. Theoretical Computer Science, 364(1):3–9.
  • Nouretdinov et al. (2001b) Nouretdinov, I., Vovk, V., Vyugin, M., and Gammerman, A. (2001b). Pattern recognition and density estimation under the general i.i.d. assumption. In International Conference on Computational Learning Theory, pages 337–353. Springer.
  • Papadopoulos (2008) Papadopoulos, H. (2008). Inductive conformal prediction: Theory and application to neural networks. In Tools in Artificial Intelligence. InTech.
  • Papadopoulos et al. (2008) Papadopoulos, H., Gammerman, A., and Vovk, V. (2008). Normalized nonconformity measures for regression conformal prediction. In Proceedings of the IASTED International Conference on Artificial Intelligence and Applications (AIA 2008), pages 64–69.
  • Papadopoulos and Haralambous (2010) Papadopoulos, H. and Haralambous, H. (2010). Neural networks regression inductive conformal predictor and its application to total electron content prediction. In International Conference on Artificial Neural Networks, pages 32–41. Springer.
  • Papadopoulos and Haralambous (2011) Papadopoulos, H. and Haralambous, H. (2011). Reliable prediction intervals with regression neural networks. Neural Networks, 24(8):842–851.
  • Papadopoulos et al. (2002a) Papadopoulos, H., Proedrou, K., Vovk, V., and Gammerman, A. (2002a). Inductive confidence machines for regression. In European Conference on Machine Learning, pages 345–356. Springer.
  • Papadopoulos et al. (2002b) Papadopoulos, H., Vovk, V., and Gammerman, A. (2002b). Qualified prediction for large data sets in the case of pattern recognition. In ICMLA, pages 159–163.
  • Papadopoulos et al. (2011) Papadopoulos, H., Vovk, V., and Gammerman, A. (2011). Regression conformal prediction with nearest neighbours. Journal of Artificial Intelligence Research, 40:815–840.
  • Ramsay and Silverman (2005) Ramsay, J. and Silverman, B. (2005). Functional Data Analysis. Springer Series in Statistics. Springer.
  • Riabko (2005) Riabko, D. (2005). On the flexibility of theoretical models for pattern recognition. PhD thesis.
  • Ryabko et al. (2003) Ryabko, D., Vovk, V., and Gammerman, A. (2003). Online region prediction with real teachers. Submitted for publication.
  • Saunders et al. (1999) Saunders, C., Gammerman, A., and Vovk, V. (1999). Transduction with confidence and credibility. In Proceedings of the International Joint Conference on Artificial Intelligence, volume 2, pages 722–726.
  • Saunders et al. (2000) Saunders, C., Gammerman, A., and Vovk, V. (2000). Computationally efficient transductive machines. In International Conference on Algorithmic Learning Theory, pages 325–337. Springer.
  • Shafer and Vovk (2008) Shafer, G. and Vovk, V. (2008). A tutorial on conformal prediction. Journal of Machine Learning Research, 9:371–421.
  • Shen et al. (2018) Shen, J., Liu, R. Y., and Xie, M.-g. (2018). Prediction with confidence - a general framework for predictive inference. Journal of Statistical Planning and Inference, 195:126–140.
  • Shimodaira (2000) Shimodaira, H. (2000). Improving predictive inference under covariate shift by weighting the log-likelihood function. Journal of Statistical Planning and Inference, 90(2):227–244.
  • Steinberger and Leeb (2016) Steinberger, L. and Leeb, H. (2016). Leave-one-out prediction intervals in linear regression models with many variables. arXiv preprint arXiv:1602.05801.
  • Steinberger and Leeb (2018) Steinberger, L. and Leeb, H. (2018). Conditional predictive inference for high-dimensional stable algorithms. arXiv preprint arXiv:1809.01412.
  • Valiant (1984) Valiant, L. G. (1984). A theory of the learnable. Communications of the ACM, 27(11):1134–1142.
  • Vapnik (1998) Vapnik, V. (1998). Statistical Learning Theory. Wiley, New York.
  • Vovk (2002) Vovk, V. (2002). Online confidence machines are well-calibrated. In Proceedings of the 43rd Annual IEEE Symposium on Foundations of Computer Science, pages 187–196. IEEE.
  • Vovk (2012) Vovk, V. (2012). Conditional validity of inductive conformal predictors. In Asian Conference on Machine Learning, pages 475–490.
  • Vovk (2013) Vovk, V. (2013). Transductive conformal predictors. In IFIP International Conference on Artificial Intelligence Applications and Innovations, pages 348–360. Springer.
  • Vovk (2015) Vovk, V. (2015). Cross-conformal predictors. Annals of Mathematics and Artificial Intelligence, 74(1-2):9–28.
  • Vovk et al. (2016) Vovk, V., Fedorova, V., Nouretdinov, I., and Gammerman, A. (2016). Criteria of efficiency for conformal prediction. In Conformal and Probabilistic Prediction with Applications, pages 23–39. Springer International Publishing.
  • Vovk et al. (2005) Vovk, V., Gammerman, A., and Shafer, G. (2005). Algorithmic Learning in a Random World. Springer.
  • Vovk et al. (2003) Vovk, V., Lindsay, D., Nouretdinov, I., and Gammerman, A. (2003). Mondrian confidence machine. Technical report.
  • Vovk et al. (2009) Vovk, V., Nouretdinov, I., Gammerman, A., et al. (2009). Online predictive linear regression. The Annals of Statistics, 37(3):1566–1590.
  • Vovk et al. (2018) Vovk, V., Nouretdinov, I., Manokhin, V., and Gammerman, A. (2018). Cross-conformal predictive distributions. In Conformal and Probabilistic Prediction and Applications, pages 37–51.
  • Vovk et al. (2004) Vovk, V., Shafer, G., and Nouretdinov, I. (2004). Self-calibrating probability forecasting. In Advances in Neural Information Processing Systems, pages 1133–1140.
  • Vovk et al. (2017) Vovk, V., Shen, J., Manokhin, V., and Xie, M. (2017). Nonparametric predictive distributions based on conformal prediction. Machine Learning, pages 1–30.
  • Wang et al. (2018) Wang, D., Wang, P., and Shi, J. (2018). A fast and efficient conformal regressor with regularized extreme learning machine. Neurocomputing, 304:1–11.