On the Relationship between Data Efficiency and Error for Uncertainty Sampling

06/15/2018 ∙ by Stephen Mussmann, et al.

While active learning offers potential cost savings, the actual data efficiency---the reduction in amount of labeled data needed to obtain the same error rate---observed in practice is mixed. This paper poses a basic question: when is active learning actually helpful? We provide an answer for logistic regression with the popular active learning algorithm, uncertainty sampling. Empirically, on 21 datasets from OpenML, we find a strong inverse correlation between data efficiency and the error rate of the final classifier. Theoretically, we show that for a variant of uncertainty sampling, the asymptotic data efficiency is within a constant factor of the inverse error rate of the limiting classifier.




1 Introduction

Active learning offers potential label cost savings by adaptively choosing the data points to label. Over the past two decades, a large number of active learning algorithms have been proposed (Seung et al., 1992; Lewis & Gale, 1994; Freund et al., 1997; Tong & Koller, 2001; Roy & McCallum, 2001; Brinker, 2003; Hoi et al., 2009). Much of the community’s focus is on comparing the merits of different active learning algorithms (Schein & Ungar, 2007; Yang & Loog, 2016).

This paper is motivated by the observation that even for a fixed active learning algorithm, its effectiveness varies widely across datasets. Tong & Koller (2001) show a dataset where uncertainty sampling achieves 5x data efficiency, meaning that active learning achieves the same error rate as random sampling with one-fifth of the labeled data. For this same algorithm, different datasets yield a mixed bag of results: worse performance than random sampling (Yang & Loog, 2016), no gains (Schein & Ungar, 2007), gains of 2x (Tong & Koller, 2001), and gains of 3x (Brinker, 2003).

In what cases and to what extent is active learning superior to naive random sampling? This is an important question to address for active learning to be effectively used in practice. In this paper, we provide both empirical and theoretical answers for the case of logistic regression and uncertainty sampling, “the simplest and most commonly used” active learning algorithm in practice (Settles, 2010) and the best algorithm given in the benchmark experiments of Yang & Loog (2016).

Empirically, in Section 3, we study 21 binary classification datasets from OpenML. We find that the data efficiency for uncertainty sampling and the inverse error achieved by training on the full dataset are strongly correlated, as measured by both the Pearson correlation and the Spearman rank correlation.

Theoretically, in Section 4, we analyze a two-stage variant of uncertainty sampling, which first learns a rough classifier via random sampling and then samples near the decision boundary of that classifier. We show that the asymptotic data efficiency of this algorithm compared to random sampling is within a small constant factor of the inverse limiting error rate. The argument follows by comparing the Fisher information of the passive and active estimators, formalizing the intuition that in low error regimes, random sampling wastes many samples on points that the model is already confident about. Note that this result is different in kind from the 1/ε versus log(1/ε) rates often studied in statistical active learning theory (Balcan et al., 2009; Hanneke, 2014), which focuses on convergence rates as opposed to the dependence on error. Together, our empirical and theoretical results provide a strong link between the data efficiency and the limiting error rate.

2 Setup

Consider a binary classification problem where the goal is to learn a predictor from input x to output y ∈ {−1, +1} that has low expected error (0-1 loss) with respect to an underlying data distribution. In pool-based active learning, we start with a set U of unlabeled input points. An active learning algorithm queries a point x ∈ U, receives its label y, and updates the model based on (x, y). A passive learning algorithm (random sampling) simply samples n points from U uniformly at random without replacement, queries their labels, and trains a model on this data.
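As a concrete sketch of this setup, the following code (our own illustration, not from the paper) contrasts the passive and active loops; `fit`, `oracle`, and `pick` are hypothetical callables supplied by the user:

```python
import random

# Sketch of pool-based passive vs. active learning (illustrative names).
# oracle(x) returns the label of x; fit(labeled) trains a model on
# (x, y) pairs; pick(model, pool) returns the index of the next query.

def passive_learning(pool_x, oracle, fit, n_labels, seed=0):
    """Random sampling: draw n_labels points uniformly without replacement."""
    rng = random.Random(seed)
    idxs = rng.sample(range(len(pool_x)), n_labels)
    labeled = [(pool_x[i], oracle(pool_x[i])) for i in idxs]
    return fit(labeled)

def active_learning(pool_x, oracle, fit, pick, n_labels):
    """Generic active loop: pick a point, query its label, refit the model."""
    remaining = list(range(len(pool_x)))
    labeled, model = [], None  # pick must handle model=None on the first call
    for _ in range(n_labels):
        j = pick(model, [pool_x[i] for i in remaining])
        i = remaining.pop(j)
        labeled.append((pool_x[i], oracle(pool_x[i])))  # query the label
        model = fit(labeled)                            # update the model
    return model
```

Any concrete strategy (such as uncertainty sampling below) is then just a choice of `pick`.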

2.1 Logistic Regression

In this work, we focus on logistic regression, where p_θ(y | x) = σ(y θᵀx), θ is a weight vector, and σ(t) = 1/(1 + e^{−t}) is the logistic function. A weight vector θ characterizes a predictor f_θ(x) = sign(θᵀx). Given n labeled data points (gathered either passively or actively), the maximum likelihood estimate is θ̂_n = argmax_θ Σᵢ log p_θ(yᵢ | xᵢ). Define the limiting parameters θ* as the analogous quantity on the population: θ* = argmax_θ E[log p_θ(y | x)]. A central quantity in this work is the limiting error, denoted Err = P(f_{θ*}(x) ≠ y). Note that we are interested in the 0-1 loss (as captured by Err), though the estimator minimizes the logistic loss.

2.2 Uncertainty Sampling

In this work, we focus on “the simplest and most commonly used query framework” (Settles, 2010), uncertainty sampling (Lewis & Gale, 1994). This is closely related to margin-based active learning in the theoretical literature (Balcan et al., 2007).

Uncertainty sampling first samples n_seed data points randomly from U, labels them, and uses them to train an initial model. For each of the next n − n_seed iterations, it chooses the data point from U that the current model is most uncertain about (i.e., closest to the decision boundary), queries its label, and retrains the model on all labeled data points collected so far. See Algorithm 1 for the pseudocode (note we will change this slightly for the theoretical analysis).

  Input: Probabilistic model p_θ, unlabeled pool U, seed size n_seed, budget n
  Randomly sample n_seed points without replacement from U and call them S.
  for each x in S do
     Query x to get label y
  end for
  Train θ on the labeled seed data
  for n − n_seed iterations do
     x ← the unlabeled point in U closest to the decision boundary of θ
     Query x to get label y
     Retrain θ on all labeled data collected so far
  end for
  Return θ
Algorithm 1 Uncertainty Sampling
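An executable sketch of Algorithm 1 for logistic regression (our own illustration; the helper names, optimizer, and hyperparameters are ours):

```python
import numpy as np

# Sketch of uncertainty sampling for logistic regression: random seed set,
# then repeatedly label the point closest to the current decision boundary.

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

def fit_logistic(X, y, lr=0.5, iters=500):
    theta = np.zeros(X.shape[1])
    for _ in range(iters):
        g = ((sigmoid(-y * (X @ theta)) * y)[:, None] * X).mean(axis=0)
        theta += lr * g
    return theta

def uncertainty_sampling(X_pool, oracle, n_seed, n_total, rng):
    M = len(X_pool)
    labeled = list(rng.choice(M, size=n_seed, replace=False))  # random seed set
    y = {i: oracle(X_pool[i]) for i in labeled}
    theta = fit_logistic(X_pool[labeled], np.array([y[i] for i in labeled]))
    for _ in range(n_total - n_seed):
        unl = np.setdiff1d(np.arange(M), labeled)
        # most uncertain point = smallest |theta^T x|, i.e. closest to boundary
        i = unl[np.argmin(np.abs(X_pool[unl] @ theta))]
        y[i] = oracle(X_pool[i])  # query its label
        labeled.append(i)
        theta = fit_logistic(X_pool[labeled], np.array([y[j] for j in labeled]))
    return theta
```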

2.3 Data Efficiency

Let θ̂_n^passive and θ̂_n^active be the two estimators obtained by performing passive learning (random sampling) and active learning (uncertainty sampling), respectively. To compare these two estimators, we use data efficiency (also known as statistical relative efficiency (van der Vaart, 1998) or the sample complexity ratio), which is the reduction in the number of labeled points that active learning requires to achieve error ε compared to random sampling.

More precisely, consider the number of samples for each estimator to reach error ε:

n_passive(ε) = min{n : E[Err(θ̂_n^passive)] ≤ ε},  n_active(ε) = min{n : E[Err(θ̂_n^active)] ≤ ε},

where the expectation is with respect to the unlabeled pool, the labels, and any randomness from the algorithm. Then the data efficiency is defined as the ratio:

DE(ε) = n_passive(ε) / n_active(ε).
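Given empirical learning curves, the ratio above can be estimated by interpolating each curve to find the sample count at which it reaches the target error (a sketch; the names and the linear-interpolation choice are ours):

```python
# Estimating data efficiency from two (n, error) learning curves.
# We linearly interpolate between the measured points (our own choice).

def samples_to_reach(ns, errs, eps):
    """Smallest (interpolated) n at which the error curve drops to eps."""
    for k in range(len(ns)):
        if errs[k] <= eps:
            if k == 0:
                return ns[0]
            # linearly interpolate between the bracketing measurements
            frac = (errs[k - 1] - eps) / (errs[k - 1] - errs[k])
            return ns[k - 1] + frac * (ns[k] - ns[k - 1])
    return float("inf")  # curve never reaches eps within the budget

def data_efficiency(ns_p, errs_p, ns_a, errs_a, eps):
    """DE(eps) = n_passive(eps) / n_active(eps)."""
    return samples_to_reach(ns_p, errs_p, eps) / samples_to_reach(ns_a, errs_a, eps)
```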
2.4 Data Efficiency Dependence on Dataset

The data efficiency depends on properties of the underlying data distribution. In the experiments (Section 3), we illustrate this dependence on a variety of real-world datasets. Here, as a simple illustration, we show the phenomenon on a synthetic data distribution. Suppose data points are sampled according to

y ~ Uniform{−1, +1},  x | y ~ N(μ y, I).

This distribution over (x, y) is the standard Gaussian Naive Bayes model with means μ and −μ and identity covariance. See Figure 1 for the learning curves when the means are close together and Figure 2 for when they are farther apart. We note that the data efficiency stays around 1x in the first case, and the curves get closer with more data. On the other hand, when the means are farther apart, the data efficiency reaches about 5x and increases dramatically. This illustrates the wildly different gains of active learning, depending on the dataset. In particular, the data efficiency is higher for the less noisy dataset, as the thesis of this work predicts.

Figure 1: Active learning yields meager gains when the clusters are closer together. The data efficiency is about 1x to get to 23% error; both algorithms require approximately the same amount of data to achieve that error.
Figure 2: Active learning yields spectacular gains when the clusters are farther apart. The data efficiency is about 5x to get to 16% error; passive learning requires about 1000 data points to achieve that error, while active learning only requires about 200.
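The synthetic setup above can be reproduced as follows (a sketch with our own choices of separation values, sample sizes, and optimizer); it verifies only that the wider separation yields a much lower limiting error, i.e., a less noisy dataset:

```python
import numpy as np

# Two unit-covariance Gaussian clusters at +/- mu * e1 with equal class
# priors; separation values and sample sizes are our own choices.

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

def make_data(mu, n, rng):
    y = rng.choice([-1, 1], size=n)
    X = rng.normal(size=(n, 2)) + np.outer(y, [mu, 0.0])
    return X, y

def fit_logistic(X, y, lr=0.5, iters=1000):
    theta = np.zeros(X.shape[1])
    for _ in range(iters):
        g = ((sigmoid(-y * (X @ theta)) * y)[:, None] * X).mean(axis=0)
        theta += lr * g
    return theta

def error(theta, X, y):
    return float((np.where(X @ theta >= 0, 1, -1) != y).mean())

rng = np.random.default_rng(0)
X1, y1 = make_data(0.5, 4000, rng)  # clusters close together: noisy task
X2, y2 = make_data(2.0, 4000, rng)  # clusters far apart: low-noise task
err_noisy = error(fit_logistic(X1, y1), X1, y1)
err_clean = error(fit_logistic(X2, y2), X2, y2)
```

Plotting learning curves for random versus boundary sampling on these two datasets reproduces the qualitative contrast between Figures 1 and 2.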

3 Experiments

3.1 Datasets

We wish to study the data efficiency of active learning versus passive learning across a comprehensive set of datasets which are “typical” of real-world settings. Capturing a representative set of datasets is challenging, and we wanted to be as objective and transparent about the process as possible, so we detail the dataset selection process below.

We curated a set of datasets systematically from OpenML, avoiding synthetic or degenerate cases. In August 2017, we downloaded all 7968 datasets. We removed datasets with missing features or over 1 million data points. We wanted a large unlabeled pool (relative to the number of features), so we kept datasets where the number of features was less than 100 and the number of data points was at least 10,000. In our experiments, we allow each algorithm to query the labels of only a small fraction of the pool, so this filtering step ensures that the pool is much larger than the labeling budget. We remark that more than 98% of the datasets were filtered out because they were too small (fewer than 10,000 points). 138 datasets remained.

We further removed datasets that were synthetic, had unclear descriptions, or were duplicates. We also removed non-classification datasets. For multiclass datasets, we processed them to binary classification by predicting majority class versus the rest. Of the 138 datasets, 36 survived.

We ran standard logistic regression on training splits of these datasets. In 11 cases, logistic regression was less than 1% better than the classifier that always predicts the majority class. Since logistic regression was not meaningful for these datasets, we removed them, resulting in 25 datasets.

On one of these datasets, logistic regression achieved 0% error with a very small number of data points. On another dataset, the performance of random sampling became worse as the number of labels increased. On two datasets, active learning achieved error at least 1% lower than the error with the full training set, a phenomenon that Schohn & Cohn (2000) call “less is more”; this is beyond the scope of this work. We removed these four cases, resulting in a total of 21 datasets.

The final 21 datasets exhibit a large amount of variability, spanning healthcare, game playing, control, ecology, economics, computer vision, security, and physics.

3.2 Methodology

We used a random sampling seed and plotted the learning curves up to a fixed labeling budget. We calculated the data efficiency at the lower of the two errors achieved within the budget by active and passive learning. As a proxy for the limiting error, we use the error on the test set obtained by a classifier trained on the full training set.

3.3 Results

Figure 3 plots the relationship between data efficiency and the inverse error across all datasets. To remove outliers, we capped the inverse error; this truncated the inverse error of three datasets whose errors were less than around 0.5%. The data efficiency and the inverse error are strongly correlated, both by the R² of the best linear fit and by the Spearman rank correlation.

Figure 3: Scatterplot of the data efficiency of uncertainty sampling versus the inverse error using all the data. The line of best fit has R² = 0.789 (the square of the Pearson correlation).

In summary, we note that data efficiency is closely tied to the inverse error. In particular, when the error is below 10%, the data efficiency is at least 3x and can be much higher.

4 Theoretical Analysis

In this section, we provide theoretical insight into the inverse relationship between data efficiency and limiting error. For tractability, we study the asymptotic behavior as the number of labels tends to infinity.

Let p(x, y) be the underlying data distribution. For uncertainty sampling, there are three data quantities: n_seed, the number of seed data points; n, the amount of labeled data (the budget); and M, the number of unlabeled points in the pool. We will assume that n_seed and M are functions of n, and we let n go to infinity. In particular, we wish to bound DE(ε) for ε close to Err, where Err is the limiting error defined in Section 2.1. Bounding DE(ε) for ε near Err is closely related to the statistical asymptotic relative efficiency (van der Vaart, 1998). We use data efficiency as it applies for finite error thresholds.

The asymptotic data efficiency only makes sense if random sampling and uncertainty sampling both converge to the same error; otherwise, the asymptotic data efficiency would be either 0 or ∞. While bias in active learning is an important topic of study (Liu et al., 2015), it is beyond the scope of this work. We will make an assumption ensuring convergence to the same error, which is satisfied if the model is well-specified in some small slab around the decision boundary.

4.1 Two-stage Variant of Uncertainty Sampling

Because of the complicated coupling between uncertainty sampling draws and updates, we analyze a two-stage variant: we gather an initial seed set using random sampling from the unlabeled dataset, and then gather the points closest to the decision boundary learned from the seed data. This two-stage approach is similar to other active learning work (Chaudhuri et al., 2015).

Thus, we only update the parameters twice: after the seed round we train on the seed data, and after we have collected all the data, we train on the data that was collected after the seed data. We do not update the parameters between draws closest to the decision boundary.

Also, instead of always choosing the point closest to the decision boundary (without replacement) during the uncertainty sampling phase, with some probability γ we randomly sample from the unlabeled pool and with probability 1 − γ we choose the point closest to the decision boundary. The random sampling proportion ensures that the empirical data covariance is non-singular for uncertainty sampling.
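The second-stage selection rule can be sketched as follows (the mixing probability `gamma`, the helper names, and the tie-breaking are our own choices; no refitting happens between draws, matching the two-stage variant):

```python
import numpy as np

# Second stage of the two-stage variant: select points near the fixed
# seed-round boundary, mixing in a gamma fraction of random draws so the
# empirical covariance stays non-singular.

def two_stage_indices(X_pool, theta_seed, n_points, gamma, rng):
    """Return indices to label in the second stage (no refitting between draws)."""
    M = len(X_pool)
    order = np.argsort(np.abs(X_pool @ theta_seed))  # closest to boundary first
    chosen, ptr = [], 0
    available = set(range(M))
    for _ in range(n_points):
        if rng.random() < gamma:      # random draw with probability gamma
            i = rng.choice(sorted(available))
        else:                         # otherwise: next closest available point
            while order[ptr] not in available:
                ptr += 1
            i = order[ptr]
        chosen.append(int(i))
        available.remove(i)
    return chosen
```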

4.2 Sketch of Main Result

Under assumptions that will be described later, our main result is that there exists some ε₀ > 0 such that for any ε with Err < ε < Err + ε₀, the data efficiency DE(ε) is at least a constant multiple of 1/Err, where the constant involves c_r, a bound on a ratio of conditional covariances in the directions orthogonal to θ*. In particular, if the pdf factorizes into two marginal distributions (a decomposition of x into two independent components), one along the direction of θ* and one in the directions orthogonal to θ*, then the conditional covariances orthogonal to θ* are equal and c_r = 1. If the distribution is additionally symmetric across the decision boundary, we obtain matching constant-factor upper and lower bounds on DE(ε) in terms of 1/Err.
We now give a rough proof sketch. The core idea is to compare the Fisher information of active and passive learning, similar to other work in the literature (Sourati et al., 2017). It is known that the Fisher information matrix for logistic regression is

I(θ) = E[S(θᵀx) x xᵀ],

where S(t) = σ(t)(1 − σ(t)). Note that S(θᵀx) only depends on the part of x parallel to θ. If the data decomposes into two independent components as mentioned above, then

I_passive ≈ E[S(θ*ᵀx)] · E[x_⊥ x_⊥ᵀ]

if we ignore the dimension of the Fisher information along θ*, which doesn’t end up mattering (it only affects the magnitude of θ, on which the 0-1 loss does not depend). Additionally, since uncertainty sampling samples at the decision boundary, where θ*ᵀx = 0 and S(0) = 1/4, active learning achieves:

I_active ≈ (1/4) · E[x_⊥ x_⊥ᵀ].
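The empirical version of the Fisher information formula I(θ) = E[S(θᵀx) x xᵀ], with S(t) = σ(t)(1 − σ(t)), is straightforward to compute (a sketch; function names are ours):

```python
import numpy as np

# Empirical Fisher information for logistic regression:
#   I(theta) = mean over points of S(theta^T x) * x x^T,
# where S(t) = sigmoid(t) * (1 - sigmoid(t)).

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

def fisher_information(X, theta):
    s = sigmoid(X @ theta)
    w = s * (1.0 - s)  # S(theta^T x) for each point; equals 1/4 on the boundary
    return (w[:, None, None] * (X[:, :, None] * X[:, None, :])).mean(axis=0)
```

For points exactly on the decision boundary, every weight is S(0) = 1/4, which is the source of the 1/4 factor in the sketch.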
The Fisher information determines the asymptotic rate of convergence of the parameters:

√n (θ̂_n − θ*) ⇝ N(0, I(θ*)⁻¹).

Intuitively, this convergence rate is monotonic in the Fisher information, which means the ratio (an abuse of notation, but true for any linear function of the inverse Fisher information) of the inverse Fisher information matrices gives the asymptotic relative rate. With S(t) = σ(t)(1 − σ(t)), the passive information scales with E[S(θ*ᵀx)] while the active information scales with S(0) = 1/4, so

DE ≈ (1/4) / E[S(θ*ᵀx)] = 1 / (4 E[S(θ*ᵀx)]).
If the optimal logistic model is calibrated, meaning the model’s predicted probabilities are on average correct, then Err = E[min(σ(θ*ᵀx), 1 − σ(θ*ᵀx))]. Writing m = min(σ, 1 − σ) ∈ [0, 1/2] and noting that S = σ(1 − σ) = m(1 − m) satisfies m/2 ≤ m(1 − m) ≤ m, we obtain

Err/2 ≤ E[S(θ*ᵀx)] ≤ Err.

Putting these together, we get:

1/(4 Err) ≤ DE ≤ 1/(2 Err).
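The calibration step can be checked numerically: for a calibrated model, the per-point error is m = min(σ, 1 − σ), and S = σ(1 − σ) = m(1 − m) lies pointwise between m/2 and m, so the same sandwich holds in expectation. A quick check (the score distribution here is an arbitrary choice of ours):

```python
import numpy as np

# Check that Err/2 <= E[S] <= Err for a calibrated model:
# p is the model-predicted probability, m = min(p, 1-p) the per-point
# error, and S = p(1-p) = m(1-m). The Beta score distribution is arbitrary.

rng = np.random.default_rng(0)
p = rng.beta(2.0, 5.0, size=100_000)  # predicted probabilities
m = np.minimum(p, 1.0 - p)            # per-point error of a calibrated model
S = p * (1.0 - p)
err = m.mean()
assert err / 2.0 <= S.mean() <= err   # the sandwich used in the sketch
```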
Having given the rough intuition, we now go through the arguments more formally.

4.3 Notation

Let θ* be the limiting parameters, let θ_seed be the weights after the seed round for active learning, and let θ̂_n be the weights at the end of learning with n labels.

We include a bias term for logistic regression by inserting a coordinate at the beginning of x that is always 1. Thus, x₀ = 1 and θ*₀ is the bias term of the optimal parameters. As a simplification of notation, the pdf is only a function of the non-bias coordinates (otherwise, such a pdf wouldn’t exist).

Since logistic regression is invariant to translations (we can appropriately change the bias) and rotations (we can rotate the non-bias weights), without loss of generality, we will assume that θ* is aligned with the first non-bias coordinate, that the bias term is zero, and that the data has mean zero in all directions orthogonal to θ*.

4.4 Assumptions

We have four types of assumptions: assumptions on the values of n_seed and M, assumptions on the distribution of x, assumptions on the distribution of y given x, and non-degeneracy assumptions. As an example, all these assumptions are satisfied if n_seed grows super-constantly but sublinearly in n, M grows sufficiently quickly, the distribution of x is a mixture of truncated, mollified Gaussians, and the model is well-specified for non-zero weights.

4.4.1 Assumptions relating n_seed, n, and M

Recall that n_seed is the number of labels for the seed round, n is the labeling budget, and M is the number of unlabeled data points.

Assumption 1 (Data Pool Size).

The unlabeled pool size M grows sufficiently quickly as a function of n.

Assumption 2 (Seed Size).

n_seed = n^a for some a with 0 < a < 1.

The unlabeled pool must be large enough that uncertainty sampling can select points close to the decision boundary. The seed for uncertainty sampling must be large enough that the decision boundary after the seed round converges to the true decision boundary, and small enough that it does not detract from the advantages of uncertainty sampling.

4.4.2 Assumptions on the distribution of x

We assume that the distribution on x has a pdf (a “continuous distribution”), and that the following two conditions hold:

Assumption 3 (Bounded Support).

The distribution of x has bounded support.

Assumption 4 (Lipschitz).

The pdfs and conditional pdfs are all Lipschitz.

4.4.3 Assumptions on the distribution of y given x

These next three assumptions (Assumptions 5–7) are implied if the logistic regression model is well-specified (p(y | x) = p_{θ*}(y | x)), but they are strictly weaker. If the reader is willing to assume well-specification, this section can be skipped.

Assumption 5 (Local Expected Loss is Zero).

There exists τ > 0 such that for all x with |θ*ᵀx| ≤ τ, the expected gradient of the logistic loss at θ* is zero: E[∇ℓ(θ*; x, y) | x] = 0.

Assumption 5 is satisfied if the model is well-specified in any thin slab around the decision boundary defined by θ*. We need this assumption to conclude that our two-stage uncertainty sampling algorithm converges to θ*.

Assumption 6 (Conditions on the Zero-One Loss).

Let L(θ) be the zero-one loss of the classifier defined by the weights θ. Then,

  • L is twice-differentiable at θ*,

  • L has a local optimum at θ*, and

  • the Hessian ∇²L(θ*) is not identically zero.

In order to conclude that convergence to the optimal parameters implies convergence in error, we need Assumption 6. The strongest requirement is the local optimum condition. The twice-differentiable condition is a regularity condition, and the Hessian condition is generically satisfied.

Assumption 7 (Calibration).

For the limiting parameters θ*, conditioned on the model predicting probability p, the probability that y = 1 is p.

We say a model is calibrated if the probability of a class, conditioned on the model predicting probability p, is p. Assumption 7 amounts to assuming that the logistic model with the optimal parameters is calibrated. Note that this is significantly weaker than assuming that the model is well-specified (p(y | x) = p_{θ*}(y | x)). For example, the data distribution in Figure 4 is calibrated but not well-specified.

Figure 4: Example of distribution with deterministic labels which is calibrated but not well-specified for logistic regression.

These three assumptions all hold if the logistic model is well-specified, meaning p(y | x) = p_{θ*}(y | x).

4.4.4 Non-degeneracy

Define φ as the marginal probability density of selecting a point at the decision boundary. More precisely, φ is the probability density of x integrated over the (d − 1)-dimensional hyperplane defined by θ*ᵀx = 0. Equivalently, φ is the probability density of the projection of x onto the direction of θ*, evaluated at 0.

Assumption 8 (Non-degeneracies).

φ > 0; not all of the probability mass lies on the decision boundary; Err > 0; and the data covariance E[x xᵀ] is non-singular.

Let us interpret these four conditions. We assume that the probability density at the decision boundary is non-zero, φ > 0; otherwise uncertainty sampling will not select points close to the decision boundary (note this is not an assumption about the probability mass). We assume that not all points lie on the decision boundary, meaning that the classifier is not degenerate. We assume Err > 0, meaning the logistic parameters do not achieve 0% error. Finally, we assume the data covariance is non-singular, or equivalently, that the parameters are identifiable.

4.5 Proofs

We will first prove a condition on the convergence rate of the error based on a quantity closely related to the Fisher information. However, we cannot rely on the usual Fisher information analysis, which connects not to the zero-one loss but to the asymptotic normality of the parameters. Thus, our conditions for this key lemma are slightly stronger than those of the asymptotic normality result for Fisher information.

4.5.1 Rates Lemma in Terms of the Asymptotic Variance

The logistic loss (negative log-likelihood) for a single data point under logistic regression is

ℓ(θ; x, y) = log(1 + exp(−y θᵀx)).

Further, the gradient and Hessian are

∇ℓ(θ; x, y) = −σ(−y θᵀx) y x,  ∇²ℓ(θ; x, y) = S(θᵀx) x xᵀ,

where S(t) = σ(t)(1 − σ(t)). Note that the Hessian does not depend on the label y.

Following the Fisher information asymptotic normality analysis, setting the gradient of the empirical loss L̂_n at θ̂_n to zero and applying Taylor’s theorem (justified since the logistic loss is smooth) gives

√n (θ̂_n − θ*) = −(∇²L̂_n(θ̄))⁻¹ √n ∇L̂_n(θ*),

with θ̄ between θ̂_n and θ*.
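The gradient formula for the logistic loss can be sanity-checked by finite differences (our own check, not part of the paper):

```python
import numpy as np

# Finite-difference check of the logistic-loss gradient:
#   loss(theta) = log(1 + exp(-y theta^T x)),
#   grad = -sigmoid(-y theta^T x) * y * x.

def loss(theta, x, y):
    return np.log1p(np.exp(-y * (theta @ x)))

def grad(theta, x, y):
    s = 1.0 / (1.0 + np.exp(y * (theta @ x)))  # sigmoid(-y theta^T x)
    return -s * y * x

theta = np.array([0.3, -0.7])
x = np.array([1.2, 0.5])
y = -1.0
eps = 1e-6
num = np.array([
    (loss(theta + eps * e, x, y) - loss(theta - eps * e, x, y)) / (2 * eps)
    for e in np.eye(2)
])  # central finite differences, one coordinate at a time
```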

From these, we can define the key quantity, which is equal to the inverse Fisher information under stronger conditions.

Definition 4.1.

If H = E[∇²ℓ(θ*; x, y)] is non-singular (and symmetric) and G = E[∇ℓ(θ*; x, y) ∇ℓ(θ*; x, y)ᵀ] exists, then define

A = H⁻¹ G H⁻¹.

This quantity is important because of the following lemma, which translates comparisons of the asymptotic variances into comparisons of data efficiency. Recall that without loss of generality, θ* is aligned with the first coordinate. Define Ã as the matrix A without the first row and column.

Lemma 4.1.

Suppose we have two estimators with asymptotic variances A and A′, both satisfying the moment conditions required for Definition 4.1. Then a bound comparing the restricted variances Ã and Ã′ implies a corresponding bound on the relative number of samples the two estimators need to reach error ε, for some ε₀ > Err and any ε ∈ (Err, ε₀).
The proof is in the appendix. This lemma only requires Assumption 6, the condition on the zero-one loss at θ*, and is possibly of independent interest.

Note that with the bias term, our weight vector is (d + 1)-dimensional, so A is a square (d + 1)-dimensional matrix, while Ã, with the first row and column removed, is a square d-dimensional matrix. The fact that the rates depend on Ã instead of A is necessary for our results. Intuitively, the coordinate in the direction of θ* has slow convergence for uncertainty sampling, since we select points near the decision boundary, which have small projection onto θ* and thus give little information about the dependence of y on θ*ᵀx. However, because our analysis concerns the convergence of the 0-1 error rather than convergence of the parameters, the above lemma does not depend on the convergence rate of that coordinate.

From this lemma, it follows that if the restricted asymptotic variances satisfy Ã_passive ⪰ r Ã_active, then for error thresholds sufficiently close to Err, the data efficiency satisfies DE(ε) ≥ r.
4.5.2 Specific Calculations for Algorithms

In proving the later results, it’s useful to first establish the consistency of our algorithms. Assumption 5 is used here.

Lemma 4.2.

Both two-stage uncertainty sampling and random sampling converge to .

Next, we need that our two algorithms satisfy the conditions of Lemma 4.1.

Lemma 4.3.

Our active and passive learning algorithms both satisfy the moment conditions of Lemma 4.1.

Now, we are ready for the computation of A (Definition 4.1), the quantity closely related to the inverse Fisher information.

Lemma 4.4.

For random sampling, A = (E[S(θ*ᵀx) x xᵀ])⁻¹.

The proof is in the appendix. The proof relies on calibration, Assumption 7, to ensure that G = H, which is always true for well-specified models.

This lemma gives A as exactly the inverse Fisher information that was mentioned earlier: H is the expected value of S(θ*ᵀx) x xᵀ.

Lemma 4.5.

The proof is in the appendix. The proof relies on the assumptions of bounded support and Lipschitz pdf, Assumptions 3 and 4.

Because we randomly sample a γ proportion of the points, a factor of γ times the passive Fisher information shows up. Additionally, we get a factor of 1/4 for the value of S at the decision boundary. We will almost surely never sample exactly at the decision boundary, but as n → ∞, the seed-round weights converge to θ* and we sample closer and closer to the decision boundary.

4.5.3 Results

Here, we define a constant that quantifies how much the covariance at the decision boundary differs from the covariance over the rest of the distribution; this constant is a key dependency of our most general theorem. Denote by x̃ the vector x without the first index. Recall that without loss of generality, θ* is aligned with the first coordinate.

Definition 4.2.

We define the constant in terms of two covariances of the orthogonal directions, described below.

We can give an interpretation to these quantities. Define Σ(t) as the covariance of the directions orthogonal to θ* on the slice θ*ᵀx = t. The first covariance is simply Σ(0), the covariance at the decision boundary.

Further, define a variable x′ distributed as x but reweighted by S(θ*ᵀx). The second covariance is the covariance of x̃′: the covariance over the whole distribution, but weighted higher near the decision boundary, with exponential tails. Finally, the constant compares how much these two covariances differ.

Intuitively, we need this parameter to handle the case where the covariance at the decision boundary (a factor in the Fisher information for active learning) is small relative to the average covariance.
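The two covariances being compared can be estimated on synthetic data (the slab width, sample size, and distribution are our own choices); for a factorized distribution, the ratio is close to 1:

```python
import numpy as np

# Estimate (i) the covariance of the orthogonal coordinate in a thin slab
# at the decision boundary and (ii) the S-weighted covariance over the
# whole distribution. With independent coordinates, the two agree.

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

rng = np.random.default_rng(0)
X = rng.normal(size=(200_000, 2))   # x1 along theta*, x2 orthogonal
margin = X[:, 0]                    # theta*^T x with theta* = e1
S = sigmoid(margin) * (1.0 - sigmoid(margin))

near = np.abs(margin) < 0.05        # thin slab at the boundary
cov_boundary = np.var(X[near, 1])
cov_weighted = np.average(X[:, 1] ** 2, weights=S)
```

Here the data factorizes into independent components along and orthogonal to θ*, so `cov_boundary / cov_weighted` is approximately 1, matching the c_r = 1 case mentioned in Section 4.2.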

Here is our main theorem, which is proved by comparing the restricted asymptotic variances of the two algorithms and then applying Lemma 4.1.

Theorem 4.1.

For a sufficiently small constant ε₀ (that depends on the dataset) and for ε ∈ (Err, Err + ε₀), the data efficiency DE(ε) is at least 1/Err up to a constant factor depending on γ and the covariance-ratio constant of Definition 4.2.

We can also get an upper bound on the data efficiency if we make the additional assumption that the pdf of x factorizes into two marginal distributions (a decomposition of x into two independent components), one along the direction of θ* and one in the directions orthogonal to θ*.

Theorem 4.2.

If the pdf factorizes as above, then for a sufficiently small constant ε₀ (that depends on the dataset) and for ε ∈ (Err, Err + ε₀), the data efficiency DE(ε) is at most 1/Err up to a constant factor.
We can therefore see from these results that there is an inverse relationship between the asymptotic data efficiency and the limiting error, shedding light on and giving a theoretical explanation for the empirical observation made in Section 3.

5 Discussion and Related Work

The conclusion of this work, that data efficiency is inversely related to the limiting error, has been hinted at by a couple of sentences in empirical survey papers. Schein & Ungar (2007) state, “the data sets sort neatly by noise, with [uncertainty] sampling failing on more noisy data … and performing at least as well as random [sampling] for [less noisy] data sets.” Yang & Loog (2016) state, “For the [less noisy] datasets, random sampling does not achieve the best performance …, which may indicate that we need only consider random sampling on relatively [noisy] tasks”.

Additionally, this conclusion has support from statistical active learning theory (Hanneke, 2014). While not stated in that work, the ratio between the passive and active bounds points to a 1/Err factor (though with Err being the optimal error over classifiers, not the error of the MLE classifier). More specifically, the ratio between the passive and active lower bounds scales as 1/Err as the target error approaches Err. Additionally, the ratio of the guarantees for active and passive algorithms also scales as 1/Err, though with an extra factor of a disagreement coefficient, which has a dimension dependence for linear classifiers, and a factor which “is sometimes possible to remove” (Hanneke, 2014).

This conclusion can be used in practice in at least two ways. First, a pilot study or domain knowledge can be used to get a rough estimate of the final error; if that error is low enough (less than around 10%), uncertainty sampling can be used. Second, random sampling could be run until the test error drops below 10%, at which point a switch is made to uncertainty sampling.
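The switching heuristic can be sketched as a small driver loop (all names here are our own illustration, not an interface from the paper):

```python
# Spend a labeling budget, switching from random sampling to uncertainty
# sampling once the estimated error falls below a threshold (default 10%,
# following the rule of thumb above). The three callables are hypothetical:
# estimate_error() -> current estimated test error,
# query_random() / query_uncertain() -> label one more point each way.

def label_with_switch(estimate_error, query_random, query_uncertain,
                      budget, threshold=0.10):
    for _ in range(budget):
        if estimate_error() < threshold:
            query_uncertain()  # low-error regime: active learning pays off
        else:
            query_random()     # high-error regime: stick with random sampling
```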

Does our conclusion hold for other models? Because of the mathematical similarity to the SVM, it likely also holds for the hinge loss. It is possible that it also holds for neural networks with a softmax layer, since the softmax layer is mathematically equivalent to logistic regression. In fact,

Geifman & El-Yaniv (2017) performs experiments with deep neural networks and multiclass classification on MNIST (1% error, 6x data efficiency), CIFAR-10 (10%, 2x), and CIFAR-100 (35%, 1x) and finds results that are explained well by our conclusion.

In conclusion, we make an observation, clearly define a phenomenon, demonstrate it empirically, and analyze it theoretically. The thesis of this work, that the data efficiency of uncertainty sampling on logistic regression is inversely proportional to the limiting error, sheds light on the appropriate use of active learning, enabling machine learning practitioners to intelligently choose their data collection techniques, whether active or passive.


The code, data, and experiments for this paper are available on the CodaLab platform at


This research was supported by NSF grant DGE-.


  • Balcan et al. (2007) Balcan, M.-F., Broder, A., and Zhang, T. Margin based active learning. In International Conference on Computational Learning Theory, pp. 35–50. Springer, 2007.
  • Balcan et al. (2009) Balcan, M.-F., Beygelzimer, A., and Langford, J. Agnostic active learning. Journal of Computer and System Sciences, 75(1):78–89, 2009.
  • Brinker (2003) Brinker, K. Incorporating diversity in active learning with support vector machines. In Proceedings of the 20th International Conference on Machine Learning (ICML-03), pp. 59–66, 2003.
  • Chaudhuri et al. (2015) Chaudhuri, K., Kakade, S. M., Netrapalli, P., and Sanghavi, S. Convergence rates of active learning for maximum likelihood estimation. In Advances in Neural Information Processing Systems, pp. 1090–1098, 2015.
  • Freund et al. (1997) Freund, Y., Seung, H. S., Shamir, E., and Tishby, N. Selective sampling using the query by committee algorithm. Machine learning, 28(2):133–168, 1997.
  • Geifman & El-Yaniv (2017) Geifman, Y. and El-Yaniv, R. Deep active learning over the long tail. arXiv preprint arXiv:1711.00941, 2017.
  • Hanneke (2014) Hanneke, S. Statistical Theory of Active Learning. Now Publishers Incorporated, 2014.
  • Hoi et al. (2009) Hoi, S. C., Jin, R., Zhu, J., and Lyu, M. R. Semisupervised SVM batch mode active learning with applications to image retrieval. ACM Transactions on Information Systems (TOIS), 27(3):16, 2009.
  • Lewis & Gale (1994) Lewis, D. D. and Gale, W. A. A sequential algorithm for training text classifiers. In Proceedings of the 17th annual international ACM SIGIR conference on Research and development in information retrieval, pp. 3–12. Springer-Verlag New York, Inc., 1994.
  • Liu et al. (2015) Liu, A., Reyzin, L., and Ziebart, B. D. Shift-pessimistic active learning using robust bias-aware prediction. In AAAI, pp. 2764–2770, 2015.
  • Roy & McCallum (2001) Roy, N. and McCallum, A. Toward optimal active learning through monte carlo estimation of error reduction. ICML, Williamstown, pp. 441–448, 2001.
  • Schein & Ungar (2007) Schein, A. I. and Ungar, L. H. Active learning for logistic regression: an evaluation. Machine Learning, 68(3):235–265, 2007.
  • Schohn & Cohn (2000) Schohn, G. and Cohn, D. Less is more: Active learning with support vector machines. In ICML, pp. 839–846, 2000.
  • Settles (2010) Settles, B. Active learning literature survey. Computer Sciences Technical Report, 1648, 2010.
  • Seung et al. (1992) Seung, H. S., Opper, M., and Sompolinsky, H. Query by committee. In Proceedings of the fifth annual workshop on Computational learning theory, pp. 287–294. ACM, 1992.
  • Sourati et al. (2017) Sourati, J., Akcakaya, M., Leen, T. K., Erdogmus, D., and Dy, J. G. Asymptotic analysis of objectives based on fisher information in active learning. Journal of Machine Learning Research, 18(34):1–41, 2017.
  • Tong & Koller (2001) Tong, S. and Koller, D. Support vector machine active learning with applications to text classification. Journal of machine learning research, 2(Nov):45–66, 2001.
  • van der Vaart (1998) van der Vaart, A. W. Asymptotic statistics. Cambridge University Press, 1998.
  • Yang & Loog (2016) Yang, Y. and Loog, M. A benchmark and comparison of active learning for logistic regression. arXiv preprint arXiv:1611.08618, 2016.

6 Appendix

6.1 Notation

θ_seed denotes the weights after the seed round, and θ̂_n the weights at the end of learning with n labels.

Ã is the matrix A without the first row and column; ã is the vector formed from the first row of A without its first entry.

Generally, the big-O notation hides constants that depend only on the dataset.

For the order of quantities going to zero, we first choose ε to be small, then γ to be small, then n to be large.

Without loss of generality, assume θ* is aligned with the first coordinate, the bias term is zero, and the data has mean zero in the directions orthogonal to θ*.

With an abuse of notation, let .

6.2 Losses

Define S(t) = σ(t)(1 − σ(t)).

The loss (negative log-likelihood) for a single data point under logistic regression is

ℓ(θ; x, y) = log(1 + exp(−y θᵀx)),

and so the gradient is

∇ℓ(θ; x, y) = −σ(−y θᵀx) y x,

and the Hessian is

∇²ℓ(θ; x, y) = S(θᵀx) x xᵀ.

Note that the Hessian does not depend on the label y.

6.3 Decision Boundary

Lemma 6.1.

For sufficiently small , if , then


Without loss of generality (rotation and translation), let , and let .

We sample from places where which occurs when . From the theorem assumption, we know that and (for sufficiently small ) so we know that

Note that

(Note that the Jacobian of the change of variables has the following matrix which has determinant )

With the assumption that the conditional probabilities are Lipschitz,

Lemma 6.2.

For sufficiently small ε, with probability going to 1 exponentially fast, all points chosen by two-stage uncertainty sampling lie on hyperplanes within ε of the decision boundary.


For small enough , then from the above lemma if . Thus, the probability of an unlabeled point within the parallel plane with bias less than different from such that is at least (for sufficiently small ).

Recall that and .

For sufficiently large , the probability of at least points from the unlabeled points falling in this range is

for some constant .

We can use a standard Chernoff bound, since the draws are independent, to bound this probability. Thus the probability that the planes we choose from are farther than ε away from the decision boundary goes to 0 exponentially fast. ∎

6.4 Convergence

Lemma 4.2.

Both two-stage uncertainty sampling and random sampling converge to .


For passive learning, the Hessian of the population loss is positive definite because the data covariance is non-singular (Assumption 8). Thus, the population loss has a unique optimum. By the definition of θ*, it is the minimizer. Since the sample loss converges to the population loss, the result of passive learning converges to θ*.

By a similar argument, the weight vector after the seed round converges to θ*, since n_seed is super-constant (Assumption 2). Thus, for any ε > 0, with probability converging to 1 as n → ∞, the seed-round boundary is within ε of the true boundary. By Lemma 6.2, with probability going to 1, all points selected are from hyperplanes close to the decision boundary. In the second stage, because of the γ proportion of randomly selected points, the loss for the uncertainty sampling population has a unique optimum. And because the expectation of the gradient of the loss is zero for the points near the decision boundary by Assumption 5 (with probability going to 1), the result of two-stage uncertainty sampling converges in probability to θ*. ∎

6.5 Rates

Lemma .

If exists, and for any , and , then there exist vectors that depend only on the data distribution such that,


The zero-one error is

Since L is twice differentiable at θ*, by Taylor’s theorem,

where the remainder term vanishes as θ → θ*.

Since L has a local optimum at θ*, ∇L(θ*) = 0. Additionally, denote the Hessian ∇²L(θ*),

Choose any . Since as , there is such that . Define to be the event that . Note that from the theorem assumption, .


So we only need to worry about the convergence of the right-hand side,

Because we conditioned on , and and therefore . So . Using this, we get,

Note that,

and the latter two expectations exist since the left-hand side exists and the matrices are positive semidefinite. Passing through the limit completes this step.

Thus, noting that we can drive δ to 0,

Thus, putting this together, we see that

Doing manipulations on the indices, we find,


and we are most of the way there; we just need a few properties of L to reach the final form.

Since θ* is a local optimum, ∇L(θ*) = 0 (and the Hessian ∇²L(θ*) is symmetric), and since the Hessian is not identically zero at θ*, ∇²L(θ*) ≠ 0.

Without loss of generality, let θ* be aligned with the first coordinate, as assumed before. Note that L(cθ*) = L(θ*) for c > 0, since the classifier is invariant to scaling of the weights. Since L is constant along this line, the corresponding row and column of ∇²L(θ*) are zero, and so

So , is symmetric, , and . Since and , for all .

Since and ,