Active Learning from Imperfect Labelers

10/30/2016 ∙ by Songbai Yan, et al. ∙ University of California, San Diego

We study active learning where the labeler can not only return incorrect labels but also abstain from labeling. We consider different noise and abstention conditions of the labeler. We propose an algorithm which utilizes abstention responses, and analyze its statistical consistency and query complexity under fairly natural assumptions on the noise and abstention rate of the labeler. This algorithm is adaptive in the sense that it can automatically request fewer queries with a more informed or less noisy labeler. We couple our algorithm with lower bounds to show that under some technical conditions, it achieves nearly optimal query complexity.


1 Introduction

In active learning, the learner is given an input space $\mathcal{X}$, a label space $\mathcal{Y}$, and a hypothesis class $\mathcal{H}$ such that one of the hypotheses in the class generates ground truth labels. Additionally, the learner has at its disposal a labeler to which it can pose interactive queries about the labels of examples in the input space. Note that the labeler may output a noisy version of the ground truth label (a flipped label). The goal of the learner is to learn a hypothesis in $\mathcal{H}$ which is close to the hypothesis that generates the ground truth labels.

There has been a significant amount of literature on active learning, both theoretical and practical. Previous theoretical work on active learning has mostly focused on the above basic setting [2, 4, 7, 10, 25] and has developed algorithms under a number of different models of label noise. A handful of exceptions include [3] which allows class conditional queries, [5] which allows requesting counterexamples to current version spaces, and [23, 26]  where the learner has access to a strong labeler and one or more weak labelers.

In this paper, we consider a more general setting where, in addition to providing a possibly noisy label, the labeler can sometimes abstain from labeling. This scenario arises naturally in difficult labeling tasks and has been considered in computer vision by [11, 15]. Our goal in this paper is to investigate this problem from a foundational perspective, and explore what kind of conditions are needed, and how an abstaining labeler can affect properties such as consistency and query complexity of active learning algorithms.

The setting of active learning with an abstaining noisy labeler was first considered by [24], who looked at learning binary threshold classifiers based on queries to a labeler whose abstention rate is higher closer to the decision boundary. They primarily looked at the case when the abstention rate at a distance $t$ from the decision boundary is at most $1 - \Theta(t^{\alpha})$, and the rate of label flips at the same distance is at most $\frac{1}{2} - \Theta(t^{\beta})$; under these conditions, they provided an active learning algorithm that, given the parameters $\alpha$ and $\beta$, outputs a classifier with error $\epsilon$ using $\tilde{O}(\epsilon^{-(\alpha+2\beta)})$ queries to the labeler. However, there are several limitations to this work. The primary limitation is that the parameters $\alpha$ and $\beta$ need to be known to the algorithm, which is not usually the case in practice. A second major limitation is that even if the labeler has nice properties, such as abstention rates that increase sharply close to the boundary, their algorithm is unable to exploit these properties to reduce the number of queries. A third and final limitation is that their analysis only applies to one-dimensional thresholds, and not to more general decision boundaries.

In this work, we provide an algorithm which is able to exploit nice properties of the labeler. Our algorithm is statistically consistent under very mild conditions, namely, when the abstention rate is non-decreasing as we get closer to the decision boundary. Under slightly stronger conditions as in [24], our algorithm has the same query complexity. However, if the abstention rate of the labeler increases strictly monotonically close to the decision boundary, then our algorithm adapts and does substantially better. It simply exploits the increasing abstention rate close to the decision boundary, and does not even have to rely on the noisy labels! Specifically, when applied to the case where the noise rate is at most $\frac{1}{2} - \Theta(t^{\beta})$ and the abstention rate is $1 - \Theta(t^{\alpha})$ at distance $t$ from the decision boundary, our algorithm can output a classifier with error $\epsilon$ based on only $\tilde{O}(\epsilon^{-\alpha})$ queries.

An important property of our algorithm is that the improvement in query complexity is achieved in a completely adaptive manner; unlike previous work [24], our algorithm needs no information whatsoever on the abstention rates or rates of label noise. Thus our result also strengthens existing results on active learning from (non-abstaining) noisy labelers by providing an adaptive algorithm that achieves the same performance as [6] without knowledge of noise parameters.

We extend our algorithm so that it applies to any smooth $d$-dimensional decision boundary in a non-parametric setting, not just one-dimensional thresholds, and we complement it with lower bounds on the number of queries that need to be made to any labeler. Our lower bounds generalize the lower bounds in [24], and show that our upper bounds are nearly optimal. We also present an example showing that at least a relaxed version of the monotonicity property is necessary to achieve this performance gain: if the abstention rate plateaus around the decision boundary, then our algorithm needs to query and rely on the noisy labels (resulting in a higher query complexity) in order to find a hypothesis close to the one generating the ground truth labels.

1.1 Related work

There has been a considerable amount of work on active learning, most of which involves labelers that are not allowed to abstain. Theoretical work on this topic largely falls under two categories: the membership query model [6, 13, 18, 19], where the learner can request the label of any example in the instance space, and the PAC model, where the learner is given a large set of unlabeled examples from an underlying unlabeled data distribution and can request labels of a subset of these examples. Our work, like that of [24], builds on the membership query model.

There has also been a lot of work on active learning under different noise models. The problem is relatively easy when the labeler always provides the ground truth labels; see [8, 9, 12] for work in this setting in the PAC model, and [13] for the membership query model. Perhaps the simplest setting of label noise is random classification noise, where each label is flipped with a probability that is independent of the unlabeled instance. [14] shows how to address this kind of noise in the PAC model by repeatedly querying an example until the learner is confident of its label; [18, 19] provide more sophisticated algorithms with better query complexities in the membership query model. A second setting is when the noise rate increases closer to the decision boundary; this setting has been studied under the membership query model by [6] and in the PAC model by [4, 10, 25]. A final setting is agnostic PAC learning, where a fixed but arbitrary fraction of labels may disagree with the labels assigned by the optimal hypothesis in the hypothesis class. Active learning is known to be particularly difficult in this setting; however, algorithms and associated label complexity bounds have been provided by [1, 2, 4, 10, 12, 25], among others.

Our work expands on the membership query model, and our abstention and noise models are related to a variant of the Tsybakov noise condition. A setting similar to ours was considered by [6, 24]. [6] considers a non-abstaining labeler and provides a near-optimal binary-search-style active learning algorithm; however, their algorithm is non-adaptive. [24] gives nearly matching lower and upper query complexity bounds for active learning with abstention feedback, but they only give a non-adaptive algorithm for learning one-dimensional thresholds, and only study the situation where the abstention rate is upper-bounded by a polynomial function. Besides [24], [11, 15] study active learning with abstention feedback in computer vision applications. However, these works are based on heuristics and do not provide any theoretical guarantees.

2 Settings

Notation.

$\mathbb{1}[A]$ is the indicator function: $\mathbb{1}[A] = 1$ if $A$ is true, and 0 otherwise. We use $\tilde{O}$ and $\tilde{\Omega}$ to hide logarithmic factors in $1/\epsilon$, $1/\delta$, and $d$.

Definition.

Suppose $K > 0$ and $\alpha > 0$. A function $g:[0,1]^{d-1} \to [0,1]$ is $(K,\alpha)$-Hölder smooth if it is continuously differentiable up to the $\lfloor\alpha\rfloor$-th order, and for any $x, x'$, $\left|g(x') - T_x(x')\right| \le K\left\|x' - x\right\|^{\alpha}$, where $T_x$ is the Taylor polynomial of degree $\lfloor\alpha\rfloor$ of $g$ at $x$. We denote this class of functions by $\Sigma(K,\alpha)$.
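For example, for $\alpha \in (0,1)$ the Taylor polynomial is just the constant $g(x)$, so the condition reduces to ordinary Hölder continuity,

\[ \left|g(x') - g(x)\right| \le K\left\|x' - x\right\|^{\alpha} \quad \text{for all } x, x', \]

while larger values of $\alpha$ additionally constrain higher-order derivatives of $g$.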

We consider active learning for binary classification. We are given an instance space $[0,1]^d$ and a label space $\{0,1\}$. Each instance $x \in [0,1]^d$ is assigned a label $y \in \{0,1\}$ by an underlying classifier $h^*$, unknown to the learning algorithm, in a hypothesis space $\mathcal{H}$ of interest. The learning algorithm has access to any $x \in [0,1]^d$, but no access to its label. Instead, it can only obtain label information through interactions with a labeler, whose relation to $h^*$ is to be specified later. The objective of the algorithm is to sequentially select the instances to query for label information and output a classifier $\hat{h}$ that is close to $h^*$ while making as few queries as possible.

We consider a non-parametric setting as in [6, 17] where the hypothesis space is the smooth boundary fragment class $\mathcal{H} = \{h_g(x) = \mathbb{1}[x_d \ge g(x_1,\dots,x_{d-1})] : g \in \Sigma(K,\alpha)\}$. In other words, the decision boundaries of classifiers in this class are epigraphs of smooth functions (see Figure 1 for an example). We write $x = (\tilde{x}, x_d)$ with $\tilde{x} \in [0,1]^{d-1}$ and $x_d \in [0,1]$, and assume the ground truth classifier is $h_{g^*}$ for some $g^* \in \Sigma(K,\alpha)$. When $d = 1$, $\mathcal{H}$ reduces to the space of threshold functions $\{\mathbb{1}[x \ge t] : t \in [0,1]\}$.

The performance of a classifier $h_g \in \mathcal{H}$ is evaluated by the $L_1$ distance between the decision boundaries, $\|g - g^*\|_1 = \int_{[0,1]^{d-1}} |g(\tilde{x}) - g^*(\tilde{x})|\, d\tilde{x}$.

The learning algorithm can only obtain label information by querying a labeler, who is allowed to abstain from labeling or return an incorrect label (flipping between 0 and 1). For each query $x \in [0,1]^d$, the labeler will return $y \in \{0, 1, \perp\}$ ($\perp$ means that the labeler abstains from providing a 0/1 label) according to some distribution $P_x(y)$. When it is clear from the context, we will drop the subscript $x$ from $P_x$. Note that while the labeler can declare its indecision by outputting $\perp$, we do not allow classifiers in our hypothesis space to output $\perp$.

In our active learning setting, our goal is to output a boundary $\hat{g}$ that is close to $g^*$ while making as few interactive queries to the labeler as possible. In particular, we want to find an algorithm with low query complexity $\Lambda(\epsilon, \delta)$, which is defined as the minimum number of queries that the algorithm, acting on samples with ground truth $g^*$, should make to a labeler to ensure that the output classifier $h_{\hat{g}}$ has the property $\|\hat{g} - g^*\|_1 \le \epsilon$ with probability at least $1 - \delta$ over the responses of the labeler.

2.1 Conditions

We now introduce three conditions on the labeler's responses, in increasing order of strictness. Later we will provide an algorithm whose query complexity improves as the conditions become stricter.

Condition 1.

The response distribution of the labeler satisfies:

  • (abstention) For any $\tilde{x} \in [0,1]^{d-1}$ and $x_d, x_d' \in [0,1]$, if $|x_d - g^*(\tilde{x})| \le |x_d' - g^*(\tilde{x})|$, then $P_{(\tilde{x}, x_d)}(\perp) \ge P_{(\tilde{x}, x_d')}(\perp)$;

  • (noise) For any $x \in [0,1]^d$, $P_x\big(y \ne \mathbb{1}[x_d \ge g^*(\tilde{x})] \mid y \ne \perp\big) \le \frac{1}{2}$.

Condition 1 means that the closer $x$ is to the decision boundary, the more likely the labeler is to abstain from labeling. This complies with the intuition that instances closer to the decision boundary are harder to classify. We also assume the 0/1 labels can be flipped with probability as large as $\frac{1}{2}$. In other words, we allow unbounded noise.

Condition 2.

Let $C$ and $\beta$ be non-negative constants, and let $\theta:[0,1] \to [0,1]$ be a nondecreasing function. The response distribution satisfies:

  • (abstention) $P_x(\perp) \le 1 - \theta\big(|x_d - g^*(\tilde{x})|\big)$;

  • (noise) $P_x\big(y \ne \mathbb{1}[x_d \ge g^*(\tilde{x})] \mid y \ne \perp\big) \le \frac{1}{2} - C\,|x_d - g^*(\tilde{x})|^{\beta}$.

Condition 2 requires the abstention and noise probabilities to be upper-bounded, and these upper bounds decrease as $x$ moves further away from the decision boundary. The abstention rate can be 1 at the decision boundary, so the labeler may always abstain there. The condition on the noise is a variant of the popular Tsybakov noise condition [22].

Condition 3.

Let $f:[0,1] \to [0,1]$ be a nondecreasing function such that $f(0) = 0$ and $f(r) > 0$ for $r > 0$. The response distribution satisfies: for any $\tilde{x} \in [0,1]^{d-1}$ and $x_d, x_d' \in [0,1]$, $P_{(\tilde{x}, x_d)}(\perp) - P_{(\tilde{x}, x_d')}(\perp) \ge f\big(|x_d' - g^*(\tilde{x})|\big) - f\big(|x_d - g^*(\tilde{x})|\big)$.

An example where Condition 3 holds is $P_x(\perp) = 1 - C\,|x_d - g^*(\tilde{x})|^{\alpha}$ (with $f(r) = C r^{\alpha}$).

Condition 3 requires the abstention rate to increase monotonically close to the decision boundary, as in Condition 1. In addition, it requires the abstention probability not to be too flat with respect to the distance to the decision boundary. For example, an abstention rate that is constant in a neighborhood of the decision boundary (shown in Figure 2) does not satisfy Condition 3, and abstention responses are then uninformative, since this abstention rate alone yields no information on the location of the decision boundary. In contrast, an abstention rate that strictly increases as $x$ approaches the decision boundary (shown in Figure 3) satisfies Condition 3, and the learner can infer that it is getting close to the decision boundary when it starts receiving more abstention responses.

Note that the constants and functions in the conditions above are unknown and arbitrary parameters that characterize the complexity of the learning task. We want to design an algorithm that does not require knowledge of these parameters but still achieves nearly optimal query complexity.
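To make these conditions concrete, the following is a minimal Python sketch (not taken from the paper; the function name and all constants are illustrative) of a simulated labeler for the one-dimensional case with ground truth threshold t. It abstains with probability 1 - C|x - t|^a, so the abstention rate increases toward the decision boundary as in Conditions 1 and 3 (with f(r) = C r^a), and otherwise it returns the true label flipped with probability 1/2 - min(1/2, c|x - t|^b), a noise rate in the spirit of Condition 2.

import random

def simulated_labeler(x, t, C=1.0, a=1.0, c=0.5, b=1.0, rng=random):
    """Return 0, 1, or None (None stands for the abstention response).

    Ground truth label is 1 if x >= t, else 0. The abstention probability is
    1 - C * |x - t|**a (clipped to [0, 1]); the conditional flip probability is
    1/2 - min(1/2, c * |x - t|**b). These are illustrative choices only.
    """
    dist = abs(x - t)
    p_abstain = min(1.0, max(0.0, 1.0 - C * dist ** a))
    if rng.random() < p_abstain:
        return None                      # abstention
    true_label = 1 if x >= t else 0
    p_flip = 0.5 - min(0.5, c * dist ** b)
    if rng.random() < p_flip:
        return 1 - true_label            # flipped label
    return true_label

For instance, with the defaults above and t = 0.5, a query at x = 0.501 abstains with probability 0.999, while a query at x = 0.9 is answered, and answered correctly, with high probability.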

Figure 1: A classifier with boundary $g$ in the smooth boundary fragment class. Label 1 is assigned to the region above the boundary, and label 0 to the region below (red region)

Figure 2: The distributions above satisfy Conditions 1 and 2, but the abstention feedback is useless since the abstention probability is flat in an interval around the decision boundary

Figure 3: The distributions above satisfy Conditions 1, 2, and 3.

3 Learning one-dimensional thresholds

In this section, we start with the one-dimensional case ($d = 1$) to demonstrate the main idea. We will generalize these results to a multidimensional instance space in the next section.

When $d = 1$, the decision boundary becomes a point in $[0,1]$, and the corresponding classifier is a threshold function over $[0,1]$. In other words, the hypothesis space becomes $\{\mathbb{1}[x \ge t] : t \in [0,1]\}$. We denote the ground truth decision boundary by $g^* \in [0,1]$. We want to find a $\hat{g}$ such that $|\hat{g} - g^*|$ is small while making as few queries as possible.

3.1 Algorithm

The proposed algorithm is a binary search style algorithm shown as Algorithm 1. (For the sake of simplicity, we assume the number of iterations determined by $\epsilon$ is an integer.) Algorithm 1 takes a desired precision $\epsilon$ and confidence level $\delta$ as its input, and returns an estimate $\hat{g}$ of the decision boundary $g^*$. The algorithm maintains an interval in which $g^*$ is believed to lie, and shrinks this interval iteratively. To find the subinterval that contains $g^*$, Algorithm 1 relies on two auxiliary functions (shown in Procedure 2) to conduct adaptive sequential hypothesis tests regarding subintervals of the current interval.

1: Input: $\epsilon$, $\delta$
2: Initialize the search interval to $[0,1]$
3: for each iteration do
4:     Define the three quartiles of the current interval
5:     Initialize empty arrays for the abstention and 0/1 responses
6:     for $n = 1, 2, \dots$ do
7:         Query at the three quartiles, and receive responses
8:         for each response do
9:             Record whether the labeler abstained, and the 0/1 label (as $-1$/$1$) if it did not
10:            if the response is a 0/1 label then
11:                record the non-abstention and the label
12:            else
13:                record the abstention
14:            end if
15:        end for
16:        Check if the differences of abstention responses are statistically significant
17:        if CheckSignificant-Var indicates the left quartile abstains less often than the middle point then
18:            discard the leftmost quarter of the interval; break
19:        else if CheckSignificant-Var indicates the right quartile abstains less often than the middle point then
20:            discard the rightmost quarter of the interval; break
21:        end if
22:        Check if the differences between 0 and 1 labels are statistically significant
23:        if CheckSignificant indicates the labels at the left quartile are predominantly 0 then
24:            discard the leftmost quarter of the interval; break
25:        else if CheckSignificant indicates the labels at the right quartile are predominantly 1 then
26:            discard the rightmost quarter of the interval; break
27:        end if
28:    end for
29: end for
30: Output: $\hat{g}$, a point in the final interval
Algorithm 1 The active learning algorithm for learning thresholds
1: $c_1$, $c_2$ are absolute constants defined in Proposition 1 and Proposition 2
2: $X_1, \dots, X_n$ are i.i.d. random variables bounded by 1. $\delta$ is the confidence level. Detect whether $\mathbb{E}X_1 > 0$.
3: function CheckSignificant($\{X_i\}_{i=1}^{n}$, $\delta$)
4:     Compute the deviation bound of Proposition 1 at confidence level $\delta$
5:     Return whether the empirical mean of $\{X_i\}$ exceeds this bound
6: end function
7: function CheckSignificant-Var($\{X_i\}_{i=1}^{n}$, $\delta$)
8:     Calculate the empirical variance of $\{X_i\}$
9:     Compute the variance-dependent deviation bound of Proposition 2 at confidence level $\delta$
10:    Return whether the empirical mean of $\{X_i\}$ exceeds this bound AND enough samples have been observed
11: end function
Procedure 2 Adaptive sequential testing

Suppose $g^*$ lies in the current interval. Algorithm 1 tries to shrink this interval to $3/4$ of its length in each iteration by repetitively querying the three quartiles of the interval. To determine which specific subinterval to choose, the algorithm uses 0/1 labels and abstention responses simultaneously. Since the ground truth labels are determined by $\mathbb{1}[x \ge g^*]$, one can infer that if the number of queries that return label 0 at the left (right) quartile is statistically significantly more (less) than the number that return label 1, then $g^*$ should be on the right (left) side of that quartile. Similarly, from Condition 1, if the number of non-abstention responses at the left (right) quartile is statistically significantly more than the number of non-abstention responses at the middle point, then $g^*$ should be closer to the middle point than to the left (right) quartile.
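The following is a minimal, self-contained Python sketch of this quartile-shrinking search (illustrative only, not the paper's exact Algorithm 1): it uses only the 0/1 labels with an anytime Hoeffding test in place of Procedure 2, and it ignores the abstention-based shrinking step. The argument labeler is any function mapping a query point in $[0,1]$ to 0, 1, or None (abstention), such as the simulated labeler sketched in Section 2.1; max_queries_per_round is an assumed safety cap.

import math

def learn_threshold(labeler, epsilon=0.01, delta=0.05, max_queries_per_round=10**6):
    """Noisy binary search for a threshold in [0, 1] (illustrative sketch only).

    Each round repeatedly queries the left and right quartiles of the current
    interval and discards a quarter of the interval as soon as the labels at
    one quartile are significantly biased toward 0 or 1. Abstentions are
    ignored here, unlike in the actual Algorithm 1, which also exploits them.
    """
    lo, hi = 0.0, 1.0
    rounds = max(1, math.ceil(math.log(epsilon) / math.log(3.0 / 4.0)))
    for _ in range(rounds):
        u = lo + (hi - lo) / 4.0          # left quartile
        v = lo + 3.0 * (hi - lo) / 4.0    # right quartile
        sum_u = sum_v = 0                 # sums of +/-1 labels
        for n in range(1, max_queries_per_round + 1):
            yu, yv = labeler(u), labeler(v)
            if yu is not None:
                sum_u += 1 if yu == 1 else -1
            if yv is not None:
                sum_v += 1 if yv == 1 else -1
            # Anytime Hoeffding threshold, valid simultaneously for all n (union bound).
            thresh = math.sqrt(2.0 * n * math.log(8.0 * n * n * rounds / delta))
            if sum_u <= -thresh:          # mostly 0s at u: the threshold lies to its right
                lo = u
                break
            if sum_v >= thresh:           # mostly 1s at v: the threshold lies to its left
                hi = v
                break
    return (lo + hi) / 2.0

With the simulated labeler above, e.g. labeler = lambda x: simulated_labeler(x, t=0.37), the shrinking interval keeps 0.37 inside it with high probability, although this label-only sketch needs far more queries than the full algorithm, which also exploits abstention responses.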

Algorithm 1 relies on the ability to shrink the search interval by statistically comparing the numbers of obtained labels at the quartile locations. As a result, a main building block of Algorithm 1 is to test, with statistical significance, whether one collection of i.i.d. bounded random variables is greater in expectation than another. In Procedure 2, we have two test functions, CheckSignificant and CheckSignificant-Var, that take i.i.d. random variables $X_1, \dots, X_n$ (bounded by 1, e.g., differences of paired responses) and a confidence level $\delta$ as their input, and output whether it is statistically significant to conclude that $\mathbb{E}X_1 > 0$.

CheckSignificant is based on the following uniform concentration result regarding the empirical mean:

Proposition 1.

Suppose $X_1, X_2, \dots$ is a sequence of i.i.d. random variables bounded by 1 in absolute value with $\mathbb{E}X_1 = \mu$. Take any $\delta > 0$. Then there is an absolute constant $c_1 > 0$ such that with probability at least $1 - \delta$, for all $n$ simultaneously, the empirical mean $\frac{1}{n}\sum_{i=1}^{n} X_i$ deviates from $\mu$ by at most a term of order $\sqrt{\log(n/\delta)/n}$.
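For concreteness, one bound of this type, with explicit but purely illustrative constants (obtained from Hoeffding's inequality for variables bounded by 1 in absolute value together with a union bound over $n$; the paper's constant $c_1$ and exact form may differ), is

\[ \Pr\left[\exists\, n \ge 1:\ \left|\frac{1}{n}\sum_{i=1}^{n} X_i - \mu\right| > \sqrt{\frac{2\ln\!\big(4n^{2}/\delta\big)}{n}}\right] \le \delta . \]

Sharper versions, whose width depends on $\log\log n$ rather than $\log n$, follow from law-of-the-iterated-logarithm arguments (cf. [21]).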

In Algorithm 1, we use CheckSignificant to detect whether the expected number of queries that return label 0 at the left (right) quartile is more/less than the expected number that return label 1, with statistical significance.

CheckSignificant-Var is based on the following uniform concentration result, which further utilizes the empirical variance of $X_1, \dots, X_n$:

Proposition 2.

There is an absolute constant $c_2 > 0$ such that with probability at least $1 - \delta$, for all $n$ simultaneously, the deviation of the empirical mean from $\mu$ is bounded by a term that scales with the empirical standard deviation $\hat{\sigma}_n/\sqrt{n}$ (up to logarithmic factors), plus a lower-order term of order $1/n$.
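For intuition, a bound of this type can be assembled from the empirical Bernstein inequality of Maurer and Pontil together with a union bound over $n$ (again with illustrative constants rather than the paper's $c_2$): for i.i.d. $X_i \in [-1,1]$, with probability at least $1 - \delta$, for all $n \ge 2$ simultaneously,

\[ \left|\frac{1}{n}\sum_{i=1}^{n} X_i - \mu\right| \;\le\; \sqrt{\frac{2\hat{\sigma}_n^{2}\,\ln\!\big(8n^{2}/\delta\big)}{n}} \;+\; \frac{14\,\ln\!\big(8n^{2}/\delta\big)}{3(n-1)}, \]

where $\hat{\sigma}_n^{2}$ is the sample variance of $X_1, \dots, X_n$. The first term becomes very small when $\hat{\sigma}_n^{2}$ is small, which is exactly the regime exploited by CheckSignificant-Var.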

The use of the variance results in a tighter bound when the variance is small.

In Algorithm 1, we use CheckSignificant-Var to detect the statistical significance of the relative order of the number of queries that return non-abstention responses at the left (right) quartile compared to the number of non-abstention responses at the middle point. This results in a better query complexity than using CheckSignificant under Condition 3, since the variance of the difference between the numbers of abstention responses approaches 0 when the interval zooms in on $g^*$. (We do not apply CheckSignificant-Var to 0/1 labels, because unlike the difference between the numbers of abstention responses at an outer quartile and the middle point, the variance of the difference between the numbers of 0 and 1 labels stays above a positive constant.)
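A minimal Python sketch of the two tests, using the illustrative anytime bounds displayed above in place of the paper's exact thresholds with constants $c_1$ and $c_2$, might look as follows; xs is the list of observed values of the $X_i$, assumed to lie in $[-1, 1]$.

import math

def check_significant(xs, delta):
    """Return True if it is statistically significant that E[X] > 0.

    Uses an anytime Hoeffding deviation bound (union bound over the sample size).
    """
    n = len(xs)
    if n == 0:
        return False
    bound = math.sqrt(2.0 * math.log(4.0 * n * n / delta) / n)
    return sum(xs) / n > bound

def check_significant_var(xs, delta):
    """Variance-aware version of the same test; tighter when the X_i have small variance.

    Uses an anytime empirical Bernstein deviation bound.
    """
    n = len(xs)
    if n < 2:
        return False
    mean = sum(xs) / n
    var = sum((x - mean) ** 2 for x in xs) / (n - 1)   # unbiased sample variance
    log_term = math.log(8.0 * n * n / delta)
    bound = math.sqrt(2.0 * var * log_term / n) + 14.0 * log_term / (3.0 * (n - 1))
    return mean > bound

In Algorithm 1, check_significant would be fed the recorded $\pm 1$ labels at a quartile, and check_significant_var the differences of abstention indicators between the middle point and an outer quartile; when the search interval is close to $g^*$, those differences are almost always 0, so their sample variance, and hence the bound above, is small.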

3.2 Analysis

For Algorithm 1 to be statistically consistent, we only need Condition 1.

Theorem 1.

Let $g^*$ be the ground truth. If the labeler satisfies Condition 1 and Algorithm 1 stops and outputs $\hat{g}$, then $|\hat{g} - g^*| \le \epsilon$ with probability at least $1 - \delta$.

Under the additional Conditions 2 and 3, we can derive upper bounds on the query complexity of our algorithm. (Recall that the relevant parameters are defined in Conditions 2 and 3.)

Theorem 2.

Let be the ground truth, and be the output of Algorithm 1. Under Conditions 1 and 2, with probability at least , Algorithm 1 makes at most queries.

Theorem 3.

Let be the ground truth, and be the output of Algorithm 1. Under Conditions 1 and 3, with probability at least , Algorithm 1 makes at most queries.

The query complexity given by Theorem 3 is independent of the noise parameter $\beta$ that governs the flipping rate, and is consequently smaller than the bound in Theorem 2. This improvement is due to the use of abstention responses, which become much more informative under Condition 3.

3.3 Lower Bounds

In this subsection, we give lower bounds on the query complexity in the one-dimensional case and establish the near optimality of Algorithm 1. We will give corresponding lower bounds for the high-dimensional case in the next section.

The lower bound in [24] can be easily generalized to Condition 2:

Theorem 4.

([24]) There is a universal constant and a labeler satisfying Conditions 1 and 2, such that for any active learning algorithm , there is a , such that for small enough , .

The query complexity of our algorithm (Theorem 3) is also almost tight under Conditions 1 and 3 with a polynomial abstention rate.

Theorem 5.

There is a universal constant and a labeler satisfying Conditions 1, 2, and 3 with ( and are constants), such that for any active learning algorithm , there is a , such that for small enough , .

3.4 Remarks

Our results confirm the intuition that learning with abstention is easier than learning with noisy labels. This is true because a noisy label might mislead the learning algorithm, but an abstention response never does. Our analysis shows, in particular, that if the labeler never abstains and instead, with some probability, outputs a completely noisy (uniformly random) label in place of the true label, then the near-optimal query complexity is significantly larger than the near-optimal query complexity associated with a labeler who abstains with the same probability and never flips a label. More precisely, while in both cases the labeler outputs the same number of corrupted labels, the query complexity of the abstention-only case is significantly smaller than that of the noise-only case.
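A back-of-the-envelope calculation (not the paper's formal argument) illustrates the gap. Suppose that at a given query point the labeler gives an uninformative response with probability $1 - \gamma$ for some small $\gamma > 0$: in the noise-only case it returns a uniformly random label with probability $1 - \gamma$ and the true label otherwise, while in the abstention-only case it returns $\perp$ with probability $1 - \gamma$ and the true label otherwise. Determining the true label at that point with constant confidence then requires on the order of $\gamma^{-2}$ queries in the noise-only case (one must detect a $\Theta(\gamma)$ bias in the observed labels), but only on the order of $\gamma^{-1}$ queries in the abstention-only case (one merely waits for a single non-$\perp$ response).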

Note that the queries made by Algorithm 1 are of two kinds: queries which return 0/1 labels and are used by the function CheckSignificant, and queries which return abstentions and are used by the function CheckSignificant-Var. Algorithm 1 stops querying when the responses of one of the two kinds become statistically significant. Under Condition 2, our proof actually shows that the optimal number of queries is dominated by the number of queries used by the CheckSignificant function; in other words, a simplified variant of Algorithm 1 which excludes the use of abstention feedback is near optimal. Similarly, under Condition 3, the optimal query complexity is dominated by the number of queries used by the CheckSignificant-Var function; hence the variant of Algorithm 1 which disregards the 0/1 labels would be near optimal.

4 The multidimensional case

We follow [6] to generalize the results from one-dimensional thresholds to the $d$-dimensional smooth boundary fragment class.

4.1 Lower bounds

Theorem 6.

There are universal constants , , and a labeler satisfying Conditions 1 and 2, such that for any active learning algorithm , there is a , such that for small enough , .

Theorem 7.

There is a universal constant and a labeler satisfying Conditions 1, 2, and Condition 3 with ( and are constants), such that for any active learning algorithm , there is a , such that for small enough , .

4.2 Algorithm and Analysis

Recall that the decision boundary of a classifier in the smooth boundary fragment class can be seen as the epigraph of a smooth function $g \in \Sigma(K,\alpha)$. For $d > 1$, we can reduce the problem to the one-dimensional problem by discretizing the first $d - 1$ dimensions of the instance space and then performing a polynomial interpolation. The algorithm is shown as Algorithm 3. For the sake of simplicity, we assume the discretization parameters in Algorithm 3 are integers.

1: Input: $\epsilon$, $\delta$, and the smoothness parameters
2: Set the grid resolution and the interpolation cells from these inputs.
3: For each grid point $\tilde{x}$ in the discretization of $[0,1]^{d-1}$, apply Algorithm 1 with suitable parameters to learn a threshold that approximates $g^*(\tilde{x})$
4: Partition $[0,1]^{d-1}$ into cells of neighboring grid points
5: For each cell, perform a polynomial interpolation of the learned thresholds to obtain the estimate $\hat{g}$ on that cell
6: Output: $\hat{g}$
Algorithm 3 The active learning algorithm for the smooth boundary fragment class
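As an illustration of this reduction, the following Python sketch (not the paper's exact procedure; the grid size, cell size, and polynomial degree are placeholders that would in practice be set from $\epsilon$, $K$, and $\alpha$) handles the case $d = 2$: it discretizes the first coordinate, runs a one-dimensional threshold learner along each vertical line, and fits a low-degree polynomial to the learned thresholds within each cell of consecutive grid points. learn_threshold can be any one-dimensional learner with the interface of the sketch in Section 3.1.

import numpy as np

def learn_boundary_2d(labeler_2d, learn_threshold, grid_size=32, cell_size=4, degree=2):
    """Estimate a boundary function g: [0,1] -> [0,1] for d = 2 (sketch only).

    labeler_2d(x1, x2) returns 0, 1, or None. For each grid point x1 we learn a
    threshold along the vertical line {(x1, x2) : x2 in [0,1]}, then interpolate
    the learned thresholds with one polynomial per cell of consecutive grid points.
    """
    xs = (np.arange(grid_size) + 0.5) / grid_size       # grid in the first coordinate
    thresholds = np.array([
        learn_threshold(lambda x2, x1=x1: labeler_2d(x1, x2)) for x1 in xs
    ])
    polys = []
    for start in range(0, grid_size, cell_size):
        cell_x = xs[start:start + cell_size]
        cell_t = thresholds[start:start + cell_size]
        deg = min(degree, len(cell_x) - 1)
        polys.append((cell_x[-1], np.poly1d(np.polyfit(cell_x, cell_t, deg))))

    def g_hat(x1):
        # Evaluate the piecewise-polynomial estimate at x1 in [0, 1].
        for right_end, poly in polys:
            if x1 <= right_end:
                return float(np.clip(poly(x1), 0.0, 1.0))
        return float(np.clip(polys[-1][1](x1), 0.0, 1.0))

    return g_hat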

We have consistency guarantees and upper bounds similar to those in the one-dimensional case.

Theorem 8.

Let $g^*$ be the ground truth. If the labeler satisfies Condition 1 and Algorithm 3 stops and outputs $\hat{g}$, then $\|\hat{g} - g^*\|_1 \le \epsilon$ with probability at least $1 - \delta$.

Theorem 9.

Let be the ground truth, and be the output of Algorithm 3. Under Conditions 1 and 2, with probability at least , Algorithm 3 makes at most queries.

Theorem 10.

Let be the ground truth, and be the output of Algorithm 3. Under Conditions 1 and 3, with probability at least , Algorithm 3 makes at most queries.

Acknowledgments.

We thank NSF for research support under IIS-1162581, CCF-1513883, and CNS-1329819.

References

  • [1] M.-F. Balcan and P. M. Long. Active and passive learning of linear separators under log-concave distributions. In COLT, 2013.
  • [2] Maria-Florina Balcan, Alina Beygelzimer, and John Langford. Agnostic active learning. In Proceedings of the 23rd International Conference on Machine Learning, pages 65–72. ACM, 2006.
  • [3] Maria-Florina Balcan and Steve Hanneke. Robust interactive learning. In Proceedings of The 25th Conference on Learning Theory, 2012.
  • [4] A. Beygelzimer, D. Hsu, J. Langford, and T. Zhang. Agnostic active learning without constraints. In NIPS, 2010.
  • [5] Alina Beygelzimer, Daniel Hsu, John Langford, and Chicheng Zhang. Search improves label for active learning. arXiv preprint arXiv:1602.07265, 2016.
  • [6] Rui M. Castro and Robert D. Nowak. Minimax bounds for active learning. IEEE Transactions on Information Theory, 54(5):2339–2353, 2008.
  • [7] Yuxin Chen, S Hamed Hassani, Amin Karbasi, and Andreas Krause. Sequential information maximization: When is greedy near-optimal? In Proceedings of The 28th Conference on Learning Theory, pages 338–363, 2015.
  • [8] D. A. Cohn, L. E. Atlas, and R. E. Ladner. Improving generalization with active learning. Machine Learning, 15(2), 1994.
  • [9] S. Dasgupta. Coarse sample complexity bounds for active learning. In NIPS, 2005.
  • [10] S. Dasgupta, D. Hsu, and C. Monteleoni. A general agnostic active learning algorithm. In NIPS, 2007.
  • [11] Meng Fang and Xingquan Zhu. I don’t know the label: Active learning with blind knowledge. In Pattern Recognition (ICPR), 2012 21st International Conference on, pages 2238–2241. IEEE, 2012.
  • [12] Steve Hanneke. Teaching dimension and the complexity of active learning. In Learning Theory, pages 66–81. Springer, 2007.
  • [13] Tibor Hegedűs. Generalized teaching dimensions and the query complexity of learning. In Proceedings of the Eighth Annual Conference on Computational Learning Theory, pages 108–117. ACM, 1995.
  • [14] M. Kääriäinen. Active learning in the non-realizable case. In ALT, 2006.
  • [15] Christoph Käding, Alexander Freytag, Erik Rodner, Paul Bodesheim, and Joachim Denzler. Active learning and discovery of object categories in the presence of unnameable instances. In Computer Vision and Pattern Recognition (CVPR), 2015 IEEE Conference on, pages 4343–4352. IEEE, 2015.
  • [16] Yuan-Chuan Li and Cheh-Chih Yeh. Some equivalent forms of bernoulli’s inequality: A survey. Applied Mathematics, 4(07):1070, 2013.
  • [17] Stanislav Minsker. Plug-in approach to active learning. Journal of Machine Learning Research, 13(Jan):67–90, 2012.
  • [18] Mohammad Naghshvar, Tara Javidi, and Kamalika Chaudhuri. Bayesian active learning with non-persistent noise. IEEE Transactions on Information Theory, 61(7):4080–4098, 2015.
  • [19] R. D. Nowak. The geometry of generalized binary search. IEEE Transactions on Information Theory, 57(12):7893–7906, 2011.
  • [20] Maxim Raginsky and Alexander Rakhlin. Lower bounds for passive and active learning. In Advances in Neural Information Processing Systems, pages 1026–1034, 2011.
  • [21] Aaditya Ramdas and Akshay Balsubramani. Sequential nonparametric testing with the law of the iterated logarithm. In Proceedings of the Conference on Uncertainty in Artificial Intelligence, 2016.
  • [22] A. B. Tsybakov. Optimal aggregation of classifiers in statistical learning. Annals of Statistics, 32:135–166, 2004.
  • [23] Ruth Urner, Shai Ben-david, and Ohad Shamir. Learning from weak teachers. In International Conference on Artificial Intelligence and Statistics, pages 1252–1260, 2012.
  • [24] Songbai Yan, Kamalika Chaudhuri, and Tara Javidi. Active learning from noisy and abstention feedback. In Communication, Control, and Computing (Allerton), 2015 53rd Annual Allerton Conference on. IEEE, 2015.
  • [25] Chicheng Zhang and Kamalika Chaudhuri. Beyond disagreement-based agnostic active learning. In Advances in Neural Information Processing Systems, pages 442–450, 2014.
  • [26] Chicheng Zhang and Kamalika Chaudhuri. Active learning from weak and strong labelers. In Advances in Neural Information Processing Systems, pages 703–711, 2015.

Appendix A Proofs of query complexities

A.1 Properties of adaptive sequential testing in Procedure 2

Lemma 1.

Suppose $X_1, X_2, \dots$ is a sequence of i.i.d. random variables bounded by 1 with $\mathbb{E}X_1 \le 0$. Then with probability at least $1 - \delta$, CheckSignificant in Procedure 2 returns false for all $n$ simultaneously.

Proof.

This is immediate by applying Proposition 1 to the sequence $\{X_i\}$. ∎

Lemma 2.

Suppose $X_1, X_2, \dots$ is a sequence of i.i.d. random variables bounded by 1 with $\mathbb{E}X_1 > 0$. Let $N$ be sufficiently large, as specified in the proof (it depends on $\mathbb{E}X_1$ and $\delta$ through an absolute constant). Then with probability at least $1 - \delta$, CheckSignificant applied to $X_1, \dots, X_N$ in Procedure 2 returns true.

Proof.

Let . CheckSignificant returns false if and only if
.

Suppose for constant and . is set to be sufficiently large, such that (1) ; (2) ; (3) is decreasing when . Here (2) is satisfiable since as , (3) is satisfiable since as . (2) and (3) together imply .

Since and , we have .

Since if , we have , and thus

where (a) follows by if , and (b) follows by if .

Thus, we have

(c) follows by , , and if . (d) follows by our choice of .

Therefore,

which is at most $\delta$ by Hoeffding's inequality. ∎

Lemma 3.

Suppose $X_1, X_2, \dots$ is a sequence of i.i.d. random variables bounded by 1 with $\mathbb{E}X_1 \le 0$. Then with probability at least $1 - \delta$, CheckSignificant-Var in Procedure 2 returns false for all $n$ simultaneously.

Proof.

Define . It is easy to check . The result is immediate from Proposition 2. ∎

Lemma 4.

Suppose is a sequence of i.i.d. random variables such that , , where , . Let , ( is a constant specified in the proof). Then with probability at least , CheckSignificant-Var in Procedure 2 returns true.

Proof.

Let , be the constant in Lemma 14. Set .

CheckSignificant-Var returns false if and only if .

By applying Lemma 14 to ,