Analysis of a Design Pattern for Teaching with Features and Labels

11/18/2016, by Christopher Meek, et al.

We study the task of teaching a machine to classify objects using features and labels. We introduce the Error-Driven-Featuring design pattern for teaching with features and labels, in which a teacher prefers to introduce features only when they are needed. We analyze the potential risks and benefits of this teaching pattern through teaching protocols and illustrative examples, and by providing bounds on the effort required by an optimal machine teacher using a linear learning algorithm, the most commonly used type of learner in interactive machine learning systems. Our analysis provides a deeper understanding of the potential trade-offs of using different learning algorithms and between the effort required for featuring (creating new features) and labeling (providing labels for objects).


Introduction

Featuring and labeling are critical parts of the interactive machine learning process, in which a person and a machine learning algorithm coordinate to build a predictive system (a classifier, entity extractor, etc.). Unlike the case of using labels alone, little is known about how to quantify the effort required to teach a machine using both features and labels. In this paper, we consider the problem of teaching a machine how to classify objects when the teacher can provide labels for objects and provide features, that is, functions from objects to values. Our aim is to understand the effort required by a teacher to find a suitable representation for objects and to teach the target classification function, and to provide guidance to teachers about how to provide features and labels when teaching.

Similar to previous work on active learning and teaching dimension, we take an idealized view of the cost of labeling and featuring. In particular, we ignore variability in the effort required for these respective actions. In addition, similar to the work on teaching dimension, we assume an idealized teacher with complete knowledge about the learner, target classification function and the range of possible objects that we want to classify.

We analyze the effort required to teach a classification function relative to a given set of feature functions. This set of feature functions can be thought of as a set of teachable functions. There are several observations that motivate us to quantify teaching effort relative to a set of feature functions. It is natural to expect that the available set of teachable functions depends on the specific learner that we are teaching and the types of objects that we want to classify (e.g., images versus text documents). In addition, the teaching effort required to teach a learner is heavily dependent on the available set of functions. For instance, if the teacher could directly teach the learner the target classification function, then only one function would be required, and, for a wide variety of learning algorithms, the teacher would only be required to provide two labeled examples in the case of binary classification. Of course, it is unreasonable to expect that the target classification function can be directly encoded in a feature function and, in fact, if this were possible then we would not need a machine learning algorithm to build the predictor. For these reasons, we assume that there is a set of features that are teachable and define the effort relative to this set of features. In order to capture dependencies among features, we consider a lattice of sets of features rather than a single set of features. We use the lattice to enforce our assumption that features are taught one at a time and to capture other dependencies, such as only allowing a feature to be taught if all of its constituent features have been taught (e.g., the feature of being tall and heavy can only be included in a feature set if the features of being tall and of being heavy have previously been defined). Thus, the lattice of feature sets captures the potential alternative sequences of features that the teacher can use to teach a learner.

We introduce the Error-Driven-Featuring (EDF) design pattern for teaching, in which the teacher prefers to add features only if they are needed to fix a prediction error on the training set. In order to analyze the risks and benefits of the EDF teaching pattern, we consider two teaching protocols, one which forces the teacher to use the EDF teaching pattern and one which does not. By quantifying the featuring and labeling effort required by these protocols, we can provide a deeper understanding of the risks and benefits of the EDF pattern and potential trade-offs between featuring and labeling more generally. In our analysis we consider two specific learning algorithms: a one-nearest-neighbor classifier and a linear classifier. Using our measures of teaching cost, we demonstrate that there are significant risks in adding features for high-capacity learning algorithms (1NN) and that these risks can be controlled by using a low-capacity learning algorithm (linear classifier). We also demonstrate that the additional labeling costs associated with using the EDF teaching pattern for both high- and low-capacity learning algorithms can be bounded. The combination of these results suggests that it would be valuable to empirically evaluate the EDF design pattern for teaching. In analyzing the costs of the Error-Driven-Featuring protocol, we provide new results on the hypothesis-specific pool-based teaching dimension of linear classifiers and the pool-based exclusion dimension of linear classifiers. (This paper is an extended version of the paper by Meek et al. (2016).)

Related Work

There has been a variety of work aimed at understanding the labeling effort required to build classifiers. In this section we briefly review related work. First, we note that this work shares common roots with the work of Meek (2016), but there the focus is on prediction errors rather than teaching effort.

One closely related concept is that of teaching dimension. The primary aim of that work is to quantify the worst-case minimal effort to teach a learner one classification function (typically called a concept in this literature) from among a set of alternative classification functions. There is a large body of work aimed at understanding the teaching dimension, refinements of the teaching dimension (e.g., extended, recursive), and the relationship between these and other concepts from learning theory such as the VC-dimension (e.g., Doliwa et al. 2014, Balbach 2008, Zilles et al. 2011). Our work, rather than attempting to quantify the difficulty of learning among a set of classification functions, is aimed at quantifying the effort required to teach any particular classification function and at understanding the relationship between adding features and adding labels. The work on teaching dimension abstracts the role of the learner and instead deals directly with hypothesis classes of classification functions. Furthermore, the work on teaching dimension abstracts away the concept of features, making it useless for understanding the interplay between the learner, featuring, and labeling. That said, several of the concepts that we use have been treated previously in this and related literature. For instance, the idea of a concept teaching set is closely related to that of a teaching sequence (Goldman and Kearns 1995), and our optimal concept specification cost is essentially the specification number of a hypothesis (Anthony et al. 1992); we add "concept" to distinguish it from the representation specification cost. Other existing concepts include the exclusion dimension (Angluin 2004), the unique specification dimension (Hegedűs 1995), and the certificate size (Hellerstein et al. 1996), which are similar to our invalidation cost. In addition, Liu et al. (2016) define the teaching dimension of a hypothesis, which is equivalent to the specification number and our concept specification cost. They also provide bounds on the concept specification cost for linear classifiers. Their results are related to our Proposition 7 but, unlike our result, assume that the space of objects is dense. In the terminology of Zhu (2015), we provide the hypothesis-specific teaching dimension for pool-based teaching. For many domains, such as image classification, document classification, and entity extraction, and their associated feature sets, the assumption of a dense representation is unnatural (e.g., we cannot have a fractional number of words in a document). Like other work on classical teaching dimension, this prior work does not consider teaching with both labels and features.

The other body of related work is active learning. The aim of this body of work is to develop algorithms that choose which items to label, and the quality of an algorithm is measured by the number of labels required to obtain a desirable classification function. Thus, given our interest in both labeling and featuring, this body of work is perhaps better named "active labeling". In contrast to the work on teaching dimension, where the teacher has access to the target classification function, in active learning the teacher must choose the item to label without knowledge of the target classification function. This makes active learning critical to many practical systems. An excellent survey of research in this area is given by Settles (2012). Not surprisingly, the work on active learning is related to work on teaching dimension (Hanneke 2007).

Features, Labels and Learning Algorithms

In this section, we define features, labels and learning algorithms. These three concepts are the core concepts needed to discuss the cost of teaching a machine to classify objects. Thus, these definitions are the foundation of the remainder of the paper. In addition to providing these definitions, we also describe two properties of learning algorithms related to machine teaching and we describe two specific learning algorithms that are used in the remainder of the paper.

We are interested in building a classifier of objects. We use x and x' to denote particular objects and X to denote the set of objects of interest. We use y and y' for particular labels and Y to denote the space of possible labels. For binary classification Y = {0, 1}. A classification function is a function from X to Y. (Note that, while we call this mapping a classification function, the definition encompasses a broad class of prediction problems including structured prediction, entity extraction, and regression.) The set of classification functions is denoted by C. We use c* to denote the target classification function.

Central to this paper are features, functions that map objects to scalar values. A feature f is a function from objects to real numbers (i.e., from X to R). A feature set is a set of features, and we use F and F' to denote generic feature sets. A feature set F with |F| = d is d-dimensional. We use a d-dimensional feature set to map an object to a point in R^d: the mapped object, denoted F(x), is a vector of length d whose i-th entry is the result of applying the i-th feature function in F to the object.
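As an illustration (a minimal sketch, not from the paper; the dictionary-based object representation and the feature names are hypothetical), featurizing an object simply evaluates each feature function in a fixed order:

from typing import Callable, Dict, List, Tuple

# A feature maps an object to a real number; a feature set is an ordered
# list of such functions.
Feature = Callable[[Dict[str, float]], float]

def featurize(obj: Dict[str, float], feature_set: List[Feature]) -> Tuple[float, ...]:
    # Map an object to a point in R^d by applying each feature function in order.
    return tuple(f(obj) for f in feature_set)

length: Feature = lambda o: o["length"]
width: Feature = lambda o: o["width"]

print(featurize({"length": 2.0, "width": 3.0}, [length, width]))  # (2.0, 3.0)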

We define the potential sequences of teachable features via a lattice of feature sets. Our definition of a feature lattice enforces the restriction that features are taught sequentially. We use the term teachable feature functions for the set of all features that can be taught for the set of objects X. A feature lattice is a collection of finite subsets of the teachable feature functions such that every non-empty feature set in the lattice extends some other feature set in the lattice by exactly one feature. We restrict attention to finite sets to capture the fact that teachers can only teach a finite number of features. We note that the feature lattice also allows us to represent constraints on the order in which features can be taught. Such constraints arise naturally. For instance, before teaching the concept of the area of a rectangle one needs to first teach the concepts of length and width (e.g., the area feature can be added only if both the length and width features have been added).
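As an illustration of this ordering constraint (a minimal sketch with hypothetical feature names; the paper defines the lattice abstractly), the check below accepts a collection of feature sets only if every non-empty set extends some other set in the collection by exactly one feature:

def is_feature_lattice(sets):
    # Each node is a frozenset of feature names; a non-empty set is allowed
    # only if it adds exactly one feature to some other set in the lattice.
    sets = {frozenset(s) for s in sets}
    for F in sets:
        if not F:
            continue  # the empty feature set is always allowed
        has_parent = any(P < F and len(F - P) == 1 for P in sets)
        if not has_parent:
            return False
    return True

# "area" may only be taught after both "length" and "width" are available.
lattice = [set(), {"length"}, {"width"}, {"length", "width"},
           {"length", "width", "area"}]
print(is_feature_lattice(lattice))                       # True
print(is_feature_lattice([set(), {"length", "area"}]))   # False: skips a step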

These definitions are illustrated in Figure 1.

Figure 1: Two example feature lattices (panels (a) and (b)), each with four feature sets and nine objects. The shape and color of the objects denote the target binary classification. Each rectangular region is associated with a feature set and contains a plot of the objects under that feature set. The lowest rectangular region in each panel is associated with the empty feature set, which maps all objects to the same point; we depict this mapping graphically by overlaying the circles on top of the rectangles.

In order to define a learning algorithm we first define training sets and, because we are considering learning with alternative feature sets, featurized training sets. A training set T is a set of labeled examples (x, y). We consider only honest training sets, that is, sets such that y = c*(x) for every (x, y) in T. We say that the training set has n examples if |T| = n, and we refer to the objects appearing in T as the training examples. A training set is unfeaturized. We use feature sets to create featurized training sets: for a d-dimensional feature set F and a training set T, the featurized training set contains the pair (F(x), y) for each (x, y) in T. We call the resulting training set an F-featurized training set, or an F-featurization of the training set T.

Now we are prepared to define a learning algorithm. First, a d-dimensional learning algorithm is a function that takes a d-dimensional feature set F and a training set T and outputs a function h from R^d to Y. Thus, the output of a learning algorithm using F and training set T can be composed with the functions in the feature set to yield a classification function of objects (i.e., c(x) = h(F(x))). The hypothesis space of a d-dimensional learning algorithm is the image of the function and is denoted by H (with the dimensionality indicated when there is risk of confusion). A classification function c is consistent with a training set T if c(x) = y for every (x, y) in T. A d-dimensional learning algorithm is consistent if it outputs a hypothesis consistent with the training set whenever there is a hypothesis in its hypothesis space that is consistent with the training set. A vector learning algorithm is a set of d-dimensional learning algorithms, one for each dimensionality. A consistent vector learning algorithm is one in which each of the d-dimensional learning algorithms is consistent. Finally, a (feature-vector) learning algorithm takes a feature set F, a training set T, and a vector learning algorithm and returns a classification function in C; it applies the |F|-dimensional member of the vector learning algorithm and composes the result with F. When the vector learning algorithm is clear from context or we are discussing a generic vector learning algorithm, we drop it from the notation and write L(F, T).
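A minimal sketch of this composition, with hypothetical names and interfaces (the paper does not prescribe an implementation): a vector learner fits a hypothesis h on featurized points, and the returned classifier applies the feature map before h.

def featurize(x, feature_set):
    return tuple(f(x) for f in feature_set)

def feature_vector_learner(vector_learn, feature_set, training_set):
    # Featurize the labeled examples, learn a hypothesis h over R^d, and
    # return its composition with the feature map: c(x) = h(F(x)).
    featurized = [(featurize(x, feature_set), y) for x, y in training_set]
    h = vector_learn(featurized)
    return lambda x: h(featurize(x, feature_set))

def majority_label_learner(featurized_data):
    # A trivial (and not consistent) vector learner, used only to exercise
    # the composition above: predict the most common training label everywhere.
    labels = [y for _, y in featurized_data]
    guess = max(set(labels), key=labels.count)
    return lambda point: guess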

One important property of a feature set is whether it is sufficient to teach the target classification function. A feature set F is sufficient for learner L and target classification function c* if there exists a training set T such that L(F, T) = c*.

A natural desideratum for a learning algorithm is that adding a feature to a sufficient feature set should not make it impossible to teach a target classification function. We capture this with the following property of a learning algorithm. We say that a learning algorithm is monotonically sufficient if, whenever a feature set F is sufficient, any feature set obtained by adding a feature to F is also sufficient. Many learning algorithms, in fact, have this property.

We distinguish two types of training sets that are central to teaching. First, a training set T is a concept teaching set for feature set F and learning algorithm L if L(F, T) = c*. Second, a training set T is an invalidation set if there is an example in T that is not correctly classified by L(F, T).
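These two kinds of training sets can be checked mechanically. A minimal sketch, assuming a learner with the hypothetical interface learner(feature_set, training_set) -> classifier and a finite object pool:

def is_invalidation_set(learner, feature_set, training_set):
    # The training set is honest, so a training error signals that no
    # consistent hypothesis exists under this feature set (for a consistent learner).
    c = learner(feature_set, training_set)
    return any(c(x) != y for x, y in training_set)

def is_concept_teaching_set(learner, feature_set, training_set, objects, target):
    # A concept teaching set makes the learned classifier match the target
    # on every object in the (finite) pool.
    c = learner(feature_set, training_set)
    return all(c(x) == target(x) for x in objects)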

The following proposition shows that, for consistent learning algorithms, finding an invalidation set demonstrates that a feature set is not sufficient for the target classification function.

Proposition 1

If learning algorithm L is consistent and T is an invalidation set for feature set F, target concept c*, and L, then F is not sufficient for c* and L.

Meek (2016) suggests that identifying minimal invalidation sets might be helpful for teachers wanting to identify mislabeling errors and representation errors. In this paper, an invalidation set is an indication of a representation error, because we assume that the labels in the training set are correct, implying that there are no mislabeling errors.

In the remainder of the paper we consider two binary classification algorithms (Y = {0, 1}). The first learning algorithm is a consistent one-nearest-neighbor learning algorithm L_NN. Our one-nearest-neighbor algorithm is a set of d-dimensional one-nearest-neighbor learning algorithms that use a d-dimensional feature set to project the training set into R^d. The algorithm identifies the set of closest points and outputs the minimal label value of points in that set. Thus, if there is more than one closest point and their labels disagree, then the learned classifier will output 0. By construction, this is a consistent learning algorithm.
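The tie-breaking rule is easy to state in code. The following is a minimal sketch (the (feature_set, training_set) interface and function names are assumptions for illustration, not from the paper):

import math

def one_nn_learner(feature_set, training_set):
    # 1-nearest-neighbor over featurized points. Ties on distance are broken
    # by returning the minimal label among the equally close points, matching
    # the rule described above (exact float equality is adequate for a small
    # illustrative pool).
    points = [(tuple(f(x) for f in feature_set), y) for x, y in training_set]

    def classify(x):
        q = tuple(f(x) for f in feature_set)
        best = min(math.dist(q, p) for p, _ in points)
        return min(y for p, y in points if math.dist(q, p) == best)

    return classify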

The second learning algorithm is a linear learning algorithm L_lin. Our consistent linear learning algorithm is a set of d-dimensional linear learning algorithms for which the decision surface is a hyperplane in R^d, defined in terms of a weight vector w and an offset b. We consider the linear learner that produces the maximum margin separating hyperplane for a training set when one exists and outputs the constant zero function otherwise. Note that the maximum margin separating hyperplane for a training set is the separating hyperplane that maximizes the minimum distance between points in the training set and the hyperplane. Again, by construction, this is a consistent learning algorithm.
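As a rough illustration (not the authors' implementation), the sketch below approximates this learner with scikit-learn's linear SVC and a very large C, falling back to the constant zero function when the featurized training data are not separable; the function name and the (feature_set, training_set) interface are assumptions made for this sketch.

import numpy as np
from sklearn.svm import SVC  # assumes scikit-learn is installed

def max_margin_linear_learner(feature_set, training_set):
    # Large C approximates the hard-margin maximum-margin separator.
    X = np.array([[f(x) for f in feature_set] for x, _ in training_set])
    y = np.array([label for _, label in training_set])
    if len(set(y)) < 2:
        only = int(y[0])
        return lambda x: only          # degenerate case: a single label present
    svm = SVC(kernel="linear", C=1e9)
    svm.fit(X, y)
    if svm.score(X, y) < 1.0:
        return lambda x: 0             # not separable: constant zero function
    return lambda x: int(svm.predict([[f(x) for f in feature_set]])[0])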

We say that a feature set is linearly sufficient for the target classification function if it is sufficient for the target classification function when using a consistent linear learning algorithm.

We finish this section with the following proposition, which demonstrates that our learning algorithms are both monotonically sufficient.

Proposition 2

The learning algorithms L_NN and L_lin are monotonically sufficient.

Teaching Patterns, Protocols and Costs

In this section, we introduce our Error-Driven-Featuring (EDF) design pattern for teaching and two teaching protocols. We introduce the teaching protocols as a means to study the risks and benefits of our EDF teaching pattern.

Teaching patterns are related to design patterns (Gamma et al 1995). Whereas design patterns for programming are formalized best practices that a programmer can use to design software solutions to common problems, a design pattern for teaching (or teaching pattern) is a formalized best practice that a teacher can use to teach a computer.

We use a pair of teaching protocols to study the risks and benefits of our EDF teaching pattern. A teaching protocol is an algorithmic description of a method by which a teacher teaches a learner. In order to study a teaching pattern, in one protocol, we force the teacher to follow the teaching pattern and, in the other, we allow the teacher full control over their actions.

We contrast our teaching protocols by comparing the optimal teaching costs and, in a subsequent section, bounds on optimal teaching costs. To facilitate the discussion of optimal teaching costs we next define several teaching costs associated with a feature set.

Optimal Feature Set Teaching Costs

Next we define a set of costs for a feature set. The first is a measure of the cost of specifying the feature set. We measure the representation specification cost of a feature set F by its cardinality |F|. This idealized measure does not differentiate the effort required to specify different features. In practice, different features might require different effort to specify, and the cost to specify a feature will depend upon the interface through which features are communicated to the learner.

The second measure of a feature set is a measure of the cost of specifying a target classification function using the feature set and a given learning algorithm. We measure the optimal concept specification cost by the size of a minimal concept teaching set for c* using learner L if F is sufficient, and define it to be infinite otherwise.

The third measure of a feature set is a measure of the cost of demonstrating that the feature set is not sufficient for a given learning algorithm. We measure the optimal invalidation cost of a feature set F using learner L by the size of a minimal invalidation set if F is not sufficient, and define it to be infinite otherwise.

We define the optimal feature set cost vector for a feature set F and learning algorithm L to be the vector of length three whose first component is the representation specification cost, whose second component is the optimal concept specification cost, and whose third component is the optimal invalidation cost.

Consider the feature sets in Figure 1(a). A training set with three of the objects can serve both as a minimal concept teaching set for one feature set and learner and as a minimal invalidation set for another. From such sets we can read off the optimal feature set costs: the representation specification cost, the optimal concept specification cost, and the optimal invalidation cost. The optimal feature set cost vectors for all of the feature sets are shown in Table 1(a).
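For small pools such as the nine-object examples in Figure 1, these costs can be computed by brute force. The sketch below (a hypothetical interface, exponential in the pool size, for illustration only) searches labeled subsets in order of size:

from itertools import combinations

def optimal_costs(learner, feature_set, objects, target):
    # Returns the optimal concept specification cost and the optimal
    # invalidation cost; for a consistent learner exactly one of the two
    # is finite, and the other remains infinity.
    pool = [(x, target(x)) for x in objects]
    spec_cost = inval_cost = float("inf")
    for k in range(1, len(pool) + 1):
        for T in combinations(pool, k):
            c = learner(feature_set, list(T))
            if spec_cost == float("inf") and all(c(x) == target(x) for x in objects):
                spec_cost = k
            if inval_cost == float("inf") and any(c(x) != y for x, y in T):
                inval_cost = k
    return spec_cost, inval_cost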

Analysis of Teaching Protocols

Algorithm 1: Open-Featuring
  Input: learning algorithm L, set of objects X, a feature lattice, and target classification function c*.
  T ← ∅        // training set
  F ← ∅        // feature set
  c ← L(F, T)
  while there exists an object x with c(x) ≠ c*(x) do
      Action ← Choose-action()
      if Action = "Add-feature" then
          f ← Add-feature(PossFeat)
          F ← F ∪ {f}
          c ← L(F, T)
      else
          x ← Add-example()
          T ← T ∪ {(x, c*(x))}
          c ← L(F, T)
      end if
  end while
  return c

Algorithm 2: Error-Driven-Featuring
  Input: learning algorithm L, set of objects X, a feature lattice, and target classification function c*.
  T ← ∅        // training set
  F ← ∅        // feature set
  c ← L(F, T)
  while there exists an object x with c(x) ≠ c*(x) do
      x ← Add-example()
      T ← T ∪ {(x, c*(x))}
      c ← L(F, T)
      while there exists (x, y) ∈ T with c(x) ≠ y do
          f ← Add-feature(PossFeat)
          F ← F ∪ {f}
          c ← L(F, T)
      end while
  end while
  return c

Figure 2: Algorithms for the two teaching protocols: Open-Featuring and Error-Driven-Featuring.

Figure 2 describes two teaching protocols. In Algorithm 1, the teacher is able to choose whether to add a feature or to add a labeled example. Because the teacher can choose when to add a feature and when to add a labeled example (i.e., the teacher implements the Choose-action function), we call this teaching protocol the Open-Featuring protocol. When adding a feature (the Add-feature function), the teacher selects one of the features that can be taught given the feature lattice, and the teaching protocol adds the feature to the current feature set and retrains the current classifier. When adding a label (the Add-example function), the teacher chooses which labeled example to add to the current training set, and the teaching protocol adds the example to the training set and retrains the current classifier.

In Algorithm 2, the teacher can only add a feature if there is a prediction error in the training set. From Proposition 1, if we are using a consistent learner we know that this implies that the feature set is not sufficient and indicates the need to add additional features. Note this assumes that the teacher provides correct labels. For a related but alternative teaching protocol that allows for mislabeling errors see Meek (2016). In this protocol, if the current feature set is not sufficient, a teacher adds labeled examples to find an invalidation set which then enables them to add a feature to improve the feature representation. This process of creating invalidation sets continues until a sufficient feature set is identified. An ideal teacher under this protocol would want to minimize the effort to invalidate feature sets that are not sufficient. The cost of doing this for a particular feature set can be measured by the invalidation cost. There is a possibility that one can reuse examples from the invalidation sets of previously visited smaller feature sets, but the sum of the invalidation costs along paths in the feature lattice provides an upper bound on the cost of discovering sufficient feature sets.
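A minimal simulation of this protocol is sketched below, under the assumptions that learner(F, T) returns a classifier over objects (and tolerates empty inputs, e.g. by predicting 0) and that next_feature_set(F) returns the next teachable feature set along some path in the lattice; the teacher here is deliberately simple rather than optimal.

def error_driven_featuring(learner, objects, next_feature_set, target):
    # Label a currently misclassified object; add features only while the
    # classifier errs on the training set itself (an invalidation set).
    T, F = [], []
    c = learner(F, T)
    while any(c(x) != target(x) for x in objects):
        x = next(x for x in objects if c(x) != target(x))
        T.append((x, target(x)))       # teacher provides an honest label
        c = learner(F, T)
        while any(c(x) != y for x, y in T):
            F = next_feature_set(F)    # grow the feature set one step
            c = learner(F, T)
    return c, F, T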

Given these two protocols, it is natural to compare costs by the number of features added and the number of labeled examples added in defining the classifier. We can then associate a teaching cost with each feature set in the feature lattice. The teaching cost is also a function of the learning algorithm and the featuring protocol (Open or Error-Driven). The optimal teaching costs for the two learning algorithms and for different feature sets are given in Table 1(b). An infinite label cost indicates that the feature set cannot be used to teach the target classification function using that protocol and learning algorithm. Because our teaching cost has two components, we would need to choose a method to combine these two quantities in order to discuss optimal teaching policies. Once the teacher has provided the learner with a feature set that is sufficient, the teacher needs to teach the concept represented by the classification function. The labeling cost required to do this is captured by the concept specification cost.

Feat. Set   (cost vector per learning algorithm)
            (0, ∞, 2)    (0, ∞, 2)
            (1, 3, ∞)    (1, ∞, 3)
            (2, 7, ∞)    (2, ∞, 3)
            (2, 2, ∞)    (2, 2, ∞)
            (0, ∞, 2)    (0, ∞, 2)
            (1, 9, ∞)    (1, ∞, 3)
            (1, 8, ∞)    (1, ∞, 3)
            (2, 9, ∞)    (2, 3, ∞)
(a) Optimal feature set cost vectors (representation specification cost, optimal concept specification cost, optimal invalidation cost), one column per learning algorithm.

Feat. Set   Open                    Error-Driven
            (0, ∞)   (0, ∞)         (0, ∞)   (0, ∞)
            (1, 3)   (1, ∞)         (1, 3)   (1, ∞)
            (2, 7)   (2, ∞)         (2, 7)   (2, ∞)
            (2, 2)   (2, 3)         (2, 2)   (2, 3)
            (0, ∞)   (0, ∞)         (0, ∞)   (0, ∞)
            (1, 9)   (1, ∞)         (1, 9)   (1, ∞)
            (1, 8)   (1, ∞)         (1, 8)   (1, ∞)
            (2, 9)   (2, 2)         (2, ∞)   (2, 3)
(b) Optimal teaching costs (number of features, number of labels) under the Open-Featuring and Error-Driven-Featuring protocols, one column per learning algorithm within each protocol.
Table 1: Optimal feature set costs and optimal teaching costs for all of the feature sets from Figure 1.

The Open-Featuring protocol affords the teacher more flexibility than the Error-Driven-Featuring protocol. In particular, for an ideal teacher there would be no reason to prefer the Error-Driven-Featuring protocol. If, however, the teacher is not an ideal teacher, one who is not always able to identify features that improve the representation or one who benefits from inspecting an invalidation set to identify features, then one might prefer the Error-Driven-Featuring protocol. In particular, there is a possibility that adding a poor feature can increase the labeling cost. For instance, when using L_NN, a poor teacher who has already taught the learner one feature might add a different feature than the ideal one, significantly increasing the concept specification cost. In the next section we demonstrate that this risk is, in fact, unbounded for L_NN.

One of the shortcomings of the Error-Driven-Featuring protocol is that, once the feature set is sufficient, the teacher cannot add another feature. For instance, in the example in Figure 1(a), the feature sets above the first sufficient one are inaccessible. This might mean that representations that have lower concept specification costs cannot be used to teach the target classification function. For instance, one such inaccessible feature set has a concept specification cost of 2, whereas the concept specification cost of the feature set reached under the protocol is 3. While this difference is not large, it is easy to create an example where the costs differ significantly. In contrast, using the Open-Featuring protocol, a teacher can choose to teach with either feature set, trading off the costs of adding features and of concept specification (adding labels).

The use of the Error-Driven-Featuring protocol can mitigate the risk of poor featuring but, as discussed above, does come with potential costs. An alternative approach to mitigating the risks of featuring is to use a different learning algorithm. If we use L_lin, the potential for an increase in the concept specification cost when adding a feature is significantly limited. This is discussed in more detail in the next section.

Bounding Optimal Teaching Cost and Feature Set Costs

In this section, we provide bounds on the optimal feature set teaching costs and optimal teaching costs for L_NN and L_lin with the teaching protocols defined in the section on Teaching Patterns, Protocols and Costs. Throughout this section, we assume that the set of realizable objects is finite.

Bounding Optimal Feature Set Costs

We provide a set of propositions, each of which provides tight bounds on the optimal concept specification cost and the optimal invalidation cost for L_NN and L_lin. These propositions are summarized in Table 2, with their full statements and proofs presented in the appendix.

The fact that the optimal concept specification cost is unbounded as a function of the size of the feature set for L_NN is due to the fact that the 1NN classifier has high capacity. Proposition 7, however, bounds the potential increase in the effort required to define the concept when adding a feature for L_lin. It is important to note that the optimal concept specification cost for L_lin can be as small as two labeled objects, but not in general. In fact, one can construct, for L_lin, a set of objects and a d-dimensional feature set for which the bound of Proposition 7 is met, that is, that many objects are required to specify a linear hyperplane that generalizes to all of the objects.

Similar to the bound on the optimal concept specification cost, the bound on the optimal invalidation cost for L_lin (Proposition 9) is tight. This can be demonstrated by constructing a set of labeled objects in R^d such that every proper subset of the labeled objects is linearly separable. While Proposition 9 does provide a bound on the invalidation cost for L_lin, this bound is larger than the bound of two provided by Proposition 8. We suspect, however, that in practice the invalidation cost for the linear classifier would typically be far less than this bound for non-trivial dimensionality.

Algorithm   Concept Spec. Cost            Invalidation Cost
L_lin       bounded (Proposition 7)       bounded (Proposition 9)
L_NN        unbounded (Proposition 6)     = 2 (Proposition 8)
Table 2: Summary of propositions bounding the optimal concept specification cost and optimal invalidation cost for a feature set using L_NN and L_lin.

Bounding Teaching Costs

In this section we consider bounding the cost of teaching a target classification function using the learning algorithms L_NN and L_lin.

First we consider L_NN. Due to Proposition 6, we cannot bound the risk of adding a bad feature and thus cannot bound the teaching costs for our teaching protocols. We can, however, provide bounds for our teaching protocols using L_lin. The following proposition provides an upper bound on the teaching cost for a feature set.

Proposition 3

The labeling cost for a sufficient feature set using an optimal teacher and the Open-Featuring protocol with learning algorithm L_lin is at most the concept specification bound of Proposition 7.

For the Error-Driven-Featuring protocol the computation of the cost is more difficult, as we need to account for the cost of invalidating feature sets. Proposition 4 demonstrates a useful connection between the invalidation sets of nested feature sets when using a linear classifier.

Proposition 4

If T is an invalidation set for feature set F, target classification function c*, and a consistent linear learner, then T is an invalidation set for every feature set F' ⊆ F.

Finally, the following proposition provides an upper bound on the teaching cost for a feature set for the learning algorithm L_lin.

Proposition 5

The labeling cost for a minimal sufficient feature set using an optimal teacher and the Error-Driven-Featuring protocol with learning algorithm L_lin is at most the sum of the invalidation bound of Proposition 9 and the concept specification bound of Proposition 7.

References

  • Angluin (2004) Angluin, D. 2004. Queries revisited. Theor. Comput. Sci. 313(2):175–194.
  • Anthony et al. (1992) Anthony, M.; Brightwell, G.; Cohen, D.; and Shawe-Taylor, J. 1992. On exact specification by examples. In Proceedings of the Fifth Annual Workshop on Computational Learning Theory, COLT ’92, 311–318. New York, NY, USA: ACM.
  • Balbach (2008) Balbach, F. J. 2008. Measuring teachability using variants of the teaching dimension. Theor. Comput. Sci. 397(1-3):94–113.
  • Doliwa et al. (2014) Doliwa, T.; Fan, G.; Simon, H. U.; and Zilles, S. 2014. Recursive teaching dimension, VC-dimension and sample compression. Journal of Machine Learning Research 15:3107–3131.
  • Gamma et al. (1995) Gamma, E.; Helm, R.; Johnson, R.; and Vlissides, J. M. 1995. Design Patterns: Elements of Reusable Object-Oriented Software. Addison-Wesley.
  • Goldman and Kearns (1995) Goldman, S., and Kearns, M. 1995. On the complexity of teaching. Journal of Computer and Systems Sciences 50(1):20–31.
  • Hanneke (2007) Hanneke, S. 2007. Teaching dimension and the complexity of active learning. In Proceedings of the 20th Annual Conference on Computational Learning Theory (COLT), 66–81.
  • Hegedűs (1995) Hegedűs, T. 1995. Generalized teaching dimensions and the query complexity of learning. In Proceedings of the Eighth Annual Conference on Computational Learning Theory, COLT ’95, 108–117. New York, NY, USA: ACM.
  • Hellerstein et al. (1996) Hellerstein, L.; Pillaipakkamnatt, K.; Raghavan, V.; and Wilkins, D. 1996. How many queries are needed to learn? J. ACM 43(5):840–862.
  • Liu, Zhu, and Ohannessian (2016) Liu, J.; Zhu, X.; and Ohannessian, H. 2016. The teaching dimension of linear learners. In Proceedings of The 33rd International Conference on Machine Learning, ICML ’16, 117–126.
  • Meek, Simard, and Zhu (2016) Meek, C.; Simard, P.; and Zhu, X. 2016. Analysis of a design pattern for teaching with features and labels. In NIPS 2016 Future of Interactive Machine Learning Workshop.
  • Meek (2016) Meek, C. 2016. A characterization of prediction errors. ArXiv.
  • Settles (2012) Settles, B. 2012. Active Learning. Synthesis Lectures on Artificial Intelligence and Machine Learning. Morgan & Claypool.

  • Zhu (2015) Zhu, X. 2015. Machine teaching: an inverse problem to machine learning and an approach toward optimal education. AAAI.
  • Zilles et al. (2011) Zilles, S.; Lange, S.; Holte, R.; and Zinkevich, M. 2011. Models of cooperative teaching and learning. Journal of Machine Learning Research 12:349–384.

Appendix

In this section we provide proofs for the propositions. Several proofs rely on convex geometry, and we assume that the reader is familiar with basic concepts and elementary results from convex geometry. We denote the convex closure of a set of points S by conv(S).

Proposition 1 If learning algorithm L is consistent and T is an invalidation set for feature set F, target concept c*, and L, then F is not sufficient for c* and L.

Proof Let T be an invalidation set for F, target concept c*, and consistent learning algorithm L. Aiming for a contradiction, we assume that F is sufficient for c* and L. From the fact that F is sufficient for target concept c* and learning algorithm L, there exists a training set T' such that L(F, T') = c*. This implies that there is a classification function in the hypothesis class of the learning algorithm that is consistent with any honest training set, including T. This fact and the fact that T is an invalidation set imply that L(F, T) is not consistent with T even though a consistent hypothesis exists, and we have a contradiction with the consistency of L. It follows that F is not sufficient.

Proposition 2 The learning algorithms L_NN and L_lin are monotonically sufficient.

Proof For L_NN we simply note that adding features only refines the distinctions made between objects; thus, once a feature set is sufficient, any superset of it remains sufficient.

For L_lin, let the d-dimensional feature set F be sufficient for the target classification function. This means that there exist a weight vector w and an offset b such that the linear classifier they define over F equals the target classification function. For F' = F ∪ {f}, if we use the same offset and a weight vector that agrees with w on every feature in F and is zero on f, the resulting classifier is equivalent to the classifier defined by w and b (i.e., it equals the target), which proves the claim.
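In symbols (a restatement of the zero-weight extension above, assuming the convention that the classifier thresholds w · F(x) + b at zero):

\[
w' = (w_1, \ldots, w_d, 0), \quad b' = b
\quad\Longrightarrow\quad
w' \cdot F'(x) + b' \;=\; \sum_{i=1}^{d} w_i f_i(x) + 0 \cdot f(x) + b \;=\; w \cdot F(x) + b
\quad \text{for every object } x.
\]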

Lemma 1

If P and N are finite sets in R^d that are strictly separable, then there exist subsets P' ⊆ P and N' ⊆ N, of total size bounded in terms of d, such that the maximum margin separating hyperplane defined by P' and N' separates P and N.

Proof We define the set of closest pairs, that is, pairs of points (p, n) with p in conv(P) and n in conv(N) that achieve the minimum distance between the two convex closures. The maximum margin hyperplane defined by any such pair suffices to separate P and N (see, e.g., Liu et al. 2016). Consider a closest pair (p, n). Due to the construction of the set of closest pairs, it must be the case that p belongs to some face of conv(P) and, similarly, n belongs to some face of conv(N). In fact, the closest pairs are a subset of the Cartesian product of a face of conv(P) and a face of conv(N) that contain points that are equidistant.

Next we choose a subset of the closest pairs on the basis of the faces to which the points of a pair belong. For a point of a pair, let its face dimension be the Euclidean dimension of the minimal face of conv(P) (respectively conv(N)) containing it, or the dimension of the whole convex closure if it lies in no proper face. We define the minimal closest pairs (a subset of the closest pairs) to be the pairs whose summed face dimension is minimal.

Let (p, n) be a minimal closest pair. Next we establish that its summed face dimension is strictly less than d. Suppose this is not the case. In this case, consider the ball of variation around p within its face and the ball of variation around n within its face; both lie in the hyperplane orthogonal to the segment from p to n. Because the summed face dimension is at least d, there must be a parallel direction of variation. Rays in this direction starting at p and at n define further closest pairs. Following this common direction of variation from both p and n, we must eventually hit a lower-dimensional face of conv(P) or of conv(N), which yields a closest pair with smaller summed face dimension. We have a contradiction, and thus the summed face dimension of a minimal closest pair is strictly less than d.

Finally, by applying Carathéodory’s theorem twice, we can represent p via points of P and n via points of N, in each case using at most one more point than the corresponding face dimension; thus a number of points bounded in terms of d suffices to define a separating hyperplane for P and N using a maximum margin hyperplane.

Proposition 4 If T is an invalidation set for feature set F, target classification function c*, and a consistent linear learner, then T is an invalidation set for every feature set F' ⊆ F.

Proof Let T be an invalidation set for F, c*, and a consistent linear learner, and let F' ⊆ F. Suppose that T is not an invalidation set for F'. In this case, there are parameters (a weight vector and offset over F') such that the corresponding linear classifier is consistent with T. By extending the weight vector with zero weights for the features in F \ F', we obtain parameters over F whose classifier is consistent with T, and thus T is not an invalidation set for F, which is a contradiction. Thus T must be an invalidation set for F', proving the proposition.

Proposition 3 The labeling cost for a sufficient feature set using an optimal teacher and the Open-Featuring protocol with learning algorithm L_lin is upper-bounded by the concept specification bound of Proposition 7.

Proof Follows immediately from Proposition 7.

Proposition 5 The labeling cost for a minimal sufficient feature set using an optimal teacher and the Error-Driven-Featuring protocol with learning algorithm L_lin is upper-bounded by the sum of the invalidation bound of Proposition 9 and the concept specification bound of Proposition 7.

Proof Consider the ideal teacher that first provides labels to invalidate insufficient subsets of F along some path to F in the feature lattice and then provides labels to teach the classification function. Because F is minimally sufficient, consider any subset F' ⊂ F such that |F'| = |F| - 1. F' is not sufficient, and by Proposition 9 there is an invalidation set for F' of size at most the bound stated there. Due to Proposition 4 this invalidation set is an invalidation set for all feature sets along paths in the lattice to F', and thus the examples in this set are sufficient to allow the teacher to add the features in F. In the second phase, the teacher, by Proposition 7, need only provide additional labels up to the concept specification bound in order to create a concept teaching set. Thus, across the two phases, the optimal teacher need provide at most the sum of these two bounds in labeled examples.

Proposition 6

Adding a single feature to a feature set can increase the concept specification cost by an arbitrarily large amount when using the 1NN learning algorithm.

Proof The configuration of points used in the feature sets of the example in Figure 1(a) can be extended to arbitrarily many points.

Proposition 7

For any consistent linear learner, if a d-dimensional feature set is linearly sufficient for the target classification function, then the concept specification cost is bounded by a quantity that depends only on d.

Proof Let X be our set of objects and c* be our target classification function. Define P to be the featurized objects with target label 1 and N to be the featurized objects with target label 0. Because F is linearly sufficient, there exists a hyperplane separating the positive and negative examples. We then apply Lemma 1 using P and N to obtain the desired result.

Proposition 8 (Meek 2016)

If F is not sufficient for the target classification function using learning algorithm L_NN, then the invalidation cost for feature set F and L_NN is two.

Proposition 9 (Meek 2016)

For any consistent linear learner, if a d-dimensional feature set is not linearly sufficient for the target classification function, then the representation invalidation cost is bounded by a quantity that depends only on d.