# Interactive Learning from Multiple Noisy Labels

Interactive learning is a process in which a machine learning algorithm is provided with meaningful, well-chosen examples as opposed to randomly chosen examples typical in standard supervised learning. In this paper, we propose a new method for interactive learning from multiple noisy labels where we exploit the disagreement among annotators to quantify the easiness (or meaningfulness) of an example. We demonstrate the usefulness of this method in estimating the parameters of a latent variable classification model, and conduct experimental analyses on a range of synthetic and benchmark datasets. Furthermore, we theoretically analyze the performance of perceptron in this interactive learning framework.


## 1 Introduction

We consider binary classification problems in the presence of a teacher, who acts as an intermediary to provide a learning algorithm with meaningful, well-chosen examples. This setting is also known as curriculum learning [1, 2, 3] or self-paced learning [4, 5, 6] in the literature. Existing practical methods [4, 7] that employ such a teacher operate by providing the learning algorithm with easy examples first and then progressively moving on to more difficult examples. Such a strategy is known to improve the generalization ability of the learning algorithm and/or alleviate local minima problems while optimizing non-convex objective functions.

In this work, we propose a new method to quantify the notion of easiness of a training example. Specifically, we consider the setting where examples are labeled by multiple (noisy) annotators [8, 9, 10, 11]. We use the disagreement among these annotators to determine how easy or difficult the example is. If a majority of annotators provide the same label for an example, then it is reasonable to assume that the example is easy to classify and is likely to be located far away from the decision boundary (separating hyperplane). If, on the other hand, there is strong disagreement among annotators in labeling an example, then we can assume that the example is difficult to classify, meaning it is located near the decision boundary. In the paper by Urner et al. [12], a strong annotator always labels an example according to the true class probability distribution, whereas a weak annotator is likely to err on an example whose neighborhood comprises examples from both classes, i.e., whose neighborhood is label heterogeneous. In other words, neither strong nor weak annotators err on examples far away from the decision boundary, but weak annotators are likely to provide incorrect labels near the decision boundary, where the neighborhood of an example is heterogeneous in terms of its labels. There are a few other theoretical studies in which weak annotators were assumed to err in label-heterogeneous regions [13, 14]. The notion of annotator disagreement also shows up in the multiple-teacher selective sampling algorithm of Dekel et al. [15]. This line of research indicates the potential of using annotator disagreement to quantify the easiness of a training example.

To the best of our knowledge, no prior work has investigated the use of annotator disagreement in designing an interactive learning algorithm. We note that a recent paper [16] used annotator disagreement in a different setting, namely as privileged information in the design of classification algorithms. Self-paced learning methods [4, 5, 6] aim to simultaneously estimate the parameters of a (linear) classifier and a parameter for each training example that quantifies its easiness. This results in a non-convex optimization problem that is solved using alternating minimization. Our setting is different: each training example comes not with a single (binary) label but with multiple noisy labels provided by a set of annotators, and we use the disagreement among these annotators (which is fixed) to determine how easy or difficult a training example is. We note that it is possible to parameterize the easiness of an example as described in Kumar et al.'s paper [4] in our framework and use it in conjunction with the disagreement among annotators.

Learning from multiple noisy labels [8, 9, 10, 11] has been gaining traction in recent years due to the availability of inexpensive annotators from crowdsourcing websites like Amazon’s Mechanical Turk. These methods typically aim at learning a classifier from multiple noisy labels and in the process also estimate the annotators’ expertise levels. We use one such method [10] as a test bed to demonstrate the usefulness of our interactive learning framework.

### 1.1 Problem Definition and Notation

Let $\mathcal{X} \subseteq \mathbb{R}^n$ denote the input space. The input to the learning algorithm is a set of $m$ examples with corresponding (noisy) labels from $L$ annotators, denoted by $S = \{(x_i, y_i^{(1)}, \ldots, y_i^{(L)})\}_{i=1}^m$, where $x_i \in \mathcal{X}$ and $y_i^{(\ell)} \in \{-1, +1\}$, for all $i \in \{1, \ldots, m\}$ and $\ell \in \{1, \ldots, L\}$. Let $z \in [0,1]^L$ denote the annotators' expertise scores, which are not known to the learning algorithm. A strong annotator will have a score close to one and a weak annotator a score close to zero. The goal is to learn a classifier $f: \mathcal{X} \to \mathbb{R}$ parameterized by a weight vector $w \in \mathbb{R}^n$, and also to estimate the annotators' expertise scores $z$. In this work, we consider linear models $f(x) = \langle w, x \rangle$, where $\langle \cdot, \cdot \rangle$ denotes the dot product of input vectors.

## 2 Learning from Multiple Noisy Labels

One of the algorithmic advantages of interactive learning is that it can potentially alleviate local minima problems in latent variable models [4] and also improve the generalization ability of the learning algorithm. A latent variable model that is relevant to our setting of learning from multiple noisy labels is the one proposed by Raykar et al. [10] to learn from crowdsourced labels. For the squared loss function,[^1] i.e., regression problems, and a linear model,[^2] the weight vector $w$ and the annotators' expertise scores $z$ (the latent variables) can be simultaneously estimated using the following iterative updates:

$$\hat w = \operatorname*{argmin}_{w \in \mathbb{R}^n} \frac{1}{m} \sum_{i=1}^m \left(\langle w, x_i \rangle - \hat y_i\right)^2 + \lambda \|w\|^2 , \qquad \hat y_i = \frac{\sum_{\ell=1}^L \hat z_\ell\, y_i^{(\ell)}}{\sum_{\ell=1}^L \hat z_\ell} , \qquad \frac{1}{\hat z_\ell} = \frac{1}{m} \sum_{i=1}^m \left(y_i^{(\ell)} - \langle \hat w, x_i \rangle\right)^2 , \quad \text{for all } \ell \in \{1, \ldots, L\} , \qquad (1)$$

[^1]: We consider the squared loss function to describe our method and in our experiments for the sake of convenience. The method can be naturally extended to the classification model described in Raykar et al.'s paper [10]. Also, we note that it is perfectly valid to minimize the squared loss function for classification problems [17].

[^2]: Although we consider linear models in our exposition, we note that our method can be adapted to accommodate any classification algorithm that can be trained with weighted examples.

where $\lambda$ is the regularization parameter. Intuitively, the updates estimate the score of an annotator based on her performance (measured in terms of squared error) with the current model $\hat w$, and the label of an example is adjusted by taking the weighted average of all its noisy labels from the annotators. In practice, the labels are initialized by taking a majority vote of the noisy labels. The above updates are guaranteed to converge only to a locally optimal solution.
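A minimal NumPy sketch of these alternating updates for the squared-loss variant described above (the function and variable names are ours, not taken from any released code):

```python
import numpy as np

def fit_multi_annotator(X, Y, lam=0.1, n_iters=50):
    """Alternating updates in the spirit of Raykar et al. with a squared loss.
    X: (m, n) inputs; Y: (m, L) noisy labels in {-1, +1}."""
    m, n = X.shape
    L = Y.shape[1]
    # Initialize label estimates with a majority vote, as in the text.
    y_hat = np.sign(Y.mean(axis=1))
    z = np.ones(L)
    for _ in range(n_iters):
        # Regularized least squares on the current label estimates.
        w = np.linalg.solve(X.T @ X / m + lam * np.eye(n), X.T @ y_hat / m)
        preds = X @ w
        # 1 / z_ell = mean squared error of annotator ell w.r.t. the model.
        mse = ((Y - preds[:, None]) ** 2).mean(axis=0)
        z = 1.0 / np.maximum(mse, 1e-12)
        # Label estimate: z-weighted average of the noisy labels.
        y_hat = (Y * z).sum(axis=1) / z.sum()
    return w, z, y_hat
```

On synthetic data, the annotator that agrees more with the learned model receives the higher expertise score, as intended.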

We now use the disagreement among annotators in the regularized risk minimization framework. For each example $x_i$, we compute the disagreement among its annotators' labels as follows:

$$d_i = \sum_{\ell=1}^{L} \sum_{\ell'=1}^{L} \left(y_i^{(\ell)} - y_i^{(\ell')}\right)^2 , \qquad (2)$$

and solve a weighted least-squares regression problem:

$$\hat w = \operatorname*{argmin}_{w \in \mathbb{R}^n} \frac{1}{m} \sum_{i=1}^m g(d_i) \left(\langle w, x_i \rangle - \hat y_i\right)^2 + \lambda \|w\|^2 , \qquad (3)$$

where $g(\cdot)$ is a monotonically decreasing function of the disagreement among annotators, and we iteratively update the expertise scores using:

$$\frac{1}{\hat z_\ell} = \frac{1}{m} \sum_{i=1}^m g(d_i) \left(y_i^{(\ell)} - \langle \hat w, x_i \rangle\right)^2 , \quad \text{for all } \ell \in \{1, \ldots, L\} . \qquad (4)$$

In our experiments, we use a monotonically decreasing $g$ whose parameter controls the reweighting of examples. Large values of this parameter place a lot of weight on examples with low disagreement among labels, and small values reweight all the examples (almost) uniformly, as shown in Figure 1. This parameter is a hyperparameter that the user has to tune, akin to tuning the regularization parameter.
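As a concrete illustration, the disagreement of Eq. (2) can be computed directly, and an exponential decay is one possible choice of $g$ — the exact form used in the paper is not reproduced here, so the transform and its parameter `beta` below are our assumptions:

```python
import numpy as np

def disagreement(Y):
    """Pairwise disagreement d_i over annotators (Eq. 2).
    Y: (m, L) noisy labels in {-1, +1}."""
    # (y_i^(l) - y_i^(l'))^2 summed over all ordered pairs (l, l').
    diff = Y[:, :, None] - Y[:, None, :]
    return (diff ** 2).sum(axis=(1, 2))

def g(d, beta=1.0):
    # Hypothetical choice: the text only requires g to be monotonically
    # decreasing in the disagreement; an exponential decay is one such g.
    d = np.asarray(d, dtype=float)
    return np.exp(-beta * d / max(d.max(), 1.0))
```

Unanimously labeled examples get weight one, and the weight decays as the annotators disagree more.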

The optimization problem (3) has a closed-form solution. Let $X$ denote the $m \times n$ matrix of inputs, let $G$ denote the diagonal matrix whose diagonal entries are $g(d_i)$, for all $i \in \{1, \ldots, m\}$, and let $\hat y$ denote the (column) vector of label estimates. The solution is given by $\hat w = (X^\top G X + \lambda m I)^{-1} X^\top G \hat y$, where $I$ is the identity matrix. Hence, optimization solvers used to estimate the parameters in regularized least-squares regression can be adapted to solve this problem by a simple rescaling of the inputs and labels via $G^{1/2} X$ and $G^{1/2} \hat y$.
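A sketch of the closed-form solution and the equivalent rescaling trick; both solve the same normal equations, so they return the same weight vector:

```python
import numpy as np

def weighted_ridge(X, y_hat, weights, lam=0.1):
    """Closed-form solution of the reweighted problem (Eq. 3):
    w = (X^T G X + lam * m * I)^{-1} X^T G y_hat, with G = diag(weights)."""
    m, n = X.shape
    G = np.diag(weights)
    return np.linalg.solve(X.T @ G @ X + lam * m * np.eye(n), X.T @ G @ y_hat)

def weighted_ridge_rescaled(X, y_hat, weights, lam=0.1):
    """Same solution via the rescaling trick: feed sqrt(G) X and sqrt(G) y_hat
    to an ordinary regularized least-squares solver."""
    m, n = X.shape
    s = np.sqrt(weights)[:, None]
    Xs, ys = s * X, np.sqrt(weights) * y_hat
    return np.linalg.solve(Xs.T @ Xs + lam * m * np.eye(n), Xs.T @ ys)
```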

In the above description of the algorithm, we fixed the weights on the examples. Ideally, we would want to reweight the examples uniformly as learning progresses. This can be done in the following way. Let $P$ denote the probability distribution induced on the examples via $g$. In every iteration of the learning algorithm, we pick either $P$ or the uniform distribution based on a Bernoulli trial, with a success probability chosen (for some fixed positive integer) to ensure that the distribution on the examples converges to the uniform distribution as learning progresses. Unfortunately, we did not find this to work well in practice, and the parameters of the optimization problem did not converge as smoothly as when fixed weights were used throughout the learning process. We leave this as an open question and use fixed weights in our experiments.

## 3 Mistake Bound Analysis

In this section, we analyze the mistake bound of the perceptron operating in the interactive learning framework. The algorithm is similar to the classical perceptron, but the training examples are sorted based on their distances from the separating hyperplane and fed to the perceptron starting from the farthest example. The theoretical analysis requires estimates of the margins of all examples. We describe a method to estimate the margin of an example, and also its ground-truth label (from the multiple noisy labels), in the Appendix. We remark that the margins of examples are needed only to prove the mistake bound; in practice, the perceptron algorithm can directly use the disagreement among annotators (2).
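A sketch of this interactive perceptron: following the practical variant just described, we use annotator disagreement as a proxy for (inverse) distance from the hyperplane, sort the examples accordingly, and run standard perceptron updates. The proxy-based ordering is our reading of the text:

```python
import numpy as np

def interactive_perceptron(X, y, d, n_epochs=10):
    """Perceptron fed examples from easiest (lowest annotator disagreement d,
    i.e., farthest from the boundary) to hardest."""
    order = np.argsort(d)  # ascending disagreement = descending margin
    X, y = X[order], y[order]
    w = np.zeros(X.shape[1])
    mistakes = 0
    for _ in range(n_epochs):
        for x_i, y_i in zip(X, y):
            if y_i * (w @ x_i) <= 0:  # prediction mistake
                w += y_i * x_i        # standard perceptron update
                mistakes += 1
    return w, mistakes
```

On linearly separable data with disagreement inversely related to the margin, the algorithm converges to a separating weight vector.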

**Theorem 2** (Perceptron [18]). Let $(x_1, y_1), \ldots, (x_T, y_T)$ be a sequence of training examples with $\|x_i\| \le R$ for all $i$. Suppose there exists a vector $u$ such that $y_i \langle u, x_i \rangle \ge \gamma$ for all examples. Then, the number of mistakes made by the perceptron algorithm on this sequence is at most $R^2 \|u\|^2 / \gamma^2$.

The above result is the well-known mistake bound of the perceptron, and its proof is standard. We now state the main theorem of this paper.

**Theorem 3.** Let $(x_1, \hat y_1, \hat\gamma_1), \ldots, (x_T, \hat y_T, \hat\gamma_T)$ be a sequence of training examples along with their label and margin estimates, sorted in descending order of the margin estimates, and with $\|x_i\| \le R$ for all $i$. Let $\hat\gamma = \min_i \hat\gamma_i$ and let $K$ be a positive integer. Suppose there exists a vector $u$ such that $\hat y_i \langle u, x_i \rangle \ge \hat\gamma$ for all examples. Divide the input space into $K$ equal regions, so that for any example $x_i$ in region $k \in \{1, \ldots, K\}$ it holds that $\hat y_i \langle u, x_i \rangle \ge k \hat\gamma$. Let $\varepsilon_1, \ldots, \varepsilon_K$ denote the number of mistakes made by the perceptron in each of the $K$ regions, and let $\varepsilon = \sum_{k=1}^K \varepsilon_k$ denote the total number of mistakes. Define $\varepsilon_s$ to be the standard deviation of $\{\varepsilon_1, \ldots, \varepsilon_K\}$.

Then, the number of mistakes made by the perceptron on the sequence of training examples is bounded from above via:

$$\sqrt{\varepsilon} \le \frac{R\|u\| + \sqrt{R^2\|u\|^2 + \varepsilon_s K (K+1)^2 \sqrt{K-1}\, \hat\gamma^2}}{\hat\gamma (K+1)} .$$

We will use the following inequality in proving the above result.

**Lemma 1** (Laguerre-Samuelson Inequality [19]). Let $x_1, \ldots, x_K$ be a sequence of real numbers. Let $\bar x$ and $s$ denote their mean and standard deviation, respectively. Then, the following inequality holds for all $k \in \{1, \ldots, K\}$: $\bar x - s\sqrt{K-1} \le x_k \le \bar x + s\sqrt{K-1}$.
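A quick numerical check of the Laguerre-Samuelson bounds (the inequality holds with the population standard deviation, i.e., `ddof=0`):

```python
import numpy as np

def samuelson_bounds(xs):
    """Laguerre-Samuelson: every x_k lies within mean +/- std * sqrt(K - 1),
    where std is the population standard deviation of x_1, ..., x_K."""
    xs = np.asarray(xs, dtype=float)
    K = xs.size
    mu, s = xs.mean(), xs.std()  # population std (ddof=0)
    return mu - s * np.sqrt(K - 1), mu + s * np.sqrt(K - 1)
```

For example, for `[0, 1]` the bounds are exactly `(0, 1)`, showing the inequality is tight for two points.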

*Proof.* Using the margin estimates $\hat\gamma_i$, we divide the input space into $K$ equal regions, so that for any example $x_i$ in region $k$, $\hat y_i \langle u, x_i \rangle \ge k \hat\gamma$. Let $m_1, \ldots, m_K$ be the number of examples in these regions, respectively. Let $\sigma_i$ be an indicator variable whose value is 1 if the algorithm makes a prediction mistake on example $x_i$ and 0 otherwise. Let $\varepsilon_k$ be the number of mistakes made by the algorithm in region $k$, and let $\varepsilon = \sum_{k=1}^K \varepsilon_k$ be the total number of mistakes made by the algorithm.

We first bound $\|w_{T+1}\|$, the norm of the weight vector after seeing $T$ examples, from above. If the algorithm makes a mistake at iteration $t$, then $\|w_{t+1}\|^2 = \|w_t + \hat y_t x_t\|^2 \le \|w_t\|^2 + R^2$, since $\hat y_t \langle w_t, x_t \rangle \le 0$ whenever a mistake is made. Since $w_1 = 0$, we have $\|w_{T+1}\|^2 \le \varepsilon R^2$.

Next, we bound $\langle w_{T+1}, u \rangle$ from below. Consider the behavior of the algorithm on examples that are located in the farthest region $K$. When a prediction mistake is made in this region at iteration $t$, we have $\langle w_{t+1}, u \rangle = \langle w_t, u \rangle + \hat y_t \langle u, x_t \rangle \ge \langle w_t, u \rangle + K \hat\gamma$, i.e., the weight vector moves closer to $u$ by at least $K \hat\gamma$. After the algorithm sees all examples in the farthest region $K$, we have $\langle w, u \rangle \ge \varepsilon_K K \hat\gamma$ (since $w_1 = 0$), and similarly after region $K-1$, $\langle w, u \rangle \ge \varepsilon_K K \hat\gamma + \varepsilon_{K-1} (K-1) \hat\gamma$, and so on for the other regions. Therefore, after the algorithm has seen all $T$ examples, we have

$$\langle w_{T+1}, u \rangle \ge \sum_{k=1}^{K} \varepsilon_k\, k\, \hat\gamma \ge \left(\frac{\varepsilon}{K} - \varepsilon_s \sqrt{K-1}\right) \left(\frac{K(K+1)}{2}\right) \hat\gamma ,$$

where we used the Laguerre-Samuelson inequality to lower-bound $\varepsilon_k \ge \varepsilon/K - \varepsilon_s \sqrt{K-1}$ for all $k$, using the mean $\varepsilon/K$ and standard deviation $\varepsilon_s$ of $\{\varepsilon_1, \ldots, \varepsilon_K\}$, together with $\sum_{k=1}^K k = K(K+1)/2$.

Combining these lower and upper bounds via $\langle w_{T+1}, u \rangle \le \|u\| \|w_{T+1}\| \le \sqrt{\varepsilon}\, R \|u\|$, we get the following quadratic inequality in $\sqrt{\varepsilon}$:

$$\left(\frac{\varepsilon}{K} - \varepsilon_s \sqrt{K-1}\right) \left(\frac{K(K+1)}{2}\right) \hat\gamma \le \sqrt{\varepsilon}\, R \|u\| ,$$

whose solution is given by:

$$\sqrt{\varepsilon} \le \frac{R\|u\| + \sqrt{R^2\|u\|^2 + \varepsilon_s K (K+1)^2 \sqrt{K-1}\, \hat\gamma^2}}{\hat\gamma (K+1)} . \qquad \square$$

Note that if $\varepsilon_s = 0$, i.e., when the number of mistakes made by the perceptron in each of the regions is the same, then we get the following mistake bound:

$$\varepsilon \le \frac{4 R^2 \|u\|^2}{\hat\gamma^2 (K+1)^2} ,$$

clearly improving the mistake bound of the standard perceptron algorithm. However, $\varepsilon_s = 0$ is not a realistic assumption. We therefore plot the $x$-fold improvement of the mistake bound as a function of $\varepsilon_s$ for a range of margins in Figure 2. The $y$-axis is the ratio of the mistake bounds of the interactive perceptron and the standard perceptron, with all examples scaled to have unit Euclidean length ($R = 1$) and $\|u\| = 1$. From the figure, it is clear that even when $\varepsilon_s > 0$, it is possible to get non-trivial improvements in the mistake bound.

The above analysis uses margin and label estimates, $\hat\gamma_i$, $\hat y_i$, obtained from our method described in the Appendix, which may not be exact. We therefore have to generalize the mistake bound to account for noise in these estimates. Let $\gamma_1, \ldots, \gamma_T$ be the true margins of the examples. Let $\epsilon_{\gamma_u} \le 1$ and $\epsilon_{\gamma_l} \ge 1$ denote margin noise factors such that $\epsilon_{\gamma_u} \hat\gamma_i \le \gamma_i \le \epsilon_{\gamma_l} \hat\gamma_i$, for all $i$. These noise factors will be useful to account for overestimation and underestimation in $\hat\gamma_i$, respectively.

Label noise essentially makes the classification problem linearly inseparable, and so the mistake bound can be analyzed using the method described in the work of Freund and Schapire [20] (see Theorem 2 of that paper). Here, we define the deviation of an example $x_i$ as $d_i = \max(0,\, \hat\gamma_i - \hat y_i \langle u, x_i \rangle)$ and let $\Delta = \sqrt{\sum_{i=1}^T d_i^2}$. As will become clear in the analysis, if $d_i$ is overestimated, then it does not affect the worst-case analysis of the mistake bound in the presence of label noise. If the labels were accurate, then $d_i = 0$, for all $i$. With this notation in place, we are ready to analyze the mistake bound of the perceptron in the noisy setting. Below, we state and prove the theorem for $\varepsilon_s = 0$, i.e., when the number of mistakes made by the perceptron is the same in all the regions. The analysis is similar for $\varepsilon_s > 0$, but involves tedious algebra, so we omit the details in this paper.

**Theorem 4.** Let $(x_1, \hat y_1, \hat\gamma_1), \ldots, (x_T, \hat y_T, \hat\gamma_T)$ be a sequence of training examples along with their label and margin estimates, sorted in descending order of the margin estimates, and with $\|x_i\| \le R$ for all $i$. Let $\hat\gamma = \min_i \hat\gamma_i$ and let $K$ be a positive integer. Suppose there exists a vector $u$ such that $\hat y_i \langle u, x_i \rangle \ge \hat\gamma$ for all the examples. Divide the input space into $K$ equal regions, so that for any example $x_i$ in region $k$ it holds that $\hat y_i \langle u, x_i \rangle \ge k \hat\gamma$. Assume that the number of mistakes made by the perceptron is equal in all the regions. Let $\gamma_1, \ldots, \gamma_T$ denote the true margins of the examples, and suppose there exists $\epsilon_{\gamma_u} \le 1$ such that $\gamma_i \ge \epsilon_{\gamma_u} \hat\gamma_i$ for all $i$. Define $d_i = \max(0,\, \hat\gamma_i - \hat y_i \langle u, x_i \rangle)$ and let $\Delta = \sqrt{\sum_{i=1}^T d_i^2}$.

Then, the total number of mistakes made by the perceptron algorithm on the sequence of training examples is bounded from above via:

$$\varepsilon \le \frac{4 (\Delta + R\|u\|)^2}{\epsilon_{\gamma_u}^2\, \hat\gamma^2 (K+1)^2} .$$

*Proof.* (Sketch) Observe that margin noise affects only the analysis that bounds $\langle w_{T+1}, u \rangle$ from below. When a prediction mistake is made in region $k$, the weight vector moves closer to $u$ by at least $\epsilon_{\gamma_u} k \hat\gamma$. After the algorithm sees all examples in the farthest region $K$, we have $\langle w, u \rangle \ge \varepsilon_K \epsilon_{\gamma_u} K \hat\gamma$ (since $w_1 = 0$). Therefore, margin noise has the effect of down-weighting the lower bound by a factor of $\epsilon_{\gamma_u}$. The rest of the proof follows using the same analysis as in the proof of Theorem 3. Note that margin noise affects the bound only when $\hat\gamma_i$ is overestimated, because the margin appears only in the denominator when $\varepsilon_s = 0$.

To account for label noise, we use the proof technique in Theorem 2 of Freund and Schapire's paper [20]. The idea is to project the training examples into a higher-dimensional space where the data becomes linearly separable, and then invoke the mistake bound for the separable case. Specifically, for any example $x_i \in \mathbb{R}^n$, we add $T$ dimensions and form a new vector $x_i' \in \mathbb{R}^{n+T}$ such that the first $n$ coordinates remain the same as the original input, the $(n+i)$'th coordinate gets a value equal to $Z$ (a constant to be specified later), and the remaining coordinates are set to zero. Let $x_i'$, for all $i$, denote the examples in the higher-dimensional space. Similarly, we add $T$ dimensions to the weight vector $u$ such that the first $n$ coordinates remain the same, and the $(n+i)$'th coordinate is set to $\hat y_i d_i / Z$, for all $i$. Let $u'$ denote the weight vector in the higher-dimensional space.

With the above construction, we have $\hat y_i \langle u', x_i' \rangle = \hat y_i \langle u, x_i \rangle + d_i \ge \hat\gamma_i$. In other words, examples in the higher-dimensional space are linearly separable by a margin $\hat\gamma$. Also, note that the predictions made by the perceptron in the original space are the same as those in the higher-dimensional space. To invoke Theorem 3 (with $\varepsilon_s = 0$), we need to bound the length of the training examples in the higher-dimensional space, which is $\|x_i'\| = \sqrt{\|x_i\|^2 + Z^2} \le \sqrt{R^2 + Z^2}$; similarly, $\|u'\|^2 = \|u\|^2 + \Delta^2/Z^2$. Therefore, the number of mistakes made by the perceptron is at most $4(R^2 + Z^2)(\|u\|^2 + \Delta^2/Z^2)/(\hat\gamma^2 (K+1)^2)$. It is easy to verify that the bound is minimized when $Z = \sqrt{R\Delta/\|u\|}$, and hence the number of mistakes is bounded from above by $4(\Delta + R\|u\|)^2/(\hat\gamma^2 (K+1)^2)$. $\square$

## 4 Empirical Analysis

We conducted experiments on synthetic and benchmark datasets.[^3] For all datasets, we simulated annotators to generate (noisy) labels in the following way. For a given set of training examples, we first trained a linear model with the true binary labels and normalized the scores of all examples to lie in a fixed range. We then transformed the scores so that examples close to the decision boundary get a score near one end of the range, and those far away from it get a score near the other end, as shown in Figure 3.

[^3]: Software is available at https://github.com/svembu/ilearn.

For each example, we generated $L$ copies of its true label and flipped each copy based on a Bernoulli trial whose success probability depends on the example's score. This has the effect of generating (almost) equal numbers of labels with opposite signs, and hence maximum disagreement among labels, for examples that are close to the decision boundary. At the other extreme, the labels of examples located far away from the decision boundary will not differ much. Furthermore, we flipped the signs of all labels based on another Bernoulli trial if the majority of labels was equal to the true label. This ensures that the majority of labels are noisy for examples close to the decision boundary. The noise parameter controls the amount of noise injected into the labels: high values result in weak disagreement among annotators and low label noise, as shown in Figure 3. Table 1 shows the noisy labels generated by ten annotators on a simple set of one-dimensional examples. As is evident from the table, the simulation is designed in such a way that an example close to (resp. far away from) the decision boundary will have strong (resp. weak) disagreement among its labels.
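One plausible instantiation of the first flipping step (the exact score transform is not reproduced above, so the transform and parameter `q` below are our assumptions; the majority-flip step is omitted for brevity):

```python
import numpy as np

def simulate_annotators(scores, y, L=10, q=2.0, rng=None):
    """Generate L noisy label copies per example. `scores` in [0, 1] measure
    distance from the boundary (0 = on the boundary); each copy is flipped
    with probability 0.5 at the boundary, decaying with distance. The decay
    (1 - score)^q is a hypothetical transform; larger q means less noise."""
    rng = np.random.default_rng() if rng is None else rng
    m = y.size
    p_flip = 0.5 * (1.0 - scores) ** q            # hypothetical transform
    flips = rng.random((m, L)) < p_flip[:, None]  # one Bernoulli trial per copy
    return np.where(flips, -y[:, None], y[:, None])
```

Examples at the boundary (score 0) get roughly balanced labels, while examples far from it (score 1) keep their true labels, which matches the behavior described above.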

### 4.1 Synthetic Datasets

We considered binary classification problems with examples generated from two 10-dimensional Gaussians with unit variance, one per class. We generated noisy labels using the procedure described above. Specifically, we simulated 12 annotators: one of them always generated the true labels, another flipped all the true labels, and the remaining 10 flipped labels using the simulation procedure described above. We randomly generated 100 datasets, each of them having 1000 training examples equally divided between the two classes. We used half of each dataset for training and the other half for testing. In each experiment, we tuned the regularization parameter ($\lambda$ in Equation 3) by searching over a range of values using 10-fold cross-validation on the training set, retrained the model on the entire training set with the best-performing parameter, and report the performance of this model on the test set. We also experimented with a range of values for the reweighting parameter of $g$. Recall that this parameter influences the reweighting of examples, with small values placing (almost) equal weights on all the examples and large values placing a lot of weight on examples whose labels have low disagreement (Figure 1). The noise parameter, as mentioned before, controls the amount of label noise. We compared the performance of the algorithm in the interactive and non-interactive modes described in Section 2. The non-interactive algorithm is the one described in Raykar et al.'s paper [10].

The results are shown in Table 2. We use the area under the receiver operating characteristic curve (AU-ROC) and the area under the precision-recall curve (AU-PRC) as performance metrics. In the table, we show the number of times the AU-ROC and the AU-PRC of the interactive algorithm are higher than those of its non-interactive counterpart (#wins out of 100 datasets). We also show the two-sided p-value from the Wilcoxon signed-rank test. From the results, we note that the performance of the interactive algorithm is not significantly better than that of its non-interactive counterpart for small and large values of the reweighting parameter. This is expected, because small values reweight examples (almost) uniformly, so there is not much to gain compared to running the algorithm in the non-interactive mode. At the other extreme, large values tend to discard a large number of examples close to the decision boundary, thereby degrading the overall performance of the algorithm in the interactive mode. Intermediate values give the best performance. We also note that for high values of the noise parameter, i.e., weak disagreement among annotators and hence low label noise, the interactive algorithm offers no statistically significant gains when compared to the non-interactive algorithm. This, again, is as expected.

### 4.2 Benchmark Datasets

We used LibSVM benchmark datasets in our experiments. We selected binary classification datasets with at most 10,000 training examples and 300 features (Table 3), so that we could afford to train multiple linear models (100 in our experiments) for every dataset using standard solvers and also afford to tune hyperparameters carefully in a reasonable amount of time.

We generated noisy labels with the same procedure used in our experiments on synthetic data. Also, we tuned the regularization parameter in an identical manner. For datasets with no predefined training and test splits, we randomly selected 75% of the examples for training and used the rest for testing. For each dataset, we randomly generated 100 sets of noisy labels from the 12 annotators resulting in 100 different random versions of the dataset. The results are shown in Table 4.

In the table, we again show the number of times the AU-ROC and the AU-PRC of the interactive algorithm are higher than those of its non-interactive counterpart (#wins out of 100 versions of each dataset). We report results on only the subset of parameter values that were found to give good results in our experimental analysis with synthetic data. From the table, it is clear that the interactive algorithm performs significantly better than its non-interactive counterpart on the majority of datasets. On datasets where its performance was worse than that of the non-interactive algorithm, the results were not statistically significant across all parameter settings.

As a final remark, we would like to point out that the performance of the interactive algorithm dropped on some of the datasets with class imbalance. We therefore subsampled the training sets (using a different random subset in each of the 100 experiments for the given dataset) to make the classes balanced. We believe the issue of class imbalance is orthogonal to the problem we are addressing, but it needs further investigation, so we leave it open for future work.

## 5 Concluding Remarks

Our experiments clearly demonstrate the benefits of interactive learning and how disagreement among annotators can be utilized to improve the performance of supervised learning algorithms. Furthermore, we presented theoretical evidence by analyzing the mistake bound of the perceptron. The question of whether annotators in real-world scenarios behave according to our simulation model, i.e., whether they tend to disagree more on difficult examples located close to the decision boundary than on easy examples farther away, remains open. However, if this assumption holds, then our experiments and theoretical analysis show that learning can be improved.

In real-world crowdsourcing applications, an example is typically labeled only by a subset of annotators. Although we did not consider this setting, we believe we could still use the disagreement among annotators to reweight examples, but the algorithm would require some modifications to handle missing labels. We leave this setting open for future work.

## References

• [1] Yoshua Bengio, Jérôme Louradour, Ronan Collobert, and Jason Weston. Curriculum learning. In Proceedings of the International Conference on Machine Learning, 2009.
• [2] Faisal Khan, Xiaojin (Jerry) Zhu, and Bilge Mutlu. How do humans teach: On curriculum learning and teaching dimension. In Proceedings of the Annual Conference on Neural Information Processing Systems, 2011.
• [3] Xiaojin (Jerry) Zhu. Machine teaching for Bayesian learners in the exponential family. In Proceedings of the Annual Conference on Neural Information Processing Systems, 2013.
• [4] M. Pawan Kumar, Benjamin Packer, and Daphne Koller. Self-paced learning for latent variable models. In Proceedings of the Annual Conference on Neural Information Processing Systems, 2010.
• [5] Lu Jiang, Deyu Meng, Shoou-I Yu, Zhenzhong Lan, Shiguang Shan, and Alexander G. Hauptmann. Self-paced learning with diversity. In Proceedings of the Annual Conference on Neural Information Processing Systems, 2014.
• [6] Lu Jiang, Deyu Meng, Qian Zhao, Shiguang Shan, and Alexander G. Hauptmann. Self-paced curriculum learning. In Proceedings of the AAAI Conference on Artificial Intelligence, 2015.
• [7] Yong Jae Lee and Kristen Grauman. Learning the easy things first: Self-paced visual category discovery. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2011.
• [8] Rion Snow, Brendan O’Connor, Daniel Jurafsky, and Andrew Y. Ng. Cheap and fast – but is it good? Evaluating non-expert annotations for natural language tasks. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, 2008.
• [9] Ofer Dekel and Ohad Shamir. Good learners for evil teachers. In Proceedings of the International Conference on Machine Learning, 2009.
• [10] Vikas C. Raykar, Shipeng Yu, Linda H. Zhao, Gerardo Hermosillo Valadez, Charles Florin, Luca Bogoni, and Linda Moy. Learning from crowds. Journal of Machine Learning Research, 11:1297–1322, 2010.
• [11] Yan Yan, Rómer Rosales, Glenn Fung, Subramanian Ramanathan, and Jennifer G. Dy. Learning from multiple annotators with varying expertise. Machine Learning, 95(3):291–327, 2014.
• [12] Ruth Urner, Shai Ben-David, and Ohad Shamir. Learning from weak teachers. In Proceedings of the International Conference on Artificial Intelligence and Statistics, 2012.
• [13] Shahin Jabbari, Robert C. Holte, and Sandra Zilles. PAC-Learning with general class noise models. In Proceedings of the Annual German Conference on Artificial Intelligence, 2012.
• [14] Luigi Malagò, Nicolò Cesa-Bianchi, and Jean-Michel Renders. Online active learning with strong and weak annotators. In Proceedings of the NIPS 2014 Workshop on Crowdsourcing and Machine Learning, 2014.
• [15] Ofer Dekel, Claudio Gentile, and Karthik Sridharan. Selective sampling and active learning from single and multiple teachers. Journal of Machine Learning Research, 13:2655–2697, 2012.
• [16] Viktoriia Sharmanska, Daniel Hernández-Lobato, José Miguel Hernández-Lobato, and Novi Quadrianto. Ambiguity helps: Classification with disagreements in crowdsourced annotations. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016.
• [17] Ryan Rifkin, Gene Yeo, and Tomaso Poggio. Regularized least-squares classification. Advances in Learning Theory: Methods, Model and Applications. NATO Science Series III: Computer and Systems Sciences, 190:131–153, 2003.
• [18] Albert B.J. Novikoff. On convergence proofs on perceptrons. In Proceedings of the Symposium on the Mathematical Theory of Automata, 1962.
• [19] Paul Samuelson. How deviant can you be? Journal of the American Statistical Association, 63(324):1522–1525, 1968.
• [20] Yoav Freund and Robert E. Schapire. Large margin classification using the perceptron algorithm. Machine Learning, 37(3):277–296, 1999.
• [21] Nello Cristianini, John Shawe-Taylor, André Elisseeff, and Jaz S. Kandola. On kernel-target alignment. In Proceedings of the Annual Conference on Neural Information Processing Systems, 2001.

## Appendix: Estimating the Margin of an Example

The margin of an example with respect to a linear function is its distance from the separating hyperplane. Examples close to the decision boundary have a small margin and those farther away have a large margin. We assume that an annotator labels an example $x$ using the true labels of all neighboring examples in a ball of some radius centered at $x$. The size of an annotator's ball is inversely proportional to her strength (expertise). This model of annotators is similar to the one used in Urner et al.'s analysis [12]. Note that neither the true labels nor the sizes of the annotators' balls are known to us. Our only input is a set of examples with corresponding (noisy) labels from $L$ annotators. Given this input, the goal is to estimate the radius of each annotator's ball. This will then allow us to estimate the margin of an example, i.e., its distance from the separating hyperplane. We proceed in two steps: first, we describe a method to estimate the annotators' expertise scores and the ground-truth labels; second, we use these estimates to compute the radii of the annotators' balls.

#### Estimating an annotator’s expertise, z.

We use a variant of kernel target alignment [21] to estimate the expertise score of each annotator. Let $K$ denote the (centered) kernel matrix on the input examples, i.e., with entries $k_{ij} = \langle \phi(x_i), \phi(x_j) \rangle$, where $\phi$ is a feature map. For linear models, the entries of the kernel matrix are pairwise dot products of training examples. We consider the following optimization problem to estimate the annotators' expertise scores:

$$\hat z = \operatorname*{argmin}_{z \in [0,1]^L} \sum_{i=1}^m \sum_{j=1}^m \left(k_{ij} - \frac{1}{L} \sum_{\ell=1}^L z_\ell\, y_i^{(\ell)} y_j^{(\ell)}\right)^2 .$$

This is a constrained least-squares regression problem. The complexity of this optimization problem is quadratic in the number of examples. However, we can use stochastic (projected) gradient descent to remove the dependence on the number of examples.
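A sketch of the projected-gradient approach on the full (non-stochastic) objective; replacing the full gradient with per-pair stochastic estimates gives the stochastic variant mentioned above. The step size and iteration count are illustrative choices:

```python
import numpy as np

def estimate_expertise(K, Y, n_iters=2000, lr=1e-4):
    """Projected gradient descent on the kernel-alignment objective:
    minimize sum_{ij} (K_ij - (1/L) sum_l z_l y_i^(l) y_j^(l))^2
    subject to z in [0, 1]^L.
    K: (m, m) kernel matrix; Y: (m, L) noisy labels in {-1, +1}."""
    m, L = Y.shape
    z = np.full(L, 0.5)
    for _ in range(n_iters):
        # M = (1/L) sum_l z_l y^(l) y^(l)^T, the z-weighted label kernel.
        M = (Y * z) @ Y.T / L
        R = M - K  # residual
        # dObjective/dz_l = (2/L) sum_ij R_ij y_i^(l) y_j^(l)
        grad = 2.0 * np.einsum('il,ij,jl->l', Y, R, Y) / L
        z = np.clip(z - lr * grad, 0.0, 1.0)  # project onto [0,1]^L
    return z
```

With an ideal kernel built from the true labels, an annotator who reports the true labels receives a much higher score than one who labels at random.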

The ground-truth label of an example can be estimated by taking the weighted average of the labels provided by the annotators, i.e., for each given tuple $(x_i, y_i^{(1)}, \ldots, y_i^{(L)})$, we form a new training example $(x_i, \hat y_i)$ with $\hat y_i = \sum_{\ell} \hat z_\ell\, y_i^{(\ell)} / \sum_{\ell} \hat z_\ell$, and let $\hat S = \{(x_i, \hat y_i)\}_{i=1}^m$.

#### Estimating the radius of an annotator’s ball.

Let $r_1, \ldots, r_L$ denote the radii of the annotators' balls. Let $B_r(x)$ denote the ball of radius $r$ centered at $x$, with respect to a distance metric (such as the Euclidean distance for linear models) defined on the input space. Given the expertise score $\hat z_\ell$ of an annotator $\ell$, we estimate the radius of her ball by solving the following univariate optimization problem:

$$\hat r_\ell = \operatorname*{argmin}_{r \in \mathbb{R}_+} \sum_{i=1}^m \left( \frac{\sum_{(x, \hat y) \in B_r(x_i) \cap \hat S} \hat y}{\left|B_r(x_i) \cap \hat S\right|} - y_i^{(\ell)} \right)^2 .$$

Intuitively, the above optimization problem is trying to estimate the radius of the annotator’s ball by minimizing the squared difference between the (noisy) label of the annotator and the average of the estimates of true labels of all neighboring examples in the ball.

#### Putting it all together.

Given a training example $x_i$, its noisy labels $y_i^{(1)}, \ldots, y_i^{(L)}$, an estimate $\hat y_i$ of its ground-truth label, and the radius estimates $\hat r_1, \ldots, \hat r_L$ of the annotators' balls, we compute a lower bound on the margin of $x_i$, i.e., its distance from the decision boundary, as follows. Centered at $x_i$, we draw nested balls of increasing size, one for each annotator, using her estimated radius. Starting from the annotator with the smallest ball, we compare her noisy label with the ground-truth label estimate. At some ball/annotator, the noisy label and the ground-truth label estimate will differ, and the radius of this ball is a lower bound on the distance of $x_i$ from the decision boundary.
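A literal transcription of this nested-ball procedure (the fallback value returned when no annotator disagrees is our assumption, since that case is not specified above):

```python
import numpy as np

def margin_lower_bound(noisy_labels, y_hat, radii):
    """Grow the annotators' balls from smallest to largest and return the
    radius of the first ball whose annotator's noisy label disagrees with
    the ground-truth estimate y_hat; that radius lower-bounds the margin."""
    order = np.argsort(radii)  # smallest ball = strongest annotator first
    for ell in order:
        if noisy_labels[ell] != y_hat:
            return float(radii[ell])  # first disagreement found
    # Assumed fallback: no annotator erred within the largest ball.
    return float(radii[order[-1]])
```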