Statistical Active Learning Algorithms for Noise Tolerance and Differential Privacy

07/11/2013 ∙ by Maria-Florina Balcan, et al. ∙ Harvard University

We describe a framework for designing efficient active learning algorithms that are tolerant to random classification noise and are differentially-private. The framework is based on active learning algorithms that are statistical in the sense that they rely on estimates of expectations of functions of filtered random examples. It builds on the powerful statistical query framework of Kearns (1993). We show that any efficient active statistical learning algorithm can be automatically converted to an efficient active learning algorithm which is tolerant to random classification noise as well as other forms of "uncorrelated" noise. The complexity of the resulting algorithms has information-theoretically optimal quadratic dependence on 1/(1-2η), where η is the noise rate. We show that commonly studied concept classes including thresholds, rectangles, and linear separators can be efficiently actively learned in our framework. These results combined with our generic conversion lead to the first computationally-efficient algorithms for actively learning some of these concept classes in the presence of random classification noise that provide exponential improvement in the dependence on the error ϵ over their passive counterparts. In addition, we show that our algorithms can be automatically converted to efficient active differentially-private algorithms. This leads to the first differentially-private active learning algorithms with exponential label savings over the passive case.


1 Introduction

Most classic machine learning methods depend on the assumption that humans can annotate all the data available for training. However, many modern machine learning applications have massive amounts of unannotated or unlabeled data. As a consequence, there has been tremendous interest, both in machine learning and in its application areas, in designing algorithms that most efficiently utilize the available data while minimizing the need for human intervention. An extensively used and studied technique is active learning, where the algorithm is presented with a large pool of unlabeled examples and can interactively ask for the labels of examples of its own choosing from the pool, with the goal of drastically reducing labeling effort. This has been a major area of machine learning research in the past decade [Das11, Han], with several exciting developments on understanding its underlying statistical principles [FSST97, Das05, BBL06, BBZ07, Han07, DHM07, CN07, BHW08, Kol10, BHLZ10, Wan11, RR11, BH12]. In particular, several general characterizations have been developed for describing when active learning can in principle have an advantage over the classic passive supervised learning paradigm, and by how much. While the label complexity aspect of active learning has been intensively studied and is currently well understood, the question of providing computationally efficient noise-tolerant active learning algorithms has remained largely open. In particular, prior to this work, there were no known efficient active algorithms for concept classes of super-constant VC dimension that are provably robust to random and independent noise while giving improvements over the passive case.

1.1 Our Results

We propose a framework for designing efficient (polynomial-time) active learning algorithms which is based on restricting the way in which examples (both labeled and unlabeled) are accessed by the algorithm. These restricted algorithms can be easily simulated using active sampling and, in addition, possess a number of other useful properties. The main property we will consider is tolerance to random classification noise of rate η (each label is flipped randomly and independently with probability η [AL88]). Further, as we will show, the algorithms are tolerant to other forms of noise and can be simulated in a differentially-private way.

In our restriction, instead of access to random examples from some distribution D over labeled examples, the learning algorithm only gets “active” estimates of the statistical properties of D in the following sense. The algorithm can choose any filter function χ(x) with values in [0, 1] and any query function φ(x, ℓ) with values in [−1, 1], where x is a point and ℓ is its label. For simplicity we can think of χ as an indicator function of some set of “informative” points and of φ as some useful property of the target function. For this pair of functions the learning algorithm can get an estimate of the expectation of φ over the examples that pass the filter χ. For tolerances τ and τ₀ chosen by the algorithm, the estimate is provided to within tolerance τ as long as the probability that a random example passes the filter is at least τ₀ (nothing is guaranteed otherwise). The key point is that when we simulate this query from random examples, the inverse of τ corresponds to the label complexity of the algorithm and the inverse of τ₀ corresponds to its unlabeled sample complexity. Such a query is referred to as an active statistical query (SQ), and algorithms using active SQs are referred to as active statistical algorithms.

Our framework builds on the classic statistical query (SQ) learning framework of Kearns [Kea98], defined in the context of the PAC learning model [Val84]. The SQ model is based on estimates of expectations of functions of examples (but without the additional filter function) and was defined in order to design efficient noise-tolerant algorithms in the PAC model. Despite the restrictive form, most of the learning algorithms in the PAC model and other standard techniques in machine learning and statistics used for problems over distributions have SQ analogues [Kea98, BFKV97, BDMN05, CKL06, FGR13] (the sample complexity of the SQ analogues might be polynomially larger, though). Further, statistical algorithms enjoy additional properties: they can be simulated in a differentially-private way [BDMN05], automatically parallelized on multi-core architectures [CKL06], and have known information-theoretic characterizations of query complexity [BFJ94, Fel12]. As we show, our framework inherits the strengths of the SQ model while, as we will argue, capturing the power of active learning.

At first glance, being active and statistical appear to be incompatible requirements on the algorithm. Active algorithms typically make label query decisions on the basis of examining individual samples (for example, as in binary search for learning a threshold, or the algorithms in [FSST97, DHM07, DKM09]). At the same time, statistical algorithms can only examine properties of the underlying distribution. But there also exist a number of active learning algorithms that can be seen as applying passive learning techniques to batches of examples that are obtained by querying the labels of samples that satisfy the same filter. These include the general A² algorithm [BBL06] and, for example, the algorithms in [BBZ07, DH08, BDL09, BL13]. As we show, we can build on these techniques to provide algorithms that fit our framework.

We start by presenting a general reduction showing that any efficient active statistical learning algorithm can be automatically converted to an efficient active learning algorithm which is tolerant to random classification noise as well as other forms of “uncorrelated” noise. The sample complexity of the resulting algorithms depends just quadratically on 1/(1 − 2η), where η is the noise rate.

We then demonstrate the generality of our framework by showing that the most commonly studied concept classes, including thresholds, balanced rectangles, and homogeneous linear separators, can be efficiently actively learned via active statistical algorithms. For these concept classes, we design efficient active learning algorithms that are statistical and provide the same exponential improvements in the dependence on the error ϵ over passive learning as their non-statistical counterparts.

The primary problem we consider is active learning of homogeneous halfspaces, a problem that has attracted a lot of interest in the theory of active learning [FSST97, Das05, BBZ07, BDL09, DKM09, CGZ10, DGS12, BL13, GSSS13]. We describe two algorithms for the problem. First, building on insights from the margin-based analysis of active learning [BBZ07, BL13], we give an active statistical learning algorithm for homogeneous halfspaces over all isotropic log-concave distributions, a wide class of distributions that includes many well-studied density functions and has played an important role in several areas including sampling, optimization, integration, and learning [LV07]. Our algorithm for this setting proceeds in rounds; in round k we build a better approximation to the target function by using a passive SQ learning algorithm (e.g., the one of [DV04]) over a distribution that is a mixture of distributions in which each component is the original distribution conditioned on being within a certain distance from the hyperplane defined by one of the previous approximations. To perform passive statistical queries relative to this mixture we use active SQs with a corresponding real-valued filter. This algorithm is computationally efficient and uses only active statistical queries of tolerance inverse-polynomial in the dimension d and log(1/ϵ).

For the special case of the uniform distribution over the unit ball we give a new, simpler, and substantially more efficient active statistical learning algorithm. Our algorithm is based on measuring the error of a halfspace conditioned on being within some margin of that halfspace. We show that such measurements, performed on perturbations of the current hypothesis along the d basis vectors, can be combined to derive a better hypothesis. This approach differs substantially from the previous algorithms for this problem [BBZ07, DKM09]. The algorithm is computationally efficient and uses active SQs with constant tolerance and filter tolerance of Ω(ϵ).

These results, combined with our generic simulation of active statistical algorithms in the presence of random classification noise (RCN), lead to the first known computationally efficient algorithms for actively learning halfspaces which are RCN-tolerant and give provable label savings over the passive case. In both the uniform and the general isotropic log-concave case, the label complexity of the resulting algorithms is polynomial in d and 1/(1 − 2η) and only polylogarithmic in 1/ϵ. This is worse than the label complexity in the noiseless case, which grows only logarithmically with 1/ϵ [BL13]. However, compared to passive learning in the presence of RCN, our algorithms have exponentially better dependence on 1/ϵ and essentially the same dependence on d and 1/(1 − 2η). One issue with the generic simulation is that it requires knowledge of η (or an almost precise estimate). The standard approach to dealing with this issue does not always work in the active setting, and for our log-concave and uniform distribution algorithms we give a specialized argument that preserves the exponential improvement in the dependence on 1/ϵ.

Differentially-private active learning: In many applications of machine learning, such as medical and financial record analysis, data is both sensitive and expensive to label. However, to the best of our knowledge, there are no formal results addressing both of these constraints. We address the problem by defining a natural model of differentially-private active learning. In our model we assume that a learner has full access to the unlabeled portion of some database of examples, which correspond to records of individual participants in the database. In addition, for every element of the database the learner can request the label of that element. As usual, the goal is to minimize the number of label requests (such a setup is referred to as pool-based active learning [MN98]). In addition, we would like to preserve the differential privacy of the participants in the database, a now-standard notion of privacy introduced in [DMNS06]. Informally speaking, an algorithm is differentially private if adding any record to the database (or removing a record from it) does not significantly affect the probability that any specific hypothesis will be output by the algorithm.

As first shown by Blum et al. [BDMN05], SQ algorithms can be automatically translated into differentially-private algorithms by using the so-called Laplace mechanism (see also [KLN11]). Using a similar approach, we show that active SQ learning algorithms can be automatically transformed into differentially-private active learning algorithms. As a consequence, for all the classes for which we provide statistical active learning algorithms that can be simulated using only labeled examples (including thresholds and halfspaces), we can learn while preserving privacy with far fewer label requests than those required even by non-private classic passive learning algorithms, and can do so even when the privacy parameter is very small. Note that while we focus on the number of label requests, the algorithms also preserve the differential privacy of the unlabeled points.
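As a rough illustration of the Laplace-mechanism idea, a minimal sketch (with made-up database, query, and parameters; not the paper's construction) of answering a single statistical query with differential privacy is:

```python
import random

def laplace_noise(scale):
    # Difference of two independent exponentials is Laplace(0, scale).
    return random.expovariate(1.0 / scale) - random.expovariate(1.0 / scale)

def private_sq_answer(records, query, epsilon):
    """Answer one statistical query over a database with epsilon-differential
    privacy via the Laplace mechanism.

    query maps a record to [-1, 1]; changing a single record moves the
    average by at most 2/len(records), so that is the sensitivity."""
    n = len(records)
    avg = sum(query(r) for r in records) / n
    return avg + laplace_noise(scale=2.0 / (epsilon * n))

# Toy database of labeled records; the query asks for the average label.
random.seed(3)
db = [(random.random(), 1 if random.random() < 0.7 else -1)
      for _ in range(50_000)]
ans = private_sq_answer(db, query=lambda r: r[1], epsilon=0.5)
exact = sum(r[1] for r in db) / len(db)
assert abs(ans - exact) < 0.01
```

For a database of 50,000 records the added noise has scale 2/(0.5 · 50000) = 8·10⁻⁵, so the private answer is still an accurate SQ response.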

1.2 Additional Related Work

As we have mentioned, most prior theoretical work on active learning focuses on either sample complexity bounds (without regard for computational efficiency) or the noiseless case. For random classification noise in particular, [BH12] provides a sample complexity analysis based on the notion of splitting index that is optimal up to lower-order factors and works for general concept classes and distributions, but it is not computationally efficient. In addition, several works give active learning algorithms with empirical evidence of robustness to certain types of noise [BDL09, GSSS13].

In [CGZ10, DGS12] online learning algorithms in the selective sampling framework are presented, where labels must be actively queried before they are revealed. Under the assumption that the label conditional distribution is a linear function determined by a fixed target vector, they provide bounds on the regret of the algorithm and on the number of labels it queries when faced with an adaptive adversarial strategy of generating the instances. As pointed out in [DGS12], these results can also be converted to a distributional PAC setting where instances are drawn i.i.d. In this setting they obtain exponential improvement in label complexity over passive learning. These interesting results and techniques are not directly comparable to ours. Our framework is not restricted to halfspaces. Another important difference is that (as pointed out in [GSSS13]) the exponential improvement they give is not possible in the noiseless version of their setting. In other words, the addition of linear noise defined by the target makes the problem easier for active sampling. By contrast RCN can only make the classification task harder than in the realizable case.

Among the so-called disagreement-based algorithms that provably work under very general noise models (adversarial label noise) and for general concept classes [BBL06, Kol10, DHM07, BHLZ10, Wan11, RR11, BH12, Han], those of Dasgupta, Hsu, and Monteleoni [DHM07] and Beygelzimer, Hsu, Langford, and Zhang [BHLZ10] are the most amenable to implementation. However, these algorithms assume the existence of a computationally efficient passive learning algorithm (for the concept class at hand) that can minimize the empirical error in the presence of adversarial label noise; such algorithms are not known to exist for most concept classes, including linear separators.

Following the original publication of our work, Awasthi et al. [ABL14] gave a polynomial-time active learning algorithm for learning linear separators in the presence of adversarial forms of noise. Their algorithm is the first one that can tolerate both adversarial label noise and malicious noise (where the adversary can corrupt both the instance part and the label part of the examples) as long as the rate of noise is sufficiently small relative to ϵ. We note that these results are not comparable to ours, as we need the noise to be “uncorrelated” but can deal with noise of any rate (with complexity growing with 1/(1 − 2η)).

Organization: Our model, its properties, and several illustrative examples (including threshold functions and balanced rectangles) are given in Section 2. Our algorithms for learning homogeneous halfspaces over log-concave and uniform distributions are given in Sections 3 and 4, respectively. The formal statement of the differentially-private simulation is given in Section 5.

2 Active Statistical Algorithms

Let X be a domain and consider a distribution over labeled examples on X × {−1, 1}. We represent such a distribution by a pair (D, ψ), where D is the marginal distribution on X and ψ(x) = E[ℓ | x] is the expected label of a point x. We will be considering learning in the PAC model (realizable case) where ψ is a boolean function, possibly corrupted by random noise.

When learning with respect to a distribution (D, ψ), an active statistical learner has access to active statistical queries. A query of this type is a pair of functions (χ, φ), where χ : X → [0, 1] is the filter function, which for a point x specifies the probability with which the label of x should be queried. The function φ : X × {−1, 1} → [−1, 1] is the query function and depends on both the point and the label. The filter function defines the distribution D conditioned on χ as follows: for each x, the density of the conditioned distribution is proportional to χ(x) times the density of D at x. Note that if χ is an indicator function of some set S, then the conditioned distribution is exactly D conditioned on x being in S. Let D|χ denote the conditioned distribution. In addition, a query has two tolerance parameters: filter tolerance τ₀ and query tolerance τ. In response to such a query the algorithm obtains a value v such that if E_{x∼D}[χ(x)] ≥ τ₀ then |v − E_{(D|χ, ψ)}[φ(x, ℓ)]| ≤ τ (and nothing is guaranteed when E_{x∼D}[χ(x)] < τ₀).

An active statistical learning algorithm can also ask target-independent queries with tolerance τ, which are just queries over unlabeled samples. That is, for a query φ : X → [−1, 1] the algorithm obtains a value v such that |v − E_{x∼D}[φ(x)]| ≤ τ. Such queries are not necessary when D is known to the learner.

For the purpose of obtaining noise-tolerant algorithms, one can relax the requirements of the model and give the learning algorithm access to unlabeled samples. A similar variant of the model was considered in the context of the SQ model [Kea98, BFKV97]. We refer to this variant as label-statistical. Label-statistical algorithms do not need access to target-independent queries, as they can simulate those using unlabeled samples.

Our definition generalizes the statistical query framework of Kearns [Kea98], which does not include a filtering function; in other words, a query is just a function φ : X × {−1, 1} → [−1, 1], and it has a single tolerance parameter τ. By definition, an active SQ (χ, φ) with query tolerance τ relative to (D, ψ) is the same as a passive statistical query φ with tolerance τ relative to the distribution (D|χ, ψ). In particular, a (passive) SQ is equivalent to an active SQ with filter χ ≡ 1 and filter tolerance 1.

Finally, we note that from the definition of an active SQ we can see that

E_{(D|χ, ψ)}[φ(x, ℓ)] = E_{(D, ψ)}[χ(x) · φ(x, ℓ)] / E_{x∼D}[χ(x)].

This implies that an active statistical query can be estimated using two passive statistical queries. However, to estimate E_{(D|χ, ψ)}[φ] with tolerance τ one needs to estimate E_{(D, ψ)}[χ · φ] with tolerance on the order of τ · E_D[χ], which can be much lower than τ. The tolerance of an SQ directly corresponds to the number of examples needed to evaluate it, and therefore simulating active SQs passively might require many more examples.
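The ratio identity above is easy to check numerically. The following sketch (with a made-up domain, target, filter, and query) verifies that the empirical active-SQ estimate coincides exactly with the ratio of the two passive-SQ estimates computed from the same sample:

```python
import random

random.seed(0)

# Hypothetical setup: uniform marginal on [0, 1), threshold target f, an
# indicator filter chi, and a query phi measuring agreement with a hypothesis.
def f(x):
    return 1 if x >= 0.3 else -1

def chi(x):
    return 1.0 if 0.25 <= x < 0.35 else 0.0   # filter: band around 0.3

def phi(x, label):
    h = 1 if x >= 0.28 else -1                # some fixed hypothesis
    return h * label

n = 100_000
xs = [random.random() for _ in range(n)]

# Two passive SQ estimates: E_D[chi] and E_D[chi * phi].
e_chi = sum(chi(x) for x in xs) / n
e_chi_phi = sum(chi(x) * phi(x, f(x)) for x in xs) / n

# Active SQ estimate: empirical mean of phi over the filtered sample.
filtered = [x for x in xs if chi(x) > 0]
e_active = sum(phi(x, f(x)) for x in filtered) / len(filtered)

# Ratio identity: E_{D|chi}[phi] = E_D[chi * phi] / E_D[chi].
assert abs(e_active - e_chi_phi / e_chi) < 1e-9
```

Since χ here is 0/1, the identity holds exactly on the empirical sample; for a randomized filter it holds in expectation.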

2.1 Simulating Active Statistical Queries

In our model, the algorithm operates via statistical queries. In this section we describe how the answers to these queries can be simulated from random examples, which immediately implies that our algorithms can be transformed into active learning algorithms in the usual model [Das11].

We first note that a valid response to a target-independent query with tolerance τ can be obtained, with probability at least 1 − δ, using O(τ⁻² log(1/δ)) unlabeled samples.

A natural way of simulating an active SQ is by filtering points drawn randomly from D: draw a random point x, let b be drawn from a Bernoulli distribution with probability χ(x) of being 1, and ask for the label of x when b = 1. The points for which we ask for a label are distributed according to D|χ. This implies that the empirical average of φ on these labeled examples will give an estimate of E_{(D|χ, ψ)}[φ(x, ℓ)]. Formally, we get the following theorem.

Theorem 2.1.

Let (D, ψ) be a distribution over examples. There exists an active sampling algorithm that, given functions χ and φ, values τ₀, τ, and δ, and access to samples from (D, ψ), with probability at least 1 − δ, outputs a valid response to the active statistical query (χ, φ) with tolerance parameters (τ₀, τ). The algorithm uses O(τ⁻² log(1/δ)) labeled examples from (D, ψ) and O(τ₀⁻¹ τ⁻² log(1/δ)) unlabeled samples from D.

Proof.

The Chernoff-Hoeffding bounds imply that for some n = O(1/τ²), the empirical mean of φ on n examples drawn randomly from (D|χ, ψ) will, with probability at least 5/6, be within τ of E_{(D|χ, ψ)}[φ(x, ℓ)]. We can also assume that E_{x∼D}[χ(x)] ≥ τ₀, since any value would be a valid response to the query when this assumption does not hold. By the standard multiplicative form of the Chernoff bound we also know that, given O(n/τ₀) random samples from D, with probability at least 5/6, at least n of the samples will pass the filter χ. Therefore, with probability at least 2/3, we will obtain at least n samples from D filtered using χ, and labeled examples on these points will give an estimate of E_{(D|χ, ψ)}[φ(x, ℓ)] with tolerance τ.

This procedure gives only polynomial dependence of the sample complexity on the confidence δ (and not the claimed logarithmic one). To get the claimed dependence we can use a standard confidence-boosting technique. We run the above procedure k = O(log(1/δ)) times and let v₁, …, v_k denote the results. The simulation returns the median of the v_i's. The Chernoff bound implies that for k = O(log(1/δ)), with probability at least 1 − δ, at least half of the v_i's satisfy the condition |v_i − E_{(D|χ, ψ)}[φ(x, ℓ)]| ≤ τ. In particular, the median satisfies this condition. The dependence of the sample complexity on δ is now logarithmic, as claimed. ∎
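The median-boosting trick can be sketched as follows (a minimal illustration with a hypothetical toy estimator, not the paper's procedure):

```python
import random

def median_boost(estimate_once, k):
    """Boost the confidence of a tolerance-tau estimator by returning the
    median of k independent runs. If a single run is within tolerance with
    probability >= 2/3, the median fails only if at least half of the runs
    fail, which happens with probability exp(-Omega(k))."""
    vals = sorted(estimate_once() for _ in range(k))
    return vals[len(vals) // 2]

# Toy estimator: mean of 100 random +/-1 labels, so a single run has
# standard deviation 0.1 around the true value 0.
random.seed(5)

def one_run():
    return sum(random.choice([-1, 1]) for _ in range(100)) / 100

med = median_boost(one_run, k=31)
assert abs(med) < 0.2
```

The median is preferred over the mean here because a few badly failed runs cannot drag it outside the tolerance interval.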

We remark that in some cases better sample complexity bounds can be obtained using multiplicative forms of the Chernoff-Hoeffding bounds (e.g. [AD98]).
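The rejection-filtering simulation behind Theorem 2.1 can be sketched as follows (all function names are hypothetical; the label oracle stands in for the human annotator):

```python
import random

def simulate_active_sq(chi, phi, label_oracle, sample_x, n_labels, max_draws):
    """Simulate one active SQ by rejection filtering.

    chi:          filter, maps a point x to a probability in [0, 1]
    phi:          query function of (x, label) with values in [-1, 1]
    label_oracle: returns the label of x; called only for points that
                  pass the filter (these are the label requests)
    sample_x:     draws one unlabeled point from the marginal D
    """
    values = []
    for _ in range(max_draws):
        x = sample_x()
        # Keep x with probability chi(x); only then request its label.
        if random.random() < chi(x):
            values.append(phi(x, label_oracle(x)))
            if len(values) == n_labels:
                break
    if not values:
        return 0.0   # filter passed nothing; any value is a valid response
    return sum(values) / len(values)

# Toy usage: threshold target at 0.5 under the uniform distribution on
# [0, 1), with the filter restricted to the band [0.4, 0.6).
random.seed(1)
est = simulate_active_sq(
    chi=lambda x: 1.0 if 0.4 <= x < 0.6 else 0.0,
    phi=lambda x, l: l,
    label_oracle=lambda x: 1 if x >= 0.5 else -1,
    sample_x=random.random,
    n_labels=2000,
    max_draws=100_000,
)
assert abs(est) < 0.1   # E[label] on the band is 0 by symmetry
```

Note the split in costs: `max_draws` plays the role of the unlabeled sample complexity (inverse in the filter tolerance), while `n_labels` plays the role of the label complexity (inverse-quadratic in the query tolerance).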

A direct way to simulate all the queries of an active SQ algorithm is to estimate the response to each query using fresh samples and use the union bound to ensure that, with probability at least 1 − δ, all queries are answered correctly. Such a direct simulation of an algorithm that uses at most q queries can be done by running the single-query simulation q times with confidence parameter δ/q. However, in many cases a more careful analysis can be used to reduce the sample complexity of the simulation. Labeled examples can be shared to simulate queries that use the same filter and do not depend on each other. This implies that the sample size sufficient for simulating q non-adaptive queries with the same filter scales only logarithmically with q. More generally, given a set of query functions (possibly chosen adaptively) which belong to some set of low complexity (such as VC dimension), one can reduce the sample complexity of estimating the answers to all queries (with the same filter) by invoking the standard bounds based on uniform convergence (e.g., [BEHW89, Vap98]).

2.2 Noise tolerance

An important property of the simulation described in Theorem 2.1 is that it can be easily adapted to the case when the labels are corrupted by random classification noise [AL88]. For a distribution (D, ψ), let (D, ψ)^η denote the distribution with each label flipped with probability η, randomly and independently of the example. It is easy to see that (D, ψ)^η = (D, (1 − 2η)ψ). We now show that, as in the SQ model [Kea98], active statistical queries can be simulated given examples from (D, ψ)^η.

Theorem 2.2.

Let (D, ψ) be a distribution over examples and let η < 1/2 be a noise rate. There exists an active sampling algorithm that, given functions χ and φ, values η, τ₀, τ, and δ, and access to samples from (D, ψ)^η, with probability at least 1 − δ, outputs a valid response to the active statistical query (χ, φ) with tolerance parameters (τ₀, τ). The algorithm uses O(τ⁻²(1 − 2η)⁻² log(1/δ)) labeled examples from (D, ψ)^η and O(τ₀⁻¹ τ⁻²(1 − 2η)⁻² log(1/δ)) unlabeled samples from D.

Proof.

Using a simple observation from [BF02], we first decompose the statistical query into two parts: one that computes a correlation with the label and one that does not depend on the label at all. Namely,

φ(x, ℓ) = (φ(x, 1) + φ(x, −1))/2 + ℓ · (φ(x, 1) − φ(x, −1))/2.   (1)

Clearly, to estimate the value of E_{(D|χ, ψ)}[φ(x, ℓ)] with tolerance τ it is sufficient to estimate the expectations of the two parts with tolerance τ/2 each. The first part does not depend on the label and, in particular, is not affected by noise. Therefore it can be estimated as before, using (D, ψ)^η in place of (D, ψ). At the same time we can use the independence of the noise to conclude that, for any function g that does not depend on the label, E_{(D|χ, ψ)^η}[ℓ · g(x)] = (1 − η) · E_{(D|χ, ψ)}[ℓ · g(x)] − η · E_{(D|χ, ψ)}[ℓ · g(x)] = (1 − 2η) · E_{(D|χ, ψ)}[ℓ · g(x)]. (The first equality follows from the fact that under (D|χ, ψ)^η, for any given x, there is a (1 − η) chance that the label is the same as under (D|χ, ψ), and an η chance that it is the negation.)

This means that we can estimate E_{(D|χ, ψ)}[ℓ · (φ(x, 1) − φ(x, −1))/2] with tolerance τ/2 by estimating E_{(D|χ, ψ)^η}[ℓ · (φ(x, 1) − φ(x, −1))/2] with tolerance (1 − 2η)τ/2 and then multiplying the result by 1/(1 − 2η). The estimation with tolerance (1 − 2η)τ/2 can be done exactly as in Theorem 2.1. ∎
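The noise-correction step, estimating the label-correlated part under noisy labels and dividing by (1 − 2η), can be illustrated with a small simulation (synthetic target and hypothesis; η assumed known):

```python
import random

random.seed(2)
eta = 0.2                      # noise rate, assumed known
n = 100_000

def f(x):                      # synthetic target: threshold at 0.5
    return 1 if x >= 0.5 else -1

def h(x):                      # hypothesis whose agreement we measure
    return 1 if x >= 0.45 else -1

xs = [random.random() for _ in range(n)]
noisy = [f(x) if random.random() >= eta else -f(x) for x in xs]

# The label-correlated part E[h(x) * l] shrinks by (1 - 2*eta) under RCN,
# so we estimate it from noisy labels and divide by (1 - 2*eta).
noisy_corr = sum(h(x) * l for x, l in zip(xs, noisy)) / n
corrected = noisy_corr / (1 - 2 * eta)

clean_corr = sum(h(x) * f(x) for x in xs) / n
assert abs(corrected - clean_corr) < 0.05
```

Dividing by (1 − 2η) inflates the estimation error by the same factor, which is exactly where the quadratic 1/(1 − 2η)² dependence in the sample complexity comes from.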

Note that the sample complexity of the resulting active sampling algorithm has an information-theoretically optimal quadratic dependence on 1/(1 − 2η), where η is the noise rate. Note also that RCN does not affect the unlabeled samples, so algorithms which are only label-statistical can also be simulated in the presence of RCN.

Remark 2.3.

This simulation assumes that η is given to the algorithm exactly. It is easy to see from the proof that any sufficiently close approximation of η can be used in its place (with the tolerance of the estimation adjusted accordingly). In some learning scenarios even an approximate value of η is not known, but it is known that η is bounded away from 1/2. To address this issue one can construct a sequence of guesses of η, run the learning algorithm with each of those guesses in place of the true η, and keep the resulting hypotheses [Kea98]. One can then return the hypothesis among those that has the best agreement with a suitably large sample. It is not hard to see that a small number of guesses will suffice for this strategy to work [AD98].

Passive hypothesis testing requires labeled examples and might be too expensive to use with active learning algorithms. It is unclear whether there exists a general approach for dealing with unknown η in the active learning setting that does not substantially increase the labeled example complexity. However, as we will demonstrate, in the context of specific active learning algorithms variants of this approach can be used to solve the problem.

We now show that more general types of noise can be tolerated, as long as they are “uncorrelated” with the queries and the target function. Namely, we represent label noise using a function Λ : X → [0, 1/2], where Λ(x) gives the probability that the label of x is flipped. The rate of Λ when learning with respect to a marginal distribution D over X is η = E_{x∼D}[Λ(x)]. For a distribution (D, ψ) over examples, we denote by (D, ψ)^Λ the distribution corrupted by label noise Λ. It is easy to see that (D, ψ)^Λ = (D, (1 − 2Λ)ψ). Intuitively, Λ is “uncorrelated” with a query if the way that Λ deviates from its rate is almost orthogonal to the query on the target distribution.

Definition 2.4.

Let (D, ψ) be a distribution over examples and η ∈ [0, 1/2). For functions χ : X → [0, 1] and φ : X × {−1, 1} → [−1, 1], we say that a noise function Λ is (η, Δ)-uncorrelated with χ and φ over (D, ψ) if

|E_{x∼D|χ}[(1 − 2Λ(x)) · ψ(x) · (φ(x, 1) − φ(x, −1))/2] − (1 − 2η) · E_{x∼D|χ}[ψ(x) · (φ(x, 1) − φ(x, −1))/2]| ≤ Δ.

In this definition, (1 − 2Λ(x)) is the expectation of the ±1 coin that is flipped with probability Λ(x), whereas ψ(x) · (φ(x, 1) − φ(x, −1))/2 is the part of the query which measures the correlation with the label. We now give an analogue of Theorem 2.2 for this more general setting.

Theorem 2.5.

Let (D, ψ) be a distribution over examples, let η ∈ [0, 1/2), let φ be a query and χ a filter function, and let Λ be a noise function that is (η, (1 − 2η)τ/4)-uncorrelated with χ and φ over (D, ψ). There exists an active sampling algorithm that, given functions χ and φ, values η, τ₀, τ, and δ, and access to samples from (D, ψ)^Λ, with probability at least 1 − δ, outputs a valid response to the active statistical query (χ, φ) with tolerance parameters (τ₀, τ). The algorithm uses O(τ⁻²(1 − 2η)⁻² log(1/δ)) labeled examples from (D, ψ)^Λ and O(τ₀⁻¹ τ⁻²(1 − 2η)⁻² log(1/δ)) unlabeled samples from D.

Proof.

As in the proof of Theorem 2.2, we note that it is sufficient to estimate the value of E_{(D|χ, ψ)}[ℓ · (φ(x, 1) − φ(x, −1))/2] within tolerance τ/2 (since the label-independent part of the query can be estimated as before). Now

E_{(D|χ, ψ)^Λ}[ℓ · (φ(x, 1) − φ(x, −1))/2] = E_{x∼D|χ}[(1 − 2Λ(x)) · ψ(x) · (φ(x, 1) − φ(x, −1))/2] = (1 − 2η) · E_{(D|χ, ψ)}[ℓ · (φ(x, 1) − φ(x, −1))/2] + Δ,

where |Δ| ≤ (1 − 2η)τ/4, since Λ is (η, (1 − 2η)τ/4)-uncorrelated with χ and φ over (D, ψ).

This means that we can estimate E_{(D|χ, ψ)}[ℓ · (φ(x, 1) − φ(x, −1))/2] with tolerance τ/2 by estimating E_{(D|χ, ψ)^Λ}[ℓ · (φ(x, 1) − φ(x, −1))/2] with tolerance (1 − 2η)τ/4 and then multiplying the result by 1/(1 − 2η). That estimation can be done exactly as in Theorem 2.1. ∎

An immediate implication of Theorem 2.5 is that one can simulate an active SQ algorithm using examples corrupted by noise Λ as long as Λ is sufficiently uncorrelated with all of the algorithm's queries for some fixed rate η.

Clearly, random classification noise of rate η has the noise function Λ(x) = η for all x. It is therefore (η, 0)-uncorrelated with any query over any distribution. Another simple type of noise that is uncorrelated with most queries over most distributions is one where the noise function Λ is chosen randomly: for every point x, the noise rate Λ(x) is chosen randomly and independently from some distribution with expectation η (not necessarily the same for all points). For any fixed query and target distribution, the expected correlation is 0. If the probability mass of every single point of the domain is small enough compared to (the inverse of the logarithm of) the size of the space of queries and target distributions, then standard concentration inequalities imply that the correlation will be small with high probability.
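The averaging-out effect for randomly chosen per-point noise rates can be illustrated numerically (a synthetic example; here the per-point rates are drawn uniformly from [0, 0.5], so the overall rate is 0.25):

```python
import random

random.seed(4)
n = 200_000
avg_rate = 0.25                # expectation of each per-point noise rate

def f(x):                      # synthetic target: threshold at 0.5
    return 1 if x >= 0.5 else -1

def h(x):                      # fixed query hypothesis
    return 1 if x >= 0.4 else -1

xs = [random.random() for _ in range(n)]
# Each point gets its own noise rate, drawn independently with mean 0.25;
# the realized noise function is then uncorrelated with the fixed query
# with high probability.
rates = [random.uniform(0.0, 0.5) for _ in range(n)]
noisy = [f(x) if random.random() >= r else -f(x) for x, r in zip(xs, rates)]

# Correcting with the average rate still recovers the clean correlation.
corrected = sum(h(x) * l for x, l in zip(xs, noisy)) / n / (1 - 2 * avg_rate)
clean = sum(h(x) * f(x) for x in xs) / n
assert abs(corrected - clean) < 0.05
```

Even though individual points can be much noisier than the average rate, the deviations cancel against any fixed query, so the same (1 − 2η) correction as for RCN applies.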

We would like to note that the noise models considered here are not directly comparable to the well-studied Tsybakov and Massart noise conditions [BBL05]. However, it appears that, from a computational point of view, our noise model is significantly more benign than these conditions, since they do not impose any structure on the noise and only limit its rate.

2.3 Simple examples

Thresholds:

We show that the classic example of actively learning a threshold function on an interval can be easily expressed using active SQs. For simplicity, and without loss of generality, we can assume that the interval is [0, 1] and the distribution is uniform on it. (As usual, we can bring the distribution close enough to this form using unlabeled samples or target-independent queries, with complexity depending on the number of bits needed to represent our examples.) Assume that we know that the threshold t belongs to the interval [l, r] ⊆ [0, 1], with points below t labeled −1 and points above it labeled +1. We ask the query φ(x, ℓ) = ℓ with the filter χ which is the indicator function of the interval [l, r], with tolerance 1/4 and filter tolerance r − l. Let v be the response to the query. By definition, the expectation of the label over [l, r] is ((r − t) − (t − l))/(r − l), and therefore we have that |v − (l + r − 2t)/(r − l)| ≤ 1/4. Note that, solving for t, this gives |t − (l + r − v·(r − l))/2| ≤ (r − l)/8. We can therefore conclude that t lies in the intersection of [l, r] with an interval of length (r − l)/4 centered at (l + r − v·(r − l))/2. Note that the length of this new interval is at most a quarter of the length of the previous one. This means that after at most O(log(1/ϵ)) iterations we will reach an interval of length at most ϵ. In each iteration only constant tolerance is necessary, and the filter tolerance is never below ϵ. A direct simulation of this algorithm can be done using O(log(1/ϵ)) active SQs and hence roughly as many labeled examples, together with Õ(1/ϵ) unlabeled samples.
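The halving procedure above can be sketched as follows (a minimal illustration with a hypothetical target threshold and a worst-case oracle whose answers are always off by exactly the allowed tolerance):

```python
def learn_threshold(active_sq, eps):
    """Locate the threshold t of a target on [0, 1] under the uniform
    distribution, using active SQs of constant tolerance (here 1/4).

    active_sq(lo, hi) returns E[label] over the distribution conditioned
    on [lo, hi], up to additive error 1/4 (labels: -1 below t, +1 above).
    """
    lo, hi = 0.0, 1.0
    while hi - lo > eps:
        w = hi - lo
        v = active_sq(lo, hi)
        # E[label] on [lo, hi] is (lo + hi - 2t)/w; invert for t.
        t_est = (lo + hi - v * w) / 2
        # v is within 1/4 of the truth, so t is within w/8 of t_est.
        lo = max(lo, t_est - w / 8)
        hi = min(hi, t_est + w / 8)
    return (lo + hi) / 2

# Hypothetical target threshold; the oracle's answers are adversarially
# shifted by the full allowed tolerance of 1/4.
t_true = 0.371

def oracle(lo, hi):
    exact = (lo + hi - 2 * t_true) / (hi - lo)
    return exact + 0.25

t_hat = learn_threshold(oracle, 1e-4)
assert abs(t_hat - t_true) < 1e-4
```

Each round shrinks the candidate interval by a factor of at least 4 while using only a constant-tolerance query, which is the source of the O(log(1/ϵ)) label complexity.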

Axis-aligned rectangles:

Next we show that learning of thresholds can be used to obtain a simple algorithm for learning axis-aligned rectangles whose weight under the target distribution is not too small; namely, we assume that the target rectangle has probability mass at least α. In the one-dimensional case, we just need to learn an interval. After scaling the distribution to be uniform on [0, 1], we know that the target interval has length at least α. We first need to find a point inside that interval. To do this we consider the intervals [iα/2, (i + 1)α/2] for i = 0, 1, …, ⌈2/α⌉ − 1. At least one of these intervals is fully included in the target interval. Hence, using an active statistical query that measures the fraction of positive labels conditioned on being in the interval [iα/2, (i + 1)α/2], for each i and with tolerance 1/8, we are guaranteed to find an interval for which the answer is at least 7/8. The midpoint of any interval for which the answer to the query is at least 7/8 must be inside the target interval (a positive subinterval covering more than half of the queried interval necessarily contains its midpoint). Let the midpoint be p. We can now use two binary searches with accuracy ϵ to find the lower and upper endpoints of the target interval in the intervals [0, p] and [p, 1], respectively. This will require O(log(1/ϵ)) active SQs of constant tolerance. As usual, the d-dimensional axis-aligned rectangles can be reduced to d interval learning problems with error ϵ/d [KV94]. This gives an active statistical algorithm using active SQs of constant tolerance and filter tolerance Ω(min{α, ϵ/d}).

We now note that the general and well-studied algorithm of [BBL06] falls naturally into our framework. At a high level, the algorithm is an iterative, disagreement-based active learning algorithm. It maintains a set of surviving classifiers, and in each round the algorithm asks for the labels of a few random points that fall in the current region of disagreement of the surviving classifiers. Formally, the region of disagreement of a set of classifiers is the set of instances x such that there exist two classifiers in the set that disagree about the label of x. Based on the queried labels, the algorithm then eliminates hypotheses that were still under consideration, but only if it is statistically confident (given the labels queried in the last round) that they are suboptimal. In essence, in each round the algorithm only needs to estimate the error rates (of hypotheses still under consideration) under the conditional distribution of being in the region of disagreement. The key point is that this can be easily done via active statistical queries. Note that while the number of active statistical queries needed to do this could be large, the number of labeled examples needed to simulate these queries is essentially the same as the number of labeled examples needed by the known analyses [Han07, Han]. While in general the required computation of the disagreement region and manipulation of the hypothesis space cannot be done efficiently, efficient implementation is possible in a number of simple cases, such as when the VC dimension of the concept class is constant. It is not hard to see that in these cases the implementation can also be done using a statistical algorithm.
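For intuition, here is a toy instantiation for threshold functions on [0, 1], where the region of disagreement of the surviving thresholds is just an interval. Error rates are estimated only on that region, i.e., from filtered random examples (active SQs); since all survivors agree outside the region, comparing conditional errors preserves their ranking. The names and the particular elimination rule below are our illustrative choices, not taken from [BBL06].

```python
import numpy as np

def a2_thresholds(label_oracle, grid, rounds, rng, n_per_round=400):
    """Disagreement-based active learning for a finite grid of threshold
    classifiers h_t(x) = sign(x - t) under the uniform distribution on
    [0, 1].  Each round queries labels only inside the disagreement
    region and discards thresholds that are confidently suboptimal."""
    surviving = list(grid)
    for _ in range(rounds):
        lo, hi = min(surviving), max(surviving)
        if hi - lo < 1e-12:   # survivors all agree; nothing left to query
            break
        # The disagreement region of the surviving thresholds is [lo, hi]:
        # outside it, every survivor predicts the same label.
        xs = lo + (hi - lo) * rng.random(n_per_round)
        ys = np.array([label_oracle(x) for x in xs])
        # Conditional error estimates: statistical queries on the region.
        errs = {t: float(np.mean(np.sign(xs - t) != ys)) for t in surviving}
        best = min(errs.values())
        tol = np.sqrt(np.log(4.0 * len(grid)) / n_per_round)
        surviving = [t for t in surviving if errs[t] <= best + 2.0 * tol]
    return float(np.median(surviving))
```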

3 Learning halfspaces with respect to log-concave distributions

In this section we present a reduction from active learning to passive learning of homogeneous linear separators under log-concave distributions. Combining it with the SQ algorithm for learning halfspaces in the passive setting due to Dunagan and Vempala [DV04], we obtain the first efficient noise-tolerant active learning algorithm for homogeneous halfspaces over any isotropic log-concave distribution.

Our reduction proceeds in rounds: in round k we build a better approximation w_k to the target halfspace by using the passive SQ learning algorithm [DV04] over a distribution that is a mixture of distributions, in which each component is the original distribution conditioned on being within a certain distance of the hyperplane defined by one of the previous approximations. To perform passive statistical queries relative to this mixture we use active SQs with a corresponding real-valued filter. Our analysis builds on the analysis of the margin-based algorithms due to [BBZ07, BL13]. Note, however, that in the standard margin-based analysis only points close to the current hypothesis are queried in round k. As a result, the analysis of our algorithm is somewhat different from that in earlier work [BBZ07, BL13].

3.1 Preliminaries

For a unit vector w we denote by h_w the classifier defined by the homogeneous hyperplane orthogonal to w, that is h_w(x) = sign(⟨w, x⟩). Let H denote the concept class of all homogeneous halfspaces.

Definition 3.1.

A distribution D over ℝ^d is log-concave if log f(x) is concave, where f is its associated density function. It is isotropic if its mean is the origin and its covariance matrix is the identity.

Log-concave distributions form a broad class: for example, Gaussian, logistic, and exponential distributions, as well as the uniform distribution over any convex set, are log-concave.
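The definition is easy to test numerically for a one-dimensional density: a function is concave on a grid exactly when its second differences are non-positive. The small check below, our own illustration, confirms that the standard Gaussian log-density is concave while the heavy-tailed Cauchy log-density is not.

```python
import numpy as np

def is_log_concave_1d(logpdf, xs):
    """Discrete concavity test for a log-density sampled on an evenly
    spaced grid: second differences of a concave function are <= 0."""
    vals = np.array([logpdf(x) for x in xs])
    second = vals[:-2] - 2.0 * vals[1:-1] + vals[2:]
    return bool(np.all(second <= 1e-9))

grid = np.linspace(-5.0, 5.0, 201)
# Gaussian: log-density -x^2/2 (up to a constant) is concave everywhere.
gaussian_ok = is_log_concave_1d(lambda x: -0.5 * x * x, grid)
# Cauchy: log-density -log(1 + x^2) is convex for |x| > 1, so not log-concave.
cauchy_ok = is_log_concave_1d(lambda x: -np.log1p(x * x), grid)
```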

Next, we state several simple properties of log-concave densities from [LV07].

Lemma 3.2.

There exist constants c₁, c₂ > 0 such that for any isotropic log-concave distribution D on ℝ^d, every unit vector w, and every b ∈ [0, 1/9],

c₁·b ≤ Pr_{x∼D}[|⟨w, x⟩| ≤ b] ≤ c₂·b.

Lemma 3.3.

There exists a constant c > 0 such that for any isotropic log-concave D on ℝ^d and any two unit vectors u and v in ℝ^d we have c·θ(u, v) ≤ Pr_{x∼D}[h_u(x) ≠ h_v(x)], where θ(u, v) denotes the angle between u and v.

For our applications, the key property of log-concave densities, proved in [BL13], is given in the following lemma.

Lemma 3.4.

For any constant c₁ > 0, there exists a constant c₂ > 0 such that the following holds. Let u and v be two unit vectors in ℝ^d with θ(u, v) = α < π/2, and assume that D is isotropic log-concave in ℝ^d. Then

Pr_{x∼D}[h_u(x) ≠ h_v(x) and |⟨v, x⟩| ≥ c₂·α] ≤ c₁·α.   (2)

We now state the passive SQ algorithm for learning halfspaces which will be the basis of our active SQ algorithm.

Theorem 3.5.

There exists a SQ algorithm LearnHS that, for every ε > 0, learns H to accuracy 1 − ε over any distribution D_χ, where D is an isotropic log-concave distribution, χ is a filter function, and D_χ denotes D conditioned on (or reweighted by) χ. Further, LearnHS outputs a homogeneous halfspace, runs in time polynomial in d and 1/ε, and uses SQs of tolerance 1/poly(d, 1/ε), where the tolerance is measured relative to D_χ.

We prove this theorem using the Dunagan-Vempala algorithm for learning halfspaces [DV04]. The bounds on the complexity of the algorithm follow easily from the properties of log-concave distributions. Further details of the analysis and related discussion appear in Appendix A.

3.2 Active learning algorithm

Theorem 3.6.

There exists an active SQ algorithm ActiveLearnHS-LogC (Algorithm 1) that, for any isotropic log-concave distribution D on ℝ^d, learns H over D to accuracy 1 − ε in time poly(d, log(1/ε)), using active SQs of tolerance 1/poly(d, log(1/ε)) and filter tolerance Ω(ε).

1:  %% Constants C₁, C₂, and C₃ are determined by the analysis.
2:  Run LearnHS with error C₁ to obtain w₀.
3:  for k = 1 to ⌈log₂(1/ε)⌉ do
4:     Let b_{k−1} = C₂·2^{−(k−1)}
5:     Let χ_k equal the indicator function of x being within margin b_{k−1} of w_{k−1}, that is χ_k(x) = 1 if |⟨w_{k−1}, x⟩| ≤ b_{k−1} and χ_k(x) = 0 otherwise
6:     Let φ_k equal the real-valued filter corresponding to the uniform mixture of D_{χ_0}, D_{χ_1}, …, D_{χ_k}, where χ_0 ≡ 1
7:     Run LearnHS over D_{φ_k} with error C₃/(k + 1) by using active queries with filter φ_k and filter tolerance Ω(ε) to obtain w_k
8:  end for
9:  return w_{⌈log₂(1/ε)⌉}
Algorithm 1 ActiveLearnHS-LogC: Active SQ learning of homogeneous halfspaces over isotropic log-concave densities
Proof.

Let c be the constant given by Lemma 3.3, let c′ be the constant given by Lemma 3.4 when applied with c₁ = c/16, and let c₂ be the constant from the upper bound in Lemma 3.2. We set C₂ = c′/c and C₃ = 1/(16·c₂·C₂). For every k ≥ 0 define b_k = C₂·2^{−k}. Let w* denote the normal vector of the target halfspace, and for any unit vector w and distribution D′ define err_{D′}(w) = Pr_{x∼D′}[h_w(x) ≠ h_{w*}(x)].

We define w_k via the iterative process described in Algorithm 1. Note that active SQs are used to allow us to execute LearnHS on D_{φ_k}: a SQ of tolerance τ asked by LearnHS (relative to D_{φ_k}) is replaced with an active SQ with filter φ_k and tolerance τ. The response to the active SQ is a valid response to the query of LearnHS as long as the weight of the filter is at least the filter tolerance; we will prove that this condition indeed holds later. We now prove by induction on k that after k iterations, every unit vector w such that err_{D_{χ_i}}(w) ≤ C₃ for all i ≤ k satisfies err_D(w) ≤ 2^{−k−2}. In addition, w_k itself satisfies this condition.

The case k = 0 follows from the properties of LearnHS, since χ_0 ≡ 1 and D_{χ_0} = D (without loss of generality C₁ = C₃ ≤ 1/4). Assume now that the claim is true for k − 1 (k ≥ 1). Let S_k denote the set of points within margin b_{k−1} of w_{k−1}; note that χ_k is defined to be the indicator function of S_k. By the inductive hypothesis we know that err_D(w_{k−1}) ≤ 2^{−k−1}.

Consider an arbitrary separator w that satisfies err_{D_{χ_i}}(w) ≤ C₃ for all i ≤ k. By the inductive hypothesis, we know that err_D(w) ≤ 2^{−k−1}. By Lemma 3.3 we have θ(w, w*) ≤ 2^{−k−1}/c and θ(w_{k−1}, w*) ≤ 2^{−k−1}/c. This implies θ(w, w_{k−1}) ≤ 2^{−k}/c. By our choice of C₂ and Lemma 3.4 (applied with v = w_{k−1}, and u equal to w and to w* in turn), we obtain:

Pr_D[h_w(x) ≠ h_{w_{k−1}}(x) and |⟨w_{k−1}, x⟩| ≥ b_{k−1}] ≤ 2^{−k−4} and Pr_D[h_{w*}(x) ≠ h_{w_{k−1}}(x) and |⟨w_{k−1}, x⟩| ≥ b_{k−1}] ≤ 2^{−k−4}.

Therefore,

Pr_D[h_w(x) ≠ h_{w*}(x) and |⟨w_{k−1}, x⟩| ≥ b_{k−1}] ≤ 2^{−k−3}.   (3)

By our assumption on w, we also have err_{D_{χ_k}}(w) ≤ C₃. The set S_k consists of points x such that ⟨w_{k−1}, x⟩ falls into the interval [−b_{k−1}, b_{k−1}]. By Lemma 3.2, this implies that Pr_D[S_k] ≤ c₂·b_{k−1} and therefore,

Pr_D[h_w(x) ≠ h_{w*}(x) and |⟨w_{k−1}, x⟩| ≤ b_{k−1}] ≤ err_{D_{χ_k}}(w)·Pr_D[S_k] ≤ C₃·c₂·b_{k−1} ≤ 2^{−k−3},   (4)

where the last inequality holds by our choice of C₃.

Now by combining eq. (3) and eq. (4) we get that err_D(w) ≤ 2^{−k−2}, as necessary to establish the first part of the inductive claim. By the properties of LearnHS, err_{D_{φ_k}}(w_k) ≤ C₃/(k + 1). By the definition of φ_k,

err_{D_{φ_k}}(w_k) = (1/(k + 1)) · Σ_{i≤k} err_{D_{χ_i}}(w_k).

This implies that err_{D_{χ_i}}(w_k) ≤ C₃ for every i ≤ k, establishing the second part of the inductive claim.

The inductive claim immediately implies that err_D(w_s) ≤ ε for s = ⌈log₂(1/ε)⌉. Therefore, to finish the proof we only need to establish the bound on the running time and query complexity of the algorithm. To establish the lower bound on the filter tolerance we observe that, by Lemma 3.2, for every i ≥ 1,

Pr_D[S_i] ≥ c₁·b_{i−1} = c₁·C₂·2^{−i+1} = Ω(ε),

where c₁ is the constant from the lower bound in Lemma 3.2. This implies that for every k, the weight of each component of the mixture defining φ_k, and hence the weight of φ_k itself, is Ω(ε).

Each execution of LearnHS is with error C₃/(k + 1) = Ω(1/log(1/ε)) and there are at most ⌈log₂(1/ε)⌉ + 1 such executions. Now by Theorem A.3 this implies that the total running time, the number of queries, and the inverse of the query tolerance are upper-bounded by a polynomial in d and log(1/ε). ∎

We remark that, as usual, we can first bring the distribution to an isotropic position by using target-independent queries to estimate the mean and the covariance matrix of the distribution [LV07]. Therefore our algorithm can be used to learn halfspaces over general log-concave densities as long as the target halfspace passes through the mean of the density.
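This preprocessing step is standard whitening from unlabeled data; a sketch follows (function names are ours).

```python
import numpy as np

def isotropic_transform(X):
    """From unlabeled samples, estimate the mean and covariance and
    return a map bringing the distribution to isotropic position
    (zero mean, identity covariance), up to estimation error."""
    mu = X.mean(axis=0)
    cov = np.cov(X - mu, rowvar=False)
    # Symmetric inverse square root of the covariance.
    vals, vecs = np.linalg.eigh(cov)
    W = vecs @ np.diag(vals ** -0.5) @ vecs.T
    return lambda Y: (Y - mu) @ W
```

A homogeneous halfspace through the mean of the original density corresponds to a homogeneous halfspace in the whitened coordinates, which is why the mean restriction in the remark above suffices.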

We can now apply Theorem 2.2 (or, more generally, Theorem 2.5) to obtain an efficient active learning algorithm for homogeneous halfspaces over log-concave densities in the presence of random classification noise of known rate. Further, since our algorithm relies on LearnHS, which can also be simulated when the noise rate is unknown (see Remark 2.3), we obtain an active algorithm that does not require knowledge of the noise rate.

Corollary 3.7.

There exists a polynomial-time active learning algorithm that, for any ε > 0 and any noise rate η < 1/2, learns H over any log-concave distribution with random classification noise of rate η to error ε using only poly(d, log(1/ε), 1/(1 − 2η)) labeled examples and a polynomial number of unlabeled samples.
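To make the margin-based scheme concrete, here is a small simulation for a standard Gaussian (an isotropic log-concave distribution), with a plain logistic-regression learner standing in for LearnHS. Retraining on the union of all labeled bands is a crude proxy for running the passive learner over the mixture distribution described above, and all constants are illustrative; none of this is the paper's actual algorithm.

```python
import numpy as np

def logistic_gd(X, y, iters=400, lr=0.5):
    # Plain gradient descent on the logistic loss; a stand-in for the
    # passive halfspace learner (LearnHS) used in the text.
    w = np.zeros(X.shape[1])
    for _ in range(iters):
        z = np.clip(y * (X @ w), -30.0, 30.0)
        g = -(X * (y / (1.0 + np.exp(z)))[:, None]).mean(axis=0)
        w -= lr * g
    return w / np.linalg.norm(w)

def margin_based_active(d, w_star, rounds, rng, n_per_round=1000):
    """Each round labels points only inside a geometrically shrinking
    band around the current hypothesis and retrains on the union of
    all labeled bands."""
    X = rng.standard_normal((n_per_round, d))
    Xs, ys = [X], [np.sign(X @ w_star)]
    w = logistic_gd(Xs[0], ys[0])
    for k in range(1, rounds + 1):
        b = 2.0 ** (-k)  # margin b_k around the current hypothesis w
        pts = []
        while len(pts) < n_per_round:  # rejection-sample the band
            cand = rng.standard_normal((5 * n_per_round, d))
            pts.extend(cand[np.abs(cand @ w) <= b])
        Xk = np.asarray(pts[:n_per_round])
        Xs.append(Xk)
        ys.append(np.sign(Xk @ w_star))  # labels requested only in band
        w = logistic_gd(np.vstack(Xs), np.concatenate(ys))
    return w
```

After a few rounds most label requests are spent very close to the current decision boundary, which is the source of the exponential label savings over passive learning.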

4 Learning halfspaces over the uniform distribution

The algorithm presented in Section 3 relies on the relatively involved and computationally costly algorithm of Dunagan and Vempala [DV04] for learning halfspaces over general distributions. Similarly, other active learning algorithms for halfspaces often rely on computationally costly linear program solving [BBZ07, BL13]. For the special case of the uniform distribution on the unit sphere we now give a substantially simpler and more efficient algorithm in terms of both sample and computational complexity. This setting was studied in [BBZ07, DKM09].

We remark that the uniform distribution over the unit sphere is not log-concave and therefore, in general, an algorithm for the isotropic log-concave case might not imply an algorithm for the uniform distribution over the unit sphere. However, a more careful look at the known active algorithms for the isotropic log-concave case [BBZ07, BL13], and at the algorithms in this work, shows that minimization of error is performed over homogeneous halfspaces. For any homogeneous halfspace h_w, any x ≠ 0, and any scalar r > 0, h_w(r·x) = h_w(x). This implies that for algorithms optimizing the error over homogeneous halfspaces any two spherically symmetric distributions are equivalent. In particular, the uniform distribution over the sphere is equivalent to the uniform distribution over the unit ball, a log-concave distribution (and isotropic after rescaling).
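This observation is easy to verify empirically: a homogeneous halfspace is invariant under radial scaling, so its disagreement with another such halfspace is identical under the standard Gaussian and under the uniform distribution on the sphere (both spherically symmetric), and equals θ(u, v)/π. The names below are our own.

```python
import numpy as np

def disagreement(X, u, v):
    # Fraction of points on which the homogeneous halfspaces h_u, h_v differ.
    return float(np.mean(np.sign(X @ u) != np.sign(X @ v)))

rng = np.random.default_rng(2)
d, n, theta = 4, 200000, 0.7
u = np.zeros(d)
u[0] = 1.0
v = np.zeros(d)
v[0], v[1] = np.cos(theta), np.sin(theta)   # unit vector at angle theta to u

G = rng.standard_normal((n, d))                    # spherically symmetric
S = G / np.linalg.norm(G, axis=1, keepdims=True)   # uniform on the sphere
```

Normalizing each sample does not change any sign ⟨w, x⟩, so the two empirical disagreement rates coincide exactly, sample by sample.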

For a dimension d, let S^{d−1} = {x ∈ ℝ^d : ‖x‖ = 1} denote the unit sphere in d dimensions, and let U denote the uniform distribution over S^{d−1}. Unless specified otherwise, in this section all probabilities and expectations are relative to U. We would also like to mention explicitly the following trivial lemma relating the accuracy of an estimate of a quantity to the accuracy of the resulting estimate of a differentiable function of that quantity.

Lemma 4.1.

Let f be a differentiable function and let a and b be any values in some interval I. Then |f(a) − f(b)| ≤ max_{z∈I} |f′(z)| · |a − b|.

The lemma follows directly from the mean value theorem. Also note that, given an estimate v of a value a ∈ I, we can always assume that v ∈ I, since otherwise v can be replaced with the closest point in I, which is at least as close to a as v.
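Both remarks are one-liners; the toy check below, using f = √ on I = [1, 4] where max |f′| = 1/2, is our own illustration.

```python
import math

def clamp(v, lo, hi):
    # Projecting an estimate onto the known range never increases its
    # distance to any target inside the range.
    return min(hi, max(lo, v))

# Mean value theorem bound: |f(a) - f(b)| <= max_I |f'| * |a - b|.
a, b = 1.3, 3.7
gap = abs(math.sqrt(a) - math.sqrt(b))
bound = 0.5 * abs(a - b)   # max of 1/(2*sqrt(z)) on [1, 4] is 1/2
```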

We start with an outline of a non-active and simpler version of the algorithm that demonstrates one of the ideas of the active SQ algorithm. To the best of our knowledge the algorithm we present is also the simplest and most efficient (passive) SQ algorithm for the problem. A less efficient algorithm is given in [KVV10].

4.1 Learning using (passive) SQs

Let w denote the normal vector of the target hyperplane and let u be any unit vector. Instead of arguing about the disagreement between h_u and h_w directly, we will use the (Euclidean) distance ‖u − w‖ between u and w as a proxy for the disagreement. It is easy to see that, up to a small constant factor, this distance behaves like the disagreement.

Lemma 4.2.

For any unit vectors u and v,

  1. The error is upper bounded by half the distance: Pr[h_u(x) ≠ h_v(x)] ≤ ‖u − v‖/2;

  2. To estimate the distance it is sufficient to estimate the error: for every value p, |2·sin(π·p/2) − ‖u − v‖| ≤ π·|p − Pr[h_u(x) ≠ h_v(x)]|.

Proof.

The angle θ(u, v) between u and v equals π·Pr[h_u(x) ≠ h_v(x)]. Hence