Learning Depth-Three Neural Networks in Polynomial Time

September 18, 2017 · Surbhi Goel et al. · The University of Texas at Austin

We give a polynomial-time algorithm for learning neural networks with one hidden layer of sigmoids feeding into any smooth, monotone activation function (e.g., sigmoid or ReLU). We make no assumptions on the structure of the network, and the algorithm succeeds with respect to any distribution on the unit ball in n dimensions (hidden weight vectors also have unit norm). This is the first assumption-free, provably efficient algorithm for learning neural networks with more than one hidden layer. Our algorithm, Alphatron, is a simple, iterative update rule that combines isotonic regression with kernel methods. It outputs a hypothesis that yields efficient oracle access to interpretable features. It also suggests a new approach to Boolean function learning via smooth relaxations of hard thresholds, sidestepping traditional hardness results from computational learning theory. Along these lines, we give improved results for a number of longstanding problems related to Boolean concept learning, unifying a variety of different techniques. For example, we give the first polynomial-time algorithm for learning intersections of halfspaces with a margin (distribution-free) and the first generalization of DNF learning to the setting of probabilistic concepts (queries; uniform distribution). Finally, we give the first provably correct algorithms for common schemes in multiple-instance learning.


1 Introduction

Giving provably efficient algorithms for learning neural networks is a fundamental challenge in the theory of machine learning. Most work in computational learning theory has led to negative results showing that, from a worst-case perspective, even learning the simplest architectures seems computationally intractable [LSSS14, SVWX17]. For example, there are known hardness results for agnostically learning a single ReLU (learning a ReLU in the non-realizable setting) [GKKT16].

As such, much work has focused on finding algorithms that succeed after making various restrictive assumptions on both the network’s architecture and the underlying marginal distribution. Recent work gives evidence that, for gradient-based algorithms, these types of assumptions are actually necessary [Sha16]. In this paper, we focus on understanding the frontier of efficient neural network learning: what is the most expressive class of neural networks that can be learned, provably, in polynomial time without making any additional assumptions?

1.1 Our Results

We give a simple, iterative algorithm that efficiently learns neural networks with one layer of sigmoids feeding into any smooth, monotone activation function (for example, sigmoid or ReLU). Both the first hidden layer of sigmoids and the output activation function have corresponding hidden weight vectors. The algorithm succeeds with respect to any distribution on the unit ball in n dimensions. The network can have an arbitrary feedforward structure, and we assume nothing about these weight vectors other than that they each have 2-norm at most one in the first layer (the weight vector in the second layer may have polynomially large norm). These networks, even over the unit ball, have polynomially large VC dimension (if the first layer has k hidden units, the VC dimension will be Ω(nk) [LBW94]).

This is the first provably efficient, assumption-free result for learning neural networks with more than one nonlinear layer; prior work due to Goel et al. [GKKT16] can learn a sum of one hidden layer of sigmoids. While our result “only” handles one additional nonlinear output layer, we stress that 1) the recent (large) literature for learning even one nonlinear layer often requires many assumptions (e.g., Gaussian marginals) and 2) this additional layer allows us to give broad generalizations of many well-known results in computational learning theory.

Our algorithm, which we call Alphatron, combines the expressive power of kernel methods with an additive update rule inspired by work from isotonic regression. Alphatron also outputs a hypothesis that gives efficient oracle access to interpretable features. That is, if the output activation function is u, Alphatron constructs a hypothesis of the form u(f(x)), where f is an implicit encoding of products of features from the instance space that yields an efficient algorithm for random access to the coefficients of these products.

More specifically, we obtain the following new supervised learning results:

  • Let N be any feedforward neural network with one hidden layer of sigmoids of size k feeding into any activation function u that is monotone and L-Lipschitz. Given independent draws (x, y) from a distribution D with E[y|x] = N(x), we obtain an efficiently computable hypothesis h with E[(h(x) − N(x))^2] ≤ ε, with running time and sample complexity polynomial in n, k, L, and 1/ε (the algorithm succeeds with high probability). Note that the related (but incomparable) problem of distribution-free PAC learning intersections of halfspaces is cryptographically hard [KS09b].

  • With an appropriate choice of kernel function, we show that Alphatron can learn more general, real-valued versions of well-studied Boolean concept classes in the probabilistic concept model due to Kearns and Schapire. We subsume and improve known algorithms for uniform-distribution learning of DNF formulas (with queries), majorities of halfspaces, majorities of circuits, and submodular functions, among others. We achieve the first non-i.i.d. noise-tolerant algorithms for learning these classes. (Previously these classes were known to be learnable in the presence of classification noise, where each label is flipped independently with some fixed probability; non-i.i.d./agnostic noise tolerance was known for majorities of halfspaces only for error parameters that depend on the number of halfspaces [KKMS08].) Our technical contributions include

    • Extending the KM algorithm for finding large Fourier coefficients [KM93] to the setting of probabilistic concepts. For the uniform distribution on the hypercube, we can combine the KM algorithm’s sparse approximations with a projection operator to learn smooth, monotone combinations of L1-bounded functions (it is easy to see that DNF formulas fall into this class). This improves the approach of Gopalan, Kalai, and Klivans [GKK08] for agnostically learning decision trees.

    • Generalizing the “low-degree” algorithm due to Linial, Mansour, and Nisan [LMN93] to show that for any circuit class that can be approximated by low-degree Fourier polynomials, we can learn monotone combinations of these circuits “for free” in the probabilistic concept model.

    • Using low-weight (as opposed to just low-degree) polynomial approximators for intersections of halfspaces with a (constant) margin to obtain the first polynomial-time algorithms for learning smooth, monotone combinations (intersection is a special case). The previous best result was a quasipolynomial-time algorithm for PAC learning the special case of ANDs of halfspaces with a (constant) margin [KS08].

We also give the first provably efficient algorithms for nontrivial schemes in multiple instance learning (MIL). Fix an MIL scheme where a learner is given a set of instances x_1, ..., x_t, and the learner is told only some function of their labels, namely u(c(x_1), ..., c(x_t)) for some unknown concept c and monotone combining function u. We give the first provably efficient algorithms for correctly labeling future bags even if the instances within each bag are not identically distributed. Our algorithms hold if the underlying concept c is sigmoidal or a halfspace with a margin. If the combining function averages label values (a common case), we obtain bounds that are independent of the bag size.

We learn specifically with respect to square loss, though this will imply polynomial-time learnability for most commonly studied loss functions. When the label y is a deterministic Boolean function of x, it is easy to see that small square loss will imply small 0/1 loss.

1.2 Our Approach

The high-level approach is to use algorithms for isotonic regression to learn monotone combinations of functions approximated by elements of a suitable RKHS. Our starting point is the Isotron algorithm, due to Kalai and Sastry [KS09a], and a refinement due to Kakade, Kalai, Kanade and Shamir [KKKS11] called the GLMtron. These algorithms efficiently learn any generalized linear model (GLM): distributions on instance-label pairs (x, y) where the conditional mean of y given x is equal to u(w · x) for some (known) smooth, non-decreasing function u and unknown weight vector w. Their algorithms are simple and use an iterative update rule to minimize square loss, a non-convex optimization problem in this setting. Both of their papers remark that their algorithms can be kernelized, but no concrete applications are given.
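For intuition, here is a minimal sketch (ours, not taken from [KS09a] or [KKKS11]) of a GLMtron-style update for a known link function u; the function and variable names are illustrative.

import numpy as np

def glmtron(X, y, u, T=100):
    """Sketch of a GLMtron-style update: fit E[y|x] = u(w . x) for a known
    non-decreasing, Lipschitz u by repeatedly stepping w along the residuals."""
    m, n = X.shape
    w = np.zeros(n)
    for _ in range(T):
        residuals = y - u(X @ w)        # y_i - u(w . x_i)
        w = w + (X.T @ residuals) / m   # additive, isotonic-regression-style step
    return w

# toy usage with a sigmoid link
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5)); X /= np.linalg.norm(X, axis=1, keepdims=True)
w_true = rng.normal(size=5); w_true /= np.linalg.norm(w_true)
y = (rng.random(200) < sigmoid(X @ w_true)).astype(float)   # E[y|x] = sigmoid(w_true . x)
w_hat = glmtron(X, y, sigmoid)

(The actual Isotron/GLMtron analyses also select the best iterate; the loop above shows only the core update.)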

Around the same time, Shalev-Shwartz, Shamir, and Sridharan [SSSS11] used kernel methods and general solvers for convex programs to give algorithms for learning a halfspace under a distributional assumption corresponding to a margin in the non-realizable setting (agnostic learning). Their kernel was composed by Zhang et al. [ZLJ16] to obtain results for learning sparse neural networks with certain smooth activations, and Goel et al. [GKKT16] used a similar approach in conjunction with general tools from approximation theory to obtain learning results for a large class of nonlinear activations including ReLU and Sigmoid.

Combining the above approaches, though not technically deep, is subtle and depends heavily on the choice of model. For example, prior work on kernel methods for learning neural networks has focused almost exclusively on learning in the agnostic model. This model is too challenging, in the sense that the associated optimization problems to be solved seem computationally intractable (even for a single ReLU). The probabilistic concept model, on the other hand, is a more structured noise model and allows for an iterative approach to minimize the empirical loss.

Our algorithm, Alphatron, inherits the best properties of both kernel methods and gradient-based methods: it is a simple, iterative update rule that does not require regularization (we emphasize this to distinguish our algorithm from the usual kernel methods, e.g., kernel ridge regression and SVMs, where regularization and the representer theorem are key steps), and it learns broad classes of networks whose first layer can be approximated via an appropriate feature expansion into an RKHS.

One technical challenge is handling the approximation error induced from embedding into an RKHS. In some sense, we must learn a noisy GLM. For this, we use a learning rate and a slack variable to account for noise and follow the outline of the analysis of GLMtron (or Isotron). The resulting algorithm is similar to performing gradient descent on the support vectors of a target element in an RKHS. Our convergence bounds depend on the resulting choice of kernel, learning rate, and quality of RKHS embedding. We can then leverage several results from approximation theory and obtain general theorems for various notions of RKHS approximation.

1.3 Related Work

The literature on provably efficient algorithms for learning neural networks is extensive. In this work we focus on common nonlinear activation functions: sigmoid, ReLU, or threshold. For linear activations, neural networks compute an overall function that is linear and can be learned efficiently using any polynomial-time algorithm for solving linear regression. Livni et al. [LSSS14] observed that neural networks of constant depth with constant-degree polynomial activations are equivalent to linear functions in a higher-dimensional space (polynomials of degree d are equivalent to linear functions over the monomials of degree at most d). It is known, however, that any polynomial that computes or even ε-approximates a single ReLU requires degree Ω(1/ε) [GKKT16]. Thus, linear methods alone do not suffice for obtaining our results.
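As a toy illustration of this observation (our example, not from [LSSS14]): a single unit with a degree-2 polynomial activation is a linear function of the degree-at-most-2 monomials of its input, so ordinary least squares over that expanded feature set fits it exactly.

import numpy as np
from itertools import combinations_with_replacement

def monomial_features(X, degree=2):
    """Expand inputs into all monomials of degree <= `degree` (including the constant)."""
    m, n = X.shape
    cols = [np.ones(m)]
    for d in range(1, degree + 1):
        for idx in combinations_with_replacement(range(n), d):
            cols.append(np.prod(X[:, idx], axis=1))
    return np.column_stack(cols)

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 4))
a = rng.normal(size=4)
y = (X @ a) ** 2 + 3.0              # one "square" activation unit: linear in the monomials
Phi = monomial_features(X, degree=2)
coef, *_ = np.linalg.lstsq(Phi, y, rcond=None)   # exact fit up to numerical error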

The vast majority of work on learning neural networks takes strong assumptions on either the underlying marginal distribution (e.g., Gaussian), the structure of the network, or both. Works that fall into these categories include [KOS04, KM13, JSA15, SA14, ZPS17, ZLJ16, ZSJ17, GK17]. In terms of assumption-free learning results, Goel et al. [GKKT16] used kernel methods to give an efficient, agnostic learning algorithm for sums of sigmoids (i.e., one hidden layer of sigmoids) with respect to any distribution on the unit ball. Daniely [Dan17] used kernel methods in combination with gradient descent to learn neural networks, but the networks he considers have restricted VC dimension. All of the problems we consider in this paper are non-convex optimization problems, as it is known that a single sigmoid with respect to square-loss has exponentially many bad local minima [AHW96].

A Remark on Bounding the 2-Norm. As mentioned earlier, the networks we learn, even over the unit ball, have polynomially large VC dimension (if the first layer has k hidden units, the VC dimension will be Ω(nk)). It is easy to see that if we allow the 2-norm of weight vectors in the first layer to be polynomially large (in the dimension), we arrive at a learning problem statistically close to PAC learning intersections of halfspaces, for which there are known cryptographic hardness results [KS09b]. Further, in the agnostic model, learning even a single ReLU with a bounded-norm weight vector (and any distribution on the unit sphere) is as hard as learning sparse parity with noise [GKKT16]. As such, for distribution-free learnability, it seems necessary to have some bound on the norm and some structure in the noise model. Bounding the norm of the weight vectors also aligns nicely with practical tools for learning neural networks. Most gradient-based training algorithms for learning deep nets initialize hidden weight vectors to have unit norm and use techniques such as batch normalization or regularization to prevent the norm of the weight vectors from becoming large.

1.4 Notation

Vectors are denoted in bold-face, and ||x|| denotes the standard 2-norm of the vector x. We denote the space of inputs by X and the space of outputs by Y. In our paper, X is usually the unit sphere/ball and Y is {0, 1} or [0, 1]. Standard scalar (dot) products are denoted by a · b for vectors a, b ∈ R^n, while inner products in a Reproducing Kernel Hilbert Space (RKHS) are denoted by ⟨a, b⟩ for elements a, b in the RKHS. We denote the standard composition of functions f and g by f ∘ g.

Note. Due to space limitations, we defer most proofs to the appendix.

2 The Alphatron Algorithm

Here we present our main algorithm Alphatron (Algorithm 1) and a proof of its correctness. In the next section we will use this algorithm to obtain our most general learning results.

Input : data ⟨(x_i, y_i)⟩_{i=1,...,m}; a non-decreasing, L-Lipschitz function u (we present the algorithm and subsequent results for non-decreasing u; non-increasing functions can be handled by negating the update term); a kernel function K corresponding to feature map ψ; learning rate λ; number of iterations T; held-out data ⟨(a_j, b_j)⟩_{j=1,...,N} of size N
α^1 := 0 ∈ R^m
for t = 1, ..., T do
      h^t(x) := u(Σ_{i=1}^m α_i^t K(x_i, x))
      for i = 1, ..., m do
            α_i^{t+1} := α_i^t + (λ/m)(y_i − h^t(x_i))
      end for
end for
Output : h^r where r = argmin_{t ∈ {1,...,T}} Σ_{j=1}^N (h^t(a_j) − b_j)^2
Algorithm 1 Alphatron
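A minimal NumPy rendering of the update above (ours; the kernel choice is illustrative and we omit the held-out selection of the best iterate for brevity):

import numpy as np

def alphatron(X, y, u, kernel, lam=1.0, T=200):
    """Sketch of Alphatron: kernelized, Isotron-style additive updates on the
    dual coefficients alpha; returns a predictor h(x) = u(sum_i alpha_i K(x_i, x))."""
    m = X.shape[0]
    K = kernel(X, X)                      # m x m Gram matrix
    alpha = np.zeros(m)
    for _ in range(T):
        h = u(K @ alpha)                  # h^t evaluated at all training points
        alpha = alpha + (lam / m) * (y - h)
    def predict(Xnew):
        return u(kernel(Xnew, X) @ alpha)
    return predict

# illustrative kernel: a low-degree polynomial ("multinomial-style") kernel
poly_kernel = lambda A, B, d=3: sum((A @ B.T) ** j for j in range(d + 1))
# example: predict = alphatron(X, y, u=lambda z: np.clip(z, 0.0, 1.0), kernel=poly_kernel)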

Define v^t := Σ_{i=1}^m α_i^t ψ(x_i), implying h^t(x) = u(⟨v^t, ψ(x)⟩). Let err(h) := E_{x,y}[(h(x) − y)^2] and ε(h) := E_{x,y}[(h(x) − E[y|x])^2]. It is easy to see that ε(h) = err(h) − err(E[y|x]). Let ε̂ and err̂ be the empirical versions of the same.

The following theorem generalizes Theorem 1 of [KKKS11] to the bounded-noise setting in a high-dimensional feature space. We follow the same outline, and their theorem can be recovered by setting the feature map ψ to the identity and the noise term ξ to the zero function.

Theorem 1.

Let K be a kernel function corresponding to feature map ψ such that ||ψ(x)|| ≤ 1 for all x. Consider samples (x_i, y_i) drawn iid from a distribution D on X × [0, 1] such that E[y|x] = u(⟨v, ψ(x)⟩ + ξ(x)), where u is a known L-Lipschitz non-decreasing function, ξ is a noise function with |ξ(x)| ≤ ε for all x, and ||v|| ≤ B. Then, with probability 1 − δ, Alphatron (with λ and T set to suitable values in terms of these parameters, for large enough constants) outputs a hypothesis h such that,

Alphatron runs in time polynomial in m, T, and t_K, where t_K is the time required to compute the kernel function K.

2.1 General Theorems Involving Alphatron

In this section we use Alphatron to give our most general learnability results for the probabilistic concept (p-concept) model. We then state several applications in the next section. Here we show that if a function can be approximated by an element of an appropriate RKHS, then it is p-concept learnable. We assume that the kernel function is efficiently computable, that is, computable in time polynomial in the input dimension. Formally, we define approximation as follows:

Definition 1 (-approximation).

Let be a function mapping domain to and be a distribution over . Let be a kernel function with corresponding RKHS and feature vector . We say is -approximated by over if there exists some with such that for all and .

Combining Alphatron and the above approximation guarantees, we have the following general learning results:

Theorem 2.

Consider distribution on such that where is a known -Lipschitz non-decreasing function and is -approximated over by some kernel function such that . Then for , there exists an algorithm that draws iid samples from and outputs a hypothesis such that with probability , for in time where is the dimension of .

Proof.

Let be the RKHS corresponding to and be the feature vector. Since is -approximated by kernel function over , we have for some function with . Thus . Applying Theorem 1, we have that Alphatron outputs a hypothesis such that

for some constants . Also Alphatron requires at most iterations. Setting as in theorem statement gives us the required result. ∎

For the simpler case when f is uniformly approximated by elements in the RKHS we have,

Definition 2 ((ε, B)-uniform approximation).

Let f be a function mapping domain X to R and let D be a distribution over X. Let K be a kernel function with corresponding RKHS H and feature vector ψ. We say f is (ε, B)-uniformly approximated by K if there exists some v ∈ H with ||v||_H ≤ B such that |f(x) − ⟨v, ψ(x)⟩| ≤ ε for all x ∈ X.

Theorem 3.

Consider distribution on such that where is a known -Lipschitz non-decreasing function and is -approximated by some kernel function such that . Then for , there exists an algorithm that draws iid samples from and outputs a hypothesis such that with probability , for in time where is the dimension of .

Proof.

The proof is the same as the proof of Theorem 2, by re-examining the proof of Theorem 1 and noticing that the approximation error term is uniformly bounded by ε in each inequality. ∎

3 Learning Neural Networks

In this section we give polynomial-time learnability results for neural networks with two nonlinear layers in the p-concept model. Following Safran and Shamir [SS16], we define a neural network with one (nonlinear) layer with k units as follows:

N_1(x) = Σ_{i=1}^k b_i σ(a_i · x)

for weight vectors a_1, ..., a_k ∈ R^n with ||a_i|| ≤ 1, coefficients b ∈ R^k, and activation function σ : R → R. We subsequently define a neural network with two (nonlinear) layers with one unit in layer 2 and k units in hidden layer 1 as follows:

N_2(x) = u(N_1(x)) = u(Σ_{i=1}^k b_i σ(a_i · x))

for an activation function u : R → R, and a_1, ..., a_k, b, σ as above.
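For concreteness, a small sketch (ours) of the target class: a hidden layer of k sigmoid units with unit-norm weight vectors a_i, combined linearly by b and passed through an outer monotone activation u (here ReLU, one of the outer activations allowed by the theorems below).

import numpy as np

def two_layer_net(x, A, b, u, sigma=lambda z: 1.0 / (1.0 + np.exp(-z))):
    """N_2(x) = u( sum_i b_i * sigma(a_i . x) ): one nonlinear hidden layer of
    sigmoids (rows of A, assumed unit norm) feeding into a monotone outer activation u."""
    hidden = sigma(A @ x)      # k hidden sigmoid units
    return u(b @ hidden)       # outer monotone, Lipschitz activation

# example: k = 3 hidden units in n = 5 dimensions, ReLU as the outer activation
rng = np.random.default_rng(2)
A = rng.normal(size=(3, 5)); A /= np.linalg.norm(A, axis=1, keepdims=True)
b = rng.normal(size=3)
relu = lambda z: np.maximum(z, 0.0)
x = rng.normal(size=5); x /= np.linalg.norm(x)
print(two_layer_net(x, A, b, relu))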

The following theorem is our main result for learning classes of neural networks with two nonlinear layers in polynomial time:

Theorem 4.

Consider samples (x_i, y_i) drawn iid from a distribution D on the unit ball × [0, 1] such that E[y|x] = N_2(x), where u is a known L-Lipschitz non-decreasing function and σ is the sigmoid function. There exists an algorithm that outputs a hypothesis h such that, with probability 1 − δ, the squared error ε(h) is at most ε, given a sample of size polynomial in the relevant parameters. The algorithm runs in time polynomial in n, k, L, 1/ε, and log(1/δ).

We also obtain results for networks of ReLUs, but the dependence on the number of hidden units k, the Lipschitz constant L, and 1/ε is exponential (the algorithm still runs in polynomial time in the dimension):

Theorem 5.

Consider samples drawn iid from distribution on such that with is a known -Lipschitz non-decreasing function and is the ReLU function. There exists an algorithm that outputs a hypothesis such that with probability ,

for . The algorithm runs in time polynomial in and .

Although our algorithm does not recover the parameters of the network, it still outputs a hypothesis with interpretable features. More specifically, our learning algorithm outputs the hidden layer as a multivariate polynomial. Given training inputs x_1, ..., x_m, the hypothesis output by our algorithm Alphatron is of the form h(x) = u(Σ_{i=1}^m α_i MK_d(x_i, x)), where MK_d is the multinomial kernel and the degree d depends on the required approximation. As seen in the preliminaries, the inner sum can be expressed as a polynomial in x, and its coefficients can be computed as follows,

Here, we follow the notation from [GKKT16]; maps ordered tuple for to tuple such that and maps ordered tuple to the number of distinct orderings of the ’s for . The function can be computed from the multinomial theorem (cf. [Wik16]). Thus, the coefficients of the polynomial can be efficiently indexed. Informally, each coefficient can be interpreted as the correlation between the target function and the product of features appearing in the coefficient’s monomial.

4 Generalizing PAC Learning to Probabilistic Concepts

In this section we show how known algorithms for PAC learning boolean concepts can be generalized to the probabilistic concept model. We use Alphatron to learn real-valued versions of these well-studied concepts.

Notation. We follow the notation of [GKK08]. For any function f : {−1, 1}^n → R, we denote its Fourier coefficients by f̂(S) for S ⊆ [n]. The support of f̂, i.e., the number of non-zero Fourier coefficients, is denoted by ||f̂||_0. The norms of the coefficient vector are defined as ||f̂||_p = (Σ_S |f̂(S)|^p)^{1/p} for p ≥ 1 and ||f̂||_∞ = max_S |f̂(S)|. Similarly, the norms of the function are defined as ||f||_p = (E_x[|f(x)|^p])^{1/p} for p ≥ 1. Also, the inner product ⟨f, g⟩ = E_x[f(x)g(x)], where expectations are over the uniform distribution on {−1, 1}^n.

4.1 Generalized DNF Learning with Queries

Here we give an algorithm, KMtron, which combines isotonic regression with the KM algorithm [KM93] for finding large Fourier coefficients of a function (given query access to the function). The KM algorithm takes the place of the “kernel trick” used by Alphatron to provide an estimate for the update step in isotonic regression. Viewed this way, the KM algorithm can be re-interpreted as a query algorithm for estimating the gradient of the square loss with respect to the uniform distribution on Boolean inputs.

The main application of KMtron is a generalization of celebrated results for PAC learning DNF formulas [Jac97] to the setting of probabilistic concepts. That is, we can efficiently learn any conditional mean that is a smooth, monotone combination of L1-bounded functions (functions whose Fourier coefficients have bounded L1 norm).

KM Algorithm. The KM algorithm learns sparse approximations to boolean functions given query access to the underlying function. The following lemmas about the KM algorithm are important to our analysis.

Lemma 1 ([KM93]).

Given an oracle for , returns with and . The running time is .

Lemma 2 ([KM93]).

If has , then returns s.t. .

Projection Operator. The projection operator proj_C, for a convex set C of polynomials, maps a polynomial P to the closest polynomial in C, i.e., proj_C(P) = argmin_{Q ∈ C} ||P − Q||_2. [GKK08] show that this projection is simple and easy to compute for sparse polynomials. We use the following lemmas by [GKK08] about the projection operator in our analysis.

Lemma 3 ([GKK08]).

Let be such that . Then .

Lemma 4 ([GKK08]).

Let be such that . Then .

KMtron. The algorithm KMtron is as follows:

Input : Function non-decreasing and -Lipschitz, query access to for some function , learning rate , number of iterations , error parameter
1
2 for  do
3      
4      
5 end for
Output : Return where is the best over
Algorithm 2 KMtron

To efficiently run KMtron, we require efficient query access to the function passed to the KM algorithm at each step. Since the current iterate is stored as a sparse polynomial, and we are given query access to the target, we can efficiently answer these queries for any input. We can extend the algorithm to handle distribution queries (p-concept), i.e., for any x of our choosing we obtain a sample of y whose expectation equals the conditional mean at x. [GKK08] (cf. Appendix A.1) observed that using distribution queries instead of function queries to the conditional mean is equivalent as long as the number of queries is polynomial.
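Since the pseudocode above lost its formatting, the following is a rough Python-style rendering of the loop structure as we read it; it is a sketch under our assumptions, not the authors' exact algorithm. Here km_sparse_approx is a stand-in for the KM routine (returning a sparse Fourier approximation of the function computed by its query oracle), and project_l1 stands in for the projection operator of [GKK08].

import numpy as np

def kmtron_sketch(u, query_label, km_sparse_approx, project_l1,
                  lam=0.5, T=100, theta=0.01, l1_bound=1.0):
    """Hedged sketch of a KMtron-style loop (our reading, not the verbatim algorithm).
    P is a sparse polynomial stored as {frozenset(S): coefficient}.
    km_sparse_approx(oracle, theta) should return a sparse Fourier approximation of
    the function computed by `oracle`; project_l1(P, l1_bound) projects onto an L1 ball."""
    P = {}                                            # start from the zero polynomial
    iterates = [dict(P)]
    for _ in range(T):
        def residual_oracle(x):                       # x in {-1, +1}^n
            Px = sum(c * np.prod([x[i] for i in S]) for S, c in P.items())
            return query_label(x) - u(Px)             # gradient estimate for square loss
        G = km_sparse_approx(residual_oracle, theta)  # sparse approx of the residual
        for S, c in G.items():                        # P <- project( P + lam * G )
            P[S] = P.get(S, 0.0) + lam * c
        P = project_l1(P, l1_bound)
        iterates.append(dict(P))
    return iterates                                   # the best iterate is then selected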

The following theorem proves the correctness of KMtron.

Theorem 6.

For any non-decreasing -Lipschitz and function such that , given query access to , KMtron run with and for sufficiently small constant and outputs such that . The runtime of KMtron is .

Corollary 1.

Let be such that for . If we have query access to for all such that for non-decreasing -Lipschitz , then using the above, we can learn the conditional mean function in time with respect to the uniform distribution on .

Observe that the complexity bounds are independent of the number of terms. This follows from the fact that the fraction-of-satisfied-terms function has Fourier L1 norm at most 1 (it is an average of conjunctions, each of which has Fourier L1 norm at most 1). This leads to the following new learning result for DNF formulas: fix a DNF f and let S(x) denote the fraction of terms of f satisfied by x. Fix a monotone, L-Lipschitz function u. For a uniformly chosen input x, the label y is equal to 1 with probability u(S(x)). Then in time polynomial in n, L, and 1/ε, KMtron outputs a hypothesis h such that E[(h(x) − u(S(x)))^2] ≤ ε. Note that the running time has no dependence on the number of terms.

As an easy corollary, we also obtain a simple (no boosting required) polynomial-time query algorithm for learning DNFs under the uniform distribution (Feldman [Fel12] was the first to obtain a query algorithm for PAC learning DNF formulas with respect to the uniform distribution that did not require a boosting algorithm):

Corollary 2.

Let f be a DNF formula over {−1, 1}^n with s terms. Then f is PAC learnable under the uniform distribution using membership queries in time polynomial in n, s, and 1/ε.

4.2 Extending the “Low-Degree” Algorithm

Here we show that Alphatron can be used to learn any smooth, monotone combination of function classes that are approximated by low-degree polynomials (our other results require us to take advantage of low-weight approximations).

Definition 3.

For a class of functions , let denote monotone function applied to a linear combination of (polynomially many) functions from .

For domain {−1, 1}^n and degree parameter d, our algorithm will incur a sample complexity and running time factor of n^{O(d)}, so the “kernel trick” is not necessary (we can work explicitly in the feature space, as in the sketch below). The main point is that using isotonic regression (as opposed to the original “low-degree” algorithm due to Linial, Mansour and Nisan [LMN93]), we can learn monotone combinations of functions from any class that has low-degree Fourier approximations (we also obtain non-i.i.d. noise tolerance for these classes due to the definition of the probabilistic concept model). While isotonic regression has the flavor of a boosting algorithm, we do not need to change the underlying distribution on points or add noise to the labels, as all boosting algorithms do.
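Because the degree-d feature space has dimension n^{O(d)}, it can be materialized directly; below is a small sketch (ours) of the explicit parity feature map χ_S(x) for |S| ≤ d, over which the same additive, isotonic-regression-style updates can then be run.

import numpy as np
from itertools import combinations

def parity_features(X, d):
    """Explicit low-degree feature map: one column per parity chi_S(x) = prod_{i in S} x_i
    with |S| <= d. Inputs are assumed to lie in {-1, +1}^n."""
    m, n = X.shape
    cols = [np.ones(m)]                            # S = empty set
    for k in range(1, d + 1):
        for S in combinations(range(n), k):
            cols.append(np.prod(X[:, S], axis=1))
    return np.column_stack(cols)                   # n^{O(d)} columns in total

# these features can be fed to the same additive update as Alphatron, i.e. the
# dual alpha-updates become ordinary updates on a weight vector over parities.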

Definition 4.

((ε, d)-Fourier concentration) A function f : {−1, 1}^n → R is said to be (ε, d)-Fourier concentrated if Σ_{S : |S| > d} f̂(S)^2 ≤ ε^2, where f̂(S) for S ⊆ [n] are the discrete Fourier coefficients of f.

Theorem 7.

Consider distribution on whose marginal is uniform on and for some known non-decreasing -Lipschitz and . If is -Fourier concentrated then there exists an algorithm that draws iid samples from and outputs hypothesis such that with probability , for in time .

The above can be generalized to linear combinations of Fourier concentrated functions using the following lemma.

Lemma 5.

Let where and for all . If for all , is -Fourier concentrated, then is -Fourier concentrated for and .

Many concept classes are known to be approximated by low-degree Fourier polynomials. Combining Theorem 7 and Lemma 5, we immediately obtain the following learning results in the probabilistic concept model whose running time matches or improves their best-known PAC counterparts:

  • Monotone combinations of constant-depth circuits, generalizing majorities of constant-depth circuits [JKS02].

  • Monotone combinations of halfspaces under the uniform distribution, generalizing majorities of linear threshold functions [KKMS08].

  • Monotone combinations of submodular functions, generalizing the learnability of submodular functions [CKKL12].

As a further application, we can learn majorities of halfspaces with respect to the uniform distribution: choose the linear combination that places equal weight on each halfspace and let u smoothly interpolate the majority function with an appropriate Lipschitz constant. This improves on the best known bound of [KKMS08]. (Recent work due to Kane [Kan14] does not apply to majorities of halfspaces, only intersections.)

Using the fact that the AND function has a uniform approximator of degree O(√n) [Pat92], we immediately obtain a 2^{Õ(√n)}-time algorithm for distribution-free learning of smooth, monotone combinations of conjunctions in the probabilistic concept model (this class includes the set of all polynomial-size DNF formulas). The problem of generalizing the 2^{Õ(n^{1/3})}-time algorithm for distribution-free PAC learning of DNF formulas due to Klivans and Servedio [KS04] remains open.

4.3 Learning Monotone Functions of Halfspaces with a Margin as Probabilistic Concepts

In this section we consider the problem of learning a smooth combining function u of k halfspaces with a margin ρ. We assume that all examples lie on the unit ball and that each weight vector w satisfies ||w|| = 1. For simplicity we also assume each halfspace is origin-centered, i.e., has threshold zero (though our techniques easily handle the case of a nonzero threshold).

Theorem 8.

Consider samples drawn iid from distribution on such that where is a -Lipschitz non-decreasing function, are origin-centered halfspaces with margin on and . There exists an algorithm that outputs a hypothesis such that with probability ,

for . The algorithm runs in time polynomial in and .

Remark. If the combining function is a function of the fraction of true halfspaces, then the run-time is independent of the number of halfspaces k. This holds since the norm of the corresponding linear combination is bounded independently of k in this case.

We now show that Theorem 8 immediately implies the first polynomial-time algorithm for PAC learning intersections of halfspaces with a (constant) margin. Consider k halfspaces h_1, ..., h_k. An intersection of these k halfspaces is given by the conjunction ∧_{i=1}^k h_i(x).

Corollary 3.

There exists an algorithm that PAC learns any intersection of k halfspaces with constant margin ρ on the unit ball in time polynomial in n, k, and 1/ε.

This result improves the previous best bound due to Klivans and Servedio [KS08], which had (for constant margin ρ) a quasipolynomial dependence on the number of halfspaces k.

Klivans and Servedio used random projection along with kernel perceptron and the complete quadratic kernel to obtain their results. Here we directly use the multinomial kernel, which takes advantage of how the polynomial approximator’s weights can be embedded into the corresponding RKHS. We remark that if we are only interested in the special case of PAC learning an intersection of halfspaces with a margin (as opposed to learning in the probabilistic concept model), we can use kernel perceptron along with the multinomial kernel (and a Chebyshev approximation that will result in an improved dependence on the margin), as opposed to Alphatron in conjunction with the multinomial kernel.
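For that PAC special case, here is a minimal kernel-perceptron sketch (ours) using the multinomial kernel MK_d(x, x') = Σ_{j=0}^{d} (x · x')^j of [GKKT16]; labels are assumed to be ±1, and the degree d would be chosen from the margin/approximation argument.

import numpy as np

def multinomial_kernel(A, B, d):
    """MK_d(x, x') = sum_{j=0}^{d} (x . x')^j."""
    G = A @ B.T
    return sum(G ** j for j in range(d + 1))

def kernel_perceptron(X, y, d, epochs=10):
    """Kernel perceptron with the multinomial kernel; y in {-1, +1}.
    Prediction on a new point x is sign(sum_i alpha_i y_i MK_d(x_i, x))."""
    m = X.shape[0]
    K = multinomial_kernel(X, X, d)
    alpha = np.zeros(m)
    for _ in range(epochs):
        for i in range(m):
            if y[i] * np.sign((alpha * y) @ K[:, i]) <= 0:   # mistake on example i
                alpha[i] += 1.0
    return alpha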

5 Multiple Instance Learning

Multiple Instance Learning (MIL) is a generalization of supervised classification in which a label is assigned to a bag, that is, a set of instances, instead of an individual instance [DLLP97]. The bag label is induced by the labels of the instances in it. The goal we focus on in this work is to label future bags of instances correctly, with high probability. We refer the reader to [Amo13, HVB16] for an in-depth study of MIL. In this section we apply the previously developed ideas to MIL and give the first provable learning results for concrete schemes that do not rely on unproven assumptions.

Comparison to Previous Work. Under the standard MI assumption, various results are known in the PAC learning setting. Blum and Kalai [BK98] showed a simple reduction from PAC learning MIL to PAC learning with one-sided noise under the assumption that the instances in each bag were drawn independently from a distribution. Sabato and Tishby [ST12] removed the independence assumption and gave sample complexity bounds for learning future bags. All the above results require the existence of an algorithm for PAC learning with one-sided noise, which is itself a challenging problem and not known to exist for even simple concept classes.

In this work, we do not assume instances within each bag are independently distributed, and we do not require the existence of PAC learning algorithms for one-sided noise. Instead, we give efficient algorithms for labeling future bags when the class labeling instances is an unknown halfspace with a margin or an unknown depth-two neural network. We succeed with respect to general monotone, smooth combining functions.

Notation. Let us denote the space of instances by X and the space of bags by B. Let r be an upper bound on the size of the bags, that is, every bag contains at most r instances. Let c denote the instance labeling function, with the bag labeling function induced from c as described below. We assume a distribution over the bags and allow the instances within a bag to be dependent on each other. We consider two variants of the relationship between the instance and bag labeling functions and corresponding learning models.

5.1 Probabilistic MIL

We generalize the deterministic model to allow the labeling function to induce a probability distribution over the labels. This assumption seems more intuitive and less restrictive than the deterministic case as it allows for noise in the labels.

Definition 5 (Probabilistic MI Assumption).

Given a combining function u, for a bag β, the bag label is a random variable whose conditional mean is given by applying u to the instance values c(x), x ∈ β, where c is the instance labeling function.
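As a concrete (hypothetical) instantiation, the snippet below shows how bag labels could be generated under the probabilistic MI assumption when the combining function is applied to the average instance value, the case noted in the introduction as giving bag-size-independent bounds; the names and the averaging choice are ours.

import numpy as np

rng = np.random.default_rng(3)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

def sample_bag_label(bag, instance_fn, u):
    """Probabilistic MI assumption (averaging variant, assumed for illustration):
    the bag label is 1 with probability u( mean of the instance values c(x) )."""
    p = u(np.mean([instance_fn(x) for x in bag]))
    return float(rng.random() < p)

# toy example: instances are points on the unit sphere, c is a halfspace indicator
w = rng.normal(size=4); w /= np.linalg.norm(w)
c = lambda x: float(w @ x > 0)                    # instance labeling concept
bag = [x / np.linalg.norm(x) for x in rng.normal(size=(5, 4))]
print(sample_bag_label(bag, c, sigmoid))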

Definition 6 (Probabilistic MIL).

The concept class is -Probabilistic MIL for with sample complexity and running time if under the probabilistic MI assumption for , there exists an algorithm such that for all as the instance labeling function and any distribution on , draws at most iid bags and runs in time at most to return a bag-labeling hypothesis such that with probability ,

The following is our main theorem of learning in the Probabilistic MIL setting.

Theorem 9.

The concept class is -Probabilistic MIL for monotone -Lipschitz with sample complexity and running time if all are -uniformly approximated by some kernel for large enough constant .

Combining Theorem 9 with learnability Theorems 4 and 8 we can show the following polynomial time Probabilistic MIL results.

Corollary 4.

For any monotone -Lipschitz function , the concept class of sigmoids over are -Probabilistic MIL with sample complexity and running time .

Corollary 5.

For any monotone -Lipschitz function , the concept class of halfspaces with a constant margin over are -Probabilistic MIL with sample complexity and running time .

References

  • [AHW96] Peter Auer, Mark Herbster, and Manfred K. Warmuth.

    Exponentially many local minima for single neurons.

    In Advances in Neural Information Processing Systems, volume 8, pages 316–322. The MIT Press, 1996.
  • [Amo13] Jaume Amores. Multiple instance classification: Review, taxonomy and comparative study. Artificial Intelligence, 201:81–105, 2013.
  • [BK98] Avrim Blum and Adam Kalai. A note on learning from multiple-instance examples. Machine Learning, 30(1):23–29, 1998.
  • [BM02] Peter L. Bartlett and Shahar Mendelson. Rademacher and gaussian complexities: Risk bounds and structural results. Journal of Machine Learning Research, 3:463–482, 2002.
  • [CKKL12] Mahdi Cheraghchi, Adam R. Klivans, Pravesh Kothari, and Homin K. Lee. Submodular functions are noise stable. In Yuval Rabani, editor, Proceedings of the Twenty-Third Annual ACM-SIAM Symposium on Discrete Algorithms, SODA 2012, Kyoto, Japan, January 17-19, 2012, pages 1586–1592. SIAM, 2012.
  • [Dan15] Amit Daniely. A ptas for agnostically learning halfspaces. In Conference on Learning Theory, pages 484–502, 2015.
  • [Dan17] Amit Daniely. Sgd learns the conjugate kernel class of the network. In Isabelle Guyon, Ulrike von Luxburg, Samy Bengio, Hanna M. Wallach, Rob Fergus, S. V. N. Vishwanathan, and Roman Garnett, editors, NIPS, pages 2419–2427, 2017.
  • [DLLP97] Thomas G Dietterich, Richard H Lathrop, and Tomás Lozano-Pérez. Solving the multiple instance problem with axis-parallel rectangles. Artificial intelligence, 89(1):31–71, 1997.
  • [Fel12] Vitaly Feldman. Learning dnf expressions from fourier spectrum. In Shie Mannor, Nathan Srebro, and Robert C. Williamson, editors, COLT 2012 - The 25th Annual Conference on Learning Theory, June 25-27, 2012, Edinburgh, Scotland, volume 23 of JMLR Proceedings, pages 17.1–17.19. JMLR.org, 2012.
  • [GK17] Surbhi Goel and Adam Klivans. Eigenvalue decay implies polynomial-time learnability of neural networks. In NIPS, 2017.
  • [GKK08] Parikshit Gopalan, Adam Tauman Kalai, and Adam R Klivans. Agnostically learning decision trees. In

    Proceedings of the fortieth annual ACM symposium on Theory of computing

    , pages 527–536. ACM, 2008.
  • [GKKT16] Surbhi Goel, Varun Kanade, Adam Klivans, and Justin Thaler. Reliably learning the relu in polynomial time. arXiv preprint arXiv:1611.10258, 2016.
  • [HVB16] Francisco Herrera, Sebastián Ventura, Rafael Bello, Chris Cornelis, Amelia Zafra, Dánel Sánchez-Tarragó, and Sarah Vluymans. Multiple instance learning. In Multiple Instance Learning, pages 17–33. Springer, 2016.
  • [Jac97] Jeffrey C. Jackson. An efficient membership-query algorithm for learning dnf with respect to the uniform distribution. J. Comput. Syst. Sci, 55(3):414–440, 1997.
  • [JKS02] Jeffrey C. Jackson, Adam R. Klivans, and Rocco A. Servedio. Learnability beyond AC^0. In Proceedings of the 34th Annual ACM Symposium on Theory of Computing (STOC-02), pages 776–784, New York, May 19–21 2002. ACM Press.
  • [JSA15] Majid Janzamin, Hanie Sedghi, and Anima Anandkumar. Beating the perils of non-convexity: Guaranteed training of neural networks using tensor methods. arXiv preprint arXiv:1506.08473, 2015.
  • [Kan14] Daniel M. Kane. The average sensitivity of an intersection of half spaces. In David B. Shmoys, editor, Symposium on Theory of Computing, STOC 2014, New York, NY, USA, May 31 - June 03, 2014, pages 437–440. ACM, 2014.
  • [KKKS11] Sham M. Kakade, Adam Kalai, Varun Kanade, and Ohad Shamir. Efficient learning of generalized linear and single index models with isotonic regression. In NIPS, pages 927–935, 2011.
  • [KKMS08] Adam Tauman Kalai, Adam R. Klivans, Yishay Mansour, and Rocco A. Servedio. Agnostically learning halfspaces. SIAM J. Comput., 37(6):1777–1805, 2008.
  • [KM93] Eyal Kushilevitz and Yishay Mansour. Learning decision trees using the fourier spectrum. SIAM Journal on Computing, 22(6):1331–1348, 1993.
  • [KM13] Adam R. Klivans and Raghu Meka. Moment-matching polynomials. Electronic Colloquium on Computational Complexity (ECCC), 20:8, 2013.
  • [KOS04] A. Klivans, R. O’Donnell, and R. Servedio. Learning intersections and thresholds of halfspaces. JCSS: Journal of Computer and System Sciences, 68, 2004.
  • [KS90] Michael J Kearns and Robert E Schapire. Efficient distribution-free learning of probabilistic concepts. In Foundations of Computer Science, 1990. Proceedings., 31st Annual Symposium on, pages 382–391. IEEE, 1990.
  • [KS04] A. Klivans and R. Servedio. Learning DNF in time 2^{Õ(n^{1/3})}. JCSS: Journal of Computer and System Sciences, 68, 2004.
  • [KS08] Adam R Klivans and Rocco A Servedio. Learning intersections of halfspaces with a margin. Journal of Computer and System Sciences, 74(1):35–48, 2008.
  • [KS09a] Adam Kalai and Ravi Sastry. The isotron algorithm: High-dimensional isotonic regression. In COLT, 2009.
  • [KS09b] Adam R. Klivans and Alexander A. Sherstov. Cryptographic hardness for learning intersections of halfspaces. J. Comput. Syst. Sci, 75(1):2–12, 2009.
  • [KST09] Sham M Kakade, Karthik Sridharan, and Ambuj Tewari. On the complexity of linear prediction: Risk bounds, margin bounds, and regularization. In Advances in neural information processing systems, pages 793–800, 2009.
  • [LBW94] Lee, Bartlett, and Williamson. Lower bounds on the VC-dimension of smoothly parametrized function classes. In COLT: Proceedings of the Workshop on Computational Learning Theory, Morgan Kaufmann Publishers, 1994.
  • [LMN93] Linial, Mansour, and Nisan.

    Constant depth circuits, fourier transform, and learnability.

    JACM: Journal of the ACM, 40, 1993.
  • [LSSS14] Roi Livni, Shai Shalev-Shwartz, and Ohad Shamir. On the computational efficiency of training neural networks. In Advances in Neural Information Processing Systems, pages 855–863, 2014.
  • [LT91] Michel Ledoux and Michel Talagrand. Probability in Banach Spaces: Isoperimetry and Processes. Springer, 1991.
  • [Pat92] Ramamohan Paturi. On the degree of polynomials that approximate symmetric Boolean functions (preliminary version). In Proceedings of the Twenty-Fourth Annual ACM Symposium on the Theory of Computing, pages 468–474, Victoria, British Columbia, Canada, 4–6 May 1992.
  • [SA14] Hanie Sedghi and Anima Anandkumar. Provable methods for training neural networks with sparse connectivity. arXiv preprint arXiv:1412.2693, 2014.
  • [SGSS07] Alex Smola, Arthur Gretton, Le Song, and Bernhard Schölkopf. A hilbert space embedding for distributions. In International Conference on Algorithmic Learning Theory, pages 13–31. Springer, 2007.
  • [Sha16] Ohad Shamir. Distribution-specific hardness of learning neural networks. arXiv preprint arXiv:1609.01037, 2016.
  • [She12] Alexander A Sherstov. Making polynomials robust to noise. In Proceedings of the forty-fourth annual ACM symposium on Theory of computing, pages 747–758. ACM, 2012.
  • [SS02] Bernhard Schölkopf and Alexander J Smola.

    Learning with kernels: support vector machines, regularization, optimization, and beyond

    .
    MIT press, 2002.
  • [SS16] Itay Safran and Ohad Shamir. Depth separation in relu networks for approximating smooth non-linear functions. CoRR, abs/1610.09887, 2016.
  • [SSSS11] Shai Shalev-Shwartz, Ohad Shamir, and Karthik Sridharan. Learning kernel-based halfspaces with the 0-1 loss. SIAM J. Comput., 40(6):1623–1646, 2011.
  • [ST12] Sivan Sabato and Naftali Tishby. Multi-instance learning with any hypothesis class. Journal of Machine Learning Research, 13(Oct):2999–3039, 2012.
  • [SVWX17] Le Song, Santosh Vempala, John Wilmes, and Bo Xie. On the complexity of learning neural networks. arXiv preprint arXiv:1707.04615, 2017.
  • [Val84] Leslie G Valiant. A theory of the learnable. Communications of the ACM, 27(11):1134–1142, 1984.
  • [Wik16] Wikipedia. Multinomial theorem — Wikipedia, the free encyclopedia, 2016. URL: https://en.wikipedia.org/wiki/Multinomial_theorem.
  • [ZLJ16] Yuchen Zhang, Jason Lee, and Michael Jordan. ℓ1-regularized neural networks are improperly learnable in polynomial time. In ICML, 2016.
  • [ZPS17] Qiuyi Zhang, Rina Panigrahy, and Sushant Sachdeva. Electron-proton dynamics in deep learning. CoRR, abs/1702.00458, 2017.
  • [ZSJ17] Kai Zhong, Zhao Song, Prateek Jain, Peter L. Bartlett, and Inderjit S. Dhillon. Recovery guarantees for one-hidden-layer neural networks. In ICML, volume 70, pages 4140–4149. JMLR.org, 2017.

Appendix A Background

A.1 Learning Models

We consider two learning models in our paper, the standard Probably Approximately Correct (PAC) learning model and a relaxation of the standard model, the Probabilistic Concept (p-concept) learning model. For completeness, we define the two models and refer the reader to [Val84, KS90] for a detailed explanation.

Definition 7 (PAC Learning [Val84]).

We say that a concept class C is Probably Approximately Correct (PAC) learnable if there exists an algorithm A such that for every c ∈ C, every ε, δ > 0, and every distribution D over X, if A is given access to examples drawn from D and labeled according to c, then A outputs a hypothesis h : X → {0, 1} such that, with probability at least 1 − δ,

Pr_{x∼D}[h(x) ≠ c(x)] ≤ ε.    (1)

Furthermore, we say that C is efficiently PAC learnable to error ε if A can output an h satisfying the above with running time and sample complexity polynomial in n, 1/ε, and 1/δ.

Definition 8 (p-concept Learning [KS90]).

We say that a class C of probabilistic concepts (functions mapping X to [0, 1]) is Probabilistic Concept (p-concept) learnable if there exists an algorithm A such that for every ε, δ > 0, every c ∈ C, and every distribution D over X × Y with E[y|x] = c(x), A, given access to examples drawn from D, outputs a hypothesis h : X → [0, 1] such that, with probability at least 1 − δ,

E_{x∼D}[(h(x) − c(x))^2] ≤ ε.    (2)

Furthermore, we say that C is efficiently p-concept learnable to error ε if A can output an h satisfying the above with running time and sample complexity polynomial in n, 1/ε, and 1/δ.

Here we focus on square loss for p-concept learning, since an efficient algorithm for square loss implies efficient algorithms for various other standard losses.

A.2 Generalization Bounds

The following standard generalization bound based on Rademacher complexity is useful for our analysis. For a background on Rademacher complexity, we refer the readers to [BM02].

Theorem 10 ([BM02]).

Let be a distribution over and let (where ) be a -bounded loss function that is -Lipschitz in its first argument. Let and for any , let and , where . Then for any , with probability at least (over the random sample draw for ), simultaneously for all , the following is true:

where is the Rademacher complexity of the function class .

For a linear concept class, the Rademacher complexity can be bounded as follows.

Theorem 11 ([KST09]).

Let be a subset of a Hilbert space equipped with inner product such that for each , , and let be a class of linear functions. Then it holds that

The following result is useful for bounding the Rademacher complexity of a smooth function of a concept class.

Theorem 12 ([BM02, LT91]).

Let be -Lipschitz and suppose that . Let , and for a function . Finally, for , let . It holds that .

A.3 Kernel Methods

We assume the reader has a basic working knowledge of kernel methods (for a good resource on kernel methods in machine learning we refer the reader to [SS02]). We denote a kernel function by K(x, x') = ⟨ψ(x), ψ(x')⟩, where ψ is the associated feature map and H is the corresponding reproducing kernel Hilbert space (RKHS).

Here we define two kernels and a few of their properties that we will use for our analysis. First, we define a variant of the polynomial kernel, the multinomial kernel due to Goel et al. [GKKT16]:

Definition 9 (Multinomial Kernel [GKKT16]).

Define the feature map ψ_d : R^n → R^{N_d}, where N_d = Σ_{j=0}^d n^j, indexed by tuples